34 patches for repository http://tahoe-lafs.org/source/tahoe/trunk:

Thu Aug 25 01:32:17 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: 'which -> that' grammar cleanup.

Tue Sep 20 00:29:26 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- new and moved files, changes to moved files. refs #999

Tue Sep 20 00:32:56 BST 2011  david-sarah@jacaranda.org
  * Pluggable backends -- all other changes. refs #999

Tue Sep 20 04:38:03 BST 2011  david-sarah@jacaranda.org
  * Work-in-progress, includes fix to bug involving BucketWriter. refs #999

Tue Sep 20 18:17:37 BST 2011  david-sarah@jacaranda.org
  * docs/backends: document the configuration options for the pluggable backends scheme. refs #999

Wed Sep 21 04:12:07 BST 2011  david-sarah@jacaranda.org
  * Fix some incorrect attribute accesses. refs #999

Wed Sep 21 04:16:25 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst: remove Issues section. refs #999

Wed Sep 21 04:17:05 BST 2011  david-sarah@jacaranda.org
  * docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999

Wed Sep 21 19:46:49 BST 2011  david-sarah@jacaranda.org
  * More fixes to tests needed for pluggable backends. refs #999

Wed Sep 21 23:14:21 BST 2011  david-sarah@jacaranda.org
  * Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999

Wed Sep 21 23:20:38 BST 2011  david-sarah@jacaranda.org
  * uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999

Thu Sep 22 05:54:51 BST 2011  david-sarah@jacaranda.org
  * Fix some more test failures. refs #999

Thu Sep 22 19:30:08 BST 2011  david-sarah@jacaranda.org
  * Fix most of the crawler tests. refs #999

Thu Sep 22 19:33:23 BST 2011  david-sarah@jacaranda.org
  * Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

Fri Sep 23 02:20:44 BST 2011  david-sarah@jacaranda.org
  * Blank line cleanups.

Fri Sep 23 05:08:25 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393

Fri Sep 23 05:10:03 BST 2011  david-sarah@jacaranda.org
  * A few comment cleanups. refs #999

Fri Sep 23 05:11:15 BST 2011  david-sarah@jacaranda.org
  * Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999

Fri Sep 23 05:13:14 BST 2011  david-sarah@jacaranda.org
  * Add incomplete S3 backend. refs #999

Fri Sep 23 21:37:23 BST 2011  david-sarah@jacaranda.org
  * interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999

Fri Sep 23 21:44:25 BST 2011  david-sarah@jacaranda.org
  * Remove redundant si_s argument from check_write_enabler. refs #999

Fri Sep 23 21:46:11 BST 2011  david-sarah@jacaranda.org
  * Implement readv for immutable shares. refs #999

Fri Sep 23 21:49:14 BST 2011  david-sarah@jacaranda.org
  * The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999

Fri Sep 23 21:49:45 BST 2011  david-sarah@jacaranda.org
  * Make EmptyShare.check_testv a simple function. refs #999

Fri Sep 23 21:52:19 BST 2011  david-sarah@jacaranda.org
  * Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999

Fri Sep 23 21:53:45 BST 2011  david-sarah@jacaranda.org
  * Update the S3 backend. refs #999

Fri Sep 23 21:55:10 BST 2011  david-sarah@jacaranda.org
  * Minor cleanup to disk backend. refs #999

Fri Sep 23 23:09:35 BST 2011  david-sarah@jacaranda.org
  * Add 'has-immutable-readv' to server version information. refs #999

Tue Sep 27 08:09:47 BST 2011  david-sarah@jacaranda.org
  * util/deferredutil.py: add some utilities for asynchronous iteration. refs #999

Tue Sep 27 08:14:03 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_status_bad_disk_stats. refs #999

Tue Sep 27 08:15:44 BST 2011  david-sarah@jacaranda.org
  * Cleanups to disk backend. refs #999

Tue Sep 27 08:18:55 BST 2011  david-sarah@jacaranda.org
  * Cleanups to S3 backend (not including Deferred changes). refs #999

Tue Sep 27 08:28:48 BST 2011  david-sarah@jacaranda.org
  * test_storage.py: fix test_no_st_blocks. refs #999

Tue Sep 27 08:35:30 BST 2011  david-sarah@jacaranda.org
  * mutable/publish.py: resolve conflicting patches. refs #999
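
(Illustrative aside -- not part of the darcs bundle. A minimal sketch of how
the pluggable-backends API introduced by the patches below fits together,
using only methods defined in those patches; `storedir` (a FilePath) and
`storage_index` are assumed inputs supplied by the caller:

    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    backend = DiskBackend(storedir, readonly=False, reserved_space=0)
    shareset = backend.get_shareset(storage_index)  # one shareset per storage index
    for share in shareset.get_shares():             # completed shares only
        print share.get_shnum(), share.get_size()

The null and S3 backends added later in the series implement the same
IStorageBackend/IShareSet interfaces.)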
New patches:

[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
] {
hunk ./src/allmydata/interfaces.py 38
    the StubClient. This object doesn't actually offer any services, but the
    announcement helps the Introducer keep track of which clients are
    subscribed (so the grid admin can keep track of things like the size of
-    the grid and the client versions in use. This is the (empty)
+    the grid and the client versions in use). This is the (empty)
    RemoteInterface for the StubClient."""

class RIBucketWriter(RemoteInterface):
hunk ./src/allmydata/interfaces.py 276
        (binary) storage index string, and 'shnum' is the integer share
        number. 'reason' is a human-readable explanation of the problem,
        probably including some expected hash values and the computed ones
-        which did not match. Corruption advisories for mutable shares should
+        that did not match. Corruption advisories for mutable shares should
        include a hash of the public key (the same value that appears in the
        mutable-file verify-cap), since the current share format does not
        store that on disk.
hunk ./src/allmydata/interfaces.py 413
        remote_host: the IAddress, if connected, otherwise None

        This method is intended for monitoring interfaces, such as a web page
-        which describes connecting and connected peers.
+        that describes connecting and connected peers.
        """

    def get_all_peerids():
hunk ./src/allmydata/interfaces.py 515

    # TODO: rename to get_read_cap()
    def get_readonly():
-        """Return another IURI instance, which represents a read-only form of
+        """Return another IURI instance that represents a read-only form of
        this one. If is_readonly() is True, this returns self."""

    def get_verify_cap():
hunk ./src/allmydata/interfaces.py 542
        passing into init_from_string."""

class IDirnodeURI(Interface):
-    """I am a URI which represents a dirnode."""
+    """I am a URI that represents a dirnode."""

class IFileURI(Interface):
hunk ./src/allmydata/interfaces.py 545
-    """I am a URI which represents a filenode."""
+    """I am a URI that represents a filenode."""
    def get_size():
        """Return the length (in bytes) of the file that I represent."""

hunk ./src/allmydata/interfaces.py 553
    pass

class IMutableFileURI(Interface):
-    """I am a URI which represents a mutable filenode."""
+    """I am a URI that represents a mutable filenode."""
    def get_extension_params():
        """Return the extension parameters in the URI"""

hunk ./src/allmydata/interfaces.py 856
    """

class IFileNode(IFilesystemNode):
-    """I am a node which represents a file: a sequence of bytes. I am not a
+    """I am a node that represents a file: a sequence of bytes. I am not a
    container, like IDirectoryNode."""
    def get_best_readable_version():
        """Return a Deferred that fires with an IReadable for the 'best'
hunk ./src/allmydata/interfaces.py 905
    multiple versions of a file present in the grid, some of which might be
    unrecoverable (i.e. have fewer than 'k' shares). These versions are
    loosely ordered: each has a sequence number and a hash, and any version
-    with seqnum=N was uploaded by a node which has seen at least one version
+    with seqnum=N was uploaded by a node that has seen at least one version
    with seqnum=N-1.

    The 'servermap' (an instance of IMutableFileServerMap) is used to
hunk ./src/allmydata/interfaces.py 1014
        as a guide to where the shares are located.

        I return a Deferred that fires with the requested contents, or
-        errbacks with UnrecoverableFileError. Note that a servermap which was
+        errbacks with UnrecoverableFileError. Note that a servermap that was
        updated with MODE_ANYTHING or MODE_READ may not know about shares for
        all versions (those modes stop querying servers as soon as they can
        fulfil their goals), so you may want to use MODE_CHECK (which checks
hunk ./src/allmydata/interfaces.py 1073
    """Upload was unable to satisfy 'servers_of_happiness'"""

class UnableToFetchCriticalDownloadDataError(Exception):
-    """I was unable to fetch some piece of critical data which is supposed to
+    """I was unable to fetch some piece of critical data that is supposed to
    be identically present in all shares."""

class NoServersError(Exception):
hunk ./src/allmydata/interfaces.py 1085
    exists, and overwrite= was set to False."""

class NoSuchChildError(Exception):
-    """A directory node was asked to fetch a child which does not exist."""
+    """A directory node was asked to fetch a child that does not exist."""

class ChildOfWrongTypeError(Exception):
    """An operation was attempted on a child of the wrong type (file or directory)."""
hunk ./src/allmydata/interfaces.py 1403
        if you initially thought you were going to use 10 peers, started
        encoding, and then two of the peers dropped out: you could use
        desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.

        """

hunk ./src/allmydata/interfaces.py 1478
        if you initially thought you were going to use 10 peers, started
        encoding, and then two of the peers dropped out: you could use
        desired_share_ids= to skip the work (both memory and CPU) of
-        producing shares for the peers which are no longer available.
+        producing shares for the peers that are no longer available.

        For each call, encode() will return a Deferred that fires with two
        lists, one containing shares and the other containing the shareids.
hunk ./src/allmydata/interfaces.py 1535
        required to be of the same length. The i'th element of their_shareids
        is required to be the shareid of the i'th buffer in some_shares.

-        This returns a Deferred which fires with a sequence of buffers. This
+        This returns a Deferred that fires with a sequence of buffers. This
        sequence will contain all of the segments of the original data, in
        order. The sum of the lengths of all of the buffers will be the
        'data_size' value passed into the original ICodecEncode.set_params()
hunk ./src/allmydata/interfaces.py 1582
        Encoding parameters can be set in three ways. 1: The Encoder class
        provides defaults (3/7/10). 2: the Encoder can be constructed with
        an 'options' dictionary, in which the
-        needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
+        'needed_and_happy_and_total_shares' key can be a (k,d,n) tuple. 3:
        set_params((k,d,n)) can be called.

        If you intend to use set_params(), you must call it before
hunk ./src/allmydata/interfaces.py 1780
        produced, so that the segment hashes can be generated with only a
        single pass.

-        This returns a Deferred which fires with a sequence of hashes, using:
+        This returns a Deferred that fires with a sequence of hashes, using:

         tuple(segment_hashes[first:last])

hunk ./src/allmydata/interfaces.py 1796
    def get_plaintext_hash():
        """OBSOLETE; Get the hash of the whole plaintext.

-        This returns a Deferred which fires with a tagged SHA-256 hash of the
+        This returns a Deferred that fires with a tagged SHA-256 hash of the
        whole plaintext, obtained from hashutil.plaintext_hash(data).
        """

hunk ./src/allmydata/interfaces.py 1856
        be used to encrypt the data. The key will also be hashed to derive
        the StorageIndex.

-        Uploadables which want to achieve convergence should hash their file
+        Uploadables that want to achieve convergence should hash their file
        contents and the serialized_encoding_parameters to form the key
        (which of course requires a full pass over the data). Uploadables can
        use the upload.ConvergentUploadMixin class to achieve this
hunk ./src/allmydata/interfaces.py 1862
        automatically.

-        Uploadables which do not care about convergence (or do not wish to
+        Uploadables that do not care about convergence (or do not wish to
        make multiple passes over the data) can simply return a
        strongly-random 16 byte string.

hunk ./src/allmydata/interfaces.py 1872

    def read(length):
        """Return a Deferred that fires with a list of strings (perhaps with
-        only a single element) which, when concatenated together, contain the
+        only a single element) that, when concatenated together, contain the
        next 'length' bytes of data. If EOF is near, this may provide fewer
        than 'length' bytes. The total number of bytes provided by read()
        before it signals EOF must equal the size provided by get_size().
hunk ./src/allmydata/interfaces.py 1919

    def read(length):
        """
-        Returns a list of strings which, when concatenated, are the next
+        Returns a list of strings that, when concatenated, are the next
        length bytes of the file, or fewer if there are fewer bytes
        between the current location and the end of the file.
        """
hunk ./src/allmydata/interfaces.py 1932

class IUploadResults(Interface):
    """I am returned by upload() methods. I contain a number of public
-    attributes which can be read to determine the results of the upload. Some
+    attributes that can be read to determine the results of the upload. Some
    of these are functional, some are timing information. All of these may be
    None.

hunk ./src/allmydata/interfaces.py 1965

class IDownloadResults(Interface):
    """I am created internally by download() methods. I contain a number of
-    public attributes which contain details about the download process.::
+    public attributes that contain details about the download process.::

     .file_size : the size of the file, in bytes
     .servers_used : set of server peerids that were used during download
hunk ./src/allmydata/interfaces.py 1991
class IUploader(Interface):
    def upload(uploadable):
        """Upload the file. 'uploadable' must impement IUploadable. This
-        returns a Deferred which fires with an IUploadResults instance, from
+        returns a Deferred that fires with an IUploadResults instance, from
        which the URI of the file can be obtained as results.uri ."""

    def upload_ssk(write_capability, new_version, uploadable):
hunk ./src/allmydata/interfaces.py 2041
        kind of lease that is obtained (which account number to claim, etc).

        TODO: any problems seen during checking will be reported to the
-        health-manager.furl, a centralized object which is responsible for
+        health-manager.furl, a centralized object that is responsible for
        figuring out why files are unhealthy so corrective action can be
        taken.
        """
hunk ./src/allmydata/interfaces.py 2056
        will be put in the check-and-repair results. The Deferred will not
        fire until the repair is complete.

-        This returns a Deferred which fires with an instance of
+        This returns a Deferred that fires with an instance of
        ICheckAndRepairResults."""

class IDeepCheckable(Interface):
hunk ./src/allmydata/interfaces.py 2141
                                 that was found to be corrupt. Each share
                                 locator is a list of (serverid, storage_index,
                                 sharenum).
-            count-incompatible-shares: the number of shares which are of a share
+            count-incompatible-shares: the number of shares that are of a share
                                       format unknown to this checker
            list-incompatible-shares: a list of 'share locators', one for each
                                      share that was found to be of an unknown
hunk ./src/allmydata/interfaces.py 2148
                                      format. Each share locator is a list of
                                      (serverid, storage_index, sharenum).
            servers-responding: list of (binary) storage server identifiers,
-                                one for each server which responded to the share
+                                one for each server that responded to the share
                                query (even if they said they didn't have
                                shares, and even if they said they did have
                                shares but then didn't send them when asked, or
hunk ./src/allmydata/interfaces.py 2345
        will use the data in the checker results to guide the repair process,
        such as which servers provided bad data and should therefore be
        avoided. The ICheckResults object is inside the
-        ICheckAndRepairResults object, which is returned by the
+        ICheckAndRepairResults object that is returned by the
        ICheckable.check() method::

         d = filenode.check(repair=False)
hunk ./src/allmydata/interfaces.py 2436
    methods to create new objects. I return synchronously."""

    def create_mutable_file(contents=None, keysize=None):
-        """I create a new mutable file, and return a Deferred which will fire
+        """I create a new mutable file, and return a Deferred that will fire
        with the IMutableFileNode instance when it is ready. If contents= is
        provided (a bytestring), it will be used as the initial contents of
        the new file, otherwise the file will contain zero bytes. keysize= is
hunk ./src/allmydata/interfaces.py 2444
        usual."""

    def create_new_mutable_directory(initial_children={}):
-        """I create a new mutable directory, and return a Deferred which will
+        """I create a new mutable directory, and return a Deferred that will
        fire with the IDirectoryNode instance when it is ready. If
        initial_children= is provided (a dict mapping unicode child name to
        (childnode, metadata_dict) tuples), the directory will be populated
hunk ./src/allmydata/interfaces.py 2452

class IClientStatus(Interface):
    def list_all_uploads():
-        """Return a list of uploader objects, one for each upload which
+        """Return a list of uploader objects, one for each upload that
        currently has an object available (tracked with weakrefs). This is
        intended for debugging purposes."""
    def list_active_uploads():
hunk ./src/allmydata/interfaces.py 2462
        started uploads."""

    def list_all_downloads():
-        """Return a list of downloader objects, one for each download which
+        """Return a list of downloader objects, one for each download that
        currently has an object available (tracked with weakrefs). This is
        intended for debugging purposes."""
    def list_active_downloads():
hunk ./src/allmydata/interfaces.py 2689

    def provide(provider=RIStatsProvider, nickname=str):
        """
-        @param provider: a stats collector instance which should be polled
+        @param provider: a stats collector instance that should be polled
                         periodically by the gatherer to collect stats.
        @param nickname: a name useful to identify the provided client
        """
hunk ./src/allmydata/interfaces.py 2722

class IValidatedThingProxy(Interface):
    def start():
-        """ Acquire a thing and validate it. Return a deferred which is
+        """ Acquire a thing and validate it. Return a deferred that is
        eventually fired with self if the thing is valid or errbacked if it
        can't be acquired or validated."""

}
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/disk
move ./src/allmydata/storage/immutable.py ./src/allmydata/storage/backends/disk/immutable.py
move ./src/allmydata/storage/mutable.py ./src/allmydata/storage/backends/disk/mutable.py
adddir ./src/allmydata/storage/backends/null
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
442 | hunk ./src/allmydata/storage/backends/base.py 1 |
---|
443 | + |
---|
444 | +from twisted.application import service |
---|
445 | + |
---|
446 | +from allmydata.storage.common import si_b2a |
---|
447 | +from allmydata.storage.lease import LeaseInfo |
---|
448 | +from allmydata.storage.bucket import BucketReader |
---|
449 | + |
---|
450 | + |
---|
451 | +class Backend(service.MultiService): |
---|
452 | + def __init__(self): |
---|
453 | + service.MultiService.__init__(self) |
---|
454 | + |
---|
455 | + |
---|
456 | +class ShareSet(object): |
---|
457 | + """ |
---|
458 | + This class implements shareset logic that could work for all backends, but |
---|
459 | + might be useful to override for efficiency. |
---|
460 | + """ |
---|
461 | + |
---|
462 | + def __init__(self, storageindex): |
---|
463 | + self.storageindex = storageindex |
---|
464 | + |
---|
465 | + def get_storage_index(self): |
---|
466 | + return self.storageindex |
---|
467 | + |
---|
468 | + def get_storage_index_string(self): |
---|
469 | + return si_b2a(self.storageindex) |
---|
470 | + |
---|
471 | + def renew_lease(self, renew_secret, new_expiration_time): |
---|
472 | + found_shares = False |
---|
473 | + for share in self.get_shares(): |
---|
474 | + found_shares = True |
---|
475 | + share.renew_lease(renew_secret, new_expiration_time) |
---|
476 | + |
---|
477 | + if not found_shares: |
---|
478 | + raise IndexError("no such lease to renew") |
---|
479 | + |
---|
480 | + def get_leases(self): |
---|
481 | + # Since all shares get the same lease data, we just grab the leases |
---|
482 | + # from the first share. |
---|
483 | + try: |
---|
484 | + sf = self.get_shares().next() |
---|
485 | + return sf.get_leases() |
---|
486 | + except StopIteration: |
---|
487 | + return iter([]) |
---|
488 | + |
---|
489 | + def add_or_renew_lease(self, lease_info): |
---|
490 | + # This implementation assumes that lease data is duplicated in |
---|
491 | + # all shares of a shareset, which might not be true for all backends. |
---|
492 | + for share in self.get_shares(): |
---|
493 | + share.add_or_renew_lease(lease_info) |
---|
494 | + |
---|
495 | + def make_bucket_reader(self, storageserver, share): |
---|
496 | + return BucketReader(storageserver, share) |
---|
497 | + |
---|
498 | + def testv_and_readv_and_writev(self, storageserver, secrets, |
---|
499 | + test_and_write_vectors, read_vector, |
---|
500 | + expiration_time): |
---|
501 | + # The implementation here depends on the following helper methods, |
---|
502 | + # which must be provided by subclasses: |
---|
503 | + # |
---|
504 | + # def _clean_up_after_unlink(self): |
---|
505 | + # """clean up resources associated with the shareset after some |
---|
506 | + # shares might have been deleted""" |
---|
507 | + # |
---|
508 | + # def _create_mutable_share(self, storageserver, shnum, write_enabler): |
---|
509 | + # """create a mutable share with the given shnum and write_enabler""" |
---|
510 | + |
---|
511 | + # secrets might be a triple with cancel_secret in secrets[2], but if |
---|
512 | + # so we ignore the cancel_secret. |
---|
513 | + write_enabler = secrets[0] |
---|
514 | + renew_secret = secrets[1] |
---|
515 | + |
---|
516 | + si_s = self.get_storage_index_string() |
---|
517 | + shares = {} |
---|
518 | + for share in self.get_shares(): |
---|
519 | + # XXX is it correct to ignore immutable shares? Maybe get_shares should |
---|
520 | + # have a parameter saying what type it's expecting. |
---|
521 | + if share.sharetype == "mutable": |
---|
522 | + share.check_write_enabler(write_enabler, si_s) |
---|
523 | + shares[share.get_shnum()] = share |
---|
524 | + |
---|
525 | + # write_enabler is good for all existing shares |
---|
526 | + |
---|
527 | + # now evaluate test vectors |
---|
528 | + testv_is_good = True |
---|
529 | + for sharenum in test_and_write_vectors: |
---|
530 | + (testv, datav, new_length) = test_and_write_vectors[sharenum] |
---|
531 | + if sharenum in shares: |
---|
532 | + if not shares[sharenum].check_testv(testv): |
---|
533 | + self.log("testv failed: [%d]: %r" % (sharenum, testv)) |
---|
534 | + testv_is_good = False |
---|
535 | + break |
---|
536 | + else: |
---|
537 | + # compare the vectors against an empty share, in which all |
---|
538 | + # reads return empty strings |
---|
539 | + if not EmptyShare().check_testv(testv): |
---|
540 | + self.log("testv failed (empty): [%d] %r" % (sharenum, |
---|
541 | + testv)) |
---|
542 | + testv_is_good = False |
---|
543 | + break |
---|
544 | + |
---|
545 | + # gather the read vectors, before we do any writes |
---|
546 | + read_data = {} |
---|
547 | + for shnum, share in shares.items(): |
---|
548 | + read_data[shnum] = share.readv(read_vector) |
---|
549 | + |
---|
550 | + ownerid = 1 # TODO |
---|
551 | + lease_info = LeaseInfo(ownerid, renew_secret, |
---|
552 | + expiration_time, storageserver.get_serverid()) |
---|
553 | + |
---|
554 | + if testv_is_good: |
---|
555 | + # now apply the write vectors |
---|
556 | + for shnum in test_and_write_vectors: |
---|
557 | + (testv, datav, new_length) = test_and_write_vectors[shnum] |
---|
558 | + if new_length == 0: |
---|
559 | + if shnum in shares: |
---|
560 | + shares[shnum].unlink() |
---|
561 | + else: |
---|
562 | + if shnum not in shares: |
---|
563 | + # allocate a new share |
---|
564 | + share = self._create_mutable_share(storageserver, shnum, write_enabler) |
---|
565 | + shares[shnum] = share |
---|
566 | + shares[shnum].writev(datav, new_length) |
---|
567 | + # and update the lease |
---|
568 | + shares[shnum].add_or_renew_lease(lease_info) |
---|
569 | + |
---|
570 | + if new_length == 0: |
---|
571 | + self._clean_up_after_unlink() |
---|
572 | + |
---|
573 | + return (testv_is_good, read_data) |
---|
574 | + |
---|
575 | + def readv(self, wanted_shnums, read_vector): |
---|
576 | + """ |
---|
577 | + Read a vector from the numbered shares in this shareset. An empty |
---|
578 | + shares list means to return data from all known shares. |
---|
579 | + |
---|
580 | + @param wanted_shnums=ListOf(int) |
---|
581 | + @param read_vector=ReadVector |
---|
582 | + @return DictOf(int, ReadData): shnum -> results, with one key per share |
---|
583 | + """ |
---|
584 | + datavs = {} |
---|
585 | + for share in self.get_shares(): |
---|
586 | + shnum = share.get_shnum() |
---|
587 | + if not wanted_shnums or shnum in wanted_shnums: |
---|
588 | + datavs[shnum] = share.readv(read_vector) |
---|
589 | + |
---|
590 | + return datavs |
---|
591 | + |
---|
592 | + |
---|
593 | +def testv_compare(a, op, b): |
---|
594 | + assert op in ("lt", "le", "eq", "ne", "ge", "gt") |
---|
595 | + if op == "lt": |
---|
596 | + return a < b |
---|
597 | + if op == "le": |
---|
598 | + return a <= b |
---|
599 | + if op == "eq": |
---|
600 | + return a == b |
---|
601 | + if op == "ne": |
---|
602 | + return a != b |
---|
603 | + if op == "ge": |
---|
604 | + return a >= b |
---|
605 | + if op == "gt": |
---|
606 | + return a > b |
---|
607 | + # never reached |
---|
608 | + |
---|
609 | + |
---|
610 | +class EmptyShare: |
---|
611 | + def check_testv(self, testv): |
---|
612 | + test_good = True |
---|
613 | + for (offset, length, operator, specimen) in testv: |
---|
614 | + data = "" |
---|
615 | + if not testv_compare(data, operator, specimen): |
---|
616 | + test_good = False |
---|
617 | + break |
---|
618 | + return test_good |
---|
619 | + |
---|
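
(Illustrative aside -- not part of the darcs bundle. Test-vector semantics of
the ShareSet code above: each test vector entry is an (offset, length,
operator, specimen) tuple, and a nonexistent share always reads as "", so
only comparisons that hold for the empty string pass. The values below are
made up for illustration:

    EmptyShare().check_testv([(0, 3, "eq", "")])     # True: "" == ""
    EmptyShare().check_testv([(0, 3, "eq", "abc")])  # False: "" != "abc"

In testv_and_readv_and_writev, the writes are applied only if every share's
test vector passes (testv_is_good), and the prior contents (read_data) are
returned either way.)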
addfile ./src/allmydata/storage/backends/disk/__init__.py
addfile ./src/allmydata/storage/backends/disk/disk_backend.py
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 1
+
+import re
+
+from twisted.python.filepath import UnlistableError
+
+from zope.interface import implements
+from allmydata.interfaces import IStorageBackend, IShareSet
+from allmydata.util import fileutil, log, time_format
+from allmydata.storage.common import si_b2a, si_a2b
+from allmydata.storage.bucket import BucketWriter
+from allmydata.storage.backends.base import Backend, ShareSet
+from allmydata.storage.backends.disk.immutable import ImmutableDiskShare
+from allmydata.storage.backends.disk.mutable import MutableDiskShare, create_mutable_disk_share
+
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
+
+def si_si2dir(startfp, storageindex):
+    sia = si_b2a(storageindex)
+    newfp = startfp.child(sia[:2])
+    return newfp.child(sia)
+
+
+def get_share(fp):
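+    # Sniff the container type from the first 32 bytes: mutable share
+    # containers start with MutableDiskShare.MAGIC; anything else is
+    # treated as an immutable share container.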
+    f = fp.open('rb')
+    try:
+        prefix = f.read(32)
+    finally:
+        f.close()
+
+    if prefix == MutableDiskShare.MAGIC:
+        return MutableDiskShare(fp)
+    else:
+        # assume it's immutable
+        return ImmutableDiskShare(fp)
+
+
+class DiskBackend(Backend):
+    implements(IStorageBackend)
+
+    def __init__(self, storedir, readonly=False, reserved_space=0, discard_storage=False):
+        Backend.__init__(self)
+        self._setup_storage(storedir, readonly, reserved_space, discard_storage)
+        self._setup_corruption_advisory()
+
+    def _setup_storage(self, storedir, readonly, reserved_space, discard_storage):
+        self._storedir = storedir
+        self._readonly = readonly
+        self._reserved_space = int(reserved_space)
+        self._discard_storage = discard_storage
+        self._sharedir = self._storedir.child("shares")
+        fileutil.fp_make_dirs(self._sharedir)
+        self._incomingdir = self._sharedir.child('incoming')
+        self._clean_incomplete()
+        if self._reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
+    def _clean_incomplete(self):
+        fileutil.fp_remove(self._incomingdir)
+        fileutil.fp_make_dirs(self._incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self._corruption_advisory_dir = self._storedir.child("corruption-advisories")
+
+    def _make_shareset(self, sharehomedir):
+        return self.get_shareset(si_a2b(sharehomedir.basename()))
+
+    def get_sharesets_for_prefix(self, prefix):
+        prefixfp = self._sharedir.child(prefix)
+        try:
+            sharesets = map(self._make_shareset, prefixfp.children())
+            def _by_base32si(b):
+                return b.get_storage_index_string()
+            sharesets.sort(key=_by_base32si)
+        except EnvironmentError:
+            sharesets = []
+        return sharesets
+
+    def get_shareset(self, storageindex):
+        sharehomedir = si_si2dir(self._sharedir, storageindex)
+        incominghomedir = si_si2dir(self._incomingdir, storageindex)
+        return DiskShareSet(storageindex, sharehomedir, incominghomedir, discard_storage=self._discard_storage)
+
+    def fill_in_space_stats(self, stats):
+        stats['storage_server.reserved_space'] = self._reserved_space
+        try:
+            disk = fileutil.get_disk_stats(self._sharedir, self._reserved_space)
+            writeable = disk['avail'] > 0
+
+            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
+            stats['storage_server.disk_total'] = disk['total']
+            stats['storage_server.disk_used'] = disk['used']
+            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
+            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
+            stats['storage_server.disk_avail'] = disk['avail']
+        except AttributeError:
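+            # this platform has no disk-stats API (no statvfs(2) or
+            # GetDiskFreeSpaceEx), so optimistically treat the disk as
+            # writeable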
+            writeable = True
+        except EnvironmentError:
+            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
+            writeable = False
+
+        if self._readonly:
+            stats['storage_server.disk_avail'] = 0
+            writeable = False
+
+        stats['storage_server.accepting_immutable_shares'] = int(writeable)
+
+    def get_available_space(self):
+        if self._readonly:
+            return 0
+        return fileutil.get_available_space(self._sharedir, self._reserved_space)
+
+    def advise_corrupt_share(self, sharetype, storageindex, shnum, reason):
+        fileutil.fp_make_dirs(self._corruption_advisory_dir)
+        now = time_format.iso_utc(sep="T")
+        si_s = si_b2a(storageindex)
+
+        # Windows can't handle colons in the filename.
+        name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
+        f = self._corruption_advisory_dir.child(name).open("w")
+        try:
+            f.write("report: Share Corruption\n")
+            f.write("type: %s\n" % sharetype)
+            f.write("storage_index: %s\n" % si_s)
+            f.write("share_number: %d\n" % shnum)
+            f.write("\n")
+            f.write(reason)
+            f.write("\n")
+        finally:
+            f.close()
+
+        log.msg(format=("client claims corruption in (%(share_type)s) " +
+                        "%(si)s-%(shnum)d: %(reason)s"),
+                share_type=sharetype, si=si_s, shnum=shnum, reason=reason,
+                level=log.SCARY, umid="SGx2fA")
+
+
+class DiskShareSet(ShareSet):
+    implements(IShareSet)
+
+    def __init__(self, storageindex, sharehomedir, incominghomedir=None, discard_storage=False):
+        ShareSet.__init__(self, storageindex)
+        self._sharehomedir = sharehomedir
+        self._incominghomedir = incominghomedir
+        self._discard_storage = discard_storage
+
+    def get_overhead(self):
+        return (fileutil.get_disk_usage(self._sharehomedir) +
+                fileutil.get_disk_usage(self._incominghomedir))
+
+    def get_shares(self):
+        """
+        Generate IStorageBackendShare objects for shares we have for this storage index.
+        ("Shares we have" means completed ones, excluding incoming ones.)
+        """
+        try:
+            for fp in self._sharehomedir.children():
+                shnumstr = fp.basename()
+                if not NUM_RE.match(shnumstr):
+                    continue
+                sharehome = self._sharehomedir.child(shnumstr)
+                yield self.get_share(sharehome)
+        except UnlistableError:
+            # There is no shares directory at all.
+            pass
+
+    def has_incoming(self, shnum):
+        if self._incominghomedir is None:
+            return False
+        return self._incominghomedir.child(str(shnum)).exists()
+
+    def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
+        sharehome = self._sharehomedir.child(str(shnum))
+        incominghome = self._incominghomedir.child(str(shnum))
+        immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
+                                   max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+        if self._discard_storage:
+            bw.throw_out_all_data = True
+        return bw
+
+    def _create_mutable_share(self, storageserver, shnum, write_enabler):
+        fileutil.fp_make_dirs(self._sharehomedir)
+        sharehome = self._sharehomedir.child(str(shnum))
+        serverid = storageserver.get_serverid()
+        return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+
+    def _clean_up_after_unlink(self):
+        fileutil.fp_rmdir_if_empty(self._sharehomedir)
+
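
(Illustrative aside -- not part of the darcs bundle. The on-disk layout
implemented by si_si2dir and DiskShareSet above, shown for a made-up storage
index with base-32 form "abcdefgh...": the two-character prefix directory
keeps any one directory from accumulating too many entries.

    storage/shares/ab/abcdefgh.../0             # completed share 0
    storage/shares/ab/abcdefgh.../5             # completed share 5
    storage/shares/incoming/ab/abcdefgh.../4    # share 4 still being written
)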
hunk ./src/allmydata/storage/backends/disk/immutable.py 1
-import os, stat, struct, time

hunk ./src/allmydata/storage/backends/disk/immutable.py 2
-from foolscap.api import Referenceable
+import struct

from zope.interface import implements
hunk ./src/allmydata/storage/backends/disk/immutable.py 5
-from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+
+from allmydata.interfaces import IStoredShare
+from allmydata.util import fileutil
from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/disk/immutable.py 9
+from allmydata.util.fileutil import fp_make_dirs
from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/disk/immutable.py 11
+from allmydata.util.encodingutil import quote_filepath
+from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError
from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/backends/disk/immutable.py 14
-from allmydata.storage.common import UnknownImmutableContainerVersionError, \
-     DataTooLargeError
+

# each share file (in storage/shares/$SI/$SHNUM) contains lease information
# and share data. The share data is accessed by RIBucketWriter.write and
hunk ./src/allmydata/storage/backends/disk/immutable.py 41
# then the value stored in this field will be the actual share data length
# modulo 2**32.

-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
+class ImmutableDiskShare(object):
+    implements(IStoredShare)
+
    sharetype = "immutable"
hunk ./src/allmydata/storage/backends/disk/immutable.py 45
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+

hunk ./src/allmydata/storage/backends/disk/immutable.py 48
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
        precondition((max_size is not None) or (not create), max_size, create)
hunk ./src/allmydata/storage/backends/disk/immutable.py 53
-        self.home = filename
+        self._storageindex = storageindex
        self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 55
+        self._incominghome = incominghome
+        self._home = finalhome
+        self._shnum = shnum
        if create:
            # touch the file, so later callers will see that we're working on
            # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 61
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
+            assert not finalhome.exists()
+            fp_make_dirs(self._incominghome.parent())
            # The second field -- the four-byte share data length -- is no
            # longer used as of Tahoe v1.3.0, but we continue to write it in
            # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 72
            # the largest length that can fit into the field. That way, even
            # if this does happen, the old < v1.3.0 server will still allow
            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
            self._lease_offset = max_size + 0x0c
            self._num_leases = 0
        else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 76
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
+            f = self._home.open(mode='rb')
+            try:
+                (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            finally:
+                f.close()
+            filesize = self._home.getsize()
            if version != 1:
                msg = "sharefile %s had version %d but we wanted 1" % \
hunk ./src/allmydata/storage/backends/disk/immutable.py 84
-                      (filename, version)
+                      (self._home, version)
                raise UnknownImmutableContainerVersionError(msg)
            self._num_leases = num_leases
            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/disk/immutable.py 90
        self._data_offset = 0xc

+    def __repr__(self):
+        return ("<ImmutableDiskShare %s:%r at %s>"
+                % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))
+
+    def close(self):
+        fileutil.fp_make_dirs(self._home.parent())
+        self._incominghome.moveTo(self._home)
+        try:
+            # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.fp_remove() here!
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent())
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+
+    def get_used_space(self):
+        return (fileutil.get_used_space(self._home) +
+                fileutil.get_used_space(self._incominghome))
+
+    def get_storage_index(self):
+        return self._storageindex
+
+    def get_shnum(self):
+        return self._shnum
+
    def unlink(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 134
-        os.unlink(self.home)
+        self._home.remove()
+
+    def get_size(self):
+        return self._home.getsize()
+
+    def get_data_length(self):
+        return self._lease_offset - self._data_offset
+
+    #def readv(self, read_vector):
+    #    ...

    def read_share_data(self, offset, length):
        precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
-        # reads beyond the end of the data are truncated. Reads that start
+
+        # Reads beyond the end of the data are truncated. Reads that start
        # beyond the end of the data return an empty string.
        seekpos = self._data_offset+offset
        actuallength = max(0, min(length, self._lease_offset-seekpos))
hunk ./src/allmydata/storage/backends/disk/immutable.py 154
        if actuallength == 0:
            return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        f = self._home.open(mode='rb')
+        try:
+            f.seek(seekpos)
+            sharedata = f.read(actuallength)
+        finally:
+            f.close()
+        return sharedata

    def write_share_data(self, offset, data):
        length = len(data)
hunk ./src/allmydata/storage/backends/disk/immutable.py 167
        precondition(offset >= 0, offset)
        if self._max_size is not None and offset+length > self._max_size:
            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        f = self._incominghome.open(mode='rb+')
+        try:
+            real_offset = self._data_offset+offset
+            f.seek(real_offset)
+            assert f.tell() == real_offset
+            f.write(data)
+        finally:
+            f.close()

    def _write_lease_record(self, f, lease_number, lease_info):
        offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/disk/immutable.py 184

    def _read_num_leases(self, f):
        f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        ro = f.read(4)
+        (num_leases,) = struct.unpack(">L", ro)
        return num_leases

    def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/disk/immutable.py 195
    def _truncate_leases(self, f, num_leases):
        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)

+    # These lease operations are intended for use by disk_backend.py.
+    # Other clients should not depend on the fact that the disk backend
+    # stores leases in share files.
+
    def get_leases(self):
        """Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 201
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
+        f = self._home.open(mode='rb')
+        try:
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.seek(self._lease_offset)
+            for i in range(num_leases):
+                data = f.read(self.LEASE_SIZE)
+                if data:
+                    yield LeaseInfo().from_immutable_data(data)
+        finally:
+            f.close()

    def add_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 213
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+        f = self._incominghome.open(mode='rb')
+        try:
+            num_leases = self._read_num_leases(f)
+        finally:
+            f.close()
+        f = self._home.open(mode='wb+')
+        try:
+            self._write_lease_record(f, num_leases, lease_info)
+            self._write_num_leases(f, num_leases+1)
+        finally:
+            f.close()

    def renew_lease(self, renew_secret, new_expire_time):
hunk ./src/allmydata/storage/backends/disk/immutable.py 226
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
+        try:
+            for i, lease in enumerate(self.get_leases()):
+                if constant_time_compare(lease.renew_secret, renew_secret):
+                    # yup. See if we need to update the owner time.
+                    if new_expire_time > lease.expiration_time:
+                        # yes
+                        lease.expiration_time = new_expire_time
+                        f = self._home.open('rb+')
+                        try:
+                            self._write_lease_record(f, i, lease)
+                        finally:
+                            f.close()
+                    return
+        except IndexError, e:
+            raise Exception("IndexError: %s" % (e,))
        raise IndexError("unable to renew non-existent lease")

    def add_or_renew_lease(self, lease_info):
hunk ./src/allmydata/storage/backends/disk/immutable.py 249
                             lease_info.expiration_time)
        except IndexError:
            self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-
-
-class BucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
-        self.ss = ss
-        self.incominghome = incominghome
-        self.finalhome = finalhome
-        self._max_size = max_size # don't allow the client to write more than this
-        self._canary = canary
-        self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
-        self.closed = False
-        self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
-        # also, add our lease to the file now, so that other ones can be
-        # added by simultaneous uploaders
-        self._sharefile.add_lease(lease_info)
-
-    def allocated_size(self):
-        return self._max_size
-
-    def remote_write(self, offset, data):
-        start = time.time()
-        precondition(not self.closed)
-        if self.throw_out_all_data:
-            return
-        self._sharefile.write_share_data(offset, data)
-        self.ss.add_latency("write", time.time() - start)
-        self.ss.count("write")
-
-    def remote_close(self):
-        precondition(not self.closed)
-        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
---|
1196 | - # We try to delete the parent (.../ab/abcde) to avoid leaving |
---|
1197 | - # these directories lying around forever, but the delete might |
---|
1198 | - # fail if we're working on another share for the same storage |
---|
1199 | - # index (like ab/abcde/5). The alternative approach would be to |
---|
1200 | - # use a hierarchy of objects (PrefixHolder, BucketHolder, |
---|
1201 | - # ShareWriter), each of which is responsible for a single |
---|
1202 | - # directory on disk, and have them use reference counting of |
---|
1203 | - # their children to know when they should do the rmdir. This |
---|
1204 | - # approach is simpler, but relies on os.rmdir refusing to delete |
---|
1205 | - # a non-empty directory. Do *not* use fileutil.rm_dir() here! |
---|
1206 | - os.rmdir(os.path.dirname(self.incominghome)) |
---|
1207 | - # we also delete the grandparent (prefix) directory, .../ab , |
---|
1208 | - # again to avoid leaving directories lying around. This might |
---|
1209 | - # fail if there is another bucket open that shares a prefix (like |
---|
1210 | - # ab/abfff). |
---|
1211 | - os.rmdir(os.path.dirname(os.path.dirname(self.incominghome))) |
---|
1212 | - # we leave the great-grandparent (incoming/) directory in place. |
---|
1213 | - except EnvironmentError: |
---|
1214 | - # ignore the "can't rmdir because the directory is not empty" |
---|
1215 | - # exceptions, those are normal consequences of the |
---|
1216 | - # above-mentioned conditions. |
---|
1217 | - pass |
---|
1218 | - self._sharefile = None |
---|
1219 | - self.closed = True |
---|
1220 | - self._canary.dontNotifyOnDisconnect(self._disconnect_marker) |
---|
1221 | - |
---|
1222 | - filelen = os.stat(self.finalhome)[stat.ST_SIZE] |
---|
1223 | - self.ss.bucket_writer_closed(self, filelen) |
---|
1224 | - self.ss.add_latency("close", time.time() - start) |
---|
1225 | - self.ss.count("close") |
---|
1226 | - |
---|
1227 | - def _disconnected(self): |
---|
1228 | - if not self.closed: |
---|
1229 | - self._abort() |
---|
1230 | - |
---|
1231 | - def remote_abort(self): |
---|
1232 | - log.msg("storage: aborting sharefile %s" % self.incominghome, |
---|
1233 | - facility="tahoe.storage", level=log.UNUSUAL) |
---|
1234 | - if not self.closed: |
---|
1235 | - self._canary.dontNotifyOnDisconnect(self._disconnect_marker) |
---|
1236 | - self._abort() |
---|
1237 | - self.ss.count("abort") |
---|
1238 | - |
---|
1239 | - def _abort(self): |
---|
1240 | - if self.closed: |
---|
1241 | - return |
---|
1242 | - |
---|
1243 | - os.remove(self.incominghome) |
---|
1244 | - # if we were the last share to be moved, remove the incoming/ |
---|
1245 | - # directory that was our parent |
---|
1246 | - parentdir = os.path.split(self.incominghome)[0] |
---|
1247 | - if not os.listdir(parentdir): |
---|
1248 | - os.rmdir(parentdir) |
---|
1249 | - self._sharefile = None |
---|
1250 | - |
---|
1251 | - # We are now considered closed for further writing. We must tell |
---|
1252 | - # the storage server about this so that it stops expecting us to |
---|
1253 | - # use the space it allocated for us earlier. |
---|
1254 | - self.closed = True |
---|
1255 | - self.ss.bucket_writer_closed(self, 0) |
---|
1256 | - |
---|
1257 | - |
---|
1258 | -class BucketReader(Referenceable): |
---|
1259 | - implements(RIBucketReader) |
---|
1260 | - |
---|
1261 | - def __init__(self, ss, sharefname, storage_index=None, shnum=None): |
---|
1262 | - self.ss = ss |
---|
1263 | - self._share_file = ShareFile(sharefname) |
---|
1264 | - self.storage_index = storage_index |
---|
1265 | - self.shnum = shnum |
---|
1266 | - |
---|
1267 | - def __repr__(self): |
---|
1268 | - return "<%s %s %s>" % (self.__class__.__name__, |
---|
1269 | - base32.b2a_l(self.storage_index[:8], 60), |
---|
1270 | - self.shnum) |
---|
1271 | - |
---|
1272 | - def remote_read(self, offset, length): |
---|
1273 | - start = time.time() |
---|
1274 | - data = self._share_file.read_share_data(offset, length) |
---|
1275 | - self.ss.add_latency("read", time.time() - start) |
---|
1276 | - self.ss.count("read") |
---|
1277 | - return data |
---|
1278 | - |
---|
1279 | - def remote_advise_corrupt_share(self, reason): |
---|
1280 | - return self.ss.remote_advise_corrupt_share("immutable", |
---|
1281 | - self.storage_index, |
---|
1282 | - self.shnum, |
---|
1283 | - reason) |
---|
1284 | hunk ./src/allmydata/storage/backends/disk/mutable.py 1 |
---|
1285 | -import os, stat, struct |
---|
1286 | |
---|
1287 | hunk ./src/allmydata/storage/backends/disk/mutable.py 2 |
---|
1288 | -from allmydata.interfaces import BadWriteEnablerError |
---|
1289 | -from allmydata.util import idlib, log |
---|
1290 | +import struct |
---|
1291 | + |
---|
1292 | +from zope.interface import implements |
---|
1293 | + |
---|
1294 | +from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError |
---|
1295 | +from allmydata.util import fileutil, idlib, log |
---|
1296 | from allmydata.util.assertutil import precondition |
---|
1297 | from allmydata.util.hashutil import constant_time_compare |
---|
1298 | hunk ./src/allmydata/storage/backends/disk/mutable.py 10 |
---|
1299 | -from allmydata.storage.lease import LeaseInfo |
---|
1300 | -from allmydata.storage.common import UnknownMutableContainerVersionError, \ |
---|
1301 | +from allmydata.util.encodingutil import quote_filepath |
---|
1302 | +from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \ |
---|
1303 | DataTooLargeError |
---|
1304 | hunk ./src/allmydata/storage/backends/disk/mutable.py 13 |
---|
1305 | +from allmydata.storage.lease import LeaseInfo |
---|
1306 | +from allmydata.storage.backends.base import testv_compare |
---|
1307 | |
---|
1308 | hunk ./src/allmydata/storage/backends/disk/mutable.py 16 |
---|
1309 | -# the MutableShareFile is like the ShareFile, but used for mutable data. It |
---|
1310 | -# has a different layout. See docs/mutable.txt for more details. |
---|
1311 | + |
---|
1312 | +# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data. |
---|
1313 | +# It has a different layout. See docs/mutable.rst for more details. |
---|
1314 | |
---|
1315 | # # offset size name |
---|
1316 | # 1 0 32 magic verstr "tahoe mutable container v1" plus binary |
---|
1317 | hunk ./src/allmydata/storage/backends/disk/mutable.py 31 |
---|
1318 | # 4 4 expiration timestamp |
---|
1319 | # 8 32 renewal token |
---|
1320 | # 40 32 cancel token |
---|
1321 | -# 72 20 nodeid which accepted the tokens |
---|
1322 | +# 72 20 nodeid that accepted the tokens |
---|
1323 | # 7 468 (a) data |
---|
1324 | # 8 ?? 4 count of extra leases |
---|
1325 | # 9 ?? n*92 extra leases |
---|
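As a reading aid for the layout table above, here is a hedged sketch of parsing the 100-byte mutable container header with the same ">32s20s32sQQ" format string this class uses; parse_mutable_header is an illustrative name only:

    import struct

    # magic, write-enabler nodeid, write enabler, data length, extra-lease offset
    MUTABLE_HEADER = ">32s20s32sQQ"
    assert struct.calcsize(MUTABLE_HEADER) == 100

    def parse_mutable_header(header_bytes):
        (magic, write_enabler_nodeid, write_enabler,
         data_length, extra_lease_offset) = struct.unpack(MUTABLE_HEADER,
                                                          header_bytes[:100])
        return (magic, write_enabler_nodeid, write_enabler,
                data_length, extra_lease_offset)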
1326 | hunk ./src/allmydata/storage/backends/disk/mutable.py 37 |
---|
1327 | |
---|
1328 | |
---|
1329 | -# The struct module doc says that L's are 4 bytes in size., and that Q's are |
---|
1330 | +# The struct module doc says that L's are 4 bytes in size, and that Q's are |
---|
1331 | # 8 bytes in size. Since compatibility depends upon this, double-check it. |
---|
1332 | assert struct.calcsize(">L") == 4, struct.calcsize(">L") |
---|
1333 | assert struct.calcsize(">Q") == 8, struct.calcsize(">Q") |
---|
1334 | hunk ./src/allmydata/storage/backends/disk/mutable.py 42 |
---|
1335 | |
---|
1336 | -class MutableShareFile: |
---|
1337 | + |
---|
1338 | +class MutableDiskShare(object): |
---|
1339 | + implements(IStoredMutableShare) |
---|
1340 | |
---|
1341 | sharetype = "mutable" |
---|
1342 | DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s") |
---|
1343 | hunk ./src/allmydata/storage/backends/disk/mutable.py 54 |
---|
1344 | assert LEASE_SIZE == 92 |
---|
1345 | DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE |
---|
1346 | assert DATA_OFFSET == 468, DATA_OFFSET |
---|
1347 | + |
---|
1348 | # our sharefiles start with a recognizable string, plus some random |
---|
1349 | # binary data to reduce the chance that a regular text file will look |
---|
1350 | # like a sharefile. |
---|
1351 | hunk ./src/allmydata/storage/backends/disk/mutable.py 63 |
---|
1352 | MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary |
---|
1353 | # TODO: decide upon a policy for max share size |
---|
1354 | |
---|
1355 | - def __init__(self, filename, parent=None): |
---|
1356 | - self.home = filename |
---|
1357 | - if os.path.exists(self.home): |
---|
1358 | + def __init__(self, storageindex, shnum, home, parent=None): |
---|
1359 | + self._storageindex = storageindex |
---|
1360 | + self._shnum = shnum |
---|
1361 | + self._home = home |
---|
1362 | + if self._home.exists(): |
---|
1363 | # we don't cache anything, just check the magic |
---|
1364 | hunk ./src/allmydata/storage/backends/disk/mutable.py 69 |
---|
1365 | - f = open(self.home, 'rb') |
---|
1366 | - data = f.read(self.HEADER_SIZE) |
---|
1367 | - (magic, |
---|
1368 | - write_enabler_nodeid, write_enabler, |
---|
1369 | - data_length, extra_least_offset) = \ |
---|
1370 | - struct.unpack(">32s20s32sQQ", data) |
---|
1371 | - if magic != self.MAGIC: |
---|
1372 | - msg = "sharefile %s had magic '%r' but we wanted '%r'" % \ |
---|
1373 | - (filename, magic, self.MAGIC) |
---|
1374 | - raise UnknownMutableContainerVersionError(msg) |
---|
1375 | + f = self._home.open('rb') |
---|
1376 | + try: |
---|
1377 | + data = f.read(self.HEADER_SIZE) |
---|
1378 | + (magic, |
---|
1379 | + write_enabler_nodeid, write_enabler, |
---|
1380 | + data_length, extra_lease_offset) = \ |
---|
1381 | + struct.unpack(">32s20s32sQQ", data) |
---|
1382 | + if magic != self.MAGIC: |
---|
1383 | + msg = "sharefile %s had magic '%r' but we wanted '%r'" % \ |
---|
1384 | + (quote_filepath(self._home), magic, self.MAGIC) |
---|
1385 | + raise UnknownMutableContainerVersionError(msg) |
---|
1386 | + finally: |
---|
1387 | + f.close() |
---|
1388 | self.parent = parent # for logging |
---|
1389 | |
---|
1390 | def log(self, *args, **kwargs): |
---|
1391 | hunk ./src/allmydata/storage/backends/disk/mutable.py 87 |
---|
1392 | return self.parent.log(*args, **kwargs) |
---|
1393 | |
---|
1394 | - def create(self, my_nodeid, write_enabler): |
---|
1395 | - assert not os.path.exists(self.home) |
---|
1396 | + def create(self, serverid, write_enabler): |
---|
1397 | + assert not self._home.exists() |
---|
1398 | data_length = 0 |
---|
1399 | extra_lease_offset = (self.HEADER_SIZE |
---|
1400 | + 4 * self.LEASE_SIZE |
---|
1401 | hunk ./src/allmydata/storage/backends/disk/mutable.py 95 |
---|
1402 | + data_length) |
---|
1403 | assert extra_lease_offset == self.DATA_OFFSET # true at creation |
---|
1404 | num_extra_leases = 0 |
---|
1405 | - f = open(self.home, 'wb') |
---|
1406 | - header = struct.pack(">32s20s32sQQ", |
---|
1407 | - self.MAGIC, my_nodeid, write_enabler, |
---|
1408 | - data_length, extra_lease_offset, |
---|
1409 | - ) |
---|
1410 | - leases = ("\x00"*self.LEASE_SIZE) * 4 |
---|
1411 | - f.write(header + leases) |
---|
1412 | - # data goes here, empty after creation |
---|
1413 | - f.write(struct.pack(">L", num_extra_leases)) |
---|
1414 | - # extra leases go here, none at creation |
---|
1415 | - f.close() |
---|
1416 | + f = self._home.open('wb') |
---|
1417 | + try: |
---|
1418 | + header = struct.pack(">32s20s32sQQ", |
---|
1419 | + self.MAGIC, serverid, write_enabler, |
---|
1420 | + data_length, extra_lease_offset, |
---|
1421 | + ) |
---|
1422 | + leases = ("\x00"*self.LEASE_SIZE) * 4 |
---|
1423 | + f.write(header + leases) |
---|
1424 | + # data goes here, empty after creation |
---|
1425 | + f.write(struct.pack(">L", num_extra_leases)) |
---|
1426 | + # extra leases go here, none at creation |
---|
1427 | + finally: |
---|
1428 | + f.close() |
---|
1429 | + |
---|
1430 | + def __repr__(self): |
---|
1431 | + return ("<MutableDiskShare %s:%r at %s>" |
---|
1432 | + % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home))) |
---|
1433 | + |
---|
1434 | + def get_used_space(self): |
---|
1435 | + return fileutil.get_used_space(self._home) |
---|
1436 | + |
---|
1437 | + def get_storage_index(self): |
---|
1438 | + return self._storageindex |
---|
1439 | + |
---|
1440 | + def get_shnum(self): |
---|
1441 | + return self._shnum |
---|
1442 | |
---|
1443 | def unlink(self): |
---|
1444 | hunk ./src/allmydata/storage/backends/disk/mutable.py 123 |
---|
1445 | - os.unlink(self.home) |
---|
1446 | + self._home.remove() |
---|
1447 | |
---|
1448 | def _read_data_length(self, f): |
---|
1449 | f.seek(self.DATA_LENGTH_OFFSET) |
---|
1450 | hunk ./src/allmydata/storage/backends/disk/mutable.py 291 |
---|
1451 | |
---|
1452 | def get_leases(self): |
---|
1453 | """Yields a LeaseInfo instance for all leases.""" |
---|
1454 | - f = open(self.home, 'rb') |
---|
1455 | - for i, lease in self._enumerate_leases(f): |
---|
1456 | - yield lease |
---|
1457 | - f.close() |
---|
1458 | + f = self._home.open('rb') |
---|
1459 | + try: |
---|
1460 | + for i, lease in self._enumerate_leases(f): |
---|
1461 | + yield lease |
---|
1462 | + finally: |
---|
1463 | + f.close() |
---|
1464 | |
---|
1465 | def _enumerate_leases(self, f): |
---|
1466 | for i in range(self._get_num_lease_slots(f)): |
---|
1467 | hunk ./src/allmydata/storage/backends/disk/mutable.py 303 |
---|
1468 | try: |
---|
1469 | data = self._read_lease_record(f, i) |
---|
1470 | if data is not None: |
---|
1471 | - yield i,data |
---|
1472 | + yield i, data |
---|
1473 | except IndexError: |
---|
1474 | return |
---|
1475 | |
---|
1476 | hunk ./src/allmydata/storage/backends/disk/mutable.py 307 |
---|
1477 | + # These lease operations are intended for use by disk_backend.py. |
---|
1478 | + # Other non-test clients should not depend on the fact that the disk |
---|
1479 | + # backend stores leases in share files. |
---|
1480 | + |
---|
1481 | def add_lease(self, lease_info): |
---|
1482 | precondition(lease_info.owner_num != 0) # 0 means "no lease here" |
---|
1483 | hunk ./src/allmydata/storage/backends/disk/mutable.py 313 |
---|
1484 | - f = open(self.home, 'rb+') |
---|
1485 | - num_lease_slots = self._get_num_lease_slots(f) |
---|
1486 | - empty_slot = self._get_first_empty_lease_slot(f) |
---|
1487 | - if empty_slot is not None: |
---|
1488 | - self._write_lease_record(f, empty_slot, lease_info) |
---|
1489 | - else: |
---|
1490 | - self._write_lease_record(f, num_lease_slots, lease_info) |
---|
1491 | - f.close() |
---|
1492 | + f = self._home.open('rb+') |
---|
1493 | + try: |
---|
1494 | + num_lease_slots = self._get_num_lease_slots(f) |
---|
1495 | + empty_slot = self._get_first_empty_lease_slot(f) |
---|
1496 | + if empty_slot is not None: |
---|
1497 | + self._write_lease_record(f, empty_slot, lease_info) |
---|
1498 | + else: |
---|
1499 | + self._write_lease_record(f, num_lease_slots, lease_info) |
---|
1500 | + finally: |
---|
1501 | + f.close() |
---|
1502 | |
---|
1503 | def renew_lease(self, renew_secret, new_expire_time): |
---|
1504 | accepting_nodeids = set() |
---|
1505 | hunk ./src/allmydata/storage/backends/disk/mutable.py 326 |
---|
1506 | - f = open(self.home, 'rb+') |
---|
1507 | - for (leasenum,lease) in self._enumerate_leases(f): |
---|
1508 | - if constant_time_compare(lease.renew_secret, renew_secret): |
---|
1509 | - # yup. See if we need to update the owner time. |
---|
1510 | - if new_expire_time > lease.expiration_time: |
---|
1511 | - # yes |
---|
1512 | - lease.expiration_time = new_expire_time |
---|
1513 | - self._write_lease_record(f, leasenum, lease) |
---|
1514 | - f.close() |
---|
1515 | - return |
---|
1516 | - accepting_nodeids.add(lease.nodeid) |
---|
1517 | - f.close() |
---|
1518 | + f = self._home.open('rb+') |
---|
1519 | + try: |
---|
1520 | + for (leasenum, lease) in self._enumerate_leases(f): |
---|
1521 | + if constant_time_compare(lease.renew_secret, renew_secret): |
---|
1522 | + # yup. See if we need to update the owner time. |
---|
1523 | + if new_expire_time > lease.expiration_time: |
---|
1524 | + # yes |
---|
1525 | + lease.expiration_time = new_expire_time |
---|
1526 | + self._write_lease_record(f, leasenum, lease) |
---|
1527 | + return |
---|
1528 | + accepting_nodeids.add(lease.nodeid) |
---|
1529 | + finally: |
---|
1530 | + f.close() |
---|
1531 | # Return the accepting_nodeids set, to give the client a chance to |
---|
1532 | hunk ./src/allmydata/storage/backends/disk/mutable.py 340 |
---|
1533 | - # update the leases on a share which has been migrated from its |
---|
1534 | + # update the leases on a share that has been migrated from its |
---|
1535 | # original server to a new one. |
---|
1536 | msg = ("Unable to renew non-existent lease. I have leases accepted by" |
---|
1537 | " nodeids: ") |
---|
1538 | hunk ./src/allmydata/storage/backends/disk/mutable.py 357 |
---|
1539 | except IndexError: |
---|
1540 | self.add_lease(lease_info) |
---|
1541 | |
---|
1542 | - def cancel_lease(self, cancel_secret): |
---|
1543 | - """Remove any leases with the given cancel_secret. If the last lease |
---|
1544 | - is cancelled, the file will be removed. Return the number of bytes |
---|
1545 | - that were freed (by truncating the list of leases, and possibly by |
---|
1546 | - deleting the file. Raise IndexError if there was no lease with the |
---|
1547 | - given cancel_secret.""" |
---|
1548 | - |
---|
1549 | - accepting_nodeids = set() |
---|
1550 | - modified = 0 |
---|
1551 | - remaining = 0 |
---|
1552 | - blank_lease = LeaseInfo(owner_num=0, |
---|
1553 | - renew_secret="\x00"*32, |
---|
1554 | - cancel_secret="\x00"*32, |
---|
1555 | - expiration_time=0, |
---|
1556 | - nodeid="\x00"*20) |
---|
1557 | - f = open(self.home, 'rb+') |
---|
1558 | - for (leasenum,lease) in self._enumerate_leases(f): |
---|
1559 | - accepting_nodeids.add(lease.nodeid) |
---|
1560 | - if constant_time_compare(lease.cancel_secret, cancel_secret): |
---|
1561 | - self._write_lease_record(f, leasenum, blank_lease) |
---|
1562 | - modified += 1 |
---|
1563 | - else: |
---|
1564 | - remaining += 1 |
---|
1565 | - if modified: |
---|
1566 | - freed_space = self._pack_leases(f) |
---|
1567 | - f.close() |
---|
1568 | - if not remaining: |
---|
1569 | - freed_space += os.stat(self.home)[stat.ST_SIZE] |
---|
1570 | - self.unlink() |
---|
1571 | - return freed_space |
---|
1572 | - |
---|
1573 | - msg = ("Unable to cancel non-existent lease. I have leases " |
---|
1574 | - "accepted by nodeids: ") |
---|
1575 | - msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid)) |
---|
1576 | - for anid in accepting_nodeids]) |
---|
1577 | - msg += " ." |
---|
1578 | - raise IndexError(msg) |
---|
1579 | - |
---|
1580 | - def _pack_leases(self, f): |
---|
1581 | - # TODO: reclaim space from cancelled leases |
---|
1582 | - return 0 |
---|
1583 | - |
---|
1584 | def _read_write_enabler_and_nodeid(self, f): |
---|
1585 | f.seek(0) |
---|
1586 | data = f.read(self.HEADER_SIZE) |
---|
1587 | hunk ./src/allmydata/storage/backends/disk/mutable.py 369 |
---|
1588 | |
---|
1589 | def readv(self, readv): |
---|
1590 | datav = [] |
---|
1591 | - f = open(self.home, 'rb') |
---|
1592 | - for (offset, length) in readv: |
---|
1593 | - datav.append(self._read_share_data(f, offset, length)) |
---|
1594 | - f.close() |
---|
1595 | + f = self._home.open('rb') |
---|
1596 | + try: |
---|
1597 | + for (offset, length) in readv: |
---|
1598 | + datav.append(self._read_share_data(f, offset, length)) |
---|
1599 | + finally: |
---|
1600 | + f.close() |
---|
1601 | return datav |
---|
1602 | |
---|
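The readv method above takes a list of (offset, length) pairs and returns the corresponding data blocks in request order. A self-contained sketch of the same semantics over any reader callable (names here are illustrative):

    def readv_sketch(read_share_data, vectors):
        # Same semantics as MutableDiskShare.readv() above, expressed over
        # any read_share_data(offset, length) callable.
        return [read_share_data(offset, length) for (offset, length) in vectors]

    # Example against an in-memory stand-in for share data:
    data = "x" * 200
    datav = readv_sketch(lambda offset, length: data[offset:offset+length],
                         [(0, 32), (100, 8)])
    assert [len(d) for d in datav] == [32, 8]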
1603 | hunk ./src/allmydata/storage/backends/disk/mutable.py 377 |
---|
1604 | -# def remote_get_length(self): |
---|
1605 | -# f = open(self.home, 'rb') |
---|
1606 | -# data_length = self._read_data_length(f) |
---|
1607 | -# f.close() |
---|
1608 | -# return data_length |
---|
1609 | + def get_size(self): |
---|
1610 | + return self._home.getsize() |
---|
1611 | + |
---|
1612 | + def get_data_length(self): |
---|
1613 | + f = self._home.open('rb') |
---|
1614 | + try: |
---|
1615 | + data_length = self._read_data_length(f) |
---|
1616 | + finally: |
---|
1617 | + f.close() |
---|
1618 | + return data_length |
---|
1619 | |
---|
1620 | def check_write_enabler(self, write_enabler, si_s): |
---|
1621 | hunk ./src/allmydata/storage/backends/disk/mutable.py 389 |
---|
1622 | - f = open(self.home, 'rb+') |
---|
1623 | - (real_write_enabler, write_enabler_nodeid) = \ |
---|
1624 | - self._read_write_enabler_and_nodeid(f) |
---|
1625 | - f.close() |
---|
1626 | + f = self._home.open('rb+') |
---|
1627 | + try: |
---|
1628 | + (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f) |
---|
1629 | + finally: |
---|
1630 | + f.close() |
---|
1631 | # avoid a timing attack |
---|
1632 | #if write_enabler != real_write_enabler: |
---|
1633 | if not constant_time_compare(write_enabler, real_write_enabler): |
---|
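check_write_enabler relies on constant_time_compare (from allmydata.util.hashutil) so that the comparison takes the same time whether the guessed write enabler diverges early or late. Purely as an illustration of the idea, not the project's implementation, such a comparison can be written like this:

    def constant_time_compare_sketch(a, b):
        # Accumulate differences with XOR rather than returning at the
        # first mismatch, so timing reveals nothing about where the
        # strings differ. (The length check still leaks length.)
        if len(a) != len(b):
            return False
        result = 0
        for (x, y) in zip(a, b):
            result |= ord(x) ^ ord(y)
        return result == 0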
1634 | hunk ./src/allmydata/storage/backends/disk/mutable.py 410 |
---|
1635 | |
---|
1636 | def check_testv(self, testv): |
---|
1637 | test_good = True |
---|
1638 | - f = open(self.home, 'rb+') |
---|
1639 | - for (offset, length, operator, specimen) in testv: |
---|
1640 | - data = self._read_share_data(f, offset, length) |
---|
1641 | - if not testv_compare(data, operator, specimen): |
---|
1642 | - test_good = False |
---|
1643 | - break |
---|
1644 | - f.close() |
---|
1645 | + f = self._home.open('rb+') |
---|
1646 | + try: |
---|
1647 | + for (offset, length, operator, specimen) in testv: |
---|
1648 | + data = self._read_share_data(f, offset, length) |
---|
1649 | + if not testv_compare(data, operator, specimen): |
---|
1650 | + test_good = False |
---|
1651 | + break |
---|
1652 | + finally: |
---|
1653 | + f.close() |
---|
1654 | return test_good |
---|
1655 | |
---|
1656 | def writev(self, datav, new_length): |
---|
1657 | hunk ./src/allmydata/storage/backends/disk/mutable.py 422 |
---|
1658 | - f = open(self.home, 'rb+') |
---|
1659 | - for (offset, data) in datav: |
---|
1660 | - self._write_share_data(f, offset, data) |
---|
1661 | - if new_length is not None: |
---|
1662 | - cur_length = self._read_data_length(f) |
---|
1663 | - if new_length < cur_length: |
---|
1664 | - self._write_data_length(f, new_length) |
---|
1665 | - # TODO: if we're going to shrink the share file when the |
---|
1666 | - # share data has shrunk, then call |
---|
1667 | - # self._change_container_size() here. |
---|
1668 | - f.close() |
---|
1669 | - |
---|
1670 | -def testv_compare(a, op, b): |
---|
1671 | - assert op in ("lt", "le", "eq", "ne", "ge", "gt") |
---|
1672 | - if op == "lt": |
---|
1673 | - return a < b |
---|
1674 | - if op == "le": |
---|
1675 | - return a <= b |
---|
1676 | - if op == "eq": |
---|
1677 | - return a == b |
---|
1678 | - if op == "ne": |
---|
1679 | - return a != b |
---|
1680 | - if op == "ge": |
---|
1681 | - return a >= b |
---|
1682 | - if op == "gt": |
---|
1683 | - return a > b |
---|
1684 | - # never reached |
---|
1685 | + f = self._home.open('rb+') |
---|
1686 | + try: |
---|
1687 | + for (offset, data) in datav: |
---|
1688 | + self._write_share_data(f, offset, data) |
---|
1689 | + if new_length is not None: |
---|
1690 | + cur_length = self._read_data_length(f) |
---|
1691 | + if new_length < cur_length: |
---|
1692 | + self._write_data_length(f, new_length) |
---|
1693 | + # TODO: if we're going to shrink the share file when the |
---|
1694 | + # share data has shrunk, then call |
---|
1695 | + # self._change_container_size() here. |
---|
1696 | + finally: |
---|
1697 | + f.close() |
---|
1698 | |
---|
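The testv_compare helper removed here now lives in allmydata/storage/backends/base.py (imported near the top of this file). Its test-vector semantics, sketched with illustrative values: each entry is (offset, length, operator, specimen), and a conditional write proceeds only if every comparison holds.

    from allmydata.storage.backends.base import testv_compare

    testv = [(0, 5, "eq", "hello"),   # bytes 0..4 must equal "hello"
             (5, 1, "ne", "\x00")]    # byte 5 must not be NUL

    def passes(read_share_data, testv):
        # Mirrors check_testv() above over any reader callable.
        return all(testv_compare(read_share_data(offset, length), operator, specimen)
                   for (offset, length, operator, specimen) in testv)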
1699 | hunk ./src/allmydata/storage/backends/disk/mutable.py 436 |
---|
1700 | -class EmptyShare: |
---|
1701 | + def close(self): |
---|
1702 | + pass |
---|
1703 | |
---|
1704 | hunk ./src/allmydata/storage/backends/disk/mutable.py 439 |
---|
1705 | - def check_testv(self, testv): |
---|
1706 | - test_good = True |
---|
1707 | - for (offset, length, operator, specimen) in testv: |
---|
1708 | - data = "" |
---|
1709 | - if not testv_compare(data, operator, specimen): |
---|
1710 | - test_good = False |
---|
1711 | - break |
---|
1712 | - return test_good |
---|
1713 | |
---|
1714 | hunk ./src/allmydata/storage/backends/disk/mutable.py 440 |
---|
1715 | -def create_mutable_sharefile(filename, my_nodeid, write_enabler, parent): |
---|
1716 | - ms = MutableShareFile(filename, parent) |
---|
1717 | - ms.create(my_nodeid, write_enabler) |
---|
1718 | +def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent): |
---|
1719 | + ms = MutableDiskShare(storageindex, shnum, fp, parent) |
---|
1720 | + ms.create(serverid, write_enabler) |
---|
1721 | del ms |
---|
1722 | hunk ./src/allmydata/storage/backends/disk/mutable.py 444 |
---|
1723 | - return MutableShareFile(filename, parent) |
---|
1724 | - |
---|
1725 | + return MutableDiskShare(storageindex, shnum, fp, parent) |
---|
1726 | addfile ./src/allmydata/storage/backends/null/__init__.py |
---|
1727 | addfile ./src/allmydata/storage/backends/null/null_backend.py |
---|
1728 | hunk ./src/allmydata/storage/backends/null/null_backend.py 2 |
---|
1729 | |
---|
1730 | +import os, struct |
---|
1731 | + |
---|
1732 | +from zope.interface import implements |
---|
1733 | + |
---|
1734 | +from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare |
---|
1735 | +from allmydata.util.assertutil import precondition |
---|
1736 | +from allmydata.util.hashutil import constant_time_compare |
---|
1737 | +from allmydata.storage.backends.base import Backend, ShareSet |
---|
1738 | +from allmydata.storage.bucket import BucketWriter |
---|
1739 | +from allmydata.storage.common import si_b2a |
---|
1740 | +from allmydata.storage.lease import LeaseInfo |
---|
1741 | + |
---|
1742 | + |
---|
1743 | +class NullBackend(Backend): |
---|
1744 | + implements(IStorageBackend) |
---|
1745 | + |
---|
1746 | + def __init__(self): |
---|
1747 | + Backend.__init__(self) |
---|
1748 | + |
---|
1749 | + def get_available_space(self, reserved_space): |
---|
1750 | + return None |
---|
1751 | + |
---|
1752 | + def get_sharesets_for_prefix(self, prefix): |
---|
1753 | + pass |
---|
1754 | + |
---|
1755 | + def get_shareset(self, storageindex): |
---|
1756 | + return NullShareSet(storageindex) |
---|
1757 | + |
---|
1758 | + def fill_in_space_stats(self, stats): |
---|
1759 | + pass |
---|
1760 | + |
---|
1761 | + def set_storage_server(self, ss): |
---|
1762 | + self.ss = ss |
---|
1763 | + |
---|
1764 | + def advise_corrupt_share(self, sharetype, storageindex, shnum, reason): |
---|
1765 | + pass |
---|
1766 | + |
---|
1767 | + |
---|
1768 | +class NullShareSet(ShareSet): |
---|
1769 | + implements(IShareSet) |
---|
1770 | + |
---|
1771 | + def __init__(self, storageindex): |
---|
1772 | + self.storageindex = storageindex |
---|
1773 | + |
---|
1774 | + def get_overhead(self): |
---|
1775 | + return 0 |
---|
1776 | + |
---|
1777 | + def get_incoming_shnums(self): |
---|
1778 | + return frozenset() |
---|
1779 | + |
---|
1780 | + def get_shares(self): |
---|
1781 | + pass |
---|
1782 | + |
---|
1783 | + def get_share(self, shnum): |
---|
1784 | + return None |
---|
1785 | + |
---|
1786 | + def get_storage_index(self): |
---|
1787 | + return self.storageindex |
---|
1788 | + |
---|
1789 | + def get_storage_index_string(self): |
---|
1790 | + return si_b2a(self.storageindex) |
---|
1791 | + |
---|
1792 | + def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary): |
---|
1793 | + immutableshare = ImmutableNullShare() |
---|
1794 | + return BucketWriter(storageserver, immutableshare, max_space_per_bucket, lease_info, canary) # self.ss is never set on NullShareSet |
---|
1795 | + |
---|
1796 | + def _create_mutable_share(self, storageserver, shnum, write_enabler): |
---|
1797 | + return MutableNullShare() |
---|
1798 | + |
---|
1799 | + def _clean_up_after_unlink(self): |
---|
1800 | + pass |
---|
1801 | + |
---|
1802 | + |
---|
1803 | +class ImmutableNullShare: |
---|
1804 | + implements(IStoredShare) |
---|
1805 | + sharetype = "immutable" |
---|
1806 | + |
---|
1807 | + def __init__(self): |
---|
1808 | + """ If max_size is not None then I won't allow more than |
---|
1809 | + max_size to be written to me. If create=True then max_size |
---|
1810 | + must not be None. """ |
---|
1811 | + pass |
---|
1812 | + |
---|
1813 | + def get_shnum(self): |
---|
1814 | + return self.shnum |
---|
1815 | + |
---|
1816 | + def unlink(self): |
---|
1817 | + os.unlink(self.fname) |
---|
1818 | + |
---|
1819 | + def read_share_data(self, offset, length): |
---|
1820 | + precondition(offset >= 0) |
---|
1821 | + # Reads beyond the end of the data are truncated. Reads that start |
---|
1822 | + # beyond the end of the data return an empty string. |
---|
1823 | + seekpos = self._data_offset+offset |
---|
1824 | + fsize = os.path.getsize(self.fname) |
---|
1825 | + actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528 |
---|
1826 | + if actuallength == 0: |
---|
1827 | + return "" |
---|
1828 | + f = open(self.fname, 'rb') |
---|
1829 | + f.seek(seekpos) |
---|
1830 | + return f.read(actuallength) |
---|
1831 | + |
---|
1832 | + def write_share_data(self, offset, data): |
---|
1833 | + pass |
---|
1834 | + |
---|
1835 | + def _write_lease_record(self, f, lease_number, lease_info): |
---|
1836 | + offset = self._lease_offset + lease_number * self.LEASE_SIZE |
---|
1837 | + f.seek(offset) |
---|
1838 | + assert f.tell() == offset |
---|
1839 | + f.write(lease_info.to_immutable_data()) |
---|
1840 | + |
---|
1841 | + def _read_num_leases(self, f): |
---|
1842 | + f.seek(0x08) |
---|
1843 | + (num_leases,) = struct.unpack(">L", f.read(4)) |
---|
1844 | + return num_leases |
---|
1845 | + |
---|
1846 | + def _write_num_leases(self, f, num_leases): |
---|
1847 | + f.seek(0x08) |
---|
1848 | + f.write(struct.pack(">L", num_leases)) |
---|
1849 | + |
---|
1850 | + def _truncate_leases(self, f, num_leases): |
---|
1851 | + f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE) |
---|
1852 | + |
---|
1853 | + def get_leases(self): |
---|
1854 | + """Yields a LeaseInfo instance for all leases.""" |
---|
1855 | + f = open(self.fname, 'rb') |
---|
1856 | + (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc)) |
---|
1857 | + f.seek(self._lease_offset) |
---|
1858 | + for i in range(num_leases): |
---|
1859 | + data = f.read(self.LEASE_SIZE) |
---|
1860 | + if data: |
---|
1861 | + yield LeaseInfo().from_immutable_data(data) |
---|
1862 | + |
---|
1863 | + def add_lease(self, lease): |
---|
1864 | + pass |
---|
1865 | + |
---|
1866 | + def renew_lease(self, renew_secret, new_expire_time): |
---|
1867 | + for i,lease in enumerate(self.get_leases()): |
---|
1868 | + if constant_time_compare(lease.renew_secret, renew_secret): |
---|
1869 | + # yup. See if we need to update the owner time. |
---|
1870 | + if new_expire_time > lease.expiration_time: |
---|
1871 | + # yes |
---|
1872 | + lease.expiration_time = new_expire_time |
---|
1873 | + f = open(self.fname, 'rb+') |
---|
1874 | + self._write_lease_record(f, i, lease) |
---|
1875 | + f.close() |
---|
1876 | + return |
---|
1877 | + raise IndexError("unable to renew non-existent lease") |
---|
1878 | + |
---|
1879 | + def add_or_renew_lease(self, lease_info): |
---|
1880 | + try: |
---|
1881 | + self.renew_lease(lease_info.renew_secret, |
---|
1882 | + lease_info.expiration_time) |
---|
1883 | + except IndexError: |
---|
1884 | + self.add_lease(lease_info) |
---|
1885 | + |
---|
1886 | + |
---|
1887 | +class MutableNullShare: |
---|
1888 | + """ XXX: TODO """ |
---|
1889 | + implements(IStoredMutableShare) |
---|
1890 | + sharetype = "mutable" |
---|
1891 | + |
---|
1892 | addfile ./src/allmydata/storage/bucket.py |
---|
1893 | hunk ./src/allmydata/storage/bucket.py 1 |
---|
1894 | + |
---|
1895 | +import time |
---|
1896 | + |
---|
1897 | +from foolscap.api import Referenceable |
---|
1898 | + |
---|
1899 | +from zope.interface import implements |
---|
1900 | +from allmydata.interfaces import RIBucketWriter, RIBucketReader |
---|
1901 | +from allmydata.util import base32, log |
---|
1902 | +from allmydata.util.assertutil import precondition |
---|
1903 | + |
---|
1904 | + |
---|
1905 | +class BucketWriter(Referenceable): |
---|
1906 | + implements(RIBucketWriter) |
---|
1907 | + |
---|
1908 | + def __init__(self, ss, immutableshare, max_size, lease_info, canary): |
---|
1909 | + self.ss = ss |
---|
1910 | + self._max_size = max_size # don't allow the client to write more than this |
---|
1911 | + self._canary = canary |
---|
1912 | + self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected) |
---|
1913 | + self.closed = False |
---|
1914 | + self.throw_out_all_data = False |
---|
1915 | + self._share = immutableshare |
---|
1916 | + # also, add our lease to the file now, so that other ones can be |
---|
1917 | + # added by simultaneous uploaders |
---|
1918 | + self._share.add_lease(lease_info) |
---|
1919 | + |
---|
1920 | + def allocated_size(self): |
---|
1921 | + return self._max_size |
---|
1922 | + |
---|
1923 | + def remote_write(self, offset, data): |
---|
1924 | + start = time.time() |
---|
1925 | + precondition(not self.closed) |
---|
1926 | + if self.throw_out_all_data: |
---|
1927 | + return |
---|
1928 | + self._share.write_share_data(offset, data) |
---|
1929 | + self.ss.add_latency("write", time.time() - start) |
---|
1930 | + self.ss.count("write") |
---|
1931 | + |
---|
1932 | + def remote_close(self): |
---|
1933 | + precondition(not self.closed) |
---|
1934 | + start = time.time() |
---|
1935 | + |
---|
1936 | + self._share.close() |
---|
1937 | + filelen = self._share.stat() |
---|
1938 | + self._share = None |
---|
1939 | + |
---|
1940 | + self.closed = True |
---|
1941 | + self._canary.dontNotifyOnDisconnect(self._disconnect_marker) |
---|
1942 | + |
---|
1943 | + self.ss.bucket_writer_closed(self, filelen) |
---|
1944 | + self.ss.add_latency("close", time.time() - start) |
---|
1945 | + self.ss.count("close") |
---|
1946 | + |
---|
1947 | + def _disconnected(self): |
---|
1948 | + if not self.closed: |
---|
1949 | + self._abort() |
---|
1950 | + |
---|
1951 | + def remote_abort(self): |
---|
1952 | + log.msg("storage: aborting write to share %r" % self._share, |
---|
1953 | + facility="tahoe.storage", level=log.UNUSUAL) |
---|
1954 | + if not self.closed: |
---|
1955 | + self._canary.dontNotifyOnDisconnect(self._disconnect_marker) |
---|
1956 | + self._abort() |
---|
1957 | + self.ss.count("abort") |
---|
1958 | + |
---|
1959 | + def _abort(self): |
---|
1960 | + if self.closed: |
---|
1961 | + return |
---|
1962 | + self._share.unlink() |
---|
1963 | + self._share = None |
---|
1964 | + |
---|
1965 | + # We are now considered closed for further writing. We must tell |
---|
1966 | + # the storage server about this so that it stops expecting us to |
---|
1967 | + # use the space it allocated for us earlier. |
---|
1968 | + self.closed = True |
---|
1969 | + self.ss.bucket_writer_closed(self, 0) |
---|
1970 | + |
---|
1971 | + |
---|
1972 | +class BucketReader(Referenceable): |
---|
1973 | + implements(RIBucketReader) |
---|
1974 | + |
---|
1975 | + def __init__(self, ss, share): |
---|
1976 | + self.ss = ss |
---|
1977 | + self._share = share |
---|
1978 | + self.storageindex = share.storageindex |
---|
1979 | + self.shnum = share.shnum |
---|
1980 | + |
---|
1981 | + def __repr__(self): |
---|
1982 | + return "<%s %s %s>" % (self.__class__.__name__, |
---|
1983 | + base32.b2a_l(self.storageindex[:8], 60), |
---|
1984 | + self.shnum) |
---|
1985 | + |
---|
1986 | + def remote_read(self, offset, length): |
---|
1987 | + start = time.time() |
---|
1988 | + data = self._share.read_share_data(offset, length) |
---|
1989 | + self.ss.add_latency("read", time.time() - start) |
---|
1990 | + self.ss.count("read") |
---|
1991 | + return data |
---|
1992 | + |
---|
1993 | + def remote_advise_corrupt_share(self, reason): |
---|
1994 | + return self.ss.remote_advise_corrupt_share("immutable", |
---|
1995 | + self.storageindex, |
---|
1996 | + self.shnum, |
---|
1997 | + reason) |
---|
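A hedged sketch of the BucketWriter lifecycle defined above, driven with minimal stand-in objects (all the _Fake* classes are invented here purely to satisfy the interfaces BucketWriter touches; they are not part of the codebase):

    from allmydata.storage.bucket import BucketWriter

    class _FakeSS:
        closed_with = None
        def add_latency(self, op, elapsed): pass
        def count(self, op): pass
        def bucket_writer_closed(self, bw, consumed):
            self.closed_with = consumed

    class _FakeShare:
        def __init__(self): self.data = ""
        def add_lease(self, lease_info): pass
        def write_share_data(self, offset, data):
            self.data = self.data[:offset] + data   # no sparse writes in this sketch
        def close(self): pass
        def stat(self): return len(self.data)       # remote_close() expects a length here
        def unlink(self): pass

    class _FakeCanary:
        def notifyOnDisconnect(self, cb): return object()
        def dontNotifyOnDisconnect(self, marker): pass

    ss = _FakeSS()
    bw = BucketWriter(ss, _FakeShare(), 85, None, _FakeCanary())
    bw.remote_write(0, "a")   # client writes in offset order
    bw.remote_close()         # closes the share and reports its final length
    assert ss.closed_with == 1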
1998 | addfile ./src/allmydata/test/test_backends.py |
---|
1999 | hunk ./src/allmydata/test/test_backends.py 1 |
---|
2000 | +import os, stat |
---|
2001 | +from twisted.trial import unittest |
---|
2002 | +from allmydata.util.log import msg |
---|
2003 | +from allmydata.test.common_util import ReallyEqualMixin |
---|
2004 | +import mock |
---|
2005 | + |
---|
2006 | +# This is the code that we're going to be testing. |
---|
2007 | +from allmydata.storage.server import StorageServer |
---|
2008 | +from allmydata.storage.backends.disk.disk_backend import DiskBackend, si_si2dir |
---|
2009 | +from allmydata.storage.backends.null.null_backend import NullBackend |
---|
2010 | + |
---|
2011 | +# The following share file content was generated with |
---|
2012 | +# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2 |
---|
2013 | +# with share data == 'a'. The total size of this input |
---|
2014 | +# is 85 bytes. |
---|
2015 | +shareversionnumber = '\x00\x00\x00\x01' |
---|
2016 | +sharedatalength = '\x00\x00\x00\x01' |
---|
2017 | +numberofleases = '\x00\x00\x00\x01' |
---|
2018 | +shareinputdata = 'a' |
---|
2019 | +ownernumber = '\x00\x00\x00\x00' |
---|
2020 | +renewsecret = 'x'*32 |
---|
2021 | +cancelsecret = 'y'*32 |
---|
2022 | +expirationtime = '\x00(\xde\x80' |
---|
2023 | +nextlease = '' |
---|
2024 | +containerdata = shareversionnumber + sharedatalength + numberofleases |
---|
2025 | +client_data = shareinputdata + ownernumber + renewsecret + \ |
---|
2026 | + cancelsecret + expirationtime + nextlease |
---|
2027 | +share_data = containerdata + client_data |
---|
2028 | +testnodeid = 'testnodeidxxxxxxxxxx' |
---|
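A quick consistency check on the 85-byte figure claimed above, runnable in the context of these constants: the container header is three 4-byte fields, and the client-visible part is one data byte plus a 72-byte lease record.

    # 4-byte version + 4-byte data length + 4-byte lease count = 12
    assert len(containerdata) == 12
    # 1 data byte + 4 owner + 32 renew + 32 cancel + 4 expiration = 73
    assert len(client_data) == 73
    assert len(share_data) == 85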
2029 | + |
---|
2030 | + |
---|
2031 | +class MockFileSystem(unittest.TestCase): |
---|
2032 | + """ I simulate a filesystem that the code under test can use. I simulate |
---|
2033 | + just the parts of the filesystem that the current implementation of Disk |
---|
2034 | + backend needs. """ |
---|
2035 | + def setUp(self): |
---|
2036 | + # Make patcher, patch, and effects for disk-using functions. |
---|
2037 | + msg( "%s.setUp()" % (self,)) |
---|
2038 | + self.mockedfilepaths = {} |
---|
2039 | + # keys are pathnames, values are MockFilePath objects. This is necessary because |
---|
2040 | + # MockFilePath behavior sometimes depends on the filesystem. Where it does, |
---|
2041 | + # self.mockedfilepaths has the relevant information. |
---|
2042 | + self.storedir = MockFilePath('teststoredir', self.mockedfilepaths) |
---|
2043 | + self.basedir = self.storedir.child('shares') |
---|
2044 | + self.baseincdir = self.basedir.child('incoming') |
---|
2045 | + self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a') |
---|
2046 | + self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a') |
---|
2047 | + self.shareincomingname = self.sharedirincomingname.child('0') |
---|
2048 | + self.sharefinalname = self.sharedirfinalname.child('0') |
---|
2049 | + |
---|
2050 | + # FIXME: these patches won't work; disk_backend no longer imports FilePath, BucketCountingCrawler, |
---|
2051 | + # or LeaseCheckingCrawler. |
---|
2052 | + |
---|
2053 | + self.FilePathFake = mock.patch('allmydata.storage.backends.disk.disk_backend.FilePath', new = MockFilePath) |
---|
2054 | + self.FilePathFake.__enter__() |
---|
2055 | + |
---|
2056 | + self.BCountingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.BucketCountingCrawler') |
---|
2057 | + FakeBCC = self.BCountingCrawler.__enter__() |
---|
2058 | + FakeBCC.side_effect = self.call_FakeBCC |
---|
2059 | + |
---|
2060 | + self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.disk.disk_backend.LeaseCheckingCrawler') |
---|
2061 | + FakeLCC = self.LeaseCheckingCrawler.__enter__() |
---|
2062 | + FakeLCC.side_effect = self.call_FakeLCC |
---|
2063 | + |
---|
2064 | + self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space') |
---|
2065 | + GetSpace = self.get_available_space.__enter__() |
---|
2066 | + GetSpace.side_effect = self.call_get_available_space |
---|
2067 | + |
---|
2068 | + self.statforsize = mock.patch('allmydata.storage.backends.disk.core.filepath.stat') |
---|
2069 | + getsize = self.statforsize.__enter__() |
---|
2070 | + getsize.side_effect = self.call_statforsize |
---|
2071 | + |
---|
2072 | + def call_FakeBCC(self, StateFile): |
---|
2073 | + return MockBCC() |
---|
2074 | + |
---|
2075 | + def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy): |
---|
2076 | + return MockLCC() |
---|
2077 | + |
---|
2078 | + def call_get_available_space(self, storedir, reservedspace): |
---|
2079 | + # The input vector has an input size of 85. |
---|
2080 | + return 85 - reservedspace |
---|
2081 | + |
---|
2082 | + def call_statforsize(self, fakefpname): |
---|
2083 | + return self.mockedfilepaths[fakefpname].fileobject.size() |
---|
2084 | + |
---|
2085 | + def tearDown(self): |
---|
2086 | + msg( "%s.tearDown()" % (self,)) |
---|
2087 | + self.FilePathFake.__exit__() |
---|
2088 | + self.mockedfilepaths = {} |
---|
2089 | + |
---|
2090 | + |
---|
2091 | +class MockFilePath: |
---|
2092 | + def __init__(self, pathstring, ffpathsenvironment, existence=False): |
---|
2093 | + # I can't just make the values MockFileObjects because they may be directories. |
---|
2094 | + self.mockedfilepaths = ffpathsenvironment |
---|
2095 | + self.path = pathstring |
---|
2096 | + self.existence = existence |
---|
2097 | + if not self.mockedfilepaths.has_key(self.path): |
---|
2098 | + # The first MockFilePath object is special |
---|
2099 | + self.mockedfilepaths[self.path] = self |
---|
2100 | + self.fileobject = None |
---|
2101 | + else: |
---|
2102 | + self.fileobject = self.mockedfilepaths[self.path].fileobject |
---|
2103 | + self.spawn = {} |
---|
2104 | + self.antecedent = os.path.dirname(self.path) |
---|
2105 | + |
---|
2106 | + def setContent(self, contentstring): |
---|
2107 | + # This method rewrites the data in the file that corresponds to its path |
---|
2108 | + # name whether it preexisted or not. |
---|
2109 | + self.fileobject = MockFileObject(contentstring) |
---|
2110 | + self.existence = True |
---|
2111 | + self.mockedfilepaths[self.path].fileobject = self.fileobject |
---|
2112 | + self.mockedfilepaths[self.path].existence = self.existence |
---|
2113 | + self.setparents() |
---|
2114 | + |
---|
2115 | + def create(self): |
---|
2116 | + # This method chokes if there's a pre-existing file! |
---|
2117 | + if self.mockedfilepaths[self.path].fileobject: |
---|
2118 | + raise OSError |
---|
2119 | + else: |
---|
2120 | + self.existence = True |
---|
2121 | + self.mockedfilepaths[self.path].fileobject = self.fileobject |
---|
2122 | + self.mockedfilepaths[self.path].existence = self.existence |
---|
2123 | + self.setparents() |
---|
2124 | + |
---|
2125 | + def open(self, mode='r'): |
---|
2126 | + # XXX Makes no use of mode. |
---|
2127 | + if not self.mockedfilepaths[self.path].fileobject: |
---|
2128 | + # If there's no fileobject there already then make one and put it there. |
---|
2129 | + self.fileobject = MockFileObject() |
---|
2130 | + self.existence = True |
---|
2131 | + self.mockedfilepaths[self.path].fileobject = self.fileobject |
---|
2132 | + self.mockedfilepaths[self.path].existence = self.existence |
---|
2133 | + else: |
---|
2134 | + # Otherwise get a ref to it. |
---|
2135 | + self.fileobject = self.mockedfilepaths[self.path].fileobject |
---|
2136 | + self.existence = self.mockedfilepaths[self.path].existence |
---|
2137 | + return self.fileobject.open(mode) |
---|
2138 | + |
---|
2139 | + def child(self, childstring): |
---|
2140 | + arg2child = os.path.join(self.path, childstring) |
---|
2141 | + child = MockFilePath(arg2child, self.mockedfilepaths) |
---|
2142 | + return child |
---|
2143 | + |
---|
2144 | + def children(self): |
---|
2145 | + childrenfromffs = [ffp for ffp in self.mockedfilepaths.values() if ffp.path.startswith(self.path)] |
---|
2146 | + childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)] |
---|
2147 | + childrenfromffs = [ffp for ffp in childrenfromffs if ffp.exists()] |
---|
2148 | + self.spawn = frozenset(childrenfromffs) |
---|
2149 | + return self.spawn |
---|
2150 | + |
---|
2151 | + def parent(self): |
---|
2152 | + if self.mockedfilepaths.has_key(self.antecedent): |
---|
2153 | + parent = self.mockedfilepaths[self.antecedent] |
---|
2154 | + else: |
---|
2155 | + parent = MockFilePath(self.antecedent, self.mockedfilepaths) |
---|
2156 | + return parent |
---|
2157 | + |
---|
2158 | + def parents(self): |
---|
2159 | + antecedents = [] |
---|
2160 | + def f(fps, antecedents): |
---|
2161 | + newfps = os.path.split(fps)[0] |
---|
2162 | + if newfps: |
---|
2163 | + antecedents.append(newfps) |
---|
2164 | + f(newfps, antecedents) |
---|
2165 | + f(self.path, antecedents) |
---|
2166 | + return antecedents |
---|
2167 | + |
---|
2168 | + def setparents(self): |
---|
2169 | + for fps in self.parents(): |
---|
2170 | + if not self.mockedfilepaths.has_key(fps): |
---|
2171 | + self.mockedfilepaths[fps] = MockFilePath(fps, self.mockedfilepaths, existence=True) |
---|
2172 | + |
---|
2173 | + def basename(self): |
---|
2174 | + return os.path.split(self.path)[1] |
---|
2175 | + |
---|
2176 | + def moveTo(self, newffp): |
---|
2177 | + # XXX Makes no distinction between file and directory arguments, this is deviation from filepath.moveTo |
---|
2178 | + if self.mockedfilepaths[newffp.path].exists(): |
---|
2179 | + raise OSError |
---|
2180 | + else: |
---|
2181 | + self.mockedfilepaths[newffp.path] = self |
---|
2182 | + self.path = newffp.path |
---|
2183 | + |
---|
2184 | + def getsize(self): |
---|
2185 | + return self.fileobject.getsize() |
---|
2186 | + |
---|
2187 | + def exists(self): |
---|
2188 | + return self.existence |
---|
2189 | + |
---|
2190 | + def isdir(self): |
---|
2191 | + return True |
---|
2192 | + |
---|
2193 | + def makedirs(self): |
---|
2194 | + # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere! |
---|
2195 | + pass |
---|
2196 | + |
---|
2197 | + def remove(self): |
---|
2198 | + pass |
---|
2199 | + |
---|
2200 | + |
---|
2201 | +class MockFileObject: |
---|
2202 | + def __init__(self, contentstring=''): |
---|
2203 | + self.buffer = contentstring |
---|
2204 | + self.pos = 0 |
---|
2205 | + def open(self, mode='r'): |
---|
2206 | + return self |
---|
2207 | + def write(self, instring): |
---|
2208 | + begin = self.pos |
---|
2209 | + padlen = begin - len(self.buffer) |
---|
2210 | + if padlen > 0: |
---|
2211 | + self.buffer += '\x00' * padlen |
---|
2212 | + end = self.pos + len(instring) |
---|
2213 | + self.buffer = self.buffer[:begin]+instring+self.buffer[end:] |
---|
2214 | + self.pos = end |
---|
2215 | + def close(self): |
---|
2216 | + self.pos = 0 |
---|
2217 | + def seek(self, pos): |
---|
2218 | + self.pos = pos |
---|
2219 | + def read(self, numberbytes): |
---|
2220 | + return self.buffer[self.pos:self.pos+numberbytes] |
---|
2221 | + def tell(self): |
---|
2222 | + return self.pos |
---|
2223 | + def size(self): |
---|
2224 | + # XXX This method A: Is not to be found in a real file B: Is part of a wild-mung-up of filepath.stat! |
---|
2225 | + # XXX Finally we shall hopefully use a getsize method soon, must consult first though. |
---|
2226 | + # Hmmm... perhaps we need to sometimes stat the address when there's not a mockfileobject present? |
---|
2227 | + return {stat.ST_SIZE:len(self.buffer)} |
---|
2228 | + def getsize(self): |
---|
2229 | + return len(self.buffer) |
---|
2230 | + |
---|
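MockFileObject.write() pads with NUL bytes whenever the write position lies beyond the current end of the buffer, imitating a sparse file. A small worked example of that behavior:

    mfo = MockFileObject()
    mfo.seek(3)
    mfo.write('ab')           # pads offsets 0..2 with NULs, then appends
    assert mfo.buffer == '\x00\x00\x00ab'
    assert mfo.getsize() == 5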
2231 | +class MockBCC: |
---|
2232 | + def setServiceParent(self, Parent): |
---|
2233 | + pass |
---|
2234 | + |
---|
2235 | + |
---|
2236 | +class MockLCC: |
---|
2237 | + def setServiceParent(self, Parent): |
---|
2238 | + pass |
---|
2239 | + |
---|
2240 | + |
---|
2241 | +class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin): |
---|
2242 | + """ NullBackend is just for testing and executable documentation, so |
---|
2243 | + this test is actually a test of StorageServer in which we're using |
---|
2244 | + NullBackend as helper code for the test, rather than a test of |
---|
2245 | + NullBackend. """ |
---|
2246 | + def setUp(self): |
---|
2247 | + self.ss = StorageServer(testnodeid, NullBackend()) |
---|
2248 | + |
---|
2249 | + @mock.patch('os.mkdir') |
---|
2250 | + @mock.patch('__builtin__.open') |
---|
2251 | + @mock.patch('os.listdir') |
---|
2252 | + @mock.patch('os.path.isdir') |
---|
2253 | + def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir): |
---|
2254 | + """ |
---|
2255 | + Write a new share. This tests that StorageServer's remote_allocate_buckets |
---|
2256 | + generates the correct return types when given test-vector arguments. That |
---|
2257 | + bs is of the correct type is verified by attempting to invoke remote_write |
---|
2258 | + on bs[0]. |
---|
2259 | + """ |
---|
2260 | + alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock()) |
---|
2261 | + bs[0].remote_write(0, 'a') |
---|
2262 | + self.failIf(mockisdir.called) |
---|
2263 | + self.failIf(mocklistdir.called) |
---|
2264 | + self.failIf(mockopen.called) |
---|
2265 | + self.failIf(mockmkdir.called) |
---|
2266 | + |
---|
2267 | + |
---|
2268 | +class TestServerConstruction(MockFileSystem, ReallyEqualMixin): |
---|
2269 | + def test_create_server_disk_backend(self): |
---|
2270 | + """ This tests whether a server instance can be constructed with a |
---|
2271 | + filesystem backend. To pass the test, it mustn't use the filesystem |
---|
2272 | + outside of its configured storedir. """ |
---|
2273 | + StorageServer(testnodeid, DiskBackend(self.storedir)) |
---|
2274 | + |
---|
2275 | + |
---|
2276 | +class TestServerAndDiskBackend(MockFileSystem, ReallyEqualMixin): |
---|
2277 | + """ This tests both the StorageServer and the Disk backend together. """ |
---|
2278 | + def setUp(self): |
---|
2279 | + MockFileSystem.setUp(self) |
---|
2280 | + try: |
---|
2281 | + self.backend = DiskBackend(self.storedir) |
---|
2282 | + self.ss = StorageServer(testnodeid, self.backend) |
---|
2283 | + |
---|
2284 | + self.backendwithreserve = DiskBackend(self.storedir, reserved_space = 1) |
---|
2285 | + self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve) |
---|
2286 | + except: |
---|
2287 | + MockFileSystem.tearDown(self) |
---|
2288 | + raise |
---|
2289 | + |
---|
2290 | + @mock.patch('time.time') |
---|
2291 | + @mock.patch('allmydata.util.fileutil.get_available_space') |
---|
2292 | + def test_out_of_space(self, mockget_available_space, mocktime): |
---|
2293 | + mocktime.return_value = 0 |
---|
2294 | + |
---|
2295 | + def call_get_available_space(dir, reserve): |
---|
2296 | + return 0 |
---|
2297 | + |
---|
2298 | + mockget_available_space.side_effect = call_get_available_space |
---|
2299 | + alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock()) |
---|
2300 | + self.failUnlessReallyEqual(bsc, {}) |
---|
2301 | + |
---|
2302 | + @mock.patch('time.time') |
---|
2303 | + def test_write_and_read_share(self, mocktime): |
---|
2304 | + """ |
---|
2305 | + Write a new share, read it, and test the server's (and disk backend's) |
---|
2306 | + handling of simultaneous and successive attempts to write the same |
---|
2307 | + share. |
---|
2308 | + """ |
---|
2309 | + mocktime.return_value = 0 |
---|
2310 | + # Inspect incoming and fail unless it's empty. |
---|
2311 | + incomingset = self.ss.backend.get_incoming_shnums('teststorage_index') |
---|
2312 | + |
---|
2313 | + self.failUnlessReallyEqual(incomingset, frozenset()) |
---|
2314 | + |
---|
2315 | + # Populate incoming with the sharenum: 0. |
---|
2316 | + alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock()) |
---|
2317 | + |
---|
2318 | + # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there. |
---|
2319 | + self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,))) |
---|
2320 | + |
---|
2321 | + |
---|
2322 | + |
---|
2323 | + # Attempt to create a second share writer with the same sharenum. |
---|
2324 | + alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock()) |
---|
2325 | + |
---|
2326 | + # Show that no sharewriter results from a remote_allocate_buckets |
---|
2327 | + # with the same si and sharenum, until BucketWriter.remote_close() |
---|
2328 | + # has been called. |
---|
2329 | + self.failIf(bsa) |
---|
2330 | + |
---|
2331 | + # Test allocated size. |
---|
2332 | + spaceint = self.ss.allocated_size() |
---|
2333 | + self.failUnlessReallyEqual(spaceint, 1) |
---|
2334 | + |
---|
2335 | + # Write 'a' to shnum 0. Only tested together with close and read. |
---|
2336 | + bs[0].remote_write(0, 'a') |
---|
2337 | + |
---|
2338 | + # Preclose: Inspect final, failUnless nothing there. |
---|
2339 | + self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0) |
---|
2340 | + bs[0].remote_close() |
---|
2341 | + |
---|
2342 | + # Postclose: (Omnibus) failUnless written data is in final. |
---|
2343 | + sharesinfinal = list(self.backend.get_shares('teststorage_index')) |
---|
2344 | + self.failUnlessReallyEqual(len(sharesinfinal), 1) |
---|
2345 | + contents = sharesinfinal[0].read_share_data(0, 73) |
---|
2346 | + self.failUnlessReallyEqual(contents, client_data) |
---|
2347 | + |
---|
2348 | + # Exercise the case that the share we're asking to allocate is |
---|
2349 | + # already (completely) uploaded. |
---|
2350 | + self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock()) |
---|
2351 | + |
---|
2352 | + |
---|
2353 | + def test_read_old_share(self): |
---|
2354 | + """ This tests whether the code correctly finds and reads |
---|
2355 | + shares written out by old (Tahoe-LAFS <= v1.8.2) |
---|
2356 | + servers. There is a similar test in test_download, but that one |
---|
2357 | + is from the perspective of the client and exercises a deeper |
---|
2358 | + stack of code. This one is for exercising just the |
---|
2359 | + StorageServer object. """ |
---|
2360 | + # Construct a file with the appropriate contents in the mockfilesystem. |
---|
2361 | + datalen = len(share_data) |
---|
2362 | + finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(0)) |
---|
2363 | + finalhome.setContent(share_data) |
---|
2364 | + |
---|
2365 | + # Now begin the test. |
---|
2366 | + bs = self.ss.remote_get_buckets('teststorage_index') |
---|
2367 | + |
---|
2368 | + self.failUnlessEqual(len(bs), 1) |
---|
2369 | + b = bs['0'] |
---|
2370 | + # These should match by definition; the next two cases exercise behaviors that are not completely unambiguous. |
---|
2371 | + self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data) |
---|
2372 | + # If you try to read past the end, you get as much data as is there. |
---|
2373 | + self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data) |
---|
2374 | + # If you start reading past the end of the file, you get the empty string. |
---|
2375 | + self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '') |
---|
2376 | } |
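
A condensed sketch of the lifecycle these tests exercise, mirroring the allocate/write/close/read sequence above. It assumes the same DiskBackend/StorageServer constructors and remote_* methods used in the tests; 'storedir' and 'testnodeid' are as created in setUp(), and the storage index and 'x'*32/'y'*32 secrets are placeholders, as in the tests:

    import mock

    backend = DiskBackend(storedir)
    server = StorageServer(testnodeid, backend)

    si = 'teststorage_index'
    alreadygot, writers = server.remote_allocate_buckets(
        si, 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
    writers[0].remote_write(0, 'a')   # write into the incoming share
    writers[0].remote_close()         # move the share from incoming to final

    readers = server.remote_get_buckets(si)
    bucket = list(readers.values())[0]
    assert bucket.remote_read(0, 1) == 'a'
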
---|
2377 | [Pluggable backends -- all other changes. refs #999 |
---|
2378 | david-sarah@jacaranda.org**20110919233256 |
---|
2379 | Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957 |
---|
2380 | ] { |
---|
2381 | hunk ./src/allmydata/client.py 245 |
---|
2382 | sharetypes.append("immutable") |
---|
2383 | if self.get_config("storage", "expire.mutable", True, boolean=True): |
---|
2384 | sharetypes.append("mutable") |
---|
2385 | - expiration_sharetypes = tuple(sharetypes) |
---|
2386 | |
---|
2387 | hunk ./src/allmydata/client.py 246 |
---|
2388 | + expiration_policy = { |
---|
2389 | + 'enabled': expire, |
---|
2390 | + 'mode': mode, |
---|
2391 | + 'override_lease_duration': o_l_d, |
---|
2392 | + 'cutoff_date': cutoff_date, |
---|
2393 | + 'sharetypes': tuple(sharetypes), |
---|
2394 | + } |
---|
2395 | ss = StorageServer(storedir, self.nodeid, |
---|
2396 | reserved_space=reserved, |
---|
2397 | discard_storage=discard, |
---|
2398 | hunk ./src/allmydata/client.py 258 |
---|
2399 | readonly_storage=readonly, |
---|
2400 | stats_provider=self.stats_provider, |
---|
2401 | - expiration_enabled=expire, |
---|
2402 | - expiration_mode=mode, |
---|
2403 | - expiration_override_lease_duration=o_l_d, |
---|
2404 | - expiration_cutoff_date=cutoff_date, |
---|
2405 | - expiration_sharetypes=expiration_sharetypes) |
---|
2406 | + expiration_policy=expiration_policy) |
---|
2407 | self.add_service(ss) |
---|
2408 | |
---|
2409 | d = self.when_tub_ready() |
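
The client.py change above collapses the five expiration_* keyword arguments into a single expiration_policy dict. A sketch of the resulting call shape (key names are taken from the diff; the comments are illustrative, and the other keyword arguments are unchanged):

    expiration_policy = {
        'enabled': expire,                 # [storage]expire.enabled
        'mode': mode,                      # e.g. 'age' or 'cutoff-date'
        'override_lease_duration': o_l_d,
        'cutoff_date': cutoff_date,        # only used in cutoff-date mode
        'sharetypes': tuple(sharetypes),   # subset of ('immutable', 'mutable')
    }
    ss = StorageServer(storedir, self.nodeid,
                       expiration_policy=expiration_policy)
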
---|
2410 | hunk ./src/allmydata/immutable/offloaded.py 306 |
---|
2411 | if os.path.exists(self._encoding_file): |
---|
2412 | self.log("ciphertext already present, bypassing fetch", |
---|
2413 | level=log.UNUSUAL) |
---|
2414 | + # XXX the following comment is probably stale, since |
---|
2415 | + # LocalCiphertextReader.get_plaintext_hashtree_leaves does not exist. |
---|
2416 | + # |
---|
2417 | # we'll still need the plaintext hashes (when |
---|
2418 | # LocalCiphertextReader.get_plaintext_hashtree_leaves() is |
---|
2419 | # called), and currently the easiest way to get them is to ask |
---|
2420 | hunk ./src/allmydata/immutable/upload.py 765 |
---|
2421 | self._status.set_progress(1, progress) |
---|
2422 | return cryptdata |
---|
2423 | |
---|
2424 | - |
---|
2425 | def get_plaintext_hashtree_leaves(self, first, last, num_segments): |
---|
2426 | hunk ./src/allmydata/immutable/upload.py 766 |
---|
2427 | + """OBSOLETE; Get the leaf nodes of a merkle hash tree over the |
---|
2428 | + plaintext segments, i.e. get the tagged hashes of the given segments. |
---|
2429 | + The segment size is expected to be generated by the |
---|
2430 | + IEncryptedUploadable before any plaintext is read or ciphertext |
---|
2431 | + produced, so that the segment hashes can be generated with only a |
---|
2432 | + single pass. |
---|
2433 | + |
---|
2434 | + This returns a Deferred that fires with a sequence of hashes, using: |
---|
2435 | + |
---|
2436 | + tuple(segment_hashes[first:last]) |
---|
2437 | + |
---|
2438 | + 'num_segments' is used to assert that the number of segments that the |
---|
2439 | + IEncryptedUploadable handled matches the number of segments that the |
---|
2440 | + encoder was expecting. |
---|
2441 | + |
---|
2442 | + This method must not be called until the final byte has been read |
---|
2443 | + from read_encrypted(). Once this method is called, read_encrypted() |
---|
2444 | + can never be called again. |
---|
2445 | + """ |
---|
2446 | # this is currently unused, but will live again when we fix #453 |
---|
2447 | if len(self._plaintext_segment_hashes) < num_segments: |
---|
2448 | # close out the last one |
---|
2449 | hunk ./src/allmydata/immutable/upload.py 803 |
---|
2450 | return defer.succeed(tuple(self._plaintext_segment_hashes[first:last])) |
---|
2451 | |
---|
2452 | def get_plaintext_hash(self): |
---|
2453 | + """OBSOLETE; Get the hash of the whole plaintext. |
---|
2454 | + |
---|
2455 | + This returns a Deferred that fires with a tagged SHA-256 hash of the |
---|
2456 | + whole plaintext, obtained from hashutil.plaintext_hash(data). |
---|
2457 | + """ |
---|
2458 | + # this is currently unused, but will live again when we fix #453 |
---|
2459 | h = self._plaintext_hasher.digest() |
---|
2460 | return defer.succeed(h) |
---|
2461 | |
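
As the docstring notes, the heavy lifting happened earlier, while the ciphertext was being read; the method itself reduces to a check and a slice. A toy restatement of that contract, independent of the uploadable class (it omits the "close out the last hash" step in the real code):

    from twisted.internet import defer

    def get_leaves(segment_hashes, first, last, num_segments):
        # All per-segment hashes must already have been computed.
        assert len(segment_hashes) == num_segments
        return defer.succeed(tuple(segment_hashes[first:last]))
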
---|
2462 | hunk ./src/allmydata/interfaces.py 29 |
---|
2463 | Number = IntegerConstraint(8) # 2**(8*8) == 16EiB ~= 18e18 ~= 18 exabytes |
---|
2464 | Offset = Number |
---|
2465 | ReadSize = int # the 'int' constraint is 2**31 == 2GiB -- large files are processed in not-so-large increments |
---|
2466 | -WriteEnablerSecret = Hash # used to protect mutable bucket modifications |
---|
2467 | -LeaseRenewSecret = Hash # used to protect bucket lease renewal requests |
---|
2468 | -LeaseCancelSecret = Hash # used to protect bucket lease cancellation requests |
---|
2469 | +WriteEnablerSecret = Hash # used to protect mutable share modifications |
---|
2470 | +LeaseRenewSecret = Hash # used to protect lease renewal requests |
---|
2471 | +LeaseCancelSecret = Hash # used to protect lease cancellation requests |
---|
2472 | |
---|
2473 | class RIStubClient(RemoteInterface): |
---|
2474 | """Each client publishes a service announcement for a dummy object called |
---|
2475 | hunk ./src/allmydata/interfaces.py 106 |
---|
2476 | sharenums=SetOf(int, maxLength=MAX_BUCKETS), |
---|
2477 | allocated_size=Offset, canary=Referenceable): |
---|
2478 | """ |
---|
2479 | - @param storage_index: the index of the bucket to be created or |
---|
2480 | + @param storage_index: the index of the shareset to be created or |
---|
2481 | increfed. |
---|
2482 | @param sharenums: these are the share numbers (probably between 0 and |
---|
2483 | 99) that the sender is proposing to store on this |
---|
2484 | hunk ./src/allmydata/interfaces.py 111 |
---|
2485 | server. |
---|
2486 | - @param renew_secret: This is the secret used to protect bucket refresh |
---|
2487 | + @param renew_secret: This is the secret used to protect lease renewal. |
---|
2488 | This secret is generated by the client and |
---|
2489 | stored for later comparison by the server. Each |
---|
2490 | server is given a different secret. |
---|
2491 | hunk ./src/allmydata/interfaces.py 115 |
---|
2492 | - @param cancel_secret: Like renew_secret, but protects bucket decref. |
---|
2493 | - @param canary: If the canary is lost before close(), the bucket is |
---|
2494 | + @param cancel_secret: ignored |
---|
2495 | + @param canary: If the canary is lost before close(), the allocation is |
---|
2496 | deleted. |
---|
2497 | @return: tuple of (alreadygot, allocated), where alreadygot is what we |
---|
2498 | already have and allocated is what we hereby agree to accept. |
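
A sketch of the return contract just described (the variable names are hypothetical):

    alreadygot, bucketwriters = server.remote_allocate_buckets(
        storage_index, renew_secret, cancel_secret,
        set([0, 1, 2]), allocated_size, canary)
    # 'alreadygot' is the set of share numbers this server already holds
    # for the storage index; 'bucketwriters' maps each newly accepted
    # share number to an RIBucketWriter.
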
---|
2499 | hunk ./src/allmydata/interfaces.py 129 |
---|
2500 | renew_secret=LeaseRenewSecret, |
---|
2501 | cancel_secret=LeaseCancelSecret): |
---|
2502 | """ |
---|
2503 | - Add a new lease on the given bucket. If the renew_secret matches an |
---|
2504 | + Add a new lease on the given shareset. If the renew_secret matches an |
---|
2505 | existing lease, that lease will be renewed instead. If there is no |
---|
2506 | hunk ./src/allmydata/interfaces.py 131 |
---|
2507 | - bucket for the given storage_index, return silently. (note that in |
---|
2508 | + shareset for the given storage_index, return silently. (Note that in |
---|
2509 | tahoe-1.3.0 and earlier, IndexError was raised if there was no |
---|
2510 | hunk ./src/allmydata/interfaces.py 133 |
---|
2511 | - bucket) |
---|
2512 | + shareset.) |
---|
2513 | """ |
---|
2514 | return Any() # returns None now, but future versions might change |
---|
2515 | |
---|
2516 | hunk ./src/allmydata/interfaces.py 139 |
---|
2517 | def renew_lease(storage_index=StorageIndex, renew_secret=LeaseRenewSecret): |
---|
2518 | """ |
---|
2519 | - Renew the lease on a given bucket, resetting the timer to 31 days. |
---|
2520 | - Some networks will use this, some will not. If there is no bucket for |
---|
2521 | + Renew the lease on a given shareset, resetting the timer to 31 days. |
---|
2522 | + Some networks will use this and some will not. If there is no shareset for |
---|
2523 | the given storage_index, IndexError will be raised. |
---|
2524 | |
---|
2525 | For mutable shares, if the given renew_secret does not match an |
---|
2526 | hunk ./src/allmydata/interfaces.py 146 |
---|
2527 | existing lease, IndexError will be raised with a note listing the |
---|
2528 | server-nodeids on the existing leases, so leases on migrated shares |
---|
2529 | - can be renewed or cancelled. For immutable shares, IndexError |
---|
2530 | - (without the note) will be raised. |
---|
2531 | + can be renewed. For immutable shares, IndexError (without the note) |
---|
2532 | + will be raised. |
---|
2533 | """ |
---|
2534 | return Any() |
---|
2535 | |
---|
2536 | hunk ./src/allmydata/interfaces.py 154 |
---|
2537 | def get_buckets(storage_index=StorageIndex): |
---|
2538 | return DictOf(int, RIBucketReader, maxKeys=MAX_BUCKETS) |
---|
2539 | |
---|
2540 | - |
---|
2541 | - |
---|
2542 | def slot_readv(storage_index=StorageIndex, |
---|
2543 | shares=ListOf(int), readv=ReadVector): |
---|
2544 | """Read a vector from the numbered shares associated with the given |
---|
2545 | hunk ./src/allmydata/interfaces.py 163 |
---|
2546 | |
---|
2547 | def slot_testv_and_readv_and_writev(storage_index=StorageIndex, |
---|
2548 | secrets=TupleOf(WriteEnablerSecret, |
---|
2549 | - LeaseRenewSecret, |
---|
2550 | - LeaseCancelSecret), |
---|
2551 | + LeaseRenewSecret), |
---|
2552 | tw_vectors=TestAndWriteVectorsForShares, |
---|
2553 | r_vector=ReadVector, |
---|
2554 | ): |
---|
2555 | hunk ./src/allmydata/interfaces.py 167 |
---|
2556 | - """General-purpose test-and-set operation for mutable slots. Perform |
---|
2557 | - a bunch of comparisons against the existing shares. If they all pass, |
---|
2558 | - then apply a bunch of write vectors to those shares. Then use the |
---|
2559 | - read vectors to extract data from all the shares and return the data. |
---|
2560 | + """ |
---|
2561 | + General-purpose atomic test-read-and-set operation for mutable slots. |
---|
2562 | + Perform a bunch of comparisons against the existing shares. If they |
---|
2563 | + all pass: use the read vectors to extract data from all the shares, |
---|
2564 | + then apply a bunch of write vectors to those shares. Return the read |
---|
2565 | + data, which does not include any modifications made by the writes. |
---|
2566 | |
---|
2567 | This method is, um, large. The goal is to allow clients to update all |
---|
2568 | the shares associated with a mutable file in a single round trip. |
---|
2569 | hunk ./src/allmydata/interfaces.py 177 |
---|
2570 | |
---|
2571 | - @param storage_index: the index of the bucket to be created or |
---|
2572 | + @param storage_index: the index of the shareset to be created or |
---|
2573 | increfed. |
---|
2574 | @param write_enabler: a secret that is stored along with the slot. |
---|
2575 | Writes are accepted from any caller who can |
---|
2576 | hunk ./src/allmydata/interfaces.py 183 |
---|
2577 | present the matching secret. A different secret |
---|
2578 | should be used for each slot*server pair. |
---|
2579 | - @param renew_secret: This is the secret used to protect bucket refresh |
---|
2580 | + @param renew_secret: This is the secret used to protect lease renewal. |
---|
2581 | This secret is generated by the client and |
---|
2582 | stored for later comparison by the server. Each |
---|
2583 | server is given a different secret. |
---|
2584 | hunk ./src/allmydata/interfaces.py 187 |
---|
2585 | - @param cancel_secret: Like renew_secret, but protects bucket decref. |
---|
2586 | + @param cancel_secret: ignored |
---|
2587 | |
---|
2588 | hunk ./src/allmydata/interfaces.py 189 |
---|
2589 | - The 'secrets' argument is a tuple of (write_enabler, renew_secret, |
---|
2590 | - cancel_secret). The first is required to perform any write. The |
---|
2591 | - latter two are used when allocating new shares. To simply acquire a |
---|
2592 | - new lease on existing shares, use an empty testv and an empty writev. |
---|
2593 | + The 'secrets' argument is a tuple with (write_enabler, renew_secret). |
---|
2594 | + The write_enabler is required to perform any write. The renew_secret |
---|
2595 | + is used when allocating new shares. |
---|
2596 | |
---|
2597 | Each share can have a separate test vector (i.e. a list of |
---|
2598 | comparisons to perform). If all vectors for all shares pass, then all |
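
A sketch of a call using the narrowed secrets tuple (the cancel secret no longer participates):

    secrets = (write_enabler, renew_secret)
    ok, readdata = server.slot_testv_and_readv_and_writev(
        storage_index, secrets, tw_vectors, r_vector)
    # 'ok' is True only if every test vector passed; 'readdata' maps each
    # share number to the data read before the writes were applied.
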
---|
2599 | hunk ./src/allmydata/interfaces.py 280 |
---|
2600 | store that on disk. |
---|
2601 | """ |
---|
2602 | |
---|
2603 | -class IStorageBucketWriter(Interface): |
---|
2604 | + |
---|
2605 | +class IStorageBackend(Interface): |
---|
2606 | """ |
---|
2607 | hunk ./src/allmydata/interfaces.py 283 |
---|
2608 | - Objects of this kind live on the client side. |
---|
2609 | + Objects of this kind live on the server side and are used by the |
---|
2610 | + storage server object. |
---|
2611 | """ |
---|
2612 | hunk ./src/allmydata/interfaces.py 286 |
---|
2613 | - def put_block(segmentnum=int, data=ShareData): |
---|
2614 | - """@param data: For most segments, this data will be 'blocksize' |
---|
2615 | - bytes in length. The last segment might be shorter. |
---|
2616 | - @return: a Deferred that fires (with None) when the operation completes |
---|
2617 | + def get_available_space(): |
---|
2618 | + """ |
---|
2619 | + Returns available space for share storage in bytes, or |
---|
2620 | + None if this information is not available or if the available |
---|
2621 | + space is unlimited. |
---|
2622 | + |
---|
2623 | + If the backend is configured for read-only mode then this will |
---|
2624 | + return 0. |
---|
2625 | + """ |
---|
2626 | + |
---|
2627 | + def get_sharesets_for_prefix(prefix): |
---|
2628 | + """ |
---|
2629 | + Generates IShareSet objects for all storage indices matching the |
---|
2630 | + given prefix for which this backend holds shares. |
---|
2631 | + """ |
---|
2632 | + |
---|
2633 | + def get_shareset(storageindex): |
---|
2634 | + """ |
---|
2635 | + Get an IShareSet object for the given storage index. |
---|
2636 | + """ |
---|
2637 | + |
---|
2638 | + def advise_corrupt_share(storageindex, sharetype, shnum, reason): |
---|
2639 | + """ |
---|
2640 | + Clients who discover hash failures in shares that they have |
---|
2641 | + downloaded from me will use this method to inform me about the |
---|
2642 | + failures. I will record their concern so that my operator can |
---|
2643 | + manually inspect the shares in question. |
---|
2644 | + |
---|
2645 | + 'sharetype' is either 'mutable' or 'immutable'. 'shnum' is the integer |
---|
2646 | + share number. 'reason' is a human-readable explanation of the problem, |
---|
2647 | + probably including some expected hash values and the computed ones |
---|
2648 | + that did not match. Corruption advisories for mutable shares should |
---|
2649 | + include a hash of the public key (the same value that appears in the |
---|
2650 | + mutable-file verify-cap), since the current share format does not |
---|
2651 | + store that on disk. |
---|
2652 | + |
---|
2653 | + @param storageindex=str |
---|
2654 | + @param sharetype=str |
---|
2655 | + @param shnum=int |
---|
2656 | + @param reason=str |
---|
2657 | + """ |
---|
2658 | + |
---|
2659 | + |
---|
2660 | +class IShareSet(Interface): |
---|
2661 | + def get_storage_index(): |
---|
2662 | + """ |
---|
2663 | + Returns the storage index for this shareset. |
---|
2664 | + """ |
---|
2665 | + |
---|
2666 | + def get_storage_index_string(): |
---|
2667 | + """ |
---|
2668 | + Returns the base32-encoded storage index for this shareset. |
---|
2669 | + """ |
---|
2670 | + |
---|
2671 | + def get_overhead(): |
---|
2672 | + """ |
---|
2673 | + Returns the storage overhead, in bytes, of this shareset (exclusive |
---|
2674 | + of the space used by its shares). |
---|
2675 | + """ |
---|
2676 | + |
---|
2677 | + def get_shares(): |
---|
2678 | + """ |
---|
2679 | + Generates the IStoredShare objects held in this shareset. |
---|
2680 | + """ |
---|
2681 | + |
---|
2682 | + def has_incoming(shnum): |
---|
2683 | + """ |
---|
2684 | + Returns True if this shareset has an incoming (partial) share with this number, otherwise False. |
---|
2685 | + """ |
---|
2686 | + |
---|
2687 | + def make_bucket_writer(storageserver, shnum, max_space_per_bucket, lease_info, canary): |
---|
2688 | + """ |
---|
2689 | + Create a bucket writer that can be used to write data to a given share. |
---|
2690 | + |
---|
2691 | + @param storageserver=RIStorageServer |
---|
2692 | + @param shnum=int: A share number in this shareset |
---|
2693 | + @param max_space_per_bucket=int: The maximum space allocated for the |
---|
2694 | + share, in bytes |
---|
2695 | + @param lease_info=LeaseInfo: The initial lease information |
---|
2696 | + @param canary=Referenceable: If the canary is lost before close(), the |
---|
2697 | + bucket is deleted. |
---|
2698 | + @return an IStorageBucketWriter for the given share |
---|
2699 | + """ |
---|
2700 | + |
---|
2701 | + def make_bucket_reader(storageserver, share): |
---|
2702 | + """ |
---|
2703 | + Create a bucket reader that can be used to read data from a given share. |
---|
2704 | + |
---|
2705 | + @param storageserver=RIStorageServer |
---|
2706 | + @param share=IStoredShare |
---|
2707 | + @return an IStorageBucketReader for the given share |
---|
2708 | + """ |
---|
2709 | + |
---|
2710 | + def readv(wanted_shnums, read_vector): |
---|
2711 | + """ |
---|
2712 | + Read a vector from the numbered shares in this shareset. An empty |
---|
2713 | + wanted_shnums list means to return data from all known shares. |
---|
2714 | + |
---|
2715 | + @param wanted_shnums=ListOf(int) |
---|
2716 | + @param read_vector=ReadVector |
---|
2717 | + @return DictOf(int, ReadData): shnum -> results, with one key per share |
---|
2718 | + """ |
---|
2719 | + |
---|
2720 | + def testv_and_readv_and_writev(storageserver, secrets, test_and_write_vectors, read_vector, expiration_time): |
---|
2721 | + """ |
---|
2722 | + General-purpose atomic test-read-and-set operation for mutable slots. |
---|
2723 | + Perform a bunch of comparisons against the existing shares in this |
---|
2724 | + shareset. If they all pass: use the read vectors to extract data from |
---|
2725 | + all the shares, then apply a bunch of write vectors to those shares. |
---|
2726 | + Return the read data, which does not include any modifications made by |
---|
2727 | + the writes. |
---|
2728 | + |
---|
2729 | + See the similar method in RIStorageServer for more detail. |
---|
2730 | + |
---|
2731 | + @param storageserver=RIStorageServer |
---|
2732 | + @param secrets=TupleOf(WriteEnablerSecret, LeaseRenewSecret[, ...]) |
---|
2733 | + @param test_and_write_vectors=TestAndWriteVectorsForShares |
---|
2734 | + @param read_vector=ReadVector |
---|
2735 | + @param expiration_time=int |
---|
2736 | + @return TupleOf(bool, DictOf(int, ReadData)) |
---|
2737 | + """ |
---|
2738 | + |
---|
2739 | + def add_or_renew_lease(lease_info): |
---|
2740 | + """ |
---|
2741 | + Add a new lease on the shares in this shareset. If the renew_secret |
---|
2742 | + matches an existing lease, that lease will be renewed instead. If |
---|
2743 | + there are no shares in this shareset, return silently. |
---|
2744 | + |
---|
2745 | + @param lease_info=LeaseInfo |
---|
2746 | + """ |
---|
2747 | + |
---|
2748 | + def renew_lease(renew_secret, new_expiration_time): |
---|
2749 | + """ |
---|
2750 | + Renew a lease on the shares in this shareset, resetting the timer |
---|
2751 | + to 31 days. Some grids will use this and some will not. If there are no |
---|
2752 | + shares in this shareset, IndexError will be raised. |
---|
2753 | + |
---|
2754 | + For mutable shares, if the given renew_secret does not match an |
---|
2755 | + existing lease, IndexError will be raised with a note listing the |
---|
2756 | + server-nodeids on the existing leases, so leases on migrated shares |
---|
2757 | + can be renewed. For immutable shares, IndexError (without the note) |
---|
2758 | + will be raised. |
---|
2759 | + |
---|
2760 | + @param renew_secret=LeaseRenewSecret |
---|
2761 | + """ |
---|
2762 | + |
---|
2763 | + |
---|
2764 | +class IStoredShare(Interface): |
---|
2765 | + """ |
---|
2766 | + This object may contain up to all of the share data. It is intended |
---|
2767 | + for lazy evaluation, such that in many use cases substantially less than |
---|
2768 | + all of the share data will be accessed. |
---|
2769 | + """ |
---|
2770 | + def close(): |
---|
2771 | + """ |
---|
2772 | + Complete writing to this share. |
---|
2773 | + """ |
---|
2774 | + |
---|
2775 | + def get_storage_index(): |
---|
2776 | + """ |
---|
2777 | + Returns the storage index. |
---|
2778 | + """ |
---|
2779 | + |
---|
2780 | + def get_shnum(): |
---|
2781 | + """ |
---|
2782 | + Returns the share number. |
---|
2783 | + """ |
---|
2784 | + |
---|
2785 | + def get_data_length(): |
---|
2786 | + """ |
---|
2787 | + Returns the data length in bytes. |
---|
2788 | + """ |
---|
2789 | + |
---|
2790 | + def get_size(): |
---|
2791 | + """ |
---|
2792 | + Returns the size of the share in bytes. |
---|
2793 | + """ |
---|
2794 | + |
---|
2795 | + def get_used_space(): |
---|
2796 | + """ |
---|
2797 | + Returns the amount of backend storage including overhead, in bytes, used |
---|
2798 | + by this share. |
---|
2799 | + """ |
---|
2800 | + |
---|
2801 | + def unlink(): |
---|
2802 | + """ |
---|
2803 | + Signal that this share can be removed from the backend storage. This does |
---|
2804 | + not guarantee that the share data will be immediately inaccessible, or |
---|
2805 | + that it will be securely erased. |
---|
2806 | + """ |
---|
2807 | + |
---|
2808 | + def readv(read_vector): |
---|
2809 | + """ |
---|
2810 | + XXX |
---|
2811 | + """ |
---|
2812 | + |
---|
2813 | + |
---|
2814 | +class IStoredMutableShare(IStoredShare): |
---|
2815 | + def check_write_enabler(write_enabler, si_s): |
---|
2816 | + """ |
---|
2817 | + XXX |
---|
2818 | """ |
---|
2819 | |
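
To make the division of labor concrete, here is a hypothetical do-nothing skeleton of the two new server-side interfaces. The class names and in-memory behavior are illustrative only; this is not the disk or S3 backend from this patch series:

    from zope.interface import implements
    from allmydata.interfaces import IStorageBackend, IShareSet

    class NullShareSet(object):
        implements(IShareSet)
        def __init__(self, storageindex):
            self.storageindex = storageindex
        def get_storage_index(self):
            return self.storageindex
        def get_overhead(self):
            return 0
        def get_shares(self):
            return iter([])
        def has_incoming(self, shnum):
            return False

    class NullBackend(object):
        implements(IStorageBackend)
        def get_available_space(self):
            return None  # unknown / unlimited
        def get_sharesets_for_prefix(self, prefix):
            return iter([])
        def get_shareset(self, storageindex):
            return NullShareSet(storageindex)
        def advise_corrupt_share(self, storageindex, sharetype, shnum, reason):
            pass
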
---|
2820 | hunk ./src/allmydata/interfaces.py 489 |
---|
2821 | - def put_plaintext_hashes(hashes=ListOf(Hash)): |
---|
2822 | + def check_testv(test_vector): |
---|
2823 | + """ |
---|
2824 | + XXX |
---|
2825 | + """ |
---|
2826 | + |
---|
2827 | + def writev(datav, new_length): |
---|
2828 | + """ |
---|
2829 | + XXX |
---|
2830 | + """ |
---|
2831 | + |
---|
2832 | + |
---|
2833 | +class IStorageBucketWriter(Interface): |
---|
2834 | + """ |
---|
2835 | + Objects of this kind live on the client side. |
---|
2836 | + """ |
---|
2837 | + def put_block(segmentnum, data): |
---|
2838 | """ |
---|
2839 | hunk ./src/allmydata/interfaces.py 506 |
---|
2840 | + @param segmentnum=int |
---|
2841 | + @param data=ShareData: For most segments, this data will be 'blocksize' |
---|
2842 | + bytes in length. The last segment might be shorter. |
---|
2843 | @return: a Deferred that fires (with None) when the operation completes |
---|
2844 | """ |
---|
2845 | |
---|
2846 | hunk ./src/allmydata/interfaces.py 512 |
---|
2847 | - def put_crypttext_hashes(hashes=ListOf(Hash)): |
---|
2848 | + def put_crypttext_hashes(hashes): |
---|
2849 | """ |
---|
2850 | hunk ./src/allmydata/interfaces.py 514 |
---|
2851 | + @param hashes=ListOf(Hash) |
---|
2852 | @return: a Deferred that fires (with None) when the operation completes |
---|
2853 | """ |
---|
2854 | |
---|
2855 | hunk ./src/allmydata/interfaces.py 518 |
---|
2856 | - def put_block_hashes(blockhashes=ListOf(Hash)): |
---|
2857 | + def put_block_hashes(blockhashes): |
---|
2858 | """ |
---|
2859 | hunk ./src/allmydata/interfaces.py 520 |
---|
2860 | + @param blockhashes=ListOf(Hash) |
---|
2861 | @return: a Deferred that fires (with None) when the operation completes |
---|
2862 | """ |
---|
2863 | |
---|
2864 | hunk ./src/allmydata/interfaces.py 524 |
---|
2865 | - def put_share_hashes(sharehashes=ListOf(TupleOf(int, Hash))): |
---|
2866 | + def put_share_hashes(sharehashes): |
---|
2867 | """ |
---|
2868 | hunk ./src/allmydata/interfaces.py 526 |
---|
2869 | + @param sharehashes=ListOf(TupleOf(int, Hash)) |
---|
2870 | @return: a Deferred that fires (with None) when the operation completes |
---|
2871 | """ |
---|
2872 | |
---|
2873 | hunk ./src/allmydata/interfaces.py 530 |
---|
2874 | - def put_uri_extension(data=URIExtensionData): |
---|
2875 | + def put_uri_extension(data): |
---|
2876 | """This block of data contains integrity-checking information (hashes |
---|
2877 | of plaintext, crypttext, and shares), as well as encoding parameters |
---|
2878 | that are necessary to recover the data. This is a serialized dict |
---|
2879 | hunk ./src/allmydata/interfaces.py 535 |
---|
2880 | mapping strings to other strings. The hash of this data is kept in |
---|
2881 | - the URI and verified before any of the data is used. All buckets for |
---|
2882 | - a given file contain identical copies of this data. |
---|
2883 | + the URI and verified before any of the data is used. All share |
---|
2884 | + containers for a given file contain identical copies of this data. |
---|
2885 | |
---|
2886 | The serialization format is specified with the following pseudocode: |
---|
2887 | for k in sorted(dict.keys()): |
---|
2888 | hunk ./src/allmydata/interfaces.py 543 |
---|
2889 | assert re.match(r'^[a-zA-Z_\-]+$', k) |
---|
2890 | write(k + ':' + netstring(dict[k])) |
---|
2891 | |
---|
2892 | + @param data=URIExtensionData |
---|
2893 | @return: a Deferred that fires (with None) when the operation completes |
---|
2894 | """ |
---|
2895 | |
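
The serialization pseudocode above is nearly runnable already. A sketch that fills in the conventional netstring encoding (length, colon, bytes, trailing comma -- an assumption stated here, not taken from this patch):

    import re

    def netstring(s):
        return '%d:%s,' % (len(s), s)

    def serialize_uri_extension(d):
        out = []
        for k in sorted(d.keys()):
            assert re.match(r'^[a-zA-Z_\-]+$', k)
            out.append(k + ':' + netstring(d[k]))
        return ''.join(out)

    # serialize_uri_extension({'codec_name': 'crs', 'size': '1234'})
    # -> 'codec_name:3:crs,size:4:1234,'
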
---|
2896 | hunk ./src/allmydata/interfaces.py 558 |
---|
2897 | |
---|
2898 | class IStorageBucketReader(Interface): |
---|
2899 | |
---|
2900 | - def get_block_data(blocknum=int, blocksize=int, size=int): |
---|
2901 | + def get_block_data(blocknum, blocksize, size): |
---|
2902 | """Most blocks will be the same size. The last block might be shorter |
---|
2903 | than the others. |
---|
2904 | |
---|
2905 | hunk ./src/allmydata/interfaces.py 562 |
---|
2906 | + @param blocknum=int |
---|
2907 | + @param blocksize=int |
---|
2908 | + @param size=int |
---|
2909 | @return: ShareData |
---|
2910 | """ |
---|
2911 | |
---|
2912 | hunk ./src/allmydata/interfaces.py 573 |
---|
2913 | @return: ListOf(Hash) |
---|
2914 | """ |
---|
2915 | |
---|
2916 | - def get_block_hashes(at_least_these=SetOf(int)): |
---|
2917 | + def get_block_hashes(at_least_these=()): |
---|
2918 | """ |
---|
2919 | hunk ./src/allmydata/interfaces.py 575 |
---|
2920 | + @param at_least_these=SetOf(int) |
---|
2921 | @return: ListOf(Hash) |
---|
2922 | """ |
---|
2923 | |
---|
2924 | hunk ./src/allmydata/interfaces.py 579 |
---|
2925 | - def get_share_hashes(at_least_these=SetOf(int)): |
---|
2926 | + def get_share_hashes(): |
---|
2927 | """ |
---|
2928 | @return: ListOf(TupleOf(int, Hash)) |
---|
2929 | """ |
---|
2930 | hunk ./src/allmydata/interfaces.py 611 |
---|
2931 | @return: unicode nickname, or None |
---|
2932 | """ |
---|
2933 | |
---|
2934 | - # methods moved from IntroducerClient, need review |
---|
2935 | - def get_all_connections(): |
---|
2936 | - """Return a frozenset of (nodeid, service_name, rref) tuples, one for |
---|
2937 | - each active connection we've established to a remote service. This is |
---|
2938 | - mostly useful for unit tests that need to wait until a certain number |
---|
2939 | - of connections have been made.""" |
---|
2940 | - |
---|
2941 | - def get_all_connectors(): |
---|
2942 | - """Return a dict that maps from (nodeid, service_name) to a |
---|
2943 | - RemoteServiceConnector instance for all services that we are actively |
---|
2944 | - trying to connect to. Each RemoteServiceConnector has the following |
---|
2945 | - public attributes:: |
---|
2946 | - |
---|
2947 | - service_name: the type of service provided, like 'storage' |
---|
2948 | - announcement_time: when we first heard about this service |
---|
2949 | - last_connect_time: when we last established a connection |
---|
2950 | - last_loss_time: when we last lost a connection |
---|
2951 | - |
---|
2952 | - version: the peer's version, from the most recent connection |
---|
2953 | - oldest_supported: the peer's oldest supported version, same |
---|
2954 | - |
---|
2955 | - rref: the RemoteReference, if connected, otherwise None |
---|
2956 | - remote_host: the IAddress, if connected, otherwise None |
---|
2957 | - |
---|
2958 | - This method is intended for monitoring interfaces, such as a web page |
---|
2959 | - that describes connecting and connected peers. |
---|
2960 | - """ |
---|
2961 | - |
---|
2962 | - def get_all_peerids(): |
---|
2963 | - """Return a frozenset of all peerids to whom we have a connection (to |
---|
2964 | - one or more services) established. Mostly useful for unit tests.""" |
---|
2965 | - |
---|
2966 | - def get_all_connections_for(service_name): |
---|
2967 | - """Return a frozenset of (nodeid, service_name, rref) tuples, one |
---|
2968 | - for each active connection that provides the given SERVICE_NAME.""" |
---|
2969 | - |
---|
2970 | - def get_permuted_peers(service_name, key): |
---|
2971 | - """Returns an ordered list of (peerid, rref) tuples, selecting from |
---|
2972 | - the connections that provide SERVICE_NAME, using a hash-based |
---|
2973 | - permutation keyed by KEY. This randomizes the service list in a |
---|
2974 | - repeatable way, to distribute load over many peers. |
---|
2975 | - """ |
---|
2976 | - |
---|
2977 | |
---|
2978 | class IMutableSlotWriter(Interface): |
---|
2979 | """ |
---|
2980 | hunk ./src/allmydata/interfaces.py 616 |
---|
2981 | The interface for a writer around a mutable slot on a remote server. |
---|
2982 | """ |
---|
2983 | - def set_checkstring(checkstring, *args): |
---|
2984 | + def set_checkstring(seqnum_or_checkstring, root_hash=None, salt=None): |
---|
2985 | """ |
---|
2986 | Set the checkstring that I will pass to the remote server when |
---|
2987 | writing. |
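
The widened signature admits two calling conventions, inferred here from the parameter names (a sketch, not taken from the implementation):

    writer.set_checkstring(checkstring)              # one packed checkstring
    writer.set_checkstring(seqnum, root_hash, salt)  # or its three components
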
---|
2988 | hunk ./src/allmydata/interfaces.py 640 |
---|
2989 | Add a block and salt to the share. |
---|
2990 | """ |
---|
2991 | |
---|
2992 | - def put_encprivey(encprivkey): |
---|
2993 | + def put_encprivkey(encprivkey): |
---|
2994 | """ |
---|
2995 | Add the encrypted private key to the share. |
---|
2996 | """ |
---|
2997 | hunk ./src/allmydata/interfaces.py 645 |
---|
2998 | |
---|
2999 | - def put_blockhashes(blockhashes=list): |
---|
3000 | + def put_blockhashes(blockhashes): |
---|
3001 | """ |
---|
3002 | hunk ./src/allmydata/interfaces.py 647 |
---|
3003 | + @param blockhashes=list |
---|
3004 | Add the block hash tree to the share. |
---|
3005 | """ |
---|
3006 | |
---|
3007 | hunk ./src/allmydata/interfaces.py 651 |
---|
3008 | - def put_sharehashes(sharehashes=dict): |
---|
3009 | + def put_sharehashes(sharehashes): |
---|
3010 | """ |
---|
3011 | hunk ./src/allmydata/interfaces.py 653 |
---|
3012 | + @param sharehashes=dict |
---|
3013 | Add the share hash chain to the share. |
---|
3014 | """ |
---|
3015 | |
---|
3016 | hunk ./src/allmydata/interfaces.py 739 |
---|
3017 | def get_extension_params(): |
---|
3018 | """Return the extension parameters in the URI""" |
---|
3019 | |
---|
3020 | - def set_extension_params(): |
---|
3021 | + def set_extension_params(params): |
---|
3022 | """Set the extension parameters that should be in the URI""" |
---|
3023 | |
---|
3024 | class IDirectoryURI(Interface): |
---|
3025 | hunk ./src/allmydata/interfaces.py 879 |
---|
3026 | writer-visible data using this writekey. |
---|
3027 | """ |
---|
3028 | |
---|
3029 | - # TODO: Can this be overwrite instead of replace? |
---|
3030 | - def replace(new_contents): |
---|
3031 | - """Replace the contents of the mutable file, provided that no other |
---|
3032 | + def overwrite(new_contents): |
---|
3033 | + """Overwrite the contents of the mutable file, provided that no other |
---|
3034 | node has published (or is attempting to publish, concurrently) a |
---|
3035 | newer version of the file than this one. |
---|
3036 | |
---|
3037 | hunk ./src/allmydata/interfaces.py 1346 |
---|
3038 | is empty, the metadata will be an empty dictionary. |
---|
3039 | """ |
---|
3040 | |
---|
3041 | - def set_uri(name, writecap, readcap=None, metadata=None, overwrite=True): |
---|
3042 | + def set_uri(name, writecap, readcap, metadata=None, overwrite=True): |
---|
3043 | """I add a child (by writecap+readcap) at the specific name. I return |
---|
3044 | a Deferred that fires when the operation finishes. If overwrite= is |
---|
3045 | True, I will replace any existing child of the same name, otherwise |
---|
3046 | hunk ./src/allmydata/interfaces.py 1745 |
---|
3047 | Block Hash, and the encoding parameters, both of which must be included |
---|
3048 | in the URI. |
---|
3049 | |
---|
3050 | - I do not choose shareholders, that is left to the IUploader. I must be |
---|
3051 | - given a dict of RemoteReferences to storage buckets that are ready and |
---|
3052 | - willing to receive data. |
---|
3053 | + I do not choose shareholders, that is left to the IUploader. |
---|
3054 | """ |
---|
3055 | |
---|
3056 | def set_size(size): |
---|
3057 | hunk ./src/allmydata/interfaces.py 1752 |
---|
3058 | """Specify the number of bytes that will be encoded. This must be |
---|
3059 | performed before get_serialized_params() can be called. |
---|
3060 | """ |
---|
3061 | + |
---|
3062 | def set_params(params): |
---|
3063 | """Override the default encoding parameters. 'params' is a tuple of |
---|
3064 | (k,d,n), where 'k' is the number of required shares, 'd' is the |
---|
3065 | hunk ./src/allmydata/interfaces.py 1848 |
---|
3066 | download, validate, decode, and decrypt data from them, writing the |
---|
3067 | results to an output file. |
---|
3068 | |
---|
3069 | - I do not locate the shareholders, that is left to the IDownloader. I must |
---|
3070 | - be given a dict of RemoteReferences to storage buckets that are ready to |
---|
3071 | - send data. |
---|
3072 | + I do not locate the shareholders; that is left to the IDownloader. |
---|
3073 | """ |
---|
3074 | |
---|
3075 | def setup(outfile): |
---|
3076 | hunk ./src/allmydata/interfaces.py 1950 |
---|
3077 | resuming an interrupted upload (where we need to compute the |
---|
3078 | plaintext hashes, but don't need the redundant encrypted data).""" |
---|
3079 | |
---|
3080 | - def get_plaintext_hashtree_leaves(first, last, num_segments): |
---|
3081 | - """OBSOLETE; Get the leaf nodes of a merkle hash tree over the |
---|
3082 | - plaintext segments, i.e. get the tagged hashes of the given segments. |
---|
3083 | - The segment size is expected to be generated by the |
---|
3084 | - IEncryptedUploadable before any plaintext is read or ciphertext |
---|
3085 | - produced, so that the segment hashes can be generated with only a |
---|
3086 | - single pass. |
---|
3087 | - |
---|
3088 | - This returns a Deferred that fires with a sequence of hashes, using: |
---|
3089 | - |
---|
3090 | - tuple(segment_hashes[first:last]) |
---|
3091 | - |
---|
3092 | - 'num_segments' is used to assert that the number of segments that the |
---|
3093 | - IEncryptedUploadable handled matches the number of segments that the |
---|
3094 | - encoder was expecting. |
---|
3095 | - |
---|
3096 | - This method must not be called until the final byte has been read |
---|
3097 | - from read_encrypted(). Once this method is called, read_encrypted() |
---|
3098 | - can never be called again. |
---|
3099 | - """ |
---|
3100 | - |
---|
3101 | - def get_plaintext_hash(): |
---|
3102 | - """OBSOLETE; Get the hash of the whole plaintext. |
---|
3103 | - |
---|
3104 | - This returns a Deferred that fires with a tagged SHA-256 hash of the |
---|
3105 | - whole plaintext, obtained from hashutil.plaintext_hash(data). |
---|
3106 | - """ |
---|
3107 | - |
---|
3108 | def close(): |
---|
3109 | """Just like IUploadable.close().""" |
---|
3110 | |
---|
3111 | hunk ./src/allmydata/interfaces.py 2144 |
---|
3112 | returns a Deferred that fires with an IUploadResults instance, from |
---|
3113 | which the URI of the file can be obtained as results.uri .""" |
---|
3114 | |
---|
3115 | - def upload_ssk(write_capability, new_version, uploadable): |
---|
3116 | - """TODO: how should this work?""" |
---|
3117 | - |
---|
3118 | class ICheckable(Interface): |
---|
3119 | def check(monitor, verify=False, add_lease=False): |
---|
3120 | """Check up on my health, optionally repairing any problems. |
---|
3121 | hunk ./src/allmydata/interfaces.py 2505 |
---|
3122 | |
---|
3123 | class IRepairResults(Interface): |
---|
3124 | """I contain the results of a repair operation.""" |
---|
3125 | - def get_successful(self): |
---|
3126 | + def get_successful(): |
---|
3127 | """Returns a boolean: True if the repair made the file healthy, False |
---|
3128 | if not. Repair failure generally indicates a file that has been |
---|
3129 | damaged beyond repair.""" |
---|
3130 | hunk ./src/allmydata/interfaces.py 2577 |
---|
3131 | Tahoe process will typically have a single NodeMaker, but unit tests may |
---|
3132 | create simplified/mocked forms for testing purposes. |
---|
3133 | """ |
---|
3134 | - def create_from_cap(writecap, readcap=None, **kwargs): |
---|
3135 | + def create_from_cap(writecap, readcap=None, deep_immutable=False, name=u"<unknown name>"): |
---|
3136 | """I create an IFilesystemNode from the given writecap/readcap. I can |
---|
3137 | only provide nodes for existing file/directory objects: use my other |
---|
3138 | methods to create new objects. I return synchronously.""" |
---|
3139 | hunk ./src/allmydata/monitor.py 30 |
---|
3140 | |
---|
3141 | # the following methods are provided for the operation code |
---|
3142 | |
---|
3143 | - def is_cancelled(self): |
---|
3144 | + def is_cancelled(): |
---|
3145 | """Returns True if the operation has been cancelled. If True, |
---|
3146 | operation code should stop creating new work, and attempt to stop any |
---|
3147 | work already in progress.""" |
---|
3148 | hunk ./src/allmydata/monitor.py 35 |
---|
3149 | |
---|
3150 | - def raise_if_cancelled(self): |
---|
3151 | + def raise_if_cancelled(): |
---|
3152 | """Raise OperationCancelledError if the operation has been cancelled. |
---|
3153 | Operation code that has a robust error-handling path can simply call |
---|
3154 | this periodically.""" |
---|
3155 | hunk ./src/allmydata/monitor.py 40 |
---|
3156 | |
---|
3157 | - def set_status(self, status): |
---|
3158 | + def set_status(status): |
---|
3159 | """Sets the Monitor's 'status' object to an arbitrary value. |
---|
3160 | Different operations will store different sorts of status information |
---|
3161 | here. Operation code should use get+modify+set sequences to update |
---|
3162 | hunk ./src/allmydata/monitor.py 46 |
---|
3163 | this.""" |
---|
3164 | |
---|
3165 | - def get_status(self): |
---|
3166 | + def get_status(): |
---|
3167 | """Return the status object. If the operation failed, this will be a |
---|
3168 | Failure instance.""" |
---|
3169 | |
---|
3170 | hunk ./src/allmydata/monitor.py 50 |
---|
3171 | - def finish(self, status): |
---|
3172 | + def finish(status): |
---|
3173 | """Call this when the operation is done, successful or not. The |
---|
3174 | Monitor's lifetime is influenced by the completion of the operation |
---|
3175 | it is monitoring. The Monitor's 'status' value will be set with the |
---|
3176 | hunk ./src/allmydata/monitor.py 63 |
---|
3177 | |
---|
3178 | # the following methods are provided for the initiator of the operation |
---|
3179 | |
---|
3180 | - def is_finished(self): |
---|
3181 | + def is_finished(): |
---|
3182 | """Return a boolean, True if the operation is done (whether |
---|
3183 | successful or failed), False if it is still running.""" |
---|
3184 | |
---|
3185 | hunk ./src/allmydata/monitor.py 67 |
---|
3186 | - def when_done(self): |
---|
3187 | + def when_done(): |
---|
3188 | """Return a Deferred that fires when the operation is complete. It |
---|
3189 | will fire with the operation status, the same value as returned by |
---|
3190 | get_status().""" |
---|
3191 | hunk ./src/allmydata/monitor.py 72 |
---|
3192 | |
---|
3193 | - def cancel(self): |
---|
3194 | + def cancel(): |
---|
3195 | """Cancel the operation as soon as possible. is_cancelled() will |
---|
3196 | start returning True after this is called.""" |
---|
3197 | |
---|
3198 | hunk ./src/allmydata/mutable/filenode.py 753 |
---|
3199 | self._writekey = writekey |
---|
3200 | self._serializer = defer.succeed(None) |
---|
3201 | |
---|
3202 | - |
---|
3203 | def get_sequence_number(self): |
---|
3204 | """ |
---|
3205 | Get the sequence number of the mutable version that I represent. |
---|
3206 | hunk ./src/allmydata/mutable/filenode.py 759 |
---|
3207 | """ |
---|
3208 | return self._version[0] # verinfo[0] == the sequence number |
---|
3209 | |
---|
3210 | + def get_servermap(self): |
---|
3211 | + return self._servermap |
---|
3212 | |
---|
3213 | hunk ./src/allmydata/mutable/filenode.py 762 |
---|
3214 | - # TODO: Terminology? |
---|
3215 | def get_writekey(self): |
---|
3216 | """ |
---|
3217 | I return a writekey or None if I don't have a writekey. |
---|
3218 | hunk ./src/allmydata/mutable/filenode.py 768 |
---|
3219 | """ |
---|
3220 | return self._writekey |
---|
3221 | |
---|
3222 | - |
---|
3223 | def set_downloader_hints(self, hints): |
---|
3224 | """ |
---|
3225 | I set the downloader hints. |
---|
3226 | hunk ./src/allmydata/mutable/filenode.py 776 |
---|
3227 | |
---|
3228 | self._downloader_hints = hints |
---|
3229 | |
---|
3230 | - |
---|
3231 | def get_downloader_hints(self): |
---|
3232 | """ |
---|
3233 | I return the downloader hints. |
---|
3234 | hunk ./src/allmydata/mutable/filenode.py 782 |
---|
3235 | """ |
---|
3236 | return self._downloader_hints |
---|
3237 | |
---|
3238 | - |
---|
3239 | def overwrite(self, new_contents): |
---|
3240 | """ |
---|
3241 | I overwrite the contents of this mutable file version with the |
---|
3242 | hunk ./src/allmydata/mutable/filenode.py 791 |
---|
3243 | |
---|
3244 | return self._do_serialized(self._overwrite, new_contents) |
---|
3245 | |
---|
3246 | - |
---|
3247 | def _overwrite(self, new_contents): |
---|
3248 | assert IMutableUploadable.providedBy(new_contents) |
---|
3249 | assert self._servermap.last_update_mode == MODE_WRITE |
---|
3250 | hunk ./src/allmydata/mutable/filenode.py 797 |
---|
3251 | |
---|
3252 | return self._upload(new_contents) |
---|
3253 | |
---|
3254 | - |
---|
3255 | def modify(self, modifier, backoffer=None): |
---|
3256 | """I use a modifier callback to apply a change to the mutable file. |
---|
3257 | I implement the following pseudocode:: |
---|
3258 | hunk ./src/allmydata/mutable/filenode.py 841 |
---|
3259 | |
---|
3260 | return self._do_serialized(self._modify, modifier, backoffer) |
---|
3261 | |
---|
3262 | - |
---|
3263 | def _modify(self, modifier, backoffer): |
---|
3264 | if backoffer is None: |
---|
3265 | backoffer = BackoffAgent().delay |
---|
3266 | hunk ./src/allmydata/mutable/filenode.py 846 |
---|
3267 | return self._modify_and_retry(modifier, backoffer, True) |
---|
3268 | |
---|
3269 | - |
---|
3270 | def _modify_and_retry(self, modifier, backoffer, first_time): |
---|
3271 | """ |
---|
3272 | I try to apply modifier to the contents of this version of the |
---|
3273 | hunk ./src/allmydata/mutable/filenode.py 878 |
---|
3274 | d.addErrback(_retry) |
---|
3275 | return d |
---|
3276 | |
---|
3277 | - |
---|
3278 | def _modify_once(self, modifier, first_time): |
---|
3279 | """ |
---|
3280 | I attempt to apply a modifier to the contents of the mutable |
---|
3281 | hunk ./src/allmydata/mutable/filenode.py 913 |
---|
3282 | d.addCallback(_apply) |
---|
3283 | return d |
---|
3284 | |
---|
3285 | - |
---|
3286 | def is_readonly(self): |
---|
3287 | """ |
---|
3288 | I return True if this MutableFileVersion provides no write |
---|
3289 | hunk ./src/allmydata/mutable/filenode.py 921 |
---|
3290 | """ |
---|
3291 | return self._writekey is None |
---|
3292 | |
---|
3293 | - |
---|
3294 | def is_mutable(self): |
---|
3295 | """ |
---|
3296 | I return True, since mutable files are always mutable by |
---|
3297 | hunk ./src/allmydata/mutable/filenode.py 928 |
---|
3298 | """ |
---|
3299 | return True |
---|
3300 | |
---|
3301 | - |
---|
3302 | def get_storage_index(self): |
---|
3303 | """ |
---|
3304 | I return the storage index of the reference that I encapsulate. |
---|
3305 | hunk ./src/allmydata/mutable/filenode.py 934 |
---|
3306 | """ |
---|
3307 | return self._storage_index |
---|
3308 | |
---|
3309 | - |
---|
3310 | def get_size(self): |
---|
3311 | """ |
---|
3312 | I return the length, in bytes, of this readable object. |
---|
3313 | hunk ./src/allmydata/mutable/filenode.py 940 |
---|
3314 | """ |
---|
3315 | return self._servermap.size_of_version(self._version) |
---|
3316 | |
---|
3317 | - |
---|
3318 | def download_to_data(self, fetch_privkey=False): |
---|
3319 | """ |
---|
3320 | I return a Deferred that fires with the contents of this |
---|
3321 | hunk ./src/allmydata/mutable/filenode.py 951 |
---|
3322 | d.addCallback(lambda mc: "".join(mc.chunks)) |
---|
3323 | return d |
---|
3324 | |
---|
3325 | - |
---|
3326 | def _try_to_download_data(self): |
---|
3327 | """ |
---|
3328 | I am an unserialized cousin of download_to_data; I am called |
---|
3329 | hunk ./src/allmydata/mutable/filenode.py 963 |
---|
3330 | d.addCallback(lambda mc: "".join(mc.chunks)) |
---|
3331 | return d |
---|
3332 | |
---|
3333 | - |
---|
3334 | def read(self, consumer, offset=0, size=None, fetch_privkey=False): |
---|
3335 | """ |
---|
3336 | I read a portion (possibly all) of the mutable file that I |
---|
3337 | hunk ./src/allmydata/mutable/filenode.py 971 |
---|
3338 | return self._do_serialized(self._read, consumer, offset, size, |
---|
3339 | fetch_privkey) |
---|
3340 | |
---|
3341 | - |
---|
3342 | def _read(self, consumer, offset=0, size=None, fetch_privkey=False): |
---|
3343 | """ |
---|
3344 | I am the serialized companion of read. |
---|
3345 | hunk ./src/allmydata/mutable/filenode.py 981 |
---|
3346 | d = r.download(consumer, offset, size) |
---|
3347 | return d |
---|
3348 | |
---|
3349 | - |
---|
3350 | def _do_serialized(self, cb, *args, **kwargs): |
---|
3351 | # note: to avoid deadlock, this callable is *not* allowed to invoke |
---|
3352 | # other serialized methods within this (or any other) |
---|
3353 | hunk ./src/allmydata/mutable/filenode.py 999 |
---|
3354 | self._serializer.addErrback(log.err) |
---|
3355 | return d |
---|
3356 | |
---|
3357 | - |
---|
3358 | def _upload(self, new_contents): |
---|
3359 | #assert self._pubkey, "update_servermap must be called before publish" |
---|
3360 | p = Publish(self._node, self._storage_broker, self._servermap) |
---|
3361 | hunk ./src/allmydata/mutable/filenode.py 1009 |
---|
3362 | d.addCallback(self._did_upload, new_contents.get_size()) |
---|
3363 | return d |
---|
3364 | |
---|
3365 | - |
---|
3366 | def _did_upload(self, res, size): |
---|
3367 | self._most_recent_size = size |
---|
3368 | return res |
---|
3369 | hunk ./src/allmydata/mutable/filenode.py 1029 |
---|
3370 | """ |
---|
3371 | return self._do_serialized(self._update, data, offset) |
---|
3372 | |
---|
3373 | - |
---|
3374 | def _update(self, data, offset): |
---|
3375 | """ |
---|
3376 | I update the mutable file version represented by this particular |
---|
3377 | hunk ./src/allmydata/mutable/filenode.py 1058 |
---|
3378 | d.addCallback(self._build_uploadable_and_finish, data, offset) |
---|
3379 | return d |
---|
3380 | |
---|
3381 | - |
---|
3382 | def _do_modify_update(self, data, offset): |
---|
3383 | """ |
---|
3384 | I perform a file update by modifying the contents of the file |
---|
3385 | hunk ./src/allmydata/mutable/filenode.py 1073 |
---|
3386 | return new |
---|
3387 | return self._modify(m, None) |
---|
3388 | |
---|
3389 | - |
---|
3390 | def _do_update_update(self, data, offset): |
---|
3391 | """ |
---|
3392 | I start the Servermap update that gets us the data we need to |
---|
3393 | hunk ./src/allmydata/mutable/filenode.py 1108 |
---|
3394 | return self._update_servermap(update_range=(start_segment, |
---|
3395 | end_segment)) |
---|
3396 | |
---|
3397 | - |
---|
3398 | def _decode_and_decrypt_segments(self, ignored, data, offset): |
---|
3399 | """ |
---|
3400 | After the servermap update, I take the encrypted and encoded |
---|
3401 | hunk ./src/allmydata/mutable/filenode.py 1148 |
---|
3402 | d3 = defer.succeed(blockhashes) |
---|
3403 | return deferredutil.gatherResults([d1, d2, d3]) |
---|
3404 | |
---|
3405 | - |
---|
3406 | def _build_uploadable_and_finish(self, segments_and_bht, data, offset): |
---|
3407 | """ |
---|
3408 | After the process has the plaintext segments, I build the |
---|
3409 | hunk ./src/allmydata/mutable/filenode.py 1163 |
---|
3410 | p = Publish(self._node, self._storage_broker, self._servermap) |
---|
3411 | return p.update(u, offset, segments_and_bht[2], self._version) |
---|
3412 | |
---|
3413 | - |
---|
3414 | def _update_servermap(self, mode=MODE_WRITE, update_range=None): |
---|
3415 | """ |
---|
3416 | I update the servermap. I return a Deferred that fires when the |
---|
3417 | hunk ./src/allmydata/storage/common.py 1 |
---|
3418 | - |
---|
3419 | -import os.path |
---|
3420 | from allmydata.util import base32 |
---|
3421 | |
---|
3422 | class DataTooLargeError(Exception): |
---|
3423 | hunk ./src/allmydata/storage/common.py 5 |
---|
3424 | pass |
---|
3425 | + |
---|
3426 | class UnknownMutableContainerVersionError(Exception): |
---|
3427 | pass |
---|
3428 | hunk ./src/allmydata/storage/common.py 8 |
---|
3429 | + |
---|
3430 | class UnknownImmutableContainerVersionError(Exception): |
---|
3431 | pass |
---|
3432 | |
---|
3433 | hunk ./src/allmydata/storage/common.py 18 |
---|
3434 | |
---|
3435 | def si_a2b(ascii_storageindex): |
---|
3436 | return base32.a2b(ascii_storageindex) |
---|
3437 | - |
---|
3438 | -def storage_index_to_dir(storageindex): |
---|
3439 | - sia = si_b2a(storageindex) |
---|
3440 | - return os.path.join(sia[:2], sia) |
---|
3441 | hunk ./src/allmydata/storage/crawler.py 2 |
---|
3442 | |
---|
3443 | -import os, time, struct |
---|
3444 | +import time, struct |
---|
3445 | import cPickle as pickle |
---|
3446 | from twisted.internet import reactor |
---|
3447 | from twisted.application import service |
---|
3448 | hunk ./src/allmydata/storage/crawler.py 6 |
---|
3449 | + |
---|
3450 | +from allmydata.util.assertutil import precondition |
---|
3451 | +from allmydata.interfaces import IStorageBackend |
---|
3452 | from allmydata.storage.common import si_b2a |
---|
3453 | hunk ./src/allmydata/storage/crawler.py 10 |
---|
3454 | -from allmydata.util import fileutil |
---|
3455 | + |
---|
3456 | |
---|
3457 | class TimeSliceExceeded(Exception): |
---|
3458 | pass |
---|
3459 | hunk ./src/allmydata/storage/crawler.py 15 |
---|
3460 | |
---|
3461 | + |
---|
3462 | class ShareCrawler(service.MultiService): |
---|
3463 | hunk ./src/allmydata/storage/crawler.py 17 |
---|
3464 | - """A ShareCrawler subclass is attached to a StorageServer, and |
---|
3465 | - periodically walks all of its shares, processing each one in some |
---|
3466 | - fashion. This crawl is rate-limited, to reduce the IO burden on the host, |
---|
3467 | - since large servers can easily have a terabyte of shares, in several |
---|
3468 | - million files, which can take hours or days to read. |
---|
3469 | + """ |
---|
3470 | + An instance of a subclass of ShareCrawler is attached to a storage |
---|
3471 | + backend, and periodically walks the backend's shares, processing them |
---|
3472 | + in some fashion. This crawl is rate-limited to reduce the I/O burden on |
---|
3473 | + the host, since large servers can easily have a terabyte of shares in |
---|
3474 | + several million files, which can take hours or days to read. |
---|
3475 | |
---|
3476 | Once the crawler starts a cycle, it will proceed at a rate limited by the |
---|
3477 | allowed_cpu_percentage= and cpu_slice= parameters: yielding the reactor |
---|
3478 | hunk ./src/allmydata/storage/crawler.py 33 |
---|
3479 | long enough to ensure that 'minimum_cycle_time' elapses between the start |
---|
3480 | of two consecutive cycles. |
---|
3481 | |
---|
3482 | - We assume that the normal upload/download/get_buckets traffic of a tahoe |
---|
3483 | + We assume that the normal upload/download/DYHB traffic of a Tahoe-LAFS |
---|
3484 | grid will cause the prefixdir contents to be mostly cached in the kernel, |
---|
3485 | hunk ./src/allmydata/storage/crawler.py 35 |
---|
3486 | - or that the number of buckets in each prefixdir will be small enough to |
---|
3487 | - load quickly. A 1TB allmydata.com server was measured to have 2.56M |
---|
3488 | - buckets, spread into the 1024 prefixdirs, with about 2500 buckets per |
---|
3489 | + or that the number of sharesets in each prefixdir will be small enough to |
---|
3490 | + load quickly. A 1TB allmydata.com server was measured to have 2.56 million |
---|
3491 | + sharesets, spread into the 1024 prefixdirs, with about 2500 sharesets per |
---|
3492 | prefix. On this server, each prefixdir took 130ms-200ms to list the first |
---|
3493 | time, and 17ms to list the second time. |
---|
3494 | |
---|
3495 | hunk ./src/allmydata/storage/crawler.py 41 |
---|
3496 | - To use a crawler, create a subclass which implements the process_bucket() |
---|
3497 | - method. It will be called with a prefixdir and a base32 storage index |
---|
3498 | - string. process_bucket() must run synchronously. Any keys added to |
---|
3499 | - self.state will be preserved. Override add_initial_state() to set up |
---|
3500 | - initial state keys. Override finished_cycle() to perform additional |
---|
3501 | - processing when the cycle is complete. Any status that the crawler |
---|
3502 | - produces should be put in the self.state dictionary. Status renderers |
---|
3503 | - (like a web page which describes the accomplishments of your crawler) |
---|
3504 | - will use crawler.get_state() to retrieve this dictionary; they can |
---|
3505 | - present the contents as they see fit. |
---|
3506 | + To implement a crawler, create a subclass that implements the |
---|
3507 | + process_shareset() method. It will be called with a prefixdir and an |
---|
3508 | + object providing the IShareSet interface. process_shareset() must run |
---|
3509 | + synchronously. Any keys added to self.state will be preserved. Override |
---|
3510 | + add_initial_state() to set up initial state keys. Override |
---|
3511 | + finished_cycle() to perform additional processing when the cycle is |
---|
3512 | + complete. Any status that the crawler produces should be put in the |
---|
3513 | + self.state dictionary. Status renderers (like a web page describing the |
---|
3514 | + accomplishments of your crawler) will use crawler.get_state() to retrieve |
---|
3515 | + this dictionary; they can present the contents as they see fit. |
---|
3516 | |
---|
3517 | hunk ./src/allmydata/storage/crawler.py 52 |
---|
3518 | - Then create an instance, with a reference to a StorageServer and a |
---|
3519 | - filename where it can store persistent state. The statefile is used to |
---|
3520 | - keep track of how far around the ring the process has travelled, as well |
---|
3521 | - as timing history to allow the pace to be predicted and controlled. The |
---|
3522 | - statefile will be updated and written to disk after each time slice (just |
---|
3523 | - before the crawler yields to the reactor), and also after each cycle is |
---|
3524 | - finished, and also when stopService() is called. Note that this means |
---|
3525 | - that a crawler which is interrupted with SIGKILL while it is in the |
---|
3526 | - middle of a time slice will lose progress: the next time the node is |
---|
3527 | - started, the crawler will repeat some unknown amount of work. |
---|
3528 | + Then create an instance, with a reference to a backend object providing |
---|
3529 | + the IStorageBackend interface, and a filename where it can store |
---|
3530 | + persistent state. The statefile is used to keep track of how far around |
---|
3531 | + the ring the process has travelled, as well as timing history to allow |
---|
3532 | + the pace to be predicted and controlled. The statefile will be updated |
---|
3533 | + and written to disk after each time slice (just before the crawler yields |
---|
3534 | + to the reactor), and also after each cycle is finished, and also when |
---|
3535 | + stopService() is called. Note that this means that a crawler that is |
---|
3536 | + interrupted with SIGKILL while it is in the middle of a time slice will |
---|
3537 | + lose progress: the next time the node is started, the crawler will repeat |
---|
3538 | + some unknown amount of work. |
---|
3539 | |
---|
3540 | The crawler instance must be started with startService() before it will |
---|
3541 | hunk ./src/allmydata/storage/crawler.py 65 |
---|
3542 | - do any work. To make it stop doing work, call stopService(). |
---|
3543 | + do any work. To make it stop doing work, call stopService(). A crawler |
---|
3544 | + is usually a child service of a StorageServer, although it should not |
---|
3545 | + depend on that. |
---|
3546 | + |
---|
3547 | + For historical reasons, some dictionary key names use the term "bucket" |
---|
3548 | + for what is now preferably called a "shareset" (the set of shares that a |
---|
3549 | + server holds under a given storage index). |
---|
3550 | """ |
---|
3551 | |
---|
3552 | slow_start = 300 # don't start crawling for 5 minutes after startup |
---|
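
A concrete subclass makes the override points easier to see. The following
sketch is not part of the patch -- the ShareSetCounter name and its state
keys are hypothetical -- but it follows the interface described in the
docstring above:

    from allmydata.storage.crawler import ShareCrawler

    class ShareSetCounter(ShareCrawler):
        """Count the sharesets visited in each cycle (illustrative only)."""

        def add_initial_state(self):
            # set up the state keys this crawler relies on
            self.state.setdefault("count-so-far", 0)

        def process_shareset(self, cycle, prefix, shareset):
            # must run synchronously; anything put in self.state is
            # preserved across restarts
            self.state["count-so-far"] += 1

        def finished_cycle(self, cycle):
            # publish summary information for status renderers, which
            # will read it via crawler.get_state()
            self.state["last-cycle-count"] = self.state["count-so-far"]
            self.state["count-so-far"] = 0
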
hunk ./src/allmydata/storage/crawler.py 80
    cpu_slice = 1.0  # use up to 1.0 seconds before yielding
    minimum_cycle_time = 300  # don't run a cycle faster than this

-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefp, allowed_cpu_percentage=None):
+        precondition(IStorageBackend.providedBy(backend), backend)
        service.MultiService.__init__(self)
hunk ./src/allmydata/storage/crawler.py 83
+        self.backend = backend
+        self.statefp = statefp
        if allowed_cpu_percentage is not None:
            self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 87
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
        self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                         for i in range(2**10)]
        self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 91
        self.timer = None
-        self.bucket_cache = (None, [])
+        self.shareset_cache = (None, [])
        self.current_sleep_time = None
        self.next_wake_time = None
        self.last_prefix_finished_time = None
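
The prefixes list built above is small enough to verify in isolation. This
standalone sketch assumes si_b2a is plain base32 encoding, so it uses
allmydata.util.base32.b2a directly (that substitution is for illustration):

    import struct
    from allmydata.util import base32

    # Each prefix is the first two base32 characters (2 x 5 bits) of a
    # 16-bit value whose top 10 bits vary, so there are 2**10 = 1024
    # distinct two-character prefixes.
    prefixes = sorted(base32.b2a(struct.pack(">H", i << (16-10)))[:2]
                      for i in range(2**10))
    assert len(prefixes) == 1024
    assert prefixes[0] == "aa"
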
hunk ./src/allmydata/storage/crawler.py 154
                left = len(self.prefixes) - self.last_complete_prefix_index
                remaining = left * self.last_prefix_elapsed_time
                # TODO: remainder of this prefix: we need to estimate the
-                # per-bucket time, probably by measuring the time spent on
-                # this prefix so far, divided by the number of buckets we've
+                # per-shareset time, probably by measuring the time spent on
+                # this prefix so far, divided by the number of sharesets we've
                # processed.
                d["estimated-cycle-complete-time-left"] = remaining
        # it's possible to call get_progress() from inside a crawler's
hunk ./src/allmydata/storage/crawler.py 175
        state dictionary.

        If we are not currently sleeping (i.e. get_state() was called from
-        inside the process_prefixdir, process_bucket, or finished_cycle()
+        inside the process_prefixdir, process_shareset, or finished_cycle()
        methods, or if startService has not yet been called on this crawler),
        these two keys will be None.

hunk ./src/allmydata/storage/crawler.py 188
    def load_state(self):
        # we use this to store state for both the crawler's internals and
        # anything the subclass-specific code needs. The state is stored
-        # after each bucket is processed, after each prefixdir is processed,
+        # after each shareset is processed, after each prefixdir is processed,
        # and after a cycle is complete. The internal keys we use are:
        #  ["version"]: int, always 1
        #  ["last-cycle-finished"]: int, or None if we have not yet finished
hunk ./src/allmydata/storage/crawler.py 202
        #                            are sleeping between cycles, or if we
        #                            have not yet finished any prefixdir since
        #                            a cycle was started
-        #  ["last-complete-bucket"]: str, base32 storage index bucket name
-        #                            of the last bucket to be processed, or
-        #                            None if we are sleeping between cycles
+        #  ["last-complete-bucket"]: str, base32 storage index of the last
+        #                            shareset to be processed, or None if we
+        #                            are sleeping between cycles
        try:
hunk ./src/allmydata/storage/crawler.py 206
-            f = open(self.statefile, "rb")
-            state = pickle.load(f)
-            f.close()
+            state = pickle.loads(self.statefp.getContent())
        except EnvironmentError:
            state = {"version": 1,
                     "last-cycle-finished": None,
hunk ./src/allmydata/storage/crawler.py 242
        else:
            last_complete_prefix = self.prefixes[lcpi]
        self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefile + ".tmp"
-        f = open(tmpfile, "wb")
-        pickle.dump(self.state, f)
-        f.close()
-        fileutil.move_into_place(tmpfile, self.statefile)
+        self.statefp.setContent(pickle.dumps(self.state))

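The statefile handling above moves from explicit tmpfile-plus-rename code to
Twisted's FilePath API. A minimal standalone sketch of the same load/save
pattern (the atomic-replace behaviour of setContent, via a temporary sibling
file, is an assumption that should be checked against the deployed Twisted
version):

    import pickle
    from twisted.python.filepath import FilePath

    statefp = FilePath("lease_checker.state")

    def save_state(state):
        # setContent writes a temporary sibling file and renames it into
        # place, standing in for the removed tmpfile/move_into_place dance
        statefp.setContent(pickle.dumps(state))

    def load_state(default_state):
        try:
            return pickle.loads(statefp.getContent())
        except EnvironmentError:
            # missing or unreadable statefile: start fresh
            return default_state
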
    def startService(self):
        # arrange things to look like we were just sleeping, so
hunk ./src/allmydata/storage/crawler.py 284
        sleep_time = (this_slice / self.allowed_cpu_percentage) - this_slice
        # if the math gets weird, or a timequake happens, don't sleep
        # forever. Note that this means that, while a cycle is running, we
-        # will process at least one bucket every 5 minutes, no matter how
-        # long that bucket takes.
+        # will process at least one shareset every 5 minutes, no matter how
+        # long that shareset takes.
        sleep_time = max(0.0, min(sleep_time, 299))
        if finished_cycle:
            # how long should we sleep between cycles? Don't run faster than
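
The throttle arithmetic is worth a quick worked check; the values here are
just an example, using the default cpu_slice of 1.0 second and a 10% CPU
allowance:

    this_slice = 1.0               # seconds of work actually performed
    allowed_cpu_percentage = 0.10  # target duty cycle
    sleep_time = (this_slice / allowed_cpu_percentage) - this_slice
    assert sleep_time == 9.0       # 1s busy + 9s asleep = 10% CPU
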
hunk ./src/allmydata/storage/crawler.py 315
        for i in range(self.last_complete_prefix_index+1, len(self.prefixes)):
            # if we want to yield earlier, just raise TimeSliceExceeded()
            prefix = self.prefixes[i]
-            prefixdir = os.path.join(self.sharedir, prefix)
-            if i == self.bucket_cache[0]:
-                buckets = self.bucket_cache[1]
+            if i == self.shareset_cache[0]:
+                sharesets = self.shareset_cache[1]
            else:
hunk ./src/allmydata/storage/crawler.py 318
-                try:
-                    buckets = os.listdir(prefixdir)
-                    buckets.sort()
-                except EnvironmentError:
-                    buckets = []
-                self.bucket_cache = (i, buckets)
-            self.process_prefixdir(cycle, prefix, prefixdir,
-                                   buckets, start_slice)
+                sharesets = self.backend.get_sharesets_for_prefix(prefix)
+                self.shareset_cache = (i, sharesets)
+            self.process_prefixdir(cycle, prefix, sharesets, start_slice)
            self.last_complete_prefix_index = i

            now = time.time()
hunk ./src/allmydata/storage/crawler.py 345
        self.finished_cycle(cycle)
        self.save_state()

-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
-        """This gets a list of bucket names (i.e. storage index strings,
+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
+        """
+        This gets a list of shareset names (i.e. storage index strings,
        base32-encoded) in sorted order.

        You can override this if your crawler doesn't care about the actual
hunk ./src/allmydata/storage/crawler.py 352
        shares, for example a crawler which merely keeps track of how many
-        buckets are being managed by this server.
+        sharesets are being managed by this server.

hunk ./src/allmydata/storage/crawler.py 354
-        Subclasses which *do* care about actual bucket should leave this
-        method along, and implement process_bucket() instead.
+        Subclasses which *do* care about the actual sharesets should leave
+        this method alone, and implement process_shareset() instead.
3699 | """ |
---|
3700 | |
---|
3701 | hunk ./src/allmydata/storage/crawler.py 358 |
---|
3702 | - for bucket in buckets: |
---|
3703 | - if bucket <= self.state["last-complete-bucket"]: |
---|
3704 | + for shareset in sharesets: |
---|
3705 | + base32si = shareset.get_storage_index_string() |
---|
3706 | + if base32si <= self.state["last-complete-bucket"]: |
---|
3707 | continue |
---|
3708 | hunk ./src/allmydata/storage/crawler.py 362 |
---|
3709 | - self.process_bucket(cycle, prefix, prefixdir, bucket) |
---|
3710 | - self.state["last-complete-bucket"] = bucket |
---|
3711 | + self.process_shareset(cycle, prefix, shareset) |
---|
3712 | + self.state["last-complete-bucket"] = base32si |
---|
3713 | if time.time() >= start_slice + self.cpu_slice: |
---|
3714 | raise TimeSliceExceeded() |
---|
3715 | |
---|
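The loop above is the heart of the crawler's cooperative scheduling: work
proceeds shareset by shareset until the slice budget is spent, then
TimeSliceExceeded unwinds the stack and the reactor gets control back. The
shape of the pattern, reduced to a standalone sketch (the names and the
trivial handle() stand-in are hypothetical):

    import time

    class TimeSliceExceeded(Exception):
        pass

    def handle(item):
        pass  # real crawlers examine shares and update self.state

    def crawl_slice(items, cpu_slice=1.0):
        # process items until the budget is spent; progress must be
        # persisted elsewhere so an interrupted slice can resume
        start_slice = time.time()
        for item in items:
            handle(item)  # stand-in for process_shareset()
            if time.time() >= start_slice + cpu_slice:
                raise TimeSliceExceeded()
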
hunk ./src/allmydata/storage/crawler.py 370
    # the remaining methods are explicitly for subclasses to implement.

    def started_cycle(self, cycle):
-        """Notify a subclass that the crawler is about to start a cycle.
+        """
+        Notify a subclass that the crawler is about to start a cycle.

        This method is for subclasses to override. No upcall is necessary.
        """
hunk ./src/allmydata/storage/crawler.py 377
        pass

-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        """Examine a single bucket. Subclasses should do whatever they want
+    def process_shareset(self, cycle, prefix, shareset):
+        """
+        Examine a single shareset. Subclasses should do whatever they want
        to do to the shares therein, then update self.state as necessary.

        If the crawler is never interrupted by SIGKILL, this method will be
hunk ./src/allmydata/storage/crawler.py 383
-        called exactly once per share (per cycle). If it *is* interrupted,
+        called exactly once per shareset (per cycle). If it *is* interrupted,
        then the next time the node is started, some amount of work will be
        duplicated, according to when self.save_state() was last called. By
        default, save_state() is called at the end of each timeslice, and
hunk ./src/allmydata/storage/crawler.py 391

        To reduce the chance of duplicate work (i.e. to avoid adding multiple
        records to a database), you can call save_state() at the end of your
-        process_bucket() method. This will reduce the maximum duplicated work
-        to one bucket per SIGKILL. It will also add overhead, probably 1-20ms
-        per bucket (and some disk writes), which will count against your
-        allowed_cpu_percentage, and which may be considerable if
-        process_bucket() runs quickly.
+        process_shareset() method. This will reduce the maximum duplicated
+        work to one shareset per SIGKILL. It will also add overhead, probably
+        1-20ms per shareset (and some disk writes), which will count against
+        your allowed_cpu_percentage, and which may be considerable if
+        process_shareset() runs quickly.

        This method is for subclasses to override. No upcall is necessary.
        """
hunk ./src/allmydata/storage/crawler.py 402
        pass

    def finished_prefix(self, cycle, prefix):
-        """Notify a subclass that the crawler has just finished processing a
-        prefix directory (all buckets with the same two-character/10bit
+        """
+        Notify a subclass that the crawler has just finished processing a
+        prefix directory (all sharesets with the same two-character/10-bit
        prefix). To impose a limit on how much work might be duplicated by a
        SIGKILL that occurs during a timeslice, you can call
        self.save_state() here, but be aware that it may represent a
hunk ./src/allmydata/storage/crawler.py 415
        pass

    def finished_cycle(self, cycle):
-        """Notify subclass that a cycle (one complete traversal of all
+        """
+        Notify subclass that a cycle (one complete traversal of all
        prefixdirs) has just finished. 'cycle' is the number of the cycle
        that just finished. This method should perform summary work and
        update self.state to publish information to status displays.
hunk ./src/allmydata/storage/crawler.py 433
        pass

    def yielding(self, sleep_time):
-        """The crawler is about to sleep for 'sleep_time' seconds. This
+        """
+        The crawler is about to sleep for 'sleep_time' seconds. This
        method is mostly for the convenience of unit tests.

        This method is for subclasses to override. No upcall is necessary.
hunk ./src/allmydata/storage/crawler.py 443


class BucketCountingCrawler(ShareCrawler):
-    """I keep track of how many buckets are being managed by this server.
-    This is equivalent to the number of distributed files and directories for
-    which I am providing storage. The actual number of files+directories in
-    the full grid is probably higher (especially when there are more servers
-    than 'N', the number of generated shares), because some files+directories
-    will have shares on other servers instead of me. Also note that the
-    number of buckets will differ from the number of shares in small grids,
-    when more than one share is placed on a single server.
+    """
+    I keep track of how many sharesets, each corresponding to a storage index,
+    are being managed by this server. This is equivalent to the number of
+    distributed files and directories for which I am providing storage. The
+    actual number of files and directories in the full grid is probably higher
+    (especially when there are more servers than 'N', the number of generated
+    shares), because some files and directories will have shares on other
+    servers instead of me. Also note that the number of sharesets will differ
+    from the number of shares in small grids, when more than one share is
+    placed on a single server.
    """

    minimum_cycle_time = 60*60  # we don't need this more than once an hour
hunk ./src/allmydata/storage/crawler.py 457

-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, backend, statefp, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, backend, statefp)
        self.num_sample_prefixes = num_sample_prefixes

    def add_initial_state(self):
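
A consumer of this crawler reads its results through get_state(); the sketch
below (a hypothetical helper, not part of the patch) shows the access pattern
and the key names used in the hunks that follow:

    def describe_bucket_count(bucket_counter):
        # "last-complete-bucket-count" stays None until one full cycle
        # (all 1024 prefixes) has been counted
        state = bucket_counter.get_state()
        count = state.get("last-complete-bucket-count")
        if count is None:
            return "shareset count not yet available"
        return "this server is managing %d sharesets" % (count,)
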
hunk ./src/allmydata/storage/crawler.py 471
        self.state.setdefault("last-complete-bucket-count", None)
        self.state.setdefault("storage-index-samples", {})

-    def process_prefixdir(self, cycle, prefix, prefixdir, buckets, start_slice):
+    def process_prefixdir(self, cycle, prefix, sharesets, start_slice):
        # we override process_prefixdir() because we don't want to look at
hunk ./src/allmydata/storage/crawler.py 473
-        # the individual buckets. We'll save state after each one. On my
+        # the individual sharesets. We'll save state after each one. On my
        # laptop, a mostly-empty storage server can process about 70
        # prefixdirs in a 1.0s slice.
        if cycle not in self.state["bucket-counts"]:
hunk ./src/allmydata/storage/crawler.py 478
            self.state["bucket-counts"][cycle] = {}
-        self.state["bucket-counts"][cycle][prefix] = len(buckets)
+        self.state["bucket-counts"][cycle][prefix] = len(sharesets)
        if prefix in self.prefixes[:self.num_sample_prefixes]:
hunk ./src/allmydata/storage/crawler.py 480
-            self.state["storage-index-samples"][prefix] = (cycle, buckets)
+            self.state["storage-index-samples"][prefix] = (cycle, sharesets)

    def finished_cycle(self, cycle):
        last_counts = self.state["bucket-counts"].get(cycle, [])
hunk ./src/allmydata/storage/crawler.py 486
        if len(last_counts) == len(self.prefixes):
            # great, we have a whole cycle.
-            num_buckets = sum(last_counts.values())
-            self.state["last-complete-bucket-count"] = num_buckets
+            num_sharesets = sum(last_counts.values())
+            self.state["last-complete-bucket-count"] = num_sharesets
            # get rid of old counts
            for old_cycle in list(self.state["bucket-counts"].keys()):
                if old_cycle != cycle:
hunk ./src/allmydata/storage/crawler.py 494
                    del self.state["bucket-counts"][old_cycle]
        # get rid of old samples too
        for prefix in list(self.state["storage-index-samples"].keys()):
-            old_cycle,buckets = self.state["storage-index-samples"][prefix]
+            old_cycle, storage_indices = self.state["storage-index-samples"][prefix]
            if old_cycle != cycle:
                del self.state["storage-index-samples"][prefix]
hunk ./src/allmydata/storage/crawler.py 497
-
hunk ./src/allmydata/storage/expirer.py 1
-import time, os, pickle, struct
+
+import time, pickle, struct
+from twisted.python import log as twlog
+
from allmydata.storage.crawler import ShareCrawler
hunk ./src/allmydata/storage/expirer.py 6
-from allmydata.storage.shares import get_share_file
-from allmydata.storage.common import UnknownMutableContainerVersionError, \
+from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
     UnknownImmutableContainerVersionError
hunk ./src/allmydata/storage/expirer.py 8
-from twisted.python import log as twlog
+

class LeaseCheckingCrawler(ShareCrawler):
    """I examine the leases on all shares, determining which are still valid
hunk ./src/allmydata/storage/expirer.py 17
    removed.

    I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:

    Space recovered during this cycle-so-far:
     actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 21
-      num-buckets, num-shares, sum of share sizes, real disk usage
+      num-storage-indices, num-shares, sum of share sizes, real disk usage
      ('real disk usage' means we use stat(fn).st_blocks*512 and include any
       space used by the directory)
     what it would have been with the original lease expiration time
hunk ./src/allmydata/storage/expirer.py 32

    Space recovered during the last 10 cycles  <-- saved in separate pickle

-    Shares/buckets examined:
+    Shares/storage-indices examined:
     this cycle-so-far
     prediction of rest of cycle
     during last 10 cycles  <-- separate pickle
hunk ./src/allmydata/storage/expirer.py 42
    Histogram of leases-per-share:
     this-cycle-to-date
     last 10 cycles  <-- separate pickle
-    Histogram of lease ages, buckets = 1day
+    Histogram of lease ages, storage-indices over 1 day
     cycle-to-date
     last 10 cycles  <-- separate pickle

hunk ./src/allmydata/storage/expirer.py 53
    slow_start = 360  # wait 6 minutes after startup
    minimum_cycle_time = 12*60*60  # not more than twice per day

-    def __init__(self, server, statefile, historyfile,
-                 expiration_enabled, mode,
-                 override_lease_duration, # used if expiration_mode=="age"
-                 cutoff_date, # used if expiration_mode=="cutoff-date"
-                 sharetypes):
-        self.historyfile = historyfile
-        self.expiration_enabled = expiration_enabled
-        self.mode = mode
+    def __init__(self, backend, statefp, historyfp, expiration_policy):
+        # ShareCrawler.__init__ will call add_initial_state, so self.historyfp has to be set first.
+        self.historyfp = historyfp
+        ShareCrawler.__init__(self, backend, statefp)
+
+        self.expiration_enabled = expiration_policy['enabled']
+        self.mode = expiration_policy['mode']
        self.override_lease_duration = None
        self.cutoff_date = None
        if self.mode == "age":
hunk ./src/allmydata/storage/expirer.py 63
-            assert isinstance(override_lease_duration, (int, type(None)))
-            self.override_lease_duration = override_lease_duration # seconds
+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
        elif self.mode == "cutoff-date":
hunk ./src/allmydata/storage/expirer.py 66
-            assert isinstance(cutoff_date, int) # seconds-since-epoch
-            assert cutoff_date is not None
-            self.cutoff_date = cutoff_date
+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
+            self.cutoff_date = expiration_policy['cutoff_date']
        else:
hunk ./src/allmydata/storage/expirer.py 69
-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
-        self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
+        self.sharetypes_to_expire = expiration_policy['sharetypes']

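The expiration_policy dict consumed by this constructor gathers what used to
be five separate arguments. An example policy (field names taken from the
hunks above; the timestamp value is illustrative):

    # expire leases granted or renewed before 2011-09-01 00:00 UTC;
    # 'cutoff_date' must be an int, in seconds since the epoch
    expiration_policy = {
        'enabled': True,
        'mode': 'cutoff-date',
        'override_lease_duration': None,
        'cutoff_date': 1314835200,
        'sharetypes': ('mutable', 'immutable'),
    }
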
    def add_initial_state(self):
        # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/expirer.py 84
            self.state["cycle-to-date"].setdefault(k, so_far[k])

        # initialize history
-        if not os.path.exists(self.historyfile):
+        if not self.historyfp.exists():
            history = {}  # cyclenum -> dict
hunk ./src/allmydata/storage/expirer.py 86
-            f = open(self.historyfile, "wb")
-            pickle.dump(history, f)
-            f.close()
+            self.historyfp.setContent(pickle.dumps(history))

    def create_empty_cycle_dict(self):
        recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/expirer.py 99

    def create_empty_recovered_dict(self):
        recovered = {}
+        # "buckets" is ambiguous; here it means the number of sharesets (one per storage index per server)
        for a in ("actual", "original", "configured", "examined"):
            for b in ("buckets", "shares", "sharebytes", "diskbytes"):
                recovered[a+"-"+b] = 0
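
The two nested loops above produce sixteen counter keys; a quick standalone
check of the resulting names:

    recovered = {}
    for a in ("actual", "original", "configured", "examined"):
        for b in ("buckets", "shares", "sharebytes", "diskbytes"):
            recovered[a+"-"+b] = 0
    # 4 x 4 = 16 keys, e.g. "examined-buckets", "actual-diskbytes",
    # "configured-sharebytes", "original-shares"
    assert len(recovered) == 16
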
hunk ./src/allmydata/storage/expirer.py 110
    def started_cycle(self, cycle):
        self.state["cycle-to-date"] = self.create_empty_cycle_dict()

-    def stat(self, fn):
-        return os.stat(fn)
-
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        bucketdir = os.path.join(prefixdir, storage_index_b32)
-        s = self.stat(bucketdir)
+    def process_storage_index(self, cycle, prefix, container):
        would_keep_shares = []
        wks = None
hunk ./src/allmydata/storage/expirer.py 113
+        sharetype = None

hunk ./src/allmydata/storage/expirer.py 115
-        for fn in os.listdir(bucketdir):
-            try:
-                shnum = int(fn)
-            except ValueError:
-                continue # non-numeric means not a sharefile
-            sharefile = os.path.join(bucketdir, fn)
+        for share in container.get_shares():
+            sharetype = share.sharetype
            try:
hunk ./src/allmydata/storage/expirer.py 118
-                wks = self.process_share(sharefile)
+                wks = self.process_share(share)
            except (UnknownMutableContainerVersionError,
                    UnknownImmutableContainerVersionError,
                    struct.error):
hunk ./src/allmydata/storage/expirer.py 122
-                twlog.msg("lease-checker error processing %s" % sharefile)
+                twlog.msg("lease-checker error processing %r" % (share,))
                twlog.err()
hunk ./src/allmydata/storage/expirer.py 124
-                which = (storage_index_b32, shnum)
+                which = (si_b2a(share.storageindex), share.get_shnum())
                self.state["cycle-to-date"]["corrupt-shares"].append(which)
                wks = (1, 1, 1, "unknown")
            would_keep_shares.append(wks)
hunk ./src/allmydata/storage/expirer.py 129

-        sharetype = None
+        container_type = None
        if wks:
hunk ./src/allmydata/storage/expirer.py 131
-            # use the last share's sharetype as the buckettype
-            sharetype = wks[3]
+            # use the last share's sharetype as the container type
+            container_type = wks[3]
        rec = self.state["cycle-to-date"]["space-recovered"]
        self.increment(rec, "examined-buckets", 1)
        if sharetype:
hunk ./src/allmydata/storage/expirer.py 136
-            self.increment(rec, "examined-buckets-"+sharetype, 1)
+            self.increment(rec, "examined-buckets-"+container_type, 1)
+
+        container_diskbytes = container.get_overhead()

hunk ./src/allmydata/storage/expirer.py 140
-        try:
-            bucket_diskbytes = s.st_blocks * 512
-        except AttributeError:
-            bucket_diskbytes = 0 # no stat().st_blocks on windows
        if sum([wks[0] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 141
-            self.increment_bucketspace("original", bucket_diskbytes, sharetype)
+            self.increment_container_space("original", container_diskbytes, sharetype)
        if sum([wks[1] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 143
-            self.increment_bucketspace("configured", bucket_diskbytes, sharetype)
+            self.increment_container_space("configured", container_diskbytes, sharetype)
        if sum([wks[2] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 145
-            self.increment_bucketspace("actual", bucket_diskbytes, sharetype)
+            self.increment_container_space("actual", container_diskbytes, sharetype)

hunk ./src/allmydata/storage/expirer.py 147
-    def process_share(self, sharefilename):
-        # first, find out what kind of a share it is
-        sf = get_share_file(sharefilename)
-        sharetype = sf.sharetype
+    def process_share(self, share):
+        sharetype = share.sharetype
        now = time.time()
hunk ./src/allmydata/storage/expirer.py 150
-        s = self.stat(sharefilename)
+        sharebytes = share.get_size()
+        diskbytes = share.get_used_space()

        num_leases = 0
        num_valid_leases_original = 0
hunk ./src/allmydata/storage/expirer.py 158
        num_valid_leases_configured = 0
        expired_leases_configured = []

-        for li in sf.get_leases():
+        for li in share.get_leases():
            num_leases += 1
            original_expiration_time = li.get_expiration_time()
            grant_renew_time = li.get_grant_renew_time_time()
hunk ./src/allmydata/storage/expirer.py 171

            # expired-or-not according to our configured age limit
            expired = False
-            if self.mode == "age":
-                age_limit = original_expiration_time
-                if self.override_lease_duration is not None:
-                    age_limit = self.override_lease_duration
-                if age > age_limit:
-                    expired = True
-            else:
-                assert self.mode == "cutoff-date"
-                if grant_renew_time < self.cutoff_date:
-                    expired = True
-            if sharetype not in self.sharetypes_to_expire:
-                expired = False
+            if sharetype in self.sharetypes_to_expire:
+                if self.mode == "age":
+                    age_limit = original_expiration_time
+                    if self.override_lease_duration is not None:
+                        age_limit = self.override_lease_duration
+                    if age > age_limit:
+                        expired = True
+                else:
+                    assert self.mode == "cutoff-date"
+                    if grant_renew_time < self.cutoff_date:
+                        expired = True

            if expired:
                expired_leases_configured.append(li)
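The restructured expiry test above is self-contained enough to extract. A
standalone rendering of the same decision (the function name and parameter
list are hypothetical; the logic mirrors the hunk):

    def lease_is_expired(sharetype, sharetypes_to_expire, mode,
                         age, original_expiration_time,
                         override_lease_duration,
                         grant_renew_time, cutoff_date):
        # only the configured share types are eligible for expiry at all
        if sharetype not in sharetypes_to_expire:
            return False
        if mode == "age":
            age_limit = original_expiration_time
            if override_lease_duration is not None:
                age_limit = override_lease_duration
            return age > age_limit
        else:
            assert mode == "cutoff-date"
            return grant_renew_time < cutoff_date
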
hunk ./src/allmydata/storage/expirer.py 190

        so_far = self.state["cycle-to-date"]
        self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
-        self.increment_space("examined", s, sharetype)
+        self.increment_space("examined", sharebytes, diskbytes, sharetype)

        would_keep_share = [1, 1, 1, sharetype]

hunk ./src/allmydata/storage/expirer.py 196
        if self.expiration_enabled:
            for li in expired_leases_configured:
-                sf.cancel_lease(li.cancel_secret)
+                share.cancel_lease(li.cancel_secret)

        if num_valid_leases_original == 0:
            would_keep_share[0] = 0
hunk ./src/allmydata/storage/expirer.py 200
-            self.increment_space("original", s, sharetype)
+            self.increment_space("original", sharebytes, diskbytes, sharetype)

        if num_valid_leases_configured == 0:
            would_keep_share[1] = 0
hunk ./src/allmydata/storage/expirer.py 204
-            self.increment_space("configured", s, sharetype)
+            self.increment_space("configured", sharebytes, diskbytes, sharetype)
            if self.expiration_enabled:
                would_keep_share[2] = 0
hunk ./src/allmydata/storage/expirer.py 207
-                self.increment_space("actual", s, sharetype)
+                self.increment_space("actual", sharebytes, diskbytes, sharetype)

        return would_keep_share

hunk ./src/allmydata/storage/expirer.py 211
-    def increment_space(self, a, s, sharetype):
-        sharebytes = s.st_size
-        try:
-            # note that stat(2) says that st_blocks is 512 bytes, and that
-            # st_blksize is "optimal file sys I/O ops blocksize", which is
-            # independent of the block-size that st_blocks uses.
-            diskbytes = s.st_blocks * 512
-        except AttributeError:
-            # the docs say that st_blocks is only on linux. I also see it on
-            # MacOS. But it isn't available on windows.
-            diskbytes = sharebytes
+    def increment_space(self, a, sharebytes, diskbytes, sharetype):
        so_far_sr = self.state["cycle-to-date"]["space-recovered"]
        self.increment(so_far_sr, a+"-shares", 1)
        self.increment(so_far_sr, a+"-sharebytes", sharebytes)
hunk ./src/allmydata/storage/expirer.py 221
            self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
            self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)

-    def increment_bucketspace(self, a, bucket_diskbytes, sharetype):
+    def increment_container_space(self, a, container_diskbytes, container_type):
        rec = self.state["cycle-to-date"]["space-recovered"]
hunk ./src/allmydata/storage/expirer.py 223
-        self.increment(rec, a+"-diskbytes", bucket_diskbytes)
+        self.increment(rec, a+"-diskbytes", container_diskbytes)
        self.increment(rec, a+"-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 225
-        if sharetype:
-            self.increment(rec, a+"-diskbytes-"+sharetype, bucket_diskbytes)
-            self.increment(rec, a+"-buckets-"+sharetype, 1)
+        if container_type:
+            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
+            self.increment(rec, a+"-buckets-"+container_type, 1)

    def increment(self, d, k, delta=1):
        if k not in d:
hunk ./src/allmydata/storage/expirer.py 281
        # copy() needs to become a deepcopy
        h["space-recovered"] = s["space-recovered"].copy()

-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
        history[cycle] = h
        while len(history) > 10:
            oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 286
            del history[oldcycles[0]]
-        f = open(self.historyfile, "wb")
-        pickle.dump(history, f)
-        f.close()
+        self.historyfp.setContent(pickle.dumps(history))

    def get_state(self):
        """In addition to the crawler state described in
hunk ./src/allmydata/storage/expirer.py 355
        progress = self.get_progress()

        state = ShareCrawler.get_state(self)  # does a shallow copy
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
4216 | state["history"] = history |
---|
4217 | |
---|
4218 | if not progress["cycle-in-progress"]: |
---|
4219 | hunk ./src/allmydata/storage/lease.py 3 |
---|
4220 | import struct, time |
---|
4221 | |
---|
4222 | + |
---|
4223 | +class NonExistentLeaseError(Exception): |
---|
4224 | + pass |
---|
4225 | + |
---|
4226 | class LeaseInfo: |
---|
4227 | def __init__(self, owner_num=None, renew_secret=None, cancel_secret=None, |
---|
4228 | expiration_time=None, nodeid=None): |
---|
4229 | hunk ./src/allmydata/storage/lease.py 21 |
---|
4230 | |
---|
4231 | def get_expiration_time(self): |
---|
4232 | return self.expiration_time |
---|
4233 | + |
---|
4234 | def get_grant_renew_time_time(self): |
---|
4235 | # hack, based upon fixed 31day expiration period |
---|
4236 | return self.expiration_time - 31*24*60*60 |
---|
4237 | hunk ./src/allmydata/storage/lease.py 25 |
---|
4238 | + |
---|
4239 | def get_age(self): |
---|
4240 | return time.time() - self.get_grant_renew_time_time() |
---|
4241 | |
---|
4242 | hunk ./src/allmydata/storage/lease.py 36 |
---|
4243 | self.expiration_time) = struct.unpack(">L32s32sL", data) |
---|
4244 | self.nodeid = None |
---|
4245 | return self |
---|
4246 | + |
---|
4247 | def to_immutable_data(self): |
---|
4248 | return struct.pack(">L32s32sL", |
---|
4249 | self.owner_num, |
---|
4250 | hunk ./src/allmydata/storage/lease.py 49 |
---|
4251 | int(self.expiration_time), |
---|
4252 | self.renew_secret, self.cancel_secret, |
---|
4253 | self.nodeid) |
---|
4254 | + |
---|
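The fixed-width wire format makes the immutable lease record size easy to
check; this sketch uses only the struct format string that appears in the
hunks above (the mutable-lease format is not shown in this excerpt, so it is
not computed here):

    import struct

    # immutable lease record: owner_num (L), renew_secret (32s),
    # cancel_secret (32s), expiration_time (L)
    assert struct.calcsize(">L32s32sL") == 72
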
    def from_mutable_data(self, data):
        (self.owner_num,
         self.expiration_time,
hunk ./src/allmydata/storage/server.py 1
-import os, re, weakref, struct, time
+import weakref, time

from foolscap.api import Referenceable
from twisted.application import service
hunk ./src/allmydata/storage/server.py 7

from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
-from allmydata.util import fileutil, idlib, log, time_format
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IStorageBackend
+from allmydata.util.assertutil import precondition
+from allmydata.util import idlib, log
import allmydata  # for __full_version__

hunk ./src/allmydata/storage/server.py 12
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_a2b, si_b2a
+[si_a2b]  # hush pyflakes
from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/server.py 15
-from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
-     create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
from allmydata.storage.expirer import LeaseCheckingCrawler
hunk ./src/allmydata/storage/server.py 16
-
-# storage/
-# storage/shares/incoming
-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-# $SHARENUM matches this regex:
-NUM_RE=re.compile("^[0-9]+$")
-
+from allmydata.storage.crawler import BucketCountingCrawler


class StorageServer(service.MultiService, Referenceable):
hunk ./src/allmydata/storage/server.py 21
    implements(RIStorageServer, IStatsProducer)
+
    name = 'storage'
    LeaseCheckerClass = LeaseCheckingCrawler
hunk ./src/allmydata/storage/server.py 24
+    DEFAULT_EXPIRATION_POLICY = {
+        'enabled': False,
+        'mode': 'age',
+        'override_lease_duration': None,
+        'cutoff_date': None,
+        'sharetypes': ('mutable', 'immutable'),
+    }

hunk ./src/allmydata/storage/server.py 32
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, serverid, backend, statedir,
                 stats_provider=None,
hunk ./src/allmydata/storage/server.py 34
-                 expiration_enabled=False,
-                 expiration_mode="age",
-                 expiration_override_lease_duration=None,
-                 expiration_cutoff_date=None,
-                 expiration_sharetypes=("mutable", "immutable")):
+                 expiration_policy=None):
        service.MultiService.__init__(self)
hunk ./src/allmydata/storage/server.py 36
-        assert isinstance(nodeid, str)
-        assert len(nodeid) == 20
-        self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
+        precondition(IStorageBackend.providedBy(backend), backend)
+        precondition(isinstance(serverid, str), serverid)
+        precondition(len(serverid) == 20, serverid)
+
+        self._serverid = serverid
        self.stats_provider = stats_provider
        if self.stats_provider:
            self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 44
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
        self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 45
+        self.backend = backend
+        self.backend.setServiceParent(self)
+        self._statedir = statedir
        log.msg("StorageServer created", facility="tahoe.storage")

hunk ./src/allmydata/storage/server.py 50
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
        self.latencies = {"allocate": [],  # immutable
                          "write": [],
                          "close": [],
hunk ./src/allmydata/storage/server.py 61
                          "renew": [],
                          "cancel": [],
                          }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
+        self._setup_bucket_counter()
+        self._setup_lease_checker(expiration_policy or self.DEFAULT_EXPIRATION_POLICY)

    def __repr__(self):
hunk ./src/allmydata/storage/server.py 65
-        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self._serverid),)

hunk ./src/allmydata/storage/server.py 67
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
+    def _setup_bucket_counter(self):
+        statefp = self._statedir.child("bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(self.backend, statefp)
        self.bucket_counter.setServiceParent(self)

hunk ./src/allmydata/storage/server.py 72
+    def _setup_lease_checker(self, expiration_policy):
+        statefp = self._statedir.child("lease_checker.state")
+        historyfp = self._statedir.child("lease_checker.history")
+        self.lease_checker = self.LeaseCheckerClass(self.backend, statefp, historyfp, expiration_policy)
+        self.lease_checker.setServiceParent(self)
+
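Constructing a StorageServer under the new signature means supplying a
backend object and a state directory. A hedged sketch -- the DiskBackend
module path and its constructor arguments are assumptions based on this
pluggable-backends bundle, not verified against the final module layout:

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("storage")
    backend = DiskBackend(storedir)    # assumed constructor
    server = StorageServer("\x00"*20,  # example 20-byte serverid
                           backend,
                           storedir.child("state"))
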
    def count(self, name, delta=1):
        if self.stats_provider:
            self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 92
        """Return a dict, indexed by category, that contains a dict of
        latency numbers for each category. If there are sufficient samples
        for unambiguous interpretation, each dict will contain the
-        following keys: mean, 01_0_percentile, 10_0_percentile,
+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
        50_0_percentile (median), 90_0_percentile, 95_0_percentile,
        99_0_percentile, 99_9_percentile. If there are insufficient
        samples for a given percentile to be interpreted unambiguously
hunk ./src/allmydata/storage/server.py 114
        else:
            stats["mean"] = None

-        orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
-                         (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
-                         (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
+        orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
+                         (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
+                         (0.01, "01_0_percentile", 100), (0.99, "99_0_percentile", 100),\
                         (0.999, "99_9_percentile", 1000)]

        for percentile, percentilestring, minnumtoobserve in orderstatlist:
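Each tuple pairs a percentile with the minimum number of samples required
before reporting it: a 99.9th percentile computed from fewer than 1000
observations would just be the maximum. The loop body is not shown in this
hunk, but the conventional order-statistic lookup it implies looks like the
following sketch (an assumption, not the verbatim method body):

    def percentile_or_none(samples, percentile, minnumtoobserve):
        # samples must be sorted ascending
        if len(samples) < minnumtoobserve:
            return None  # too few samples to be unambiguous
        return samples[int(percentile * len(samples))]

    sorted_latencies = [0.01, 0.02, 0.02, 0.03, 0.05,
                        0.05, 0.06, 0.08, 0.09, 0.5]
    assert percentile_or_none(sorted_latencies, 0.5, 10) == 0.05
    assert percentile_or_none(sorted_latencies, 0.999, 1000) is None
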
hunk ./src/allmydata/storage/server.py 133
            kwargs["facility"] = "tahoe.storage"
        return log.msg(*args, **kwargs)

-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
+    def get_serverid(self):
+        return self._serverid

    def get_stats(self):
        # remember: RIStatsProvider requires that our return dict
hunk ./src/allmydata/storage/server.py 138
-        # contains numeric values.
+        # contains numeric or None values.
        stats = { 'storage_server.allocated': self.allocated_size(), }
hunk ./src/allmydata/storage/server.py 140
-        stats['storage_server.reserved_space'] = self.reserved_space
        for category,ld in self.get_latencies().items():
            for name,v in ld.items():
                stats['storage_server.latencies.%s.%s' % (category, name)] = v
hunk ./src/allmydata/storage/server.py 144

-        try:
-            disk = fileutil.get_disk_stats(self.sharedir, self.reserved_space)
-            writeable = disk['avail'] > 0
-
-            # spacetime predictors should use disk_avail / (d(disk_used)/dt)
-            stats['storage_server.disk_total'] = disk['total']
-            stats['storage_server.disk_used'] = disk['used']
-            stats['storage_server.disk_free_for_root'] = disk['free_for_root']
-            stats['storage_server.disk_free_for_nonroot'] = disk['free_for_nonroot']
-            stats['storage_server.disk_avail'] = disk['avail']
-        except AttributeError:
-            writeable = True
-        except EnvironmentError:
-            log.msg("OS call to get disk statistics failed", level=log.UNUSUAL)
-            writeable = False
-
-        if self.readonly_storage:
-            stats['storage_server.disk_avail'] = 0
-            writeable = False
+        self.backend.fill_in_space_stats(stats)

hunk ./src/allmydata/storage/server.py 146
-        stats['storage_server.accepting_immutable_shares'] = int(writeable)
        s = self.bucket_counter.get_state()
        bucket_count = s.get("last-complete-bucket-count")
        if bucket_count:
hunk ./src/allmydata/storage/server.py 153
        return stats

    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.sharedir, self.reserved_space)
+        return self.backend.get_available_space()

    def allocated_size(self):
        space = 0
hunk ./src/allmydata/storage/server.py 162
        return space

    def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
        if remaining_space is None:
            # We're on a platform that has no API to get disk stats.
            remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 178
            }
        return version

-    def remote_allocate_buckets(self, storage_index,
+    def remote_allocate_buckets(self, storageindex,
                                renew_secret, cancel_secret,
                                sharenums, allocated_size,
                                canary, owner_num=0):
hunk ./src/allmydata/storage/server.py 182
+        # cancel_secret is no longer used.
        # owner_num is not for clients to set, but rather it should be
hunk ./src/allmydata/storage/server.py 184
-        # curried into the PersonalStorageServer instance that is dedicated
-        # to a particular owner.
+        # curried into a StorageServer instance dedicated to a particular
+        # owner.
        start = time.time()
        self.count("allocate")
hunk ./src/allmydata/storage/server.py 188
-        alreadygot = set()
        bucketwriters = {}  # k: shnum, v: BucketWriter
hunk ./src/allmydata/storage/server.py 189
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)

hunk ./src/allmydata/storage/server.py 190
+        si_s = si_b2a(storageindex)
        log.msg("storage: allocate_buckets %s" % si_s)

hunk ./src/allmydata/storage/server.py 193
-        # in this implementation, the lease information (including secrets)
-        # goes into the share files themselves. It could also be put into a
-        # separate database. Note that the lease should not be added until
-        # the BucketWriter has been closed.
+        # Note that the lease should not be added until the BucketWriter
+        # has been closed.
        expire_time = time.time() + 31*24*60*60
hunk ./src/allmydata/storage/server.py 196
-        lease_info = LeaseInfo(owner_num,
-                               renew_secret, cancel_secret,
-                               expire_time, self.my_nodeid)
+        lease_info = LeaseInfo(owner_num, renew_secret,
+                               expiration_time=expire_time, nodeid=self._serverid)
4553 | |
---|
4554 | max_space_per_bucket = allocated_size |
---|
4555 | |
---|
4556 | hunk ./src/allmydata/storage/server.py 201 |
---|
4557 | - remaining_space = self.get_available_space() |
---|
4558 | + remaining_space = self.backend.get_available_space() |
---|
4559 | limited = remaining_space is not None |
---|
4560 | if limited: |
---|
4561 | hunk ./src/allmydata/storage/server.py 204 |
---|
4562 | - # this is a bit conservative, since some of this allocated_size() |
---|
4563 | - # has already been written to disk, where it will show up in |
---|
4564 | + # This is a bit conservative, since some of this allocated_size() |
---|
4565 | + # has already been written to the backend, where it will show up in |
---|
4566 | # get_available_space. |
---|
4567 | remaining_space -= self.allocated_size() |
---|
4568 | hunk ./src/allmydata/storage/server.py 208 |
---|
4569 | - # self.readonly_storage causes remaining_space <= 0 |
---|
4570 | + # If the backend is read-only, remaining_space will be <= 0. |
---|
4571 | + |
---|
4572 | + shareset = self.backend.get_shareset(storageindex) |
---|
4573 | |
---|
4574 | hunk ./src/allmydata/storage/server.py 212 |
---|
4575 | - # fill alreadygot with all shares that we have, not just the ones |
---|
4576 | + # Fill alreadygot with all shares that we have, not just the ones |
---|
4577 | # they asked about: this will save them a lot of work. Add or update |
---|
4578 | # leases for all of them: if they want us to hold shares for this |
---|
4579 | hunk ./src/allmydata/storage/server.py 215 |
---|
4580 | - # file, they'll want us to hold leases for this file. |
---|
4581 | - for (shnum, fn) in self._get_bucket_shares(storage_index): |
---|
4582 | - alreadygot.add(shnum) |
---|
4583 | - sf = ShareFile(fn) |
---|
4584 | - sf.add_or_renew_lease(lease_info) |
---|
4585 | + # file, they'll want us to hold leases for all of its shares. |
---|
4586 | + # |
---|
4587 | + # XXX should we be making the assumption here that lease info is |
---|
4588 | + # duplicated in all shares? |
---|
4589 | + alreadygot = set() |
---|
4590 | + for share in shareset.get_shares(): |
---|
4591 | + share.add_or_renew_lease(lease_info) |
---|
4592 | + alreadygot.add(share.shnum) |
---|
4593 | |
---|
4594 | hunk ./src/allmydata/storage/server.py 224 |
---|
4595 | - for shnum in sharenums: |
---|
4596 | - incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum) |
---|
4597 | - finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum) |
---|
4598 | - if os.path.exists(finalhome): |
---|
4599 | - # great! we already have it. easy. |
---|
4600 | - pass |
---|
4601 | - elif os.path.exists(incominghome): |
---|
4602 | + for shnum in sharenums - alreadygot: |
---|
4603 | + if shareset.has_incoming(shnum): |
---|
4604 | # Note that we don't create BucketWriters for shnums that |
---|
4605 | # have a partial share (in incoming/), so if a second upload |
---|
4606 | # occurs while the first is still in progress, the second |
---|
4607 | hunk ./src/allmydata/storage/server.py 232 |
---|
4608 | # uploader will use different storage servers. |
---|
4609 | pass |
---|
4610 | elif (not limited) or (remaining_space >= max_space_per_bucket): |
---|
4611 | - # ok! we need to create the new share file. |
---|
4612 | - bw = BucketWriter(self, incominghome, finalhome, |
---|
4613 | - max_space_per_bucket, lease_info, canary) |
---|
4614 | - if self.no_storage: |
---|
4615 | - bw.throw_out_all_data = True |
---|
4616 | + bw = shareset.make_bucket_writer(self, shnum, max_space_per_bucket, |
---|
4617 | + lease_info, canary) |
---|
4618 | bucketwriters[shnum] = bw |
---|
4619 | self._active_writers[bw] = 1 |
---|
4620 | if limited: |
---|
4621 | hunk ./src/allmydata/storage/server.py 239 |
---|
4622 | remaining_space -= max_space_per_bucket |
---|
4623 | else: |
---|
4624 | - # bummer! not enough space to accept this bucket |
---|
4625 | + # Bummer! Not enough space to accept this share. |
---|
4626 | pass |
---|
4627 | |
---|
4628 | hunk ./src/allmydata/storage/server.py 242 |
---|
4629 | - if bucketwriters: |
---|
4630 | - fileutil.make_dirs(os.path.join(self.sharedir, si_dir)) |
---|
4631 | - |
---|
4632 | self.add_latency("allocate", time.time() - start) |
---|
4633 | return alreadygot, bucketwriters |
---|
4634 | |
---|
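
Taken together, the rewritten remote_allocate_buckets() reduces to: look up the shareset, renew the (cancel-secret-free) lease on every share already present, and create writers only for requested share numbers that are neither present nor mid-upload. A condensed sketch of that control flow, with the canary handling elided (names follow the hunks above):

    def allocate_sketch(server, storageindex, sharenums, allocated_size,
                        lease_info, canary):
        # 'sharenums' is a set, as in the real method.
        shareset = server.backend.get_shareset(storageindex)
        remaining_space = server.backend.get_available_space()
        limited = remaining_space is not None

        alreadygot = set()
        for share in shareset.get_shares():
            share.add_or_renew_lease(lease_info)   # lease info lives in each share
            alreadygot.add(share.shnum)

        bucketwriters = {}
        for shnum in sharenums - alreadygot:
            if shareset.has_incoming(shnum):
                continue   # partial upload in progress; another server will take it
            if (not limited) or (remaining_space >= allocated_size):
                bw = shareset.make_bucket_writer(server, shnum, allocated_size,
                                                 lease_info, canary)
                bucketwriters[shnum] = bw
                if limited:
                    remaining_space -= allocated_size
        return alreadygot, bucketwriters
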
4635 | hunk ./src/allmydata/storage/server.py 245 |
---|
4636 | - def _iter_share_files(self, storage_index): |
---|
4637 | - for shnum, filename in self._get_bucket_shares(storage_index): |
---|
4638 | - f = open(filename, 'rb') |
---|
4639 | - header = f.read(32) |
---|
4640 | - f.close() |
---|
4641 | - if header[:32] == MutableShareFile.MAGIC: |
---|
4642 | - sf = MutableShareFile(filename, self) |
---|
4643 | - # note: if the share has been migrated, the renew_lease() |
---|
4644 | - # call will throw an exception, with information to help the |
---|
4645 | - # client update the lease. |
---|
4646 | - elif header[:4] == struct.pack(">L", 1): |
---|
4647 | - sf = ShareFile(filename) |
---|
4648 | - else: |
---|
4649 | - continue # non-sharefile |
---|
4650 | - yield sf |
---|
4651 | - |
---|
4652 | - def remote_add_lease(self, storage_index, renew_secret, cancel_secret, |
---|
4653 | + def remote_add_lease(self, storageindex, renew_secret, cancel_secret, |
---|
4654 | owner_num=1): |
---|
4655 | hunk ./src/allmydata/storage/server.py 247 |
---|
4656 | + # cancel_secret is no longer used. |
---|
4657 | start = time.time() |
---|
4658 | self.count("add-lease") |
---|
4659 | new_expire_time = time.time() + 31*24*60*60 |
---|
4660 | hunk ./src/allmydata/storage/server.py 251 |
---|
4661 | - lease_info = LeaseInfo(owner_num, |
---|
4662 | - renew_secret, cancel_secret, |
---|
4663 | - new_expire_time, self.my_nodeid) |
---|
4664 | - for sf in self._iter_share_files(storage_index): |
---|
4665 | - sf.add_or_renew_lease(lease_info) |
---|
4666 | - self.add_latency("add-lease", time.time() - start) |
---|
4667 | - return None |
---|
4668 | + lease_info = LeaseInfo(owner_num, renew_secret, |
---|
4669 | + new_expire_time, self._serverid) |
---|
4670 | |
---|
4671 | hunk ./src/allmydata/storage/server.py 254 |
---|
4672 | - def remote_renew_lease(self, storage_index, renew_secret): |
---|
4673 | + try: |
---|
4674 | + self.backend.add_or_renew_lease(lease_info) |
---|
4675 | + finally: |
---|
4676 | + self.add_latency("add-lease", time.time() - start) |
---|
4677 | + |
---|
4678 | + def remote_renew_lease(self, storageindex, renew_secret): |
---|
4679 | start = time.time() |
---|
4680 | self.count("renew") |
---|
4681 | hunk ./src/allmydata/storage/server.py 262 |
---|
4682 | - new_expire_time = time.time() + 31*24*60*60 |
---|
4683 | - found_buckets = False |
---|
4684 | - for sf in self._iter_share_files(storage_index): |
---|
4685 | - found_buckets = True |
---|
4686 | - sf.renew_lease(renew_secret, new_expire_time) |
---|
4687 | - self.add_latency("renew", time.time() - start) |
---|
4688 | - if not found_buckets: |
---|
4689 | - raise IndexError("no such lease to renew") |
---|
4690 | + |
---|
4691 | + try: |
---|
4692 | + shareset = self.backend.get_shareset(storageindex) |
---|
4693 | + new_expiration_time = start + 31*24*60*60 # one month from now |
---|
4694 | + shareset.renew_lease(renew_secret, new_expiration_time) |
---|
4695 | + finally: |
---|
4696 | + self.add_latency("renew", time.time() - start) |
---|
4697 | |
---|
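
remote_add_lease() and remote_renew_lease() now share one accounting shape: take a timestamp, delegate to the backend, and record the latency in a finally: clause so that failed calls are measured too. The skeleton, extracted:

    import time

    def timed_remote_call(server, name, operation):
        # The start/try/finally pattern used by the rewritten remote_* methods.
        start = time.time()
        server.count(name)
        try:
            return operation()
        finally:
            server.add_latency(name, time.time() - start)
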
4698 | def bucket_writer_closed(self, bw, consumed_size): |
---|
4699 | if self.stats_provider: |
---|
4700 | hunk ./src/allmydata/storage/server.py 275 |
---|
4701 | self.stats_provider.count('storage_server.bytes_added', consumed_size) |
---|
4702 | del self._active_writers[bw] |
---|
4703 | |
---|
4704 | - def _get_bucket_shares(self, storage_index): |
---|
4705 | - """Return a list of (shnum, pathname) tuples for files that hold |
---|
4706 | - shares for this storage_index. In each tuple, 'shnum' will always be |
---|
4707 | - the integer form of the last component of 'pathname'.""" |
---|
4708 | - storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index)) |
---|
4709 | - try: |
---|
4710 | - for f in os.listdir(storagedir): |
---|
4711 | - if NUM_RE.match(f): |
---|
4712 | - filename = os.path.join(storagedir, f) |
---|
4713 | - yield (int(f), filename) |
---|
4714 | - except OSError: |
---|
4715 | - # Commonly caused by there being no buckets at all. |
---|
4716 | - pass |
---|
4717 | - |
---|
4718 | - def remote_get_buckets(self, storage_index): |
---|
4719 | + def remote_get_buckets(self, storageindex): |
---|
4720 | start = time.time() |
---|
4721 | self.count("get") |
---|
4722 | hunk ./src/allmydata/storage/server.py 278 |
---|
4723 | - si_s = si_b2a(storage_index) |
---|
4724 | + si_s = si_b2a(storageindex) |
---|
4725 | log.msg("storage: get_buckets %s" % si_s) |
---|
4726 | bucketreaders = {} # k: sharenum, v: BucketReader |
---|
4727 | hunk ./src/allmydata/storage/server.py 281 |
---|
4728 | - for shnum, filename in self._get_bucket_shares(storage_index): |
---|
4729 | - bucketreaders[shnum] = BucketReader(self, filename, |
---|
4730 | - storage_index, shnum) |
---|
4731 | - self.add_latency("get", time.time() - start) |
---|
4732 | - return bucketreaders |
---|
4733 | |
---|
4734 | hunk ./src/allmydata/storage/server.py 282 |
---|
4735 | - def get_leases(self, storage_index): |
---|
4736 | - """Provide an iterator that yields all of the leases attached to this |
---|
4737 | - bucket. Each lease is returned as a LeaseInfo instance. |
---|
4738 | + try: |
---|
4739 | + shareset = self.backend.get_shareset(storageindex) |
---|
4740 | + for share in shareset.get_shares(): |
---|
4741 | + bucketreaders[share.get_shnum()] = shareset.make_bucket_reader(self, share) |
---|
4742 | + return bucketreaders |
---|
4743 | + finally: |
---|
4744 | + self.add_latency("get", time.time() - start) |
---|
4745 | |
---|
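
At this point the server's knowledge of share storage is reduced to a handful of shareset calls. The following zope.interface declaration is a hypothetical consolidation of the operations used in this file, for orientation only; the real IShareSet in interfaces.py may differ in detail:

    from zope.interface import Interface

    class IShareSetSketch(Interface):
        """Hypothetical summary of the shareset operations StorageServer uses."""
        def get_shares():
            """Iterate the Share objects present for this storage index."""
        def has_incoming(shnum):
            """True if a partial upload of share 'shnum' is in progress."""
        def make_bucket_writer(storageserver, shnum, max_space, lease_info, canary):
            """Return a BucketWriter for a new immutable share."""
        def make_bucket_reader(storageserver, share):
            """Return a BucketReader wrapping an existing immutable share."""
        def get_leases():
            """Iterate the LeaseInfo instances for this shareset."""
        def renew_lease(renew_secret, new_expiration_time):
            """Renew a lease (the old code raised IndexError if none matched)."""
        def testv_and_readv_and_writev(storageserver, secrets,
                                       test_and_write_vectors, read_vector,
                                       expiration_time):
            """Apply the mutable test/read/write protocol; see below."""
        def readv(storageserver, wanted_shnums, read_vector):
            """Read vectors from mutable shares."""
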
4746 | hunk ./src/allmydata/storage/server.py 290 |
---|
4747 | - This method is not for client use. |
---|
4748 | + def get_leases(self, storageindex): |
---|
4749 | """ |
---|
4750 | hunk ./src/allmydata/storage/server.py 292 |
---|
4751 | + Provide an iterator that yields all of the leases attached to this |
---|
4752 | + bucket. Each lease is returned as a LeaseInfo instance. |
---|
4753 | |
---|
4754 | hunk ./src/allmydata/storage/server.py 295 |
---|
4755 | - # since all shares get the same lease data, we just grab the leases |
---|
4756 | - # from the first share |
---|
4757 | - try: |
---|
4758 | - shnum, filename = self._get_bucket_shares(storage_index).next() |
---|
4759 | - sf = ShareFile(filename) |
---|
4760 | - return sf.get_leases() |
---|
4761 | - except StopIteration: |
---|
4762 | - return iter([]) |
---|
4763 | + This method is not for client use. XXX do we need it at all? |
---|
4764 | + """ |
---|
4765 | + return self.backend.get_shareset(storageindex).get_leases() |
---|
4766 | |
---|
4767 | hunk ./src/allmydata/storage/server.py 299 |
---|
4768 | - def remote_slot_testv_and_readv_and_writev(self, storage_index, |
---|
4769 | + def remote_slot_testv_and_readv_and_writev(self, storageindex, |
---|
4770 | secrets, |
---|
4771 | test_and_write_vectors, |
---|
4772 | read_vector): |
---|
4773 | hunk ./src/allmydata/storage/server.py 305 |
---|
4774 | start = time.time() |
---|
4775 | self.count("writev") |
---|
4776 | - si_s = si_b2a(storage_index) |
---|
4777 | + si_s = si_b2a(storageindex) |
---|
4778 | log.msg("storage: slot_writev %s" % si_s) |
---|
4779 | hunk ./src/allmydata/storage/server.py 307 |
---|
4780 | - si_dir = storage_index_to_dir(storage_index) |
---|
4781 | - (write_enabler, renew_secret, cancel_secret) = secrets |
---|
4782 | - # shares exist if there is a file for them |
---|
4783 | - bucketdir = os.path.join(self.sharedir, si_dir) |
---|
4784 | - shares = {} |
---|
4785 | - if os.path.isdir(bucketdir): |
---|
4786 | - for sharenum_s in os.listdir(bucketdir): |
---|
4787 | - try: |
---|
4788 | - sharenum = int(sharenum_s) |
---|
4789 | - except ValueError: |
---|
4790 | - continue |
---|
4791 | - filename = os.path.join(bucketdir, sharenum_s) |
---|
4792 | - msf = MutableShareFile(filename, self) |
---|
4793 | - msf.check_write_enabler(write_enabler, si_s) |
---|
4794 | - shares[sharenum] = msf |
---|
4795 | - # write_enabler is good for all existing shares. |
---|
4796 | - |
---|
4797 | - # Now evaluate test vectors. |
---|
4798 | - testv_is_good = True |
---|
4799 | - for sharenum in test_and_write_vectors: |
---|
4800 | - (testv, datav, new_length) = test_and_write_vectors[sharenum] |
---|
4801 | - if sharenum in shares: |
---|
4802 | - if not shares[sharenum].check_testv(testv): |
---|
4803 | - self.log("testv failed: [%d]: %r" % (sharenum, testv)) |
---|
4804 | - testv_is_good = False |
---|
4805 | - break |
---|
4806 | - else: |
---|
4807 | - # compare the vectors against an empty share, in which all |
---|
4808 | - # reads return empty strings. |
---|
4809 | - if not EmptyShare().check_testv(testv): |
---|
4810 | - self.log("testv failed (empty): [%d] %r" % (sharenum, |
---|
4811 | - testv)) |
---|
4812 | - testv_is_good = False |
---|
4813 | - break |
---|
4814 | - |
---|
4815 | - # now gather the read vectors, before we do any writes |
---|
4816 | - read_data = {} |
---|
4817 | - for sharenum, share in shares.items(): |
---|
4818 | - read_data[sharenum] = share.readv(read_vector) |
---|
4819 | - |
---|
4820 | - ownerid = 1 # TODO |
---|
4821 | - expire_time = time.time() + 31*24*60*60 # one month |
---|
4822 | - lease_info = LeaseInfo(ownerid, |
---|
4823 | - renew_secret, cancel_secret, |
---|
4824 | - expire_time, self.my_nodeid) |
---|
4825 | - |
---|
4826 | - if testv_is_good: |
---|
4827 | - # now apply the write vectors |
---|
4828 | - for sharenum in test_and_write_vectors: |
---|
4829 | - (testv, datav, new_length) = test_and_write_vectors[sharenum] |
---|
4830 | - if new_length == 0: |
---|
4831 | - if sharenum in shares: |
---|
4832 | - shares[sharenum].unlink() |
---|
4833 | - else: |
---|
4834 | - if sharenum not in shares: |
---|
4835 | - # allocate a new share |
---|
4836 | - allocated_size = 2000 # arbitrary, really |
---|
4837 | - share = self._allocate_slot_share(bucketdir, secrets, |
---|
4838 | - sharenum, |
---|
4839 | - allocated_size, |
---|
4840 | - owner_num=0) |
---|
4841 | - shares[sharenum] = share |
---|
4842 | - shares[sharenum].writev(datav, new_length) |
---|
4843 | - # and update the lease |
---|
4844 | - shares[sharenum].add_or_renew_lease(lease_info) |
---|
4845 | - |
---|
4846 | - if new_length == 0: |
---|
4847 | - # delete empty bucket directories |
---|
4848 | - if not os.listdir(bucketdir): |
---|
4849 | - os.rmdir(bucketdir) |
---|
4850 | |
---|
4851 | hunk ./src/allmydata/storage/server.py 308 |
---|
4852 | + try: |
---|
4853 | + shareset = self.backend.get_shareset(storageindex) |
---|
4854 | + expiration_time = start + 31*24*60*60 # one month from now |
---|
4855 | + return shareset.testv_and_readv_and_writev(self, secrets, test_and_write_vectors, |
---|
4856 | + read_vector, expiration_time) |
---|
4857 | + finally: |
---|
4858 | + self.add_latency("writev", time.time() - start) |
---|
4859 | |
---|
4860 | hunk ./src/allmydata/storage/server.py 316 |
---|
4861 | - # all done |
---|
4862 | - self.add_latency("writev", time.time() - start) |
---|
4863 | - return (testv_is_good, read_data) |
---|
4864 | - |
---|
4865 | - def _allocate_slot_share(self, bucketdir, secrets, sharenum, |
---|
4866 | - allocated_size, owner_num=0): |
---|
4867 | - (write_enabler, renew_secret, cancel_secret) = secrets |
---|
4868 | - my_nodeid = self.my_nodeid |
---|
4869 | - fileutil.make_dirs(bucketdir) |
---|
4870 | - filename = os.path.join(bucketdir, "%d" % sharenum) |
---|
4871 | - share = create_mutable_sharefile(filename, my_nodeid, write_enabler, |
---|
4872 | - self) |
---|
4873 | - return share |
---|
4874 | - |
---|
4875 | - def remote_slot_readv(self, storage_index, shares, readv): |
---|
4876 | + def remote_slot_readv(self, storageindex, shares, readv): |
---|
4877 | start = time.time() |
---|
4878 | self.count("readv") |
---|
4879 | hunk ./src/allmydata/storage/server.py 319 |
---|
4880 | - si_s = si_b2a(storage_index) |
---|
4881 | - lp = log.msg("storage: slot_readv %s %s" % (si_s, shares), |
---|
4882 | - facility="tahoe.storage", level=log.OPERATIONAL) |
---|
4883 | - si_dir = storage_index_to_dir(storage_index) |
---|
4884 | - # shares exist if there is a file for them |
---|
4885 | - bucketdir = os.path.join(self.sharedir, si_dir) |
---|
4886 | - if not os.path.isdir(bucketdir): |
---|
4887 | + si_s = si_b2a(storageindex) |
---|
4888 | + log.msg("storage: slot_readv %s %s" % (si_s, shares), |
---|
4889 | + facility="tahoe.storage", level=log.OPERATIONAL) |
---|
4890 | + |
---|
4891 | + try: |
---|
4892 | + shareset = self.backend.get_shareset(storageindex) |
---|
4893 | + return shareset.readv(self, shares, readv) |
---|
4894 | + finally: |
---|
4895 | self.add_latency("readv", time.time() - start) |
---|
4896 | hunk ./src/allmydata/storage/server.py 328 |
---|
4897 | - return {} |
---|
4898 | - datavs = {} |
---|
4899 | - for sharenum_s in os.listdir(bucketdir): |
---|
4900 | - try: |
---|
4901 | - sharenum = int(sharenum_s) |
---|
4902 | - except ValueError: |
---|
4903 | - continue |
---|
4904 | - if sharenum in shares or not shares: |
---|
4905 | - filename = os.path.join(bucketdir, sharenum_s) |
---|
4906 | - msf = MutableShareFile(filename, self) |
---|
4907 | - datavs[sharenum] = msf.readv(readv) |
---|
4908 | - log.msg("returning shares %s" % (datavs.keys(),), |
---|
4909 | - facility="tahoe.storage", level=log.NOISY, parent=lp) |
---|
4910 | - self.add_latency("readv", time.time() - start) |
---|
4911 | - return datavs |
---|
4912 | |
---|
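
All of the mutable-share machinery deleted above (write-enabler checks, test-vector evaluation, reads-before-writes, share creation and deletion) must now live behind shareset.testv_and_readv_and_writev(). As a reference for what the shareset has to preserve, here is a compressed restatement of the deleted semantics; check_testv_against_empty() and create_mutable_share() are hypothetical stand-ins, and lease renewal is omitted:

    def testv_readv_writev_sketch(shareset, secrets, test_and_write_vectors,
                                  read_vector, expiration_time):
        (write_enabler, renew_secret, cancel_secret) = secrets
        shares = dict((s.get_shnum(), s) for s in shareset.get_shares())
        for share in shares.values():
            share.check_write_enabler(write_enabler)  # old code also passed si_s

        # 1. Evaluate every test vector before performing any write. A missing
        #    share is compared against an empty one, whose reads all return ""
        #    (EmptyShare in the old code).
        testv_is_good = True
        for sharenum, (testv, datav, new_length) in test_and_write_vectors.items():
            share = shares.get(sharenum)
            if share is not None:
                ok = share.check_testv(testv)
            else:
                ok = check_testv_against_empty(testv)
            if not ok:
                testv_is_good = False
                break

        # 2. Gather the read-vector results before any writes are applied.
        read_data = dict((num, s.readv(read_vector)) for (num, s) in shares.items())

        # 3. Apply writes only if all tests passed; new_length == 0 means delete.
        if testv_is_good:
            for sharenum, (testv, datav, new_length) in test_and_write_vectors.items():
                if new_length == 0:
                    if sharenum in shares:
                        shares[sharenum].unlink()
                else:
                    share = shares.get(sharenum)
                    if share is None:
                        share = shareset.create_mutable_share(sharenum, write_enabler)
                    share.writev(datav, new_length)
        return (testv_is_good, read_data)
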
4913 | hunk ./src/allmydata/storage/server.py 329 |
---|
4914 | - def remote_advise_corrupt_share(self, share_type, storage_index, shnum, |
---|
4915 | - reason): |
---|
4916 | - fileutil.make_dirs(self.corruption_advisory_dir) |
---|
4917 | - now = time_format.iso_utc(sep="T") |
---|
4918 | - si_s = si_b2a(storage_index) |
---|
4919 | - # windows can't handle colons in the filename |
---|
4920 | - fn = os.path.join(self.corruption_advisory_dir, |
---|
4921 | - "%s--%s-%d" % (now, si_s, shnum)).replace(":","") |
---|
4922 | - f = open(fn, "w") |
---|
4923 | - f.write("report: Share Corruption\n") |
---|
4924 | - f.write("type: %s\n" % share_type) |
---|
4925 | - f.write("storage_index: %s\n" % si_s) |
---|
4926 | - f.write("share_number: %d\n" % shnum) |
---|
4927 | - f.write("\n") |
---|
4928 | - f.write(reason) |
---|
4929 | - f.write("\n") |
---|
4930 | - f.close() |
---|
4931 | - log.msg(format=("client claims corruption in (%(share_type)s) " + |
---|
4932 | - "%(si)s-%(shnum)d: %(reason)s"), |
---|
4933 | - share_type=share_type, si=si_s, shnum=shnum, reason=reason, |
---|
4934 | - level=log.SCARY, umid="SGx2fA") |
---|
4935 | - return None |
---|
4936 | + def remote_advise_corrupt_share(self, share_type, storageindex, shnum, reason): |
---|
4937 | + self.backend.advise_corrupt_share(share_type, storageindex, shnum, reason) |
---|
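
Corruption advisories become the backend's job as well. A backend that wants to keep the old behaviour just needs to reproduce the report the deleted server code wrote; a sketch under that assumption (the file naming is illustrative, and the colon-free timestamp keeps Windows happy):

    import os, time

    def advise_corrupt_share_sketch(corruption_advisory_dir, share_type,
                                    si_s, shnum, reason):
        # 'si_s' is the base32-encoded storage index string.
        if not os.path.isdir(corruption_advisory_dir):
            os.makedirs(corruption_advisory_dir)
        now = time.strftime("%Y-%m-%dT%H-%M-%S", time.gmtime())
        fn = os.path.join(corruption_advisory_dir,
                          "%s--%s-%d" % (now, si_s, shnum))
        f = open(fn, "w")
        try:
            f.write("report: Share Corruption\n")
            f.write("type: %s\n" % share_type)
            f.write("storage_index: %s\n" % si_s)
            f.write("share_number: %d\n\n" % shnum)
            f.write(reason + "\n")
        finally:
            f.close()
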
4938 | hunk ./src/allmydata/test/common.py 20 |
---|
4939 | from allmydata.mutable.common import CorruptShareError |
---|
4940 | from allmydata.mutable.layout import unpack_header |
---|
4941 | from allmydata.mutable.publish import MutableData |
---|
4942 | -from allmydata.storage.mutable import MutableShareFile |
---|
4943 | +from allmydata.storage.backends.disk.mutable import MutableDiskShare |
---|
4944 | from allmydata.util import hashutil, log, fileutil, pollmixin |
---|
4945 | from allmydata.util.assertutil import precondition |
---|
4946 | from allmydata.util.consumer import download_to_data |
---|
4947 | hunk ./src/allmydata/test/common.py 1297 |
---|
4948 | |
---|
4949 | def _corrupt_mutable_share_data(data, debug=False): |
---|
4950 | prefix = data[:32] |
---|
4951 | - assert prefix == MutableShareFile.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableShareFile.MAGIC) |
---|
4952 | - data_offset = MutableShareFile.DATA_OFFSET |
---|
4953 | + assert prefix == MutableDiskShare.MAGIC, "This function is designed to corrupt mutable shares of v1, and the magic number doesn't look right: %r vs %r" % (prefix, MutableDiskShare.MAGIC) |
---|
4954 | + data_offset = MutableDiskShare.DATA_OFFSET |
---|
4955 | sharetype = data[data_offset:data_offset+1] |
---|
4956 | assert sharetype == "\x00", "non-SDMF mutable shares not supported" |
---|
4957 | (version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize, |
---|
4958 | hunk ./src/allmydata/test/no_network.py 21 |
---|
4959 | from twisted.application import service |
---|
4960 | from twisted.internet import defer, reactor |
---|
4961 | from twisted.python.failure import Failure |
---|
4962 | +from twisted.python.filepath import FilePath |
---|
4963 | from foolscap.api import Referenceable, fireEventually, RemoteException |
---|
4964 | from base64 import b32encode |
---|
4965 | hunk ./src/allmydata/test/no_network.py 24 |
---|
4966 | + |
---|
4967 | from allmydata import uri as tahoe_uri |
---|
4968 | from allmydata.client import Client |
---|
4969 | hunk ./src/allmydata/test/no_network.py 27 |
---|
4970 | -from allmydata.storage.server import StorageServer, storage_index_to_dir |
---|
4971 | +from allmydata.storage.server import StorageServer |
---|
4972 | +from allmydata.storage.backends.disk.disk_backend import DiskBackend |
---|
4973 | from allmydata.util import fileutil, idlib, hashutil |
---|
4974 | from allmydata.util.hashutil import sha1 |
---|
4975 | from allmydata.test.common_web import HTTPClientGETFactory |
---|
4976 | hunk ./src/allmydata/test/no_network.py 155 |
---|
4977 | seed = server.get_permutation_seed() |
---|
4978 | return sha1(peer_selection_index + seed).digest() |
---|
4979 | return sorted(self.get_connected_servers(), key=_permuted) |
---|
4980 | + |
---|
4981 | def get_connected_servers(self): |
---|
4982 | return self.client._servers |
---|
4983 | hunk ./src/allmydata/test/no_network.py 158 |
---|
4984 | + |
---|
4985 | def get_nickname_for_serverid(self, serverid): |
---|
4986 | return None |
---|
4987 | |
---|
4988 | hunk ./src/allmydata/test/no_network.py 162 |
---|
4989 | + def get_known_servers(self): |
---|
4990 | + return self.get_connected_servers() |
---|
4991 | + |
---|
4992 | + def get_all_serverids(self): |
---|
4993 | + return self.client.get_all_serverids() |
---|
4994 | + |
---|
4995 | + |
---|
4996 | class NoNetworkClient(Client): |
---|
4997 | def create_tub(self): |
---|
4998 | pass |
---|
4999 | hunk ./src/allmydata/test/no_network.py 262 |
---|
5000 | |
---|
5001 | def make_server(self, i, readonly=False): |
---|
5002 | serverid = hashutil.tagged_hash("serverid", str(i))[:20] |
---|
5003 | - serverdir = os.path.join(self.basedir, "servers", |
---|
5004 | - idlib.shortnodeid_b2a(serverid), "storage") |
---|
5005 | - fileutil.make_dirs(serverdir) |
---|
5006 | - ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(), |
---|
5007 | - readonly_storage=readonly) |
---|
5008 | + storagedir = FilePath(self.basedir).child("servers").child(idlib.shortnodeid_b2a(serverid)).child("storage") |
---|
5009 | + |
---|
5010 | + # The backend will make the storage directory and any necessary parents. |
---|
5011 | + backend = DiskBackend(storagedir, readonly=readonly) |
---|
5012 | + ss = StorageServer(serverid, backend, storagedir, stats_provider=SimpleStats()) |
---|
5013 | ss._no_network_server_number = i |
---|
5014 | return ss |
---|
5015 | |
---|
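
The test harness now demonstrates the two-step construction that any caller uses: build a backend around a FilePath, then pass both backend and directory to StorageServer. The same recipe outside the harness, assuming the constructor signatures shown in the hunk above and that stats_provider is optional:

    from twisted.python.filepath import FilePath

    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    def make_disk_server(basedir, serverid, readonly=False):
        # The backend creates the storage directory (and missing parents) itself.
        storagedir = FilePath(basedir).child("storage")
        backend = DiskBackend(storagedir, readonly=readonly)
        return StorageServer(serverid, backend, storagedir)
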
5016 | hunk ./src/allmydata/test/no_network.py 276 |
---|
5017 | middleman = service.MultiService() |
---|
5018 | middleman.setServiceParent(self) |
---|
5019 | ss.setServiceParent(middleman) |
---|
5020 | - serverid = ss.my_nodeid |
---|
5021 | + serverid = ss.get_serverid() |
---|
5022 | self.servers_by_number[i] = ss |
---|
5023 | wrapper = wrap_storage_server(ss) |
---|
5024 | self.wrappers_by_id[serverid] = wrapper |
---|
5025 | hunk ./src/allmydata/test/no_network.py 295 |
---|
5026 | # it's enough to remove the server from c._servers (we don't actually |
---|
5027 | # have to detach and stopService it) |
---|
5028 | for i,ss in self.servers_by_number.items(): |
---|
5029 | - if ss.my_nodeid == serverid: |
---|
5030 | + if ss.get_serverid() == serverid: |
---|
5031 | del self.servers_by_number[i] |
---|
5032 | break |
---|
5033 | del self.wrappers_by_id[serverid] |
---|
5034 | hunk ./src/allmydata/test/no_network.py 345 |
---|
5035 | def get_clientdir(self, i=0): |
---|
5036 | return self.g.clients[i].basedir |
---|
5037 | |
---|
5038 | + def get_server(self, i): |
---|
5039 | + return self.g.servers_by_number[i] |
---|
5040 | + |
---|
5041 | def get_serverdir(self, i): |
---|
5042 | hunk ./src/allmydata/test/no_network.py 349 |
---|
5043 | - return self.g.servers_by_number[i].storedir |
---|
5044 | + return self.g.servers_by_number[i].backend.storedir |
---|
5045 | + |
---|
5046 | + def remove_server(self, i): |
---|
5047 | + self.g.remove_server(self.g.servers_by_number[i].get_serverid()) |
---|
5048 | |
---|
5049 | def iterate_servers(self): |
---|
5050 | for i in sorted(self.g.servers_by_number.keys()): |
---|
5051 | hunk ./src/allmydata/test/no_network.py 357 |
---|
5052 | ss = self.g.servers_by_number[i] |
---|
5053 | - yield (i, ss, ss.storedir) |
---|
5054 | + yield (i, ss, ss.backend.storedir) |
---|
5055 | |
---|
5056 | def find_uri_shares(self, uri): |
---|
5057 | si = tahoe_uri.from_string(uri).get_storage_index() |
---|
5058 | hunk ./src/allmydata/test/no_network.py 361 |
---|
5059 | - prefixdir = storage_index_to_dir(si) |
---|
5060 | shares = [] |
---|
5061 | for i,ss in self.g.servers_by_number.items(): |
---|
5062 | hunk ./src/allmydata/test/no_network.py 363 |
---|
5063 | - serverid = ss.my_nodeid |
---|
5064 | - basedir = os.path.join(ss.sharedir, prefixdir) |
---|
5065 | - if not os.path.exists(basedir): |
---|
5066 | - continue |
---|
5067 | - for f in os.listdir(basedir): |
---|
5068 | - try: |
---|
5069 | - shnum = int(f) |
---|
5070 | - shares.append((shnum, serverid, os.path.join(basedir, f))) |
---|
5071 | - except ValueError: |
---|
5072 | - pass |
---|
5073 | + for share in ss.backend.get_shareset(si).get_shares(): |
---|
5074 | + shares.append((share.get_shnum(), ss.get_serverid(), share._home)) |
---|
5075 | return sorted(shares) |
---|
5076 | |
---|
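
find_uri_shares() above, and the count_leases() helper added in the next hunk, show the new test idiom: one backend.get_shareset() lookup per server replaces the old directory walk. An illustrative helper in the same style, mapping serverids to the share numbers they hold:

    def shares_by_server(grid, storage_index):
        # 'grid' is a NoNetworkGrid-style object with servers_by_number.
        result = {}
        for i, ss in grid.servers_by_number.items():
            shareset = ss.backend.get_shareset(storage_index)
            shnums = sorted(share.get_shnum() for share in shareset.get_shares())
            if shnums:
                result[ss.get_serverid()] = shnums
        return result
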
5077 | hunk ./src/allmydata/test/no_network.py 367 |
---|
5078 | + def count_leases(self, uri): |
---|
5079 | + """Return (filename, leasecount) pairs in arbitrary order.""" |
---|
5080 | + si = tahoe_uri.from_string(uri).get_storage_index() |
---|
5081 | + lease_counts = [] |
---|
5082 | + for i,ss in self.g.servers_by_number.items(): |
---|
5083 | + for share in ss.backend.get_shareset(si).get_shares(): |
---|
5084 | + num_leases = len(list(share.get_leases())) |
---|
5085 | + lease_counts.append( (share._home.path, num_leases) ) |
---|
5086 | + return lease_counts |
---|
5087 | + |
---|
5088 | def copy_shares(self, uri): |
---|
5089 | shares = {} |
---|
5090 | hunk ./src/allmydata/test/no_network.py 379 |
---|
5091 | - for (shnum, serverid, sharefile) in self.find_uri_shares(uri): |
---|
5092 | - shares[sharefile] = open(sharefile, "rb").read() |
---|
5093 | + for (shnum, serverid, sharefp) in self.find_uri_shares(uri): |
---|
5094 | + shares[sharefp.path] = sharefp.getContent() |
---|
5095 | return shares |
---|
5096 | |
---|
5097 | hunk ./src/allmydata/test/no_network.py 383 |
---|
5098 | + def copy_share(self, from_share, uri, to_server): |
---|
5099 | + si = tahoe_uri.from_string(uri).get_storage_index() |
---|
5100 | + (i_shnum, i_serverid, i_sharefp) = from_share |
---|
5101 | + shares_dir = to_server.backend.get_shareset(si)._sharehomedir |
---|
5102 | + i_sharefp.copyTo(shares_dir.child(str(i_shnum))) |
---|
5103 | + |
---|
5104 | def restore_all_shares(self, shares): |
---|
5105 | hunk ./src/allmydata/test/no_network.py 390 |
---|
5106 | - for sharefile, data in shares.items(): |
---|
5107 | - open(sharefile, "wb").write(data) |
---|
5108 | + for sharepath, data in shares.items(): |
---|
5109 | + FilePath(sharepath).setContent(data) |
---|
5110 | |
---|
5111 | hunk ./src/allmydata/test/no_network.py 393 |
---|
5112 | - def delete_share(self, (shnum, serverid, sharefile)): |
---|
5113 | - os.unlink(sharefile) |
---|
5114 | + def delete_share(self, (shnum, serverid, sharefp)): |
---|
5115 | + sharefp.remove() |
---|
5116 | |
---|
5117 | def delete_shares_numbered(self, uri, shnums): |
---|
5118 | hunk ./src/allmydata/test/no_network.py 397 |
---|
5119 | - for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri): |
---|
5120 | + for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri): |
---|
5121 | if i_shnum in shnums: |
---|
5122 | hunk ./src/allmydata/test/no_network.py 399 |
---|
5123 | - os.unlink(i_sharefile) |
---|
5124 | + i_sharefp.remove() |
---|
5125 | |
---|
5126 | hunk ./src/allmydata/test/no_network.py 401 |
---|
5127 | - def corrupt_share(self, (shnum, serverid, sharefile), corruptor_function): |
---|
5128 | - sharedata = open(sharefile, "rb").read() |
---|
5129 | - corruptdata = corruptor_function(sharedata) |
---|
5130 | - open(sharefile, "wb").write(corruptdata) |
---|
5131 | + def corrupt_share(self, (shnum, serverid, sharefp), corruptor_function, debug=False): |
---|
5132 | + sharedata = sharefp.getContent() |
---|
5133 | + corruptdata = corruptor_function(sharedata, debug=debug) |
---|
5134 | + sharefp.setContent(corruptdata) |
---|
5135 | |
---|
5136 | def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False): |
---|
5137 | hunk ./src/allmydata/test/no_network.py 407 |
---|
5138 | - for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri): |
---|
5139 | + for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri): |
---|
5140 | if i_shnum in shnums: |
---|
5141 | hunk ./src/allmydata/test/no_network.py 409 |
---|
5142 | - sharedata = open(i_sharefile, "rb").read() |
---|
5143 | - corruptdata = corruptor(sharedata, debug=debug) |
---|
5144 | - open(i_sharefile, "wb").write(corruptdata) |
---|
5145 | + self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug) |
---|
5146 | |
---|
5147 | def corrupt_all_shares(self, uri, corruptor, debug=False): |
---|
5148 | hunk ./src/allmydata/test/no_network.py 412 |
---|
5149 | - for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri): |
---|
5150 | - sharedata = open(i_sharefile, "rb").read() |
---|
5151 | - corruptdata = corruptor(sharedata, debug=debug) |
---|
5152 | - open(i_sharefile, "wb").write(corruptdata) |
---|
5153 | + for (i_shnum, i_serverid, i_sharefp) in self.find_uri_shares(uri): |
---|
5154 | + self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor, debug=debug) |
---|
5155 | |
---|
5156 | def GET(self, urlpath, followRedirect=False, return_response=False, |
---|
5157 | method="GET", clientnum=0, **kwargs): |
---|
5158 | hunk ./src/allmydata/test/test_download.py 6 |
---|
5159 | # a previous run. This asserts that the current code is capable of decoding |
---|
5160 | # shares from a previous version. |
---|
5161 | |
---|
5162 | -import os |
---|
5163 | from twisted.trial import unittest |
---|
5164 | from twisted.internet import defer, reactor |
---|
5165 | from allmydata import uri |
---|
5166 | hunk ./src/allmydata/test/test_download.py 9 |
---|
5167 | -from allmydata.storage.server import storage_index_to_dir |
---|
5168 | from allmydata.util import base32, fileutil, spans, log, hashutil |
---|
5169 | from allmydata.util.consumer import download_to_data, MemoryConsumer |
---|
5170 | from allmydata.immutable import upload, layout |
---|
5171 | hunk ./src/allmydata/test/test_download.py 85 |
---|
5172 | u = upload.Data(plaintext, None) |
---|
5173 | d = self.c0.upload(u) |
---|
5174 | f = open("stored_shares.py", "w") |
---|
5175 | - def _created_immutable(ur): |
---|
5176 | - # write the generated shares and URI to a file, which can then be |
---|
5177 | - # incorporated into this one next time. |
---|
5178 | - f.write('immutable_uri = "%s"\n' % ur.uri) |
---|
5179 | - f.write('immutable_shares = {\n') |
---|
5180 | - si = uri.from_string(ur.uri).get_storage_index() |
---|
5181 | - si_dir = storage_index_to_dir(si) |
---|
5182 | + |
---|
5183 | + def _write_py(uri_s): |
---|
5184 | + si = uri.from_string(uri_s).get_storage_index() |
---|
5185 | for (i,ss,ssdir) in self.iterate_servers(): |
---|
5186 | hunk ./src/allmydata/test/test_download.py 89 |
---|
5187 | - sharedir = os.path.join(ssdir, "shares", si_dir) |
---|
5188 | shares = {} |
---|
5189 | hunk ./src/allmydata/test/test_download.py 90 |
---|
5190 | - for fn in os.listdir(sharedir): |
---|
5191 | - shnum = int(fn) |
---|
5192 | - sharedata = open(os.path.join(sharedir, fn), "rb").read() |
---|
5193 | - shares[shnum] = sharedata |
---|
5194 | - fileutil.rm_dir(sharedir) |
---|
5195 | + shareset = ss.backend.get_shareset(si) |
---|
5196 | + for share in shareset.get_shares(): |
---|
5197 | + sharedata = share._home.getContent() |
---|
5198 | + shares[share.get_shnum()] = sharedata |
---|
5199 | + |
---|
5200 | + fileutil.fp_remove(shareset._sharehomedir) |
---|
5201 | if shares: |
---|
5202 | f.write(' %d: { # client[%d]\n' % (i, i)) |
---|
5203 | for shnum in sorted(shares.keys()): |
---|
5204 | hunk ./src/allmydata/test/test_download.py 103 |
---|
5205 | (shnum, base32.b2a(shares[shnum]))) |
---|
5206 | f.write(' },\n') |
---|
5207 | f.write('}\n') |
---|
5208 | - f.write('\n') |
---|
5209 | |
---|
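
One pitfall in the new _write_py helper: naming its parameter `uri` would shadow the module-level `from allmydata import uri`, so `uri.from_string(...)` would become an attribute lookup on a string; hence the `uri_s` spelling used above. In miniature:

    from allmydata import uri

    def broken(uri):                   # parameter shadows the module...
        return uri.from_string(uri)    # ...AttributeError: str has no from_string

    def fixed(uri_s):
        return uri.from_string(uri_s)  # the module is still visible here
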
5210 | hunk ./src/allmydata/test/test_download.py 104 |
---|
5211 | + def _created_immutable(ur): |
---|
5212 | + # write the generated shares and URI to a file, which can then be |
---|
5213 | + # incorporated into this one next time. |
---|
5214 | + f.write('immutable_uri = "%s"\n' % ur.uri) |
---|
5215 | + f.write('immutable_shares = {\n') |
---|
5216 | + _write_py(ur.uri) |
---|
5217 | + f.write('\n') |
---|
5218 | d.addCallback(_created_immutable) |
---|
5219 | |
---|
5220 | d.addCallback(lambda ignored: |
---|
5221 | hunk ./src/allmydata/test/test_download.py 118 |
---|
5222 | def _created_mutable(n): |
---|
5223 | f.write('mutable_uri = "%s"\n' % n.get_uri()) |
---|
5224 | f.write('mutable_shares = {\n') |
---|
5225 | - si = uri.from_string(n.get_uri()).get_storage_index() |
---|
5226 | - si_dir = storage_index_to_dir(si) |
---|
5227 | - for (i,ss,ssdir) in self.iterate_servers(): |
---|
5228 | - sharedir = os.path.join(ssdir, "shares", si_dir) |
---|
5229 | - shares = {} |
---|
5230 | - for fn in os.listdir(sharedir): |
---|
5231 | - shnum = int(fn) |
---|
5232 | - sharedata = open(os.path.join(sharedir, fn), "rb").read() |
---|
5233 | - shares[shnum] = sharedata |
---|
5234 | - fileutil.rm_dir(sharedir) |
---|
5235 | - if shares: |
---|
5236 | - f.write(' %d: { # client[%d]\n' % (i, i)) |
---|
5237 | - for shnum in sorted(shares.keys()): |
---|
5238 | - f.write(' %d: base32.a2b("%s"),\n' % |
---|
5239 | - (shnum, base32.b2a(shares[shnum]))) |
---|
5240 | - f.write(' },\n') |
---|
5241 | - f.write('}\n') |
---|
5242 | - |
---|
5243 | - f.close() |
---|
5244 | + _write_py(n.get_uri()) |
---|
5245 | d.addCallback(_created_mutable) |
---|
5246 | |
---|
5247 | def _done(ignored): |
---|
5248 | hunk ./src/allmydata/test/test_download.py 123 |
---|
5249 | f.close() |
---|
5250 | - d.addCallback(_done) |
---|
5251 | + d.addBoth(_done) |
---|
5252 | |
---|
5253 | return d |
---|
5254 | |
---|
5255 | hunk ./src/allmydata/test/test_download.py 127 |
---|
5256 | + def _write_shares(self, uri_s, shares): |
---|
5257 | + si = uri.from_string(uri_s).get_storage_index() |
---|
5258 | + for i in shares: |
---|
5259 | + shares_for_server = shares[i] |
---|
5260 | + for shnum in shares_for_server: |
---|
5261 | + share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir |
---|
5262 | + fileutil.fp_make_dirs(share_dir) |
---|
5263 | + share_dir.child(str(shnum)).setContent(shares_for_server[shnum]) |
---|
5264 | + |
---|
5265 | def load_shares(self, ignored=None): |
---|
5266 | # this uses the data generated by create_shares() to populate the |
---|
5267 | # storage servers with pre-generated shares |
---|
5268 | hunk ./src/allmydata/test/test_download.py 139 |
---|
5269 | - si = uri.from_string(immutable_uri).get_storage_index() |
---|
5270 | - si_dir = storage_index_to_dir(si) |
---|
5271 | - for i in immutable_shares: |
---|
5272 | - shares = immutable_shares[i] |
---|
5273 | - for shnum in shares: |
---|
5274 | - dn = os.path.join(self.get_serverdir(i), "shares", si_dir) |
---|
5275 | - fileutil.make_dirs(dn) |
---|
5276 | - fn = os.path.join(dn, str(shnum)) |
---|
5277 | - f = open(fn, "wb") |
---|
5278 | - f.write(shares[shnum]) |
---|
5279 | - f.close() |
---|
5280 | - |
---|
5281 | - si = uri.from_string(mutable_uri).get_storage_index() |
---|
5282 | - si_dir = storage_index_to_dir(si) |
---|
5283 | - for i in mutable_shares: |
---|
5284 | - shares = mutable_shares[i] |
---|
5285 | - for shnum in shares: |
---|
5286 | - dn = os.path.join(self.get_serverdir(i), "shares", si_dir) |
---|
5287 | - fileutil.make_dirs(dn) |
---|
5288 | - fn = os.path.join(dn, str(shnum)) |
---|
5289 | - f = open(fn, "wb") |
---|
5290 | - f.write(shares[shnum]) |
---|
5291 | - f.close() |
---|
5292 | + self._write_shares(immutable_uri, immutable_shares) |
---|
5293 | + self._write_shares(mutable_uri, mutable_shares) |
---|
5294 | |
---|
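
load_shares() collapses its two copy-and-pasted loops into the _write_shares helper added above. The helper expects the same {server_number: {shnum: share_bytes}} shape that stored_shares.py records; a toy example of that shape (the byte strings are placeholders, not real shares):

    immutable_shares_example = {
        0: {0: "\x00\x00\x00\x01placeholder-share-0"},   # client[0] holds shnum 0
        1: {1: "\x00\x00\x00\x01placeholder-share-1"},   # client[1] holds shnum 1
    }
    # self._write_shares(immutable_uri, immutable_shares_example)
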
5295 | def download_immutable(self, ignored=None): |
---|
5296 | n = self.c0.create_node_from_uri(immutable_uri) |
---|
5297 | hunk ./src/allmydata/test/test_download.py 183 |
---|
5298 | |
---|
5299 | self.load_shares() |
---|
5300 | si = uri.from_string(immutable_uri).get_storage_index() |
---|
5301 | - si_dir = storage_index_to_dir(si) |
---|
5302 | |
---|
5303 | n = self.c0.create_node_from_uri(immutable_uri) |
---|
5304 | d = download_to_data(n) |
---|
5305 | hunk ./src/allmydata/test/test_download.py 198 |
---|
5306 | for clientnum in immutable_shares: |
---|
5307 | for shnum in immutable_shares[clientnum]: |
---|
5308 | if s._shnum == shnum: |
---|
5309 | - fn = os.path.join(self.get_serverdir(clientnum), |
---|
5310 | - "shares", si_dir, str(shnum)) |
---|
5311 | - os.unlink(fn) |
---|
5312 | + share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir |
---|
5313 | + share_dir.child(str(shnum)).remove() |
---|
5314 | d.addCallback(_clobber_some_shares) |
---|
5315 | d.addCallback(lambda ign: download_to_data(n)) |
---|
5316 | d.addCallback(_got_data) |
---|
5317 | hunk ./src/allmydata/test/test_download.py 212 |
---|
5318 | for shnum in immutable_shares[clientnum]: |
---|
5319 | if shnum == save_me: |
---|
5320 | continue |
---|
5321 | - fn = os.path.join(self.get_serverdir(clientnum), |
---|
5322 | - "shares", si_dir, str(shnum)) |
---|
5323 | - if os.path.exists(fn): |
---|
5324 | - os.unlink(fn) |
---|
5325 | + share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir |
---|
5326 | + fileutil.fp_remove(share_dir.child(str(shnum))) |
---|
5327 | # now the download should fail with NotEnoughSharesError |
---|
5328 | return self.shouldFail(NotEnoughSharesError, "1shares", None, |
---|
5329 | download_to_data, n) |
---|
5330 | hunk ./src/allmydata/test/test_download.py 223 |
---|
5331 | # delete the last remaining share |
---|
5332 | for clientnum in immutable_shares: |
---|
5333 | for shnum in immutable_shares[clientnum]: |
---|
5334 | - fn = os.path.join(self.get_serverdir(clientnum), |
---|
5335 | - "shares", si_dir, str(shnum)) |
---|
5336 | - if os.path.exists(fn): |
---|
5337 | - os.unlink(fn) |
---|
5338 | + share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir |
---|
5339 | + share_dir.child(str(shnum)).remove() |
---|
5340 | # now a new download should fail with NoSharesError. We want a |
---|
5341 | # new ImmutableFileNode so it will forget about the old shares. |
---|
5342 | # If we merely called create_node_from_uri() without first |
---|
5343 | hunk ./src/allmydata/test/test_download.py 801 |
---|
5344 | # will report two shares, and the ShareFinder will handle the |
---|
5345 | # duplicate by attaching both to the same CommonShare instance. |
---|
5346 | si = uri.from_string(immutable_uri).get_storage_index() |
---|
5347 | - si_dir = storage_index_to_dir(si) |
---|
5348 | - sh0_file = [sharefile |
---|
5349 | - for (shnum, serverid, sharefile) |
---|
5350 | - in self.find_uri_shares(immutable_uri) |
---|
5351 | - if shnum == 0][0] |
---|
5352 | - sh0_data = open(sh0_file, "rb").read() |
---|
5353 | + sh0_fp = [sharefp for (shnum, serverid, sharefp) |
---|
5354 | + in self.find_uri_shares(immutable_uri) |
---|
5355 | + if shnum == 0][0] |
---|
5356 | + sh0_data = sh0_fp.getContent() |
---|
5357 | for clientnum in immutable_shares: |
---|
5358 | if 0 in immutable_shares[clientnum]: |
---|
5359 | continue |
---|
5360 | hunk ./src/allmydata/test/test_download.py 808 |
---|
5361 | - cdir = self.get_serverdir(clientnum) |
---|
5362 | - target = os.path.join(cdir, "shares", si_dir, "0") |
---|
5363 | - outf = open(target, "wb") |
---|
5364 | - outf.write(sh0_data) |
---|
5365 | - outf.close() |
---|
5366 | + cdir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir |
---|
5367 | + fileutil.fp_make_dirs(cdir) |
---|
5368 | + cdir.child("0").setContent(sh0_data) |
---|
5369 | |
---|
5370 | d = self.download_immutable() |
---|
5371 | return d |
---|
5372 | hunk ./src/allmydata/test/test_encode.py 134 |
---|
5373 | d.addCallback(_try) |
---|
5374 | return d |
---|
5375 | |
---|
5376 | - def get_share_hashes(self, at_least_these=()): |
---|
5377 | + def get_share_hashes(self): |
---|
5378 | d = self._start() |
---|
5379 | def _try(unused=None): |
---|
5380 | if self.mode == "bad sharehash": |
---|
5381 | hunk ./src/allmydata/test/test_hung_server.py 3 |
---|
5382 | # -*- coding: utf-8 -*- |
---|
5383 | |
---|
5384 | -import os, shutil |
---|
5385 | from twisted.trial import unittest |
---|
5386 | from twisted.internet import defer |
---|
5387 | hunk ./src/allmydata/test/test_hung_server.py 5 |
---|
5388 | -from allmydata import uri |
---|
5389 | + |
---|
5390 | from allmydata.util.consumer import download_to_data |
---|
5391 | from allmydata.immutable import upload |
---|
5392 | from allmydata.mutable.common import UnrecoverableFileError |
---|
5393 | hunk ./src/allmydata/test/test_hung_server.py 10 |
---|
5394 | from allmydata.mutable.publish import MutableData |
---|
5395 | -from allmydata.storage.common import storage_index_to_dir |
---|
5396 | from allmydata.test.no_network import GridTestMixin |
---|
5397 | from allmydata.test.common import ShouldFailMixin |
---|
5398 | from allmydata.util.pollmixin import PollMixin |
---|
5399 | hunk ./src/allmydata/test/test_hung_server.py 18 |
---|
5400 | immutable_plaintext = "data" * 10000 |
---|
5401 | mutable_plaintext = "muta" * 10000 |
---|
5402 | |
---|
5403 | + |
---|
5404 | class HungServerDownloadTest(GridTestMixin, ShouldFailMixin, PollMixin, |
---|
5405 | unittest.TestCase): |
---|
5406 | # Many of these tests take around 60 seconds on François's ARM buildslave: |
---|
5407 | hunk ./src/allmydata/test/test_hung_server.py 31 |
---|
5408 | timeout = 240 |
---|
5409 | |
---|
5410 | def _break(self, servers): |
---|
5411 | - for (id, ss) in servers: |
---|
5412 | - self.g.break_server(id) |
---|
5413 | + for ss in servers: |
---|
5414 | + self.g.break_server(ss.get_serverid()) |
---|
5415 | |
---|
5416 | def _hang(self, servers, **kwargs): |
---|
5417 | hunk ./src/allmydata/test/test_hung_server.py 35 |
---|
5418 | - for (id, ss) in servers: |
---|
5419 | - self.g.hang_server(id, **kwargs) |
---|
5420 | + for ss in servers: |
---|
5421 | + self.g.hang_server(ss.get_serverid(), **kwargs) |
---|
5422 | |
---|
5423 | def _unhang(self, servers, **kwargs): |
---|
5424 | hunk ./src/allmydata/test/test_hung_server.py 39 |
---|
5425 | - for (id, ss) in servers: |
---|
5426 | - self.g.unhang_server(id, **kwargs) |
---|
5427 | + for ss in servers: |
---|
5428 | + self.g.unhang_server(ss.get_serverid(), **kwargs) |
---|
5429 | |
---|
5430 | def _hang_shares(self, shnums, **kwargs): |
---|
5431 | # hang all servers who are holding the given shares |
---|
5432 | hunk ./src/allmydata/test/test_hung_server.py 52 |
---|
5433 | hung_serverids.add(i_serverid) |
---|
5434 | |
---|
5435 | def _delete_all_shares_from(self, servers): |
---|
5436 | - serverids = [id for (id, ss) in servers] |
---|
5437 | - for (i_shnum, i_serverid, i_sharefile) in self.shares: |
---|
5438 | + serverids = [ss.get_serverid() for ss in servers] |
---|
5439 | + for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
5440 | if i_serverid in serverids: |
---|
5441 | hunk ./src/allmydata/test/test_hung_server.py 55 |
---|
5442 | - os.unlink(i_sharefile) |
---|
5443 | + i_sharefp.remove() |
---|
5444 | |
---|
5445 | def _corrupt_all_shares_in(self, servers, corruptor_func): |
---|
5446 | hunk ./src/allmydata/test/test_hung_server.py 58 |
---|
5447 | - serverids = [id for (id, ss) in servers] |
---|
5448 | - for (i_shnum, i_serverid, i_sharefile) in self.shares: |
---|
5449 | + serverids = [ss.get_serverid() for ss in servers] |
---|
5450 | + for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
5451 | if i_serverid in serverids: |
---|
5452 | hunk ./src/allmydata/test/test_hung_server.py 61 |
---|
5453 | - self._corrupt_share((i_shnum, i_sharefile), corruptor_func) |
---|
5454 | + self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func) |
---|
5455 | |
---|
5456 | def _copy_all_shares_from(self, from_servers, to_server): |
---|
5457 | hunk ./src/allmydata/test/test_hung_server.py 64 |
---|
5458 | - serverids = [id for (id, ss) in from_servers] |
---|
5459 | - for (i_shnum, i_serverid, i_sharefile) in self.shares: |
---|
5460 | + serverids = [ss.get_serverid() for ss in from_servers] |
---|
5461 | + for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
5462 | if i_serverid in serverids: |
---|
5463 | hunk ./src/allmydata/test/test_hung_server.py 67 |
---|
5464 | - self._copy_share((i_shnum, i_sharefile), to_server) |
---|
5465 | + self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server) |
---|
5466 | |
---|
5467 | hunk ./src/allmydata/test/test_hung_server.py 69 |
---|
5468 | - def _copy_share(self, share, to_server): |
---|
5469 | - (sharenum, sharefile) = share |
---|
5470 | - (id, ss) = to_server |
---|
5471 | - shares_dir = os.path.join(ss.original.storedir, "shares") |
---|
5472 | - si = uri.from_string(self.uri).get_storage_index() |
---|
5473 | - si_dir = os.path.join(shares_dir, storage_index_to_dir(si)) |
---|
5474 | - if not os.path.exists(si_dir): |
---|
5475 | - os.makedirs(si_dir) |
---|
5476 | - new_sharefile = os.path.join(si_dir, str(sharenum)) |
---|
5477 | - shutil.copy(sharefile, new_sharefile) |
---|
5478 | self.shares = self.find_uri_shares(self.uri) |
---|
5479 | hunk ./src/allmydata/test/test_hung_server.py 70 |
---|
5480 | - # Make sure that the storage server has the share. |
---|
5481 | - self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile) |
---|
5482 | - in self.shares) |
---|
5483 | - |
---|
5484 | - def _corrupt_share(self, share, corruptor_func): |
---|
5485 | - (sharenum, sharefile) = share |
---|
5486 | - data = open(sharefile, "rb").read() |
---|
5487 | - newdata = corruptor_func(data) |
---|
5488 | - os.unlink(sharefile) |
---|
5489 | - wf = open(sharefile, "wb") |
---|
5490 | - wf.write(newdata) |
---|
5491 | - wf.close() |
---|
5492 | |
---|
5493 | def _set_up(self, mutable, testdir, num_clients=1, num_servers=10): |
---|
5494 | self.mutable = mutable |
---|
5495 | hunk ./src/allmydata/test/test_hung_server.py 82 |
---|
5496 | |
---|
5497 | self.c0 = self.g.clients[0] |
---|
5498 | nm = self.c0.nodemaker |
---|
5499 | - self.servers = sorted([(s.get_serverid(), s.get_rref()) |
---|
5500 | - for s in nm.storage_broker.get_connected_servers()]) |
---|
5501 | + unsorted = [(s.get_serverid(), s.get_rref()) for s in nm.storage_broker.get_connected_servers()] |
---|
5502 | + self.servers = [ss for (id, ss) in sorted(unsorted)] |
---|
5503 | self.servers = self.servers[5:] + self.servers[:5] |
---|
5504 | |
---|
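
The server list that _set_up() builds is worth reading slowly: sorting by serverid makes the test deterministic, the comprehension then keeps only the rref objects, and the final slice rotates the list so the last five servers are tried first. In isolation (nm is the nodemaker from the surrounding test):

    unsorted = [(s.get_serverid(), s.get_rref())
                for s in nm.storage_broker.get_connected_servers()]
    servers = [ss for (_serverid, ss) in sorted(unsorted)]  # sort by serverid
    servers = servers[5:] + servers[:5]                     # rotate: 5..9, 0..4
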
5505 | if mutable: |
---|
5506 | hunk ./src/allmydata/test/test_hung_server.py 244 |
---|
5507 | # stuck-but-not-overdue, and 4 live requests. All 4 live requests |
---|
5508 | # will retire before the download is complete and the ShareFinder |
---|
5509 | # is shut off. That will leave 4 OVERDUE and 1 |
---|
5510 | - # stuck-but-not-overdue, for a total of 5 requests in in |
---|
5511 | + # stuck-but-not-overdue, for a total of 5 requests in |
---|
5512 | # _sf.pending_requests |
---|
5513 | for t in self._sf.overdue_timers.values()[:4]: |
---|
5514 | t.reset(-1.0) |
---|
5515 | hunk ./src/allmydata/test/test_mutable.py 21 |
---|
5516 | from foolscap.api import eventually, fireEventually |
---|
5517 | from foolscap.logging import log |
---|
5518 | from allmydata.storage_client import StorageFarmBroker |
---|
5519 | -from allmydata.storage.common import storage_index_to_dir |
---|
5520 | from allmydata.scripts import debug |
---|
5521 | |
---|
5522 | from allmydata.mutable.filenode import MutableFileNode, BackoffAgent |
---|
5523 | hunk ./src/allmydata/test/test_mutable.py 3669 |
---|
5524 | # Now execute each assignment by writing the storage. |
---|
5525 | for (share, servernum) in assignments: |
---|
5526 | sharedata = base64.b64decode(self.sdmf_old_shares[share]) |
---|
5527 | - storedir = self.get_serverdir(servernum) |
---|
5528 | - storage_path = os.path.join(storedir, "shares", |
---|
5529 | - storage_index_to_dir(si)) |
---|
5530 | - fileutil.make_dirs(storage_path) |
---|
5531 | - fileutil.write(os.path.join(storage_path, "%d" % share), |
---|
5532 | - sharedata) |
---|
5533 | + storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir |
---|
5534 | + fileutil.fp_make_dirs(storage_dir) |
---|
5535 | + storage_dir.child("%d" % share).setContent(sharedata) |
---|
5536 | # ...and verify that the shares are there. |
---|
5537 | shares = self.find_uri_shares(self.sdmf_old_cap) |
---|
5538 | assert len(shares) == 10 |
---|
5539 | hunk ./src/allmydata/test/test_provisioning.py 13 |
---|
5540 | from nevow import inevow |
---|
5541 | from zope.interface import implements |
---|
5542 | |
---|
5543 | -class MyRequest: |
---|
5544 | +class MockRequest: |
---|
5545 | implements(inevow.IRequest) |
---|
5546 | pass |
---|
5547 | |
---|
5548 | hunk ./src/allmydata/test/test_provisioning.py 26 |
---|
5549 | def test_load(self): |
---|
5550 | pt = provisioning.ProvisioningTool() |
---|
5551 | self.fields = {} |
---|
5552 | - #r = MyRequest() |
---|
5553 | + #r = MockRequest() |
---|
5554 | #r.fields = self.fields |
---|
5555 | #ctx = RequestContext() |
---|
5556 | #unfilled = pt.renderSynchronously(ctx) |
---|
5557 | hunk ./src/allmydata/test/test_repairer.py 537 |
---|
5558 | # happiness setting. |
---|
5559 | def _delete_some_servers(ignored): |
---|
5560 | for i in xrange(7): |
---|
5561 | - self.g.remove_server(self.g.servers_by_number[i].my_nodeid) |
---|
5562 | + self.remove_server(i) |
---|
5563 | |
---|
5564 | assert len(self.g.servers_by_number) == 3 |
---|
5565 | |
---|
5566 | hunk ./src/allmydata/test/test_storage.py 14 |
---|
5567 | from allmydata import interfaces |
---|
5568 | from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format |
---|
5569 | from allmydata.storage.server import StorageServer |
---|
5570 | -from allmydata.storage.mutable import MutableShareFile |
---|
5571 | -from allmydata.storage.immutable import BucketWriter, BucketReader |
---|
5572 | -from allmydata.storage.common import DataTooLargeError, storage_index_to_dir, \ |
---|
5573 | +from allmydata.storage.backends.disk.mutable import MutableDiskShare |
---|
5574 | +from allmydata.storage.bucket import BucketWriter, BucketReader |
---|
5575 | +from allmydata.storage.common import DataTooLargeError, \ |
---|
5576 | UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError |
---|
5577 | from allmydata.storage.lease import LeaseInfo |
---|
5578 | from allmydata.storage.crawler import BucketCountingCrawler |
---|
5579 | hunk ./src/allmydata/test/test_storage.py 474 |
---|
5580 | w[0].remote_write(0, "\xff"*10) |
---|
5581 | w[0].remote_close() |
---|
5582 | |
---|
5583 | - fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0") |
---|
5584 | - f = open(fn, "rb+") |
---|
5585 | + fp = ss.backend.get_shareset("si1")._sharehomedir.child("0") |
---|
5586 | + f = fp.open("rb+") |
---|
5587 | f.seek(0) |
---|
5588 | f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1 |
---|
5589 | f.close() |
---|
5590 | hunk ./src/allmydata/test/test_storage.py 814 |
---|
5591 | def test_bad_magic(self): |
---|
5592 | ss = self.create("test_bad_magic") |
---|
5593 | self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10) |
---|
5594 | - fn = os.path.join(ss.sharedir, storage_index_to_dir("si1"), "0") |
---|
5595 | - f = open(fn, "rb+") |
---|
5596 | + fp = ss.backend.get_shareset("si1")._sharehomedir.child("0") |
---|
5597 | + f = fp.open("rb+") |
---|
5598 | f.seek(0) |
---|
5599 | f.write("BAD MAGIC") |
---|
5600 | f.close() |
---|
5601 | hunk ./src/allmydata/test/test_storage.py 842 |
---|
5602 | |
---|
5603 | # Trying to make the container too large (by sending a write vector |
---|
5604 | # whose offset is too high) will raise an exception. |
---|
5605 | - TOOBIG = MutableShareFile.MAX_SIZE + 10 |
---|
5606 | + TOOBIG = MutableDiskShare.MAX_SIZE + 10 |
---|
5607 | self.failUnlessRaises(DataTooLargeError, |
---|
5608 | rstaraw, "si1", secrets, |
---|
5609 | {0: ([], [(TOOBIG,data)], None)}, |
---|
5610 | hunk ./src/allmydata/test/test_storage.py 1229 |
---|
5611 | |
---|
5612 | # create a random non-numeric file in the bucket directory, to |
---|
5613 | # exercise the code that's supposed to ignore those. |
---|
5614 | - bucket_dir = os.path.join(self.workdir("test_leases"), |
---|
5615 | - "shares", storage_index_to_dir("si1")) |
---|
5616 | - f = open(os.path.join(bucket_dir, "ignore_me.txt"), "w") |
---|
5617 | - f.write("you ought to be ignoring me\n") |
---|
5618 | - f.close() |
---|
5619 | + bucket_dir = ss.backend.get_shareset("si1")._sharehomedir |
---|
5620 | + bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n") |
---|
5621 | |
---|
5622 | hunk ./src/allmydata/test/test_storage.py 1232 |
---|
5623 | - s0 = MutableShareFile(os.path.join(bucket_dir, "0")) |
---|
5624 | + s0 = MutableDiskShare(bucket_dir.child("0")) |
---|
5625 | self.failUnlessEqual(len(list(s0.get_leases())), 1) |
---|
5626 | |
---|
5627 | # add-lease on a missing storage index is silently ignored |
---|
5628 | hunk ./src/allmydata/test/test_storage.py 3118 |
---|
5629 | [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis |
---|
5630 | |
---|
5631 | # add a non-sharefile to exercise another code path |
---|
5632 | - fn = os.path.join(ss.sharedir, |
---|
5633 | - storage_index_to_dir(immutable_si_0), |
---|
5634 | - "not-a-share") |
---|
5635 | - f = open(fn, "wb") |
---|
5636 | - f.write("I am not a share.\n") |
---|
5637 | - f.close() |
---|
5638 | + fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share") |
---|
5639 | + fp.setContent("I am not a share.\n") |
---|
5640 | |
---|
5641 | # this is before the crawl has started, so we're not in a cycle yet |
---|
5642 | initial_state = lc.get_state() |
---|
5643 | hunk ./src/allmydata/test/test_storage.py 3282 |
---|
5644 | def test_expire_age(self): |
---|
5645 | basedir = "storage/LeaseCrawler/expire_age" |
---|
5646 | fileutil.make_dirs(basedir) |
---|
5647 | - # setting expiration_time to 2000 means that any lease which is more |
---|
5648 | - # than 2000s old will be expired. |
---|
5649 | - ss = InstrumentedStorageServer(basedir, "\x00" * 20, |
---|
5650 | - expiration_enabled=True, |
---|
5651 | - expiration_mode="age", |
---|
5652 | - expiration_override_lease_duration=2000) |
---|
5653 | + # setting 'override_lease_duration' to 2000 means that any lease that |
---|
5654 | + # is more than 2000 seconds old will be expired. |
---|
5655 | + expiration_policy = { |
---|
5656 | + 'enabled': True, |
---|
5657 | + 'mode': 'age', |
---|
5658 | + 'override_lease_duration': 2000, |
---|
5659 | + 'sharetypes': ('mutable', 'immutable'), |
---|
5660 | + } |
---|
5661 | + ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy) |
---|
5662 | # make it start sooner than usual. |
---|
5663 | lc = ss.lease_checker |
---|
5664 | lc.slow_start = 0 |
---|
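
All of the rewritten lease-expiry tests now configure the server through a single expiration_policy dict instead of a pile of expiration_* keyword arguments. A sketch of how the two modes the tests exercise could interpret a lease (illustrative only; the real lease crawler also honours the 'sharetypes' filter):

    import time

    # The dict shape shared by the rewritten tests:
    example_policy = {
        'enabled': True,
        'mode': 'age',                    # or 'cutoff-date'
        'override_lease_duration': 2000,  # seconds; used in 'age' mode
        # 'cutoff_date': unix timestamp,  # used in 'cutoff-date' mode
        'sharetypes': ('mutable', 'immutable'),
    }

    def lease_is_expired(lease_renewal_time, policy, now=None):
        if not policy['enabled']:
            return False
        if now is None:
            now = time.time()
        if policy['mode'] == 'age':
            return (now - lease_renewal_time) > policy['override_lease_duration']
        if policy['mode'] == 'cutoff-date':
            return lease_renewal_time < policy['cutoff_date']
        raise ValueError("unknown expiration mode: %r" % (policy['mode'],))
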
5665 | hunk ./src/allmydata/test/test_storage.py 3423 |
---|
5666 | def test_expire_cutoff_date(self): |
---|
5667 | basedir = "storage/LeaseCrawler/expire_cutoff_date" |
---|
5668 | fileutil.make_dirs(basedir) |
---|
5669 | - # setting cutoff-date to 2000 seconds ago means that any lease which |
---|
5670 | - # is more than 2000s old will be expired. |
---|
5671 | + # setting 'cutoff_date' to 2000 seconds ago means that any lease that |
---|
5672 | + # is more than 2000 seconds old will be expired. |
---|
5673 | now = time.time() |
---|
5674 | then = int(now - 2000) |
---|
5675 | hunk ./src/allmydata/test/test_storage.py 3427 |
---|
5676 | - ss = InstrumentedStorageServer(basedir, "\x00" * 20, |
---|
5677 | - expiration_enabled=True, |
---|
5678 | - expiration_mode="cutoff-date", |
---|
5679 | - expiration_cutoff_date=then) |
---|
5680 | + expiration_policy = { |
---|
5681 | + 'enabled': True, |
---|
5682 | + 'mode': 'cutoff-date', |
---|
5683 | + 'cutoff_date': then, |
---|
5684 | + 'sharetypes': ('mutable', 'immutable'), |
---|
5685 | + } |
---|
5686 | + ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy) |
---|
5687 | # make it start sooner than usual. |
---|
5688 | lc = ss.lease_checker |
---|
5689 | lc.slow_start = 0 |
---|
5690 | hunk ./src/allmydata/test/test_storage.py 3575 |
---|
5691 | def test_only_immutable(self): |
---|
5692 | basedir = "storage/LeaseCrawler/only_immutable" |
---|
5693 | fileutil.make_dirs(basedir) |
---|
5694 | + # setting 'cutoff_date' to 2000 seconds ago means that any lease that |
---|
5695 | + # is more than 2000 seconds old will be expired. |
---|
5696 | now = time.time() |
---|
5697 | then = int(now - 2000) |
---|
5698 | hunk ./src/allmydata/test/test_storage.py 3579 |
---|
5699 | - ss = StorageServer(basedir, "\x00" * 20, |
---|
5700 | - expiration_enabled=True, |
---|
5701 | - expiration_mode="cutoff-date", |
---|
5702 | - expiration_cutoff_date=then, |
---|
5703 | - expiration_sharetypes=("immutable",)) |
---|
5704 | + expiration_policy = { |
---|
5705 | + 'enabled': True, |
---|
5706 | + 'mode': 'cutoff-date', |
---|
5707 | + 'cutoff_date': then, |
---|
5708 | + 'sharetypes': ('immutable',), |
---|
5709 | + } |
---|
5710 | + ss = StorageServer(basedir, "\x00" * 20, expiration_policy) |
---|
5711 | lc = ss.lease_checker |
---|
5712 | lc.slow_start = 0 |
---|
5713 | webstatus = StorageStatus(ss) |
---|
5714 | hunk ./src/allmydata/test/test_storage.py 3636 |
---|
5715 | def test_only_mutable(self): |
---|
5716 | basedir = "storage/LeaseCrawler/only_mutable" |
---|
5717 | fileutil.make_dirs(basedir) |
---|
5718 | + # setting 'cutoff_date' to 2000 seconds ago means that any lease that |
---|
5719 | + # is more than 2000 seconds old will be expired. |
---|
5720 | now = time.time() |
---|
5721 | then = int(now - 2000) |
---|
5722 | hunk ./src/allmydata/test/test_storage.py 3640 |
---|
5723 | - ss = StorageServer(basedir, "\x00" * 20, |
---|
5724 | - expiration_enabled=True, |
---|
5725 | - expiration_mode="cutoff-date", |
---|
5726 | - expiration_cutoff_date=then, |
---|
5727 | - expiration_sharetypes=("mutable",)) |
---|
5728 | + expiration_policy = { |
---|
5729 | + 'enabled': True, |
---|
5730 | + 'mode': 'cutoff-date', |
---|
5731 | + 'cutoff_date': then, |
---|
5732 | + 'sharetypes': ('mutable',), |
---|
5733 | + } |
---|
5734 | + ss = StorageServer(basedir, "\x00" * 20, expiration_policy) |
---|
5735 | lc = ss.lease_checker |
---|
5736 | lc.slow_start = 0 |
---|
5737 | webstatus = StorageStatus(ss) |
---|
5738 | hunk ./src/allmydata/test/test_storage.py 3819 |
---|
5739 | def test_no_st_blocks(self): |
---|
5740 | basedir = "storage/LeaseCrawler/no_st_blocks" |
---|
5741 | fileutil.make_dirs(basedir) |
---|
5742 | - ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, |
---|
5743 | - expiration_mode="age", |
---|
5744 | - expiration_override_lease_duration=-1000) |
---|
5745 | - # a negative expiration_time= means the "configured-" |
---|
5746 | + # A negative 'override_lease_duration' means that the "configured-" |
---|
5747 | # space-recovered counts will be non-zero, since all shares will have |
---|
5748 | hunk ./src/allmydata/test/test_storage.py 3821 |
---|
5749 | - # expired by then |
---|
5750 | + # expired by then. |
---|
5751 | + expiration_policy = { |
---|
5752 | + 'enabled': True, |
---|
5753 | + 'mode': 'age', |
---|
5754 | + 'override_lease_duration': -1000, |
---|
5755 | + 'sharetypes': ('mutable', 'immutable'), |
---|
5756 | + } |
---|
5757 | + ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy) |
---|
5758 | |
---|
5759 | # make it start sooner than usual. |
---|
5760 | lc = ss.lease_checker |
---|
5761 | hunk ./src/allmydata/test/test_storage.py 3877 |
---|
5762 | [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis |
---|
5763 | first = min(self.sis) |
---|
5764 | first_b32 = base32.b2a(first) |
---|
5765 | - fn = os.path.join(ss.sharedir, storage_index_to_dir(first), "0") |
---|
5766 | - f = open(fn, "rb+") |
---|
5767 | + fp = ss.backend.get_shareset(first).sharehomedir.child("0") |
---|
5768 | + f = fp.open("rb+") |
---|
5769 | f.seek(0) |
---|
5770 | f.write("BAD MAGIC") |
---|
5771 | f.close() |
---|
5772 | hunk ./src/allmydata/test/test_storage.py 3890 |
---|
5773 | |
---|
5774 | # also create an empty bucket |
---|
5775 | empty_si = base32.b2a("\x04"*16) |
---|
5776 | - empty_bucket_dir = os.path.join(ss.sharedir, |
---|
5777 | - storage_index_to_dir(empty_si)) |
---|
5778 | - fileutil.make_dirs(empty_bucket_dir) |
---|
5779 | + empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir |
---|
5780 | + fileutil.fp_make_dirs(empty_bucket_dir) |
---|
5781 | |
---|
5782 | ss.setServiceParent(self.s) |
---|
5783 | |
---|
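
The test_storage.py hunks above all make the same transformation: the old expiration_* keyword arguments are folded into a single expiration_policy dict. A minimal sketch of a checker for that dict follows; the key names come from the patch, but the helper itself is illustrative and not part of the patch.

    # Hypothetical validator for the expiration_policy dicts constructed above.
    def check_expiration_policy(policy):
        if policy['mode'] == 'age':
            assert isinstance(policy['override_lease_duration'], int)
        elif policy['mode'] == 'cutoff-date':
            assert isinstance(policy['cutoff_date'], int)
        else:
            raise ValueError("unknown expiration mode %r" % (policy['mode'],))
        for sharetype in policy['sharetypes']:
            assert sharetype in ('mutable', 'immutable')
        return policy

    check_expiration_policy({
        'enabled': True,
        'mode': 'age',
        'override_lease_duration': 2000,
        'sharetypes': ('mutable', 'immutable'),
    })
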
hunk ./src/allmydata/test/test_system.py 10

import allmydata
from allmydata import uri
-from allmydata.storage.mutable import MutableShareFile
+from allmydata.storage.backends.disk.mutable import MutableDiskShare
from allmydata.storage.server import si_a2b
from allmydata.immutable import offloaded, upload
from allmydata.immutable.literal import LiteralFileNode
hunk ./src/allmydata/test/test_system.py 421
return shares

def _corrupt_mutable_share(self, filename, which):
- msf = MutableShareFile(filename)
+ msf = MutableDiskShare(filename)
datav = msf.readv([ (0, 1000000) ])
final_share = datav[0]
assert len(final_share) < 1000000 # ought to be truncated
hunk ./src/allmydata/test/test_upload.py 22
from allmydata.util.happinessutil import servers_of_happiness, \
shares_by_server, merge_servers
from allmydata.storage_client import StorageFarmBroker
-from allmydata.storage.server import storage_index_to_dir

MiB = 1024*1024

hunk ./src/allmydata/test/test_upload.py 821

def _copy_share_to_server(self, share_number, server_number):
ss = self.g.servers_by_number[server_number]
- # Copy share i from the directory associated with the first
- # storage server to the directory associated with this one.
- assert self.g, "I tried to find a grid at self.g, but failed"
- assert self.shares, "I tried to find shares at self.shares, but failed"
- old_share_location = self.shares[share_number][2]
- new_share_location = os.path.join(ss.storedir, "shares")
- si = uri.from_string(self.uri).get_storage_index()
- new_share_location = os.path.join(new_share_location,
- storage_index_to_dir(si))
- if not os.path.exists(new_share_location):
- os.makedirs(new_share_location)
- new_share_location = os.path.join(new_share_location,
- str(share_number))
- if old_share_location != new_share_location:
- shutil.copy(old_share_location, new_share_location)
- shares = self.find_uri_shares(self.uri)
- # Make sure that the storage server has the share.
- self.failUnless((share_number, ss.my_nodeid, new_share_location)
- in shares)
+ self.copy_share(self.shares[share_number], ss)

def _setup_grid(self):
"""
hunk ./src/allmydata/test/test_upload.py 1103
self._copy_share_to_server(i, 2)
d.addCallback(_copy_shares)
# Remove the first server, and add a placeholder with share 0
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(lambda ign:
self._add_server_with_share(server_number=4, share_number=0))
# Now try uploading.
hunk ./src/allmydata/test/test_upload.py 1134
d.addCallback(lambda ign:
self._add_server(server_number=4))
d.addCallback(_copy_shares)
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(_reset_encoding_parameters)
d.addCallback(lambda client:
client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1196
self._copy_share_to_server(i, 2)
d.addCallback(_copy_shares)
# Remove server 0, and add another in its place
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(lambda ign:
self._add_server_with_share(server_number=4, share_number=0,
readonly=True))
hunk ./src/allmydata/test/test_upload.py 1237
for i in xrange(1, 10):
self._copy_share_to_server(i, 2)
d.addCallback(_copy_shares)
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
def _reset_encoding_parameters(ign, happy=4):
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
hunk ./src/allmydata/test/test_upload.py 1273
# remove the original server
# (necessary to ensure that the Tahoe2ServerSelector will distribute
# all the shares)
- def _remove_server(ign):
- server = self.g.servers_by_number[0]
- self.g.remove_server(server.my_nodeid)
- d.addCallback(_remove_server)
+ d.addCallback(lambda ign: self.remove_server(0))
# This should succeed; we still have 4 servers, and the
# happiness of the upload is 4.
d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1285
d.addCallback(lambda ign:
self._setup_and_upload())
d.addCallback(_do_server_setup)
- d.addCallback(_remove_server)
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(lambda ign:
self.shouldFail(UploadUnhappinessError,
"test_dropped_servers_in_encoder",
hunk ./src/allmydata/test/test_upload.py 1307
self._add_server_with_share(4, 7, readonly=True)
self._add_server_with_share(5, 8, readonly=True)
d.addCallback(_do_server_setup_2)
- d.addCallback(_remove_server)
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(lambda ign:
self._do_upload_with_broken_servers(1))
d.addCallback(_set_basedir)
hunk ./src/allmydata/test/test_upload.py 1314
d.addCallback(lambda ign:
self._setup_and_upload())
d.addCallback(_do_server_setup_2)
- d.addCallback(_remove_server)
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(lambda ign:
self.shouldFail(UploadUnhappinessError,
"test_dropped_servers_in_encoder",
hunk ./src/allmydata/test/test_upload.py 1528
for i in xrange(1, 10):
self._copy_share_to_server(i, 1)
d.addCallback(_copy_shares)
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
def _prepare_client(ign):
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
hunk ./src/allmydata/test/test_upload.py 1550
def _setup(ign):
for i in xrange(1, 11):
self._add_server(server_number=i)
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
c = self.g.clients[0]
# We set happy to an unsatisfiable value so that we can check the
# counting in the exception message. The same progress message
hunk ./src/allmydata/test/test_upload.py 1577
self._add_server(server_number=i)
self._add_server(server_number=11, readonly=True)
self._add_server(server_number=12, readonly=True)
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
c = self.g.clients[0]
c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
return c
hunk ./src/allmydata/test/test_upload.py 1605
# the first one that the selector sees.
for i in xrange(10):
self._copy_share_to_server(i, 9)
- # Remove server 0, and its contents
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
# Make happiness unsatisfiable
c = self.g.clients[0]
c.DEFAULT_ENCODING_PARAMETERS['happy'] = 45
hunk ./src/allmydata/test/test_upload.py 1625
def _then(ign):
for i in xrange(1, 11):
self._add_server(server_number=i, readonly=True)
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
c = self.g.clients[0]
c.DEFAULT_ENCODING_PARAMETERS['k'] = 2
c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
hunk ./src/allmydata/test/test_upload.py 1661
self._add_server(server_number=4, readonly=True))
d.addCallback(lambda ign:
self._add_server(server_number=5, readonly=True))
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
def _reset_encoding_parameters(ign, happy=4):
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
hunk ./src/allmydata/test/test_upload.py 1696
d.addCallback(lambda ign:
self._add_server(server_number=2))
def _break_server_2(ign):
- serverid = self.g.servers_by_number[2].my_nodeid
+ serverid = self.get_server(2).get_serverid()
self.g.break_server(serverid)
d.addCallback(_break_server_2)
d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1705
self._add_server(server_number=4, readonly=True))
d.addCallback(lambda ign:
self._add_server(server_number=5, readonly=True))
- d.addCallback(lambda ign:
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+ d.addCallback(lambda ign: self.remove_server(0))
d.addCallback(_reset_encoding_parameters)
d.addCallback(lambda client:
self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
hunk ./src/allmydata/test/test_upload.py 1816
# Copy shares
self._copy_share_to_server(1, 1)
self._copy_share_to_server(2, 1)
- # Remove server 0
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
return client
hunk ./src/allmydata/test/test_upload.py 1930
readonly=True)
self._add_server_with_share(server_number=4, share_number=3,
readonly=True)
- # Remove server 0.
- self.g.remove_server(self.g.servers_by_number[0].my_nodeid)
+ self.remove_server(0)
# Set the client appropriately
c = self.g.clients[0]
c.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
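
Every test_upload.py hunk above replaces the repeated self.g.remove_server(self.g.servers_by_number[0].my_nodeid) one-liner with a remove_server(0) helper. The helper's definition is not shown in this excerpt; a sketch of what it presumably looks like (it may differ in detail in the real mixin):

    class GridHelpersMixin(object):
        # Illustrative stand-in for the test mixin these hunks assume.
        def remove_server(self, i):
            # look the server up by grid number, then remove it by node id
            server = self.g.servers_by_number[i]
            self.g.remove_server(server.my_nodeid)
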
hunk ./src/allmydata/test/test_util.py 9
from twisted.trial import unittest
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
+from twisted.python.filepath import FilePath
from twisted.python import log
from pycryptopp.hash.sha256 import SHA256 as _hash

hunk ./src/allmydata/test/test_util.py 508
os.chdir(saved_cwd)

def test_disk_stats(self):
- avail = fileutil.get_available_space('.', 2**14)
+ avail = fileutil.get_available_space(FilePath('.'), 2**14)
if avail == 0:
raise unittest.SkipTest("This test will spuriously fail there is no disk space left.")

hunk ./src/allmydata/test/test_util.py 512
- disk = fileutil.get_disk_stats('.', 2**13)
+ disk = fileutil.get_disk_stats(FilePath('.'), 2**13)
self.failUnless(disk['total'] > 0, disk['total'])
self.failUnless(disk['used'] > 0, disk['used'])
self.failUnless(disk['free_for_root'] > 0, disk['free_for_root'])
hunk ./src/allmydata/test/test_util.py 521

def test_disk_stats_avail_nonnegative(self):
# This test will spuriously fail if you have more than 2^128
- # bytes of available space on your filesystem.
- disk = fileutil.get_disk_stats('.', 2**128)
+ # bytes of available space on your filesystem (lucky you).
+ disk = fileutil.get_disk_stats(FilePath('.'), 2**128)
self.failUnlessEqual(disk['avail'], 0)

class PollMixinTests(unittest.TestCase):
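
The test_util.py hunks above track the new FilePath-based signatures of the fileutil space functions. A usage sketch, assuming the post-patch signatures shown in the fileutil.py hunks later in this bundle:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    where = FilePath('.')
    reserved = 2**20   # leave 1 MiB unused

    avail = fileutil.get_available_space(where, reserved)
    stats = fileutil.get_disk_stats(where, reserved)
    # stats is a dict with 'total', 'used', 'free_for_root',
    # 'free_for_nonroot', and 'avail' keys, all in bytes.
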
hunk ./src/allmydata/test/test_web.py 12
from twisted.python import failure, log
from nevow import rend
from allmydata import interfaces, uri, webish, dirnode
-from allmydata.storage.shares import get_share_file
from allmydata.storage_client import StorageFarmBroker
from allmydata.immutable import upload
from allmydata.immutable.downloader.status import DownloadStatus
hunk ./src/allmydata/test/test_web.py 4111
good_shares = self.find_uri_shares(self.uris["good"])
self.failUnlessReallyEqual(len(good_shares), 10)
sick_shares = self.find_uri_shares(self.uris["sick"])
- os.unlink(sick_shares[0][2])
+ sick_shares[0][2].remove()
dead_shares = self.find_uri_shares(self.uris["dead"])
for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4114
- os.unlink(dead_shares[i][2])
+ dead_shares[i][2].remove()
c_shares = self.find_uri_shares(self.uris["corrupt"])
cso = CorruptShareOptions()
cso.stdout = StringIO()
hunk ./src/allmydata/test/test_web.py 4118
- cso.parseOptions([c_shares[0][2]])
+ cso.parseOptions([c_shares[0][2].path])
corrupt_share(cso)
d.addCallback(_clobber_shares)

hunk ./src/allmydata/test/test_web.py 4253
good_shares = self.find_uri_shares(self.uris["good"])
self.failUnlessReallyEqual(len(good_shares), 10)
sick_shares = self.find_uri_shares(self.uris["sick"])
- os.unlink(sick_shares[0][2])
+ sick_shares[0][2].remove()
dead_shares = self.find_uri_shares(self.uris["dead"])
for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4256
- os.unlink(dead_shares[i][2])
+ dead_shares[i][2].remove()
c_shares = self.find_uri_shares(self.uris["corrupt"])
cso = CorruptShareOptions()
cso.stdout = StringIO()
hunk ./src/allmydata/test/test_web.py 4260
- cso.parseOptions([c_shares[0][2]])
+ cso.parseOptions([c_shares[0][2].path])
corrupt_share(cso)
d.addCallback(_clobber_shares)

hunk ./src/allmydata/test/test_web.py 4319

def _clobber_shares(ignored):
sick_shares = self.find_uri_shares(self.uris["sick"])
- os.unlink(sick_shares[0][2])
+ sick_shares[0][2].remove()
d.addCallback(_clobber_shares)

d.addCallback(self.CHECK, "sick", "t=check&repair=true&output=json")
hunk ./src/allmydata/test/test_web.py 4811
good_shares = self.find_uri_shares(self.uris["good"])
self.failUnlessReallyEqual(len(good_shares), 10)
sick_shares = self.find_uri_shares(self.uris["sick"])
- os.unlink(sick_shares[0][2])
+ sick_shares[0][2].remove()
#dead_shares = self.find_uri_shares(self.uris["dead"])
#for i in range(1, 10):
hunk ./src/allmydata/test/test_web.py 4814
- # os.unlink(dead_shares[i][2])
+ # dead_shares[i][2].remove()

#c_shares = self.find_uri_shares(self.uris["corrupt"])
#cso = CorruptShareOptions()
hunk ./src/allmydata/test/test_web.py 4819
#cso.stdout = StringIO()
- #cso.parseOptions([c_shares[0][2]])
+ #cso.parseOptions([c_shares[0][2].path])
#corrupt_share(cso)
d.addCallback(_clobber_shares)

hunk ./src/allmydata/test/test_web.py 4870
d.addErrback(self.explain_web_error)
return d

- def _count_leases(self, ignored, which):
- u = self.uris[which]
- shares = self.find_uri_shares(u)
- lease_counts = []
- for shnum, serverid, fn in shares:
- sf = get_share_file(fn)
- num_leases = len(list(sf.get_leases()))
- lease_counts.append( (fn, num_leases) )
- return lease_counts
-
- def _assert_leasecount(self, lease_counts, expected):
+ def _assert_leasecount(self, ignored, which, expected):
+ lease_counts = self.count_leases(self.uris[which])
for (fn, num_leases) in lease_counts:
if num_leases != expected:
self.fail("expected %d leases, have %d, on %s" %
hunk ./src/allmydata/test/test_web.py 4903
self.fileurls[which] = "uri/" + urllib.quote(self.uris[which])
d.addCallback(_compute_fileurls)

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "one", 1)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

d.addCallback(self.CHECK, "one", "t=check") # no add-lease
def _got_html_good(res):
hunk ./src/allmydata/test/test_web.py 4913
self.failIf("Not Healthy" in res, res)
d.addCallback(_got_html_good)

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "one", 1)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

# this CHECK uses the original client, which uses the same
# lease-secrets, so it will just renew the original lease
hunk ./src/allmydata/test/test_web.py 4922
d.addCallback(self.CHECK, "one", "t=check&add-lease=true")
d.addCallback(_got_html_good)

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "one", 1)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

# this CHECK uses an alternate client, which adds a second lease
d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1)
hunk ./src/allmydata/test/test_web.py 4930
d.addCallback(_got_html_good)

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 2)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "one", 2)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true")
d.addCallback(_got_html_good)
hunk ./src/allmydata/test/test_web.py 4937

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 2)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "one", 2)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true",
clientnum=1)
hunk ./src/allmydata/test/test_web.py 4945
d.addCallback(_got_html_good)

- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 2)
- d.addCallback(self._count_leases, "two")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 2)
+ d.addCallback(self._assert_leasecount, "one", 2)
+ d.addCallback(self._assert_leasecount, "two", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 2)

d.addErrback(self.explain_web_error)
return d
hunk ./src/allmydata/test/test_web.py 4989
self.failUnlessReallyEqual(len(units), 4+1)
d.addCallback(_done)

- d.addCallback(self._count_leases, "root")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "root", 1)
+ d.addCallback(self._assert_leasecount, "one", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true")
d.addCallback(_done)
hunk ./src/allmydata/test/test_web.py 4996

- d.addCallback(self._count_leases, "root")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 1)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 1)
+ d.addCallback(self._assert_leasecount, "root", 1)
+ d.addCallback(self._assert_leasecount, "one", 1)
+ d.addCallback(self._assert_leasecount, "mutable", 1)

d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true",
clientnum=1)
hunk ./src/allmydata/test/test_web.py 5004
d.addCallback(_done)

- d.addCallback(self._count_leases, "root")
- d.addCallback(self._assert_leasecount, 2)
- d.addCallback(self._count_leases, "one")
- d.addCallback(self._assert_leasecount, 2)
- d.addCallback(self._count_leases, "mutable")
- d.addCallback(self._assert_leasecount, 2)
+ d.addCallback(self._assert_leasecount, "root", 2)
+ d.addCallback(self._assert_leasecount, "one", 2)
+ d.addCallback(self._assert_leasecount, "mutable", 2)

d.addErrback(self.explain_web_error)
return d
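
The refactoring above folds _count_leases into _assert_leasecount, so each pair of addCallbacks becomes one. The combined method as the hunk defines it; the final line is cut off at the hunk boundary, so the closing argument tuple below is an assumed completion:

    def _assert_leasecount(self, ignored, which, expected):
        # count_leases is a mixin helper assumed to return
        # (sharefile, num_leases) pairs for each share of the given URI
        lease_counts = self.count_leases(self.uris[which])
        for (fn, num_leases) in lease_counts:
            if num_leases != expected:
                self.fail("expected %d leases, have %d, on %s" %
                          (expected, num_leases, fn))
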
merger 0.0 (
hunk ./src/allmydata/uri.py 829
+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+
+
hunk ./src/allmydata/uri.py 829
+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+
+
)
merger 0.0 (
hunk ./src/allmydata/uri.py 848
+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+
hunk ./src/allmydata/uri.py 848
+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+
)
hunk ./src/allmydata/util/encodingutil.py 221
def quote_path(path, quotemarks=True):
return quote_output("/".join(map(to_str, path)), quotemarks=quotemarks)

+def quote_filepath(fp, quotemarks=True, encoding=None):
+ path = fp.path
+ if isinstance(path, str):
+ try:
+ path = path.decode(filesystem_encoding)
+ except UnicodeDecodeError:
+ return 'b"%s"' % (ESCAPABLE_8BIT.sub(_str_escape, path),)
+
+ return quote_output(path, quotemarks=quotemarks, encoding=encoding)
+

def unicode_platform():
"""
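
quote_filepath above mirrors the existing quote_path, but takes a FilePath and falls back to an escaped byte form when the path does not decode in the filesystem encoding. A usage sketch against the post-patch module:

    from twisted.python.filepath import FilePath
    from allmydata.util.encodingutil import quote_filepath

    fp = FilePath("/tmp/share dir")
    print quote_filepath(fp)                    # quoted form, for log messages
    print quote_filepath(fp, quotemarks=False)  # bare form
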
hunk ./src/allmydata/util/fileutil.py 5
Futz with files like a pro.
"""

-import sys, exceptions, os, stat, tempfile, time, binascii
+import errno, sys, exceptions, os, stat, tempfile, time, binascii
+
+from allmydata.util.assertutil import precondition

from twisted.python import log
hunk ./src/allmydata/util/fileutil.py 10
+from twisted.python.filepath import FilePath, UnlistableError

from pycryptopp.cipher.aes import AES

hunk ./src/allmydata/util/fileutil.py 189
raise tx
raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...

-def rm_dir(dirname):
+def fp_make_dirs(dirfp):
+ """
+ An idempotent version of FilePath.makedirs(). If the dir already
+ exists, do nothing and return without raising an exception. If this
+ call creates the dir, return without raising an exception. If there is
+ an error that prevents creation or if the directory gets deleted after
+ fp_make_dirs() creates it and before fp_make_dirs() checks that it
+ exists, raise an exception.
+ """
+ log.msg( "xxx 0 %s" % (dirfp,))
+ tx = None
+ try:
+ dirfp.makedirs()
+ except OSError, x:
+ tx = x
+
+ if not dirfp.isdir():
+ if tx:
+ raise tx
+ raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
+
+def fp_rmdir_if_empty(dirfp):
+ """ Remove the directory if it is empty. """
+ try:
+ os.rmdir(dirfp.path)
+ except OSError, e:
+ if e.errno != errno.ENOTEMPTY:
+ raise
+ else:
+ dirfp.changed()
+
+def rmtree(dirname):
"""
A threadsafe and idempotent version of shutil.rmtree(). If the dir is
already gone, do nothing and return without raising an exception. If this
hunk ./src/allmydata/util/fileutil.py 239
else:
remove(fullname)
os.rmdir(dirname)
- except Exception, le:
- # Ignore "No such file or directory"
- if (not isinstance(le, OSError)) or le.args[0] != 2:
+ except EnvironmentError, le:
+ # Ignore "No such file or directory", collect any other exception.
+ if (le.args[0] != 2 and le.args[0] != 3) or (le.args[0] != errno.ENOENT):
excs.append(le)
hunk ./src/allmydata/util/fileutil.py 243
+ except Exception, le:
+ excs.append(le)

# Okay, now we've recursively removed everything, ignoring any "No
# such file or directory" errors, and collecting any other errors.
hunk ./src/allmydata/util/fileutil.py 256
raise OSError, "Failed to remove dir for unknown reason."
raise OSError, excs

+def fp_remove(fp):
+ """
+ An idempotent version of shutil.rmtree(). If the file/dir is already
+ gone, do nothing and return without raising an exception. If this call
+ removes the file/dir, return without raising an exception. If there is
+ an error that prevents removal, or if a file or directory at the same
+ path gets created again by someone else after this deletes it and before
+ this checks that it is gone, raise an exception.
+ """
+ try:
+ fp.remove()
+ except UnlistableError, e:
+ if e.originalException.errno != errno.ENOENT:
+ raise
+ except OSError, e:
+ if e.errno != errno.ENOENT:
+ raise
+
+def rm_dir(dirname):
+ # Renamed to be like shutil.rmtree and unlike rmdir.
+ return rmtree(dirname)

def remove_if_possible(f):
try:
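
fp_make_dirs, fp_rmdir_if_empty, and fp_remove above are the FilePath counterparts of the string-path helpers, and each is idempotent in the sense its docstring describes. The intended calling pattern, with illustrative directory names:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    d = FilePath("demo").child("a").child("b")
    fileutil.fp_make_dirs(d)        # creates demo/a/b
    fileutil.fp_make_dirs(d)        # second call is a no-op
    fileutil.fp_rmdir_if_empty(d)   # removes demo/a/b, since it is empty
    fileutil.fp_remove(d.parent())  # removes demo/a; already-gone is not an error
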
hunk ./src/allmydata/util/fileutil.py 387
import traceback
traceback.print_exc()

-def get_disk_stats(whichdir, reserved_space=0):
+def get_disk_stats(whichdirfp, reserved_space=0):
"""Return disk statistics for the storage disk, in the form of a dict
with the following fields.
total: total bytes on disk
hunk ./src/allmydata/util/fileutil.py 408
you can pass how many bytes you would like to leave unused on this
filesystem as reserved_space.
"""
+ precondition(isinstance(whichdirfp, FilePath), whichdirfp)

if have_GetDiskFreeSpaceExW:
# If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
hunk ./src/allmydata/util/fileutil.py 419
n_free_for_nonroot = c_ulonglong(0)
n_total = c_ulonglong(0)
n_free_for_root = c_ulonglong(0)
- retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
- byref(n_total),
- byref(n_free_for_root))
+ retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
+ byref(n_total),
+ byref(n_free_for_root))
if retval == 0:
raise OSError("Windows error %d attempting to get disk statistics for %r"
hunk ./src/allmydata/util/fileutil.py 424
- % (GetLastError(), whichdir))
+ % (GetLastError(), whichdirfp.path))
free_for_nonroot = n_free_for_nonroot.value
total = n_total.value
free_for_root = n_free_for_root.value
hunk ./src/allmydata/util/fileutil.py 433
# <http://docs.python.org/library/os.html#os.statvfs>
# <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
# <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
- s = os.statvfs(whichdir)
+ s = os.statvfs(whichdirfp.path)

# on my mac laptop:
# statvfs(2) is a wrapper around statfs(2).
hunk ./src/allmydata/util/fileutil.py 460
'avail': avail,
}

-def get_available_space(whichdir, reserved_space):
+def get_available_space(whichdirfp, reserved_space):
"""Returns available space for share storage in bytes, or None if no
API to get this information is available.

hunk ./src/allmydata/util/fileutil.py 472
you can pass how many bytes you would like to leave unused on this
filesystem as reserved_space.
"""
+ precondition(isinstance(whichdirfp, FilePath), whichdirfp)
try:
hunk ./src/allmydata/util/fileutil.py 474
- return get_disk_stats(whichdir, reserved_space)['avail']
+ return get_disk_stats(whichdirfp, reserved_space)['avail']
except AttributeError:
return None
hunk ./src/allmydata/util/fileutil.py 477
- except EnvironmentError:
- log.msg("OS call to get disk statistics failed")
+
+
+def get_used_space(fp):
+ if fp is None:
return 0
hunk ./src/allmydata/util/fileutil.py 482
+ try:
+ s = os.stat(fp.path)
+ except EnvironmentError:
+ if not fp.exists():
+ return 0
+ raise
+ else:
+ # POSIX defines st_blocks (originally a BSDism):
+ # <http://pubs.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html>
+ # but does not require stat() to give it a "meaningful value"
+ # <http://pubs.opengroup.org/onlinepubs/009695399/functions/stat.html>
+ # and says:
+ # "The unit for the st_blocks member of the stat structure is not defined
+ # within IEEE Std 1003.1-2001. In some implementations it is 512 bytes.
+ # It may differ on a file system basis. There is no correlation between
+ # values of the st_blocks and st_blksize, and the f_bsize (from <sys/statvfs.h>)
+ # structure members."
+ #
+ # The Linux docs define it as "the number of blocks allocated to the file,
+ # [in] 512-byte units." It is also defined that way on MacOS X. Python does
+ # not set the attribute on Windows.
+ #
+ # We consider platforms that define st_blocks but give it a wrong value, or
+ # measure it in a unit other than 512 bytes, to be broken. See also
+ # <http://bugs.python.org/issue12350>.
+
+ if hasattr(s, 'st_blocks'):
+ return s.st_blocks * 512
+ else:
+ return s.st_size
}
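
get_used_space above prefers st_blocks (512-byte units on the platforms that define it) over st_size, so sparse files are charged at their allocated size rather than their apparent size. The same logic in isolation, as a standalone sketch over a plain path:

    import os

    def used_space(path):
        # prefer the allocated size; fall back to the apparent size on
        # platforms (e.g. Windows) where st_blocks is not set
        s = os.stat(path)
        if hasattr(s, 'st_blocks'):
            return s.st_blocks * 512
        return s.st_size
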
[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
david-sarah@jacaranda.org**20110920033803
 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
] {
hunk ./src/allmydata/client.py 9
from twisted.internet import reactor, defer
from twisted.application import service
from twisted.application.internet import TimerService
+from twisted.python.filepath import FilePath
from foolscap.api import Referenceable
from pycryptopp.publickey import rsa

hunk ./src/allmydata/client.py 15
import allmydata
from allmydata.storage.server import StorageServer
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
from allmydata import storage_client
from allmydata.immutable.upload import Uploader
from allmydata.immutable.offloaded import Helper
hunk ./src/allmydata/client.py 213
return
readonly = self.get_config("storage", "readonly", False, boolean=True)

- storedir = os.path.join(self.basedir, self.STOREDIR)
+ storedir = FilePath(self.basedir).child(self.STOREDIR)

data = self.get_config("storage", "reserved_space", None)
reserved = None
hunk ./src/allmydata/client.py 255
'cutoff_date': cutoff_date,
'sharetypes': tuple(sharetypes),
}
- ss = StorageServer(storedir, self.nodeid,
- reserved_space=reserved,
- discard_storage=discard,
- readonly_storage=readonly,
+
+ backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
+ discard_storage=discard)
+ ss = StorageServer(nodeid, backend, storedir,
stats_provider=self.stats_provider,
expiration_policy=expiration_policy)
self.add_service(ss)
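
The client.py hunk above splits server construction in two: the disk-specific options move into a DiskBackend, and StorageServer now takes (nodeid, backend, storedir, ...). A construction sketch with made-up values, following this hunk's signature; note that other patches in this bundle show earlier intermediate signatures, and stats_provider/expiration_policy are assumed optional here:

    from twisted.python.filepath import FilePath
    from allmydata.storage.server import StorageServer
    from allmydata.storage.backends.disk.disk_backend import DiskBackend

    storedir = FilePath("/var/tahoe/storage")   # illustrative path
    backend = DiskBackend(storedir, readonly=False,
                          reserved_space=2**30, discard_storage=False)
    ss = StorageServer("\x00" * 20, backend, storedir)
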
hunk ./src/allmydata/interfaces.py 348

def get_shares():
"""
- Generates the IStoredShare objects held in this shareset.
+ Generates IStoredShare objects for all completed shares in this shareset.
"""

def has_incoming(shnum):
hunk ./src/allmydata/storage/backends/base.py 69
# def _create_mutable_share(self, storageserver, shnum, write_enabler):
# """create a mutable share with the given shnum and write_enabler"""

- # secrets might be a triple with cancel_secret in secrets[2], but if
- # so we ignore the cancel_secret.
write_enabler = secrets[0]
renew_secret = secrets[1]
hunk ./src/allmydata/storage/backends/base.py 71
+ cancel_secret = '\x00'*32
+ if len(secrets) > 2:
+ cancel_secret = secrets[2]

si_s = self.get_storage_index_string()
shares = {}
hunk ./src/allmydata/storage/backends/base.py 110
read_data[shnum] = share.readv(read_vector)

ownerid = 1 # TODO
- lease_info = LeaseInfo(ownerid, renew_secret,
+ lease_info = LeaseInfo(ownerid, renew_secret, cancel_secret,
expiration_time, storageserver.get_serverid())

if testv_is_good:
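
The base.py hunk above restores handling of an optional third secret, the cancel secret, defaulting it to 32 zero bytes when absent. The unpacking in isolation:

    def unpack_secrets(secrets):
        # secrets is (write_enabler, renew_secret[, cancel_secret]);
        # a missing cancel_secret is replaced by 32 zero bytes
        write_enabler = secrets[0]
        renew_secret = secrets[1]
        cancel_secret = '\x00'*32
        if len(secrets) > 2:
            cancel_secret = secrets[2]
        return (write_enabler, renew_secret, cancel_secret)
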
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 34
return newfp.child(sia)


-def get_share(fp):
+def get_share(storageindex, shnum, fp):
f = fp.open('rb')
try:
prefix = f.read(32)
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 42
f.close()

if prefix == MutableDiskShare.MAGIC:
- return MutableDiskShare(fp)
+ return MutableDiskShare(storageindex, shnum, fp)
else:
# assume it's immutable
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 45
- return ImmutableDiskShare(fp)
+ return ImmutableDiskShare(storageindex, shnum, fp)


class DiskBackend(Backend):
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 174
if not NUM_RE.match(shnumstr):
continue
sharehome = self._sharehomedir.child(shnumstr)
- yield self.get_share(sharehome)
+ yield get_share(self.get_storage_index(), int(shnumstr), sharehome)
except UnlistableError:
# There is no shares directory at all.
pass
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 185
return self._incominghomedir.child(str(shnum)).exists()

def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary):
- sharehome = self._sharehomedir.child(str(shnum))
+ finalhome = self._sharehomedir.child(str(shnum))
incominghome = self._incominghomedir.child(str(shnum))
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 187
- immsh = ImmutableDiskShare(self.get_storage_index(), shnum, sharehome, incominghome,
- max_size=max_space_per_bucket, create=True)
+ immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
+ max_size=max_space_per_bucket)
bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
if self._discard_storage:
bw.throw_out_all_data = True
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 198
fileutil.fp_make_dirs(self._sharehomedir)
sharehome = self._sharehomedir.child(str(shnum))
serverid = storageserver.get_serverid()
- return create_mutable_disk_share(sharehome, serverid, write_enabler, storageserver)
+ return create_mutable_disk_share(self.get_storage_index(), shnum, sharehome, serverid, write_enabler, storageserver)

def _clean_up_after_unlink(self):
fileutil.fp_rmdir_if_empty(self._sharehomedir)
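
get_share above now threads the storage index and share number through to the share classes, and decides mutable vs. immutable by the 32-byte magic prefix of the share file. The sniffing step on its own (runnable only against the patched tree, since it imports the new module path):

    from allmydata.storage.backends.disk.mutable import MutableDiskShare

    def sniff_share_kind(fp):
        f = fp.open('rb')
        try:
            prefix = f.read(32)
        finally:
            f.close()
        if prefix == MutableDiskShare.MAGIC:
            return "mutable"
        return "immutable"   # same assumption as get_share() above
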
hunk ./src/allmydata/storage/backends/disk/immutable.py 48
LEASE_SIZE = struct.calcsize(">L32s32sL")


- def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
- """ If max_size is not None then I won't allow more than
- max_size to be written to me. If create=True then max_size
- must not be None. """
- precondition((max_size is not None) or (not create), max_size, create)
+ def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None):
+ """
+ If max_size is not None then I won't allow more than max_size to be written to me.
+ If finalhome is not None (meaning that we are creating the share) then max_size
+ must not be None.
+ """
+ precondition((max_size is not None) or (finalhome is None), max_size, finalhome)
self._storageindex = storageindex
self._max_size = max_size
hunk ./src/allmydata/storage/backends/disk/immutable.py 57
- self._incominghome = incominghome
- self._home = finalhome
+
+ # If we are creating the share, _finalhome refers to the final path and
+ # _home to the incoming path. Otherwise, _finalhome is None.
+ self._finalhome = finalhome
+ self._home = home
self._shnum = shnum
hunk ./src/allmydata/storage/backends/disk/immutable.py 63
- if create:
- # touch the file, so later callers will see that we're working on
+
+ if self._finalhome is not None:
+ # Touch the file, so later callers will see that we're working on
# it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/disk/immutable.py 67
- assert not finalhome.exists()
- fp_make_dirs(self._incominghome.parent())
+ assert not self._finalhome.exists()
+ fp_make_dirs(self._home.parent())
# The second field -- the four-byte share data length -- is no
# longer used as of Tahoe v1.3.0, but we continue to write it in
# there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/disk/immutable.py 78
# the largest length that can fit into the field. That way, even
# if this does happen, the old < v1.3.0 server will still allow
# clients to read the first part of the share.
- self._incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
+ self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
self._lease_offset = max_size + 0x0c
self._num_leases = 0
else:
hunk ./src/allmydata/storage/backends/disk/immutable.py 101
% (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home)))

def close(self):
- fileutil.fp_make_dirs(self._home.parent())
- self._incominghome.moveTo(self._home)
- try:
- # self._incominghome is like storage/shares/incoming/ab/abcde/4 .
- # We try to delete the parent (.../ab/abcde) to avoid leaving
- # these directories lying around forever, but the delete might
- # fail if we're working on another share for the same storage
- # index (like ab/abcde/5). The alternative approach would be to
- # use a hierarchy of objects (PrefixHolder, BucketHolder,
- # ShareWriter), each of which is responsible for a single
- # directory on disk, and have them use reference counting of
- # their children to know when they should do the rmdir. This
- # approach is simpler, but relies on os.rmdir refusing to delete
- # a non-empty directory. Do *not* use fileutil.fp_remove() here!
- fileutil.fp_rmdir_if_empty(self._incominghome.parent())
- # we also delete the grandparent (prefix) directory, .../ab ,
- # again to avoid leaving directories lying around. This might
- # fail if there is another bucket open that shares a prefix (like
- # ab/abfff).
- fileutil.fp_rmdir_if_empty(self._incominghome.parent().parent())
- # we leave the great-grandparent (incoming/) directory in place.
- except EnvironmentError:
- # ignore the "can't rmdir because the directory is not empty"
- # exceptions, those are normal consequences of the
- # above-mentioned conditions.
- pass
- pass
+ fileutil.fp_make_dirs(self._finalhome.parent())
+ self._home.moveTo(self._finalhome)
+
+ # self._home is like storage/shares/incoming/ab/abcde/4 .
+ # We try to delete the parent (.../ab/abcde) to avoid leaving
+ # these directories lying around forever, but the delete might
+ # fail if we're working on another share for the same storage
+ # index (like ab/abcde/5). The alternative approach would be to
+ # use a hierarchy of objects (PrefixHolder, BucketHolder,
+ # ShareWriter), each of which is responsible for a single
+ # directory on disk, and have them use reference counting of
+ # their children to know when they should do the rmdir. This
+ # approach is simpler, but relies on os.rmdir (used by
+ # fp_rmdir_if_empty) refusing to delete a non-empty directory.
+ # Do *not* use fileutil.fp_remove() here!
+ parent = self._home.parent()
+ fileutil.fp_rmdir_if_empty(parent)
+
+ # we also delete the grandparent (prefix) directory, .../ab ,
+ # again to avoid leaving directories lying around. This might
+ # fail if there is another bucket open that shares a prefix (like
+ # ab/abfff).
+ fileutil.fp_rmdir_if_empty(parent.parent())
+
+ # we leave the great-grandparent (incoming/) directory in place.
+
+ # allow lease changes after closing.
+ self._home = self._finalhome
+ self._finalhome = None

def get_used_space(self):
hunk ./src/allmydata/storage/backends/disk/immutable.py 132
- return (fileutil.get_used_space(self._home) +
- fileutil.get_used_space(self._incominghome))
+ return (fileutil.get_used_space(self._finalhome) +
+ fileutil.get_used_space(self._home))

def get_storage_index(self):
return self._storageindex
hunk ./src/allmydata/storage/backends/disk/immutable.py 175
precondition(offset >= 0, offset)
if self._max_size is not None and offset+length > self._max_size:
raise DataTooLargeError(self._max_size, offset, length)
- f = self._incominghome.open(mode='rb+')
+ f = self._home.open(mode='rb+')
try:
real_offset = self._data_offset+offset
f.seek(real_offset)
hunk ./src/allmydata/storage/backends/disk/immutable.py 205

# These lease operations are intended for use by disk_backend.py.
# Other clients should not depend on the fact that the disk backend
- # stores leases in share files.
+ # stores leases in share files. XXX bucket.py also relies on this.

def get_leases(self):
"""Yields a LeaseInfo instance for all leases."""
hunk ./src/allmydata/storage/backends/disk/immutable.py 221
f.close()

def add_lease(self, lease_info):
- f = self._incominghome.open(mode='rb')
+ f = self._home.open(mode='rb+')
try:
num_leases = self._read_num_leases(f)
hunk ./src/allmydata/storage/backends/disk/immutable.py 224
- finally:
- f.close()
- f = self._home.open(mode='wb+')
- try:
self._write_lease_record(f, num_leases, lease_info)
self._write_num_leases(f, num_leases+1)
finally:
6823 | hunk ./src/allmydata/storage/backends/disk/mutable.py 440 |
---|
6824 | pass |
---|
6825 | |
---|
6826 | |
---|
6827 | -def create_mutable_disk_share(fp, serverid, write_enabler, parent): |
---|
6828 | - ms = MutableDiskShare(fp, parent) |
---|
6829 | +def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent): |
---|
6830 | + ms = MutableDiskShare(storageindex, shnum, fp, parent) |
---|
6831 | ms.create(serverid, write_enabler) |
---|
6832 | del ms |
---|
6833 | hunk ./src/allmydata/storage/backends/disk/mutable.py 444 |
---|
6834 | - return MutableDiskShare(fp, parent) |
---|
6835 | + return MutableDiskShare(storageindex, shnum, fp, parent) |
---|
6836 | hunk ./src/allmydata/storage/bucket.py 44 |
---|
6837 | start = time.time() |
---|
6838 | |
---|
6839 | self._share.close() |
---|
6840 | - filelen = self._share.stat() |
---|
6841 | + # XXX should this be self._share.get_used_space() ? |
---|
6842 | + consumed_size = self._share.get_size() |
---|
6843 | self._share = None |
---|
6844 | |
---|
6845 | self.closed = True |
---|
6846 | hunk ./src/allmydata/storage/bucket.py 51 |
---|
6847 | self._canary.dontNotifyOnDisconnect(self._disconnect_marker) |
---|
6848 | |
---|
6849 | - self.ss.bucket_writer_closed(self, filelen) |
---|
6850 | + self.ss.bucket_writer_closed(self, consumed_size) |
---|
6851 | self.ss.add_latency("close", time.time() - start) |
---|
6852 | self.ss.count("close") |
---|
6853 | |
---|
6854 | hunk ./src/allmydata/storage/server.py 182 |
---|
6855 | renew_secret, cancel_secret, |
---|
6856 | sharenums, allocated_size, |
---|
6857 | canary, owner_num=0): |
---|
6858 | - # cancel_secret is no longer used. |
---|
6859 | # owner_num is not for clients to set, but rather it should be |
---|
6860 | # curried into a StorageServer instance dedicated to a particular |
---|
6861 | # owner. |
---|
6862 | hunk ./src/allmydata/storage/server.py 195 |
---|
6863 | # Note that the lease should not be added until the BucketWriter |
---|
6864 | # has been closed. |
---|
6865 | expire_time = time.time() + 31*24*60*60 |
---|
6866 | - lease_info = LeaseInfo(owner_num, renew_secret, |
---|
6867 | + lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret, |
---|
6868 | expire_time, self._serverid) |
---|
6869 | |
---|
6870 | max_space_per_bucket = allocated_size |
---|
6871 | hunk ./src/allmydata/test/no_network.py 349 |
---|
6872 | return self.g.servers_by_number[i] |
---|
6873 | |
---|
6874 | def get_serverdir(self, i): |
---|
6875 | - return self.g.servers_by_number[i].backend.storedir |
---|
6876 | + return self.g.servers_by_number[i].backend._storedir |
---|
6877 | |
---|
6878 | def remove_server(self, i): |
---|
6879 | self.g.remove_server(self.g.servers_by_number[i].get_serverid()) |
---|
6880 | hunk ./src/allmydata/test/no_network.py 357 |
---|
6881 | def iterate_servers(self): |
---|
6882 | for i in sorted(self.g.servers_by_number.keys()): |
---|
6883 | ss = self.g.servers_by_number[i] |
---|
6884 | - yield (i, ss, ss.backend.storedir) |
---|
6885 | + yield (i, ss, ss.backend._storedir) |
---|
6886 | |
---|
6887 | def find_uri_shares(self, uri): |
---|
6888 | si = tahoe_uri.from_string(uri).get_storage_index() |
---|
6889 | hunk ./src/allmydata/test/no_network.py 384 |
---|
6890 | return shares |
---|
6891 | |
---|
6892 | def copy_share(self, from_share, uri, to_server): |
---|
6893 | - si = uri.from_string(self.uri).get_storage_index() |
---|
6894 | + si = tahoe_uri.from_string(uri).get_storage_index() |
---|
6895 | (i_shnum, i_serverid, i_sharefp) = from_share |
---|
6896 | shares_dir = to_server.backend.get_shareset(si)._sharehomedir |
---|
6897 | i_sharefp.copyTo(shares_dir.child(str(i_shnum))) |
---|
6898 | hunk ./src/allmydata/test/test_download.py 127 |
---|
6899 | |
---|
6900 | return d |
---|
6901 | |
---|
6902 | - def _write_shares(self, uri, shares): |
---|
6903 | - si = uri.from_string(uri).get_storage_index() |
---|
6904 | + def _write_shares(self, fileuri, shares): |
---|
6905 | + si = uri.from_string(fileuri).get_storage_index() |
---|
6906 | for i in shares: |
---|
6907 | shares_for_server = shares[i] |
---|
6908 | for shnum in shares_for_server: |
---|
6909 | hunk ./src/allmydata/test/test_hung_server.py 36 |
---|
6910 | |
---|
6911 | def _hang(self, servers, **kwargs): |
---|
6912 | for ss in servers: |
---|
6913 | - self.g.hang_server(ss.get_serverid(), **kwargs) |
---|
6914 | + self.g.hang_server(ss.original.get_serverid(), **kwargs) |
---|
6915 | |
---|
6916 | def _unhang(self, servers, **kwargs): |
---|
6917 | for ss in servers: |
---|
6918 | hunk ./src/allmydata/test/test_hung_server.py 40 |
---|
6919 | - self.g.unhang_server(ss.get_serverid(), **kwargs) |
---|
6920 | + self.g.unhang_server(ss.original.get_serverid(), **kwargs) |
---|
6921 | |
---|
6922 | def _hang_shares(self, shnums, **kwargs): |
---|
6923 | # hang all servers who are holding the given shares |
---|
6924 | hunk ./src/allmydata/test/test_hung_server.py 52 |
---|
6925 | hung_serverids.add(i_serverid) |
---|
6926 | |
---|
6927 | def _delete_all_shares_from(self, servers): |
---|
6928 | - serverids = [ss.get_serverid() for ss in servers] |
---|
6929 | + serverids = [ss.original.get_serverid() for ss in servers] |
---|
6930 | for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
6931 | if i_serverid in serverids: |
---|
6932 | i_sharefp.remove() |
---|
6933 | hunk ./src/allmydata/test/test_hung_server.py 58 |
---|
6934 | |
---|
6935 | def _corrupt_all_shares_in(self, servers, corruptor_func): |
---|
6936 | - serverids = [ss.get_serverid() for ss in servers] |
---|
6937 | + serverids = [ss.original.get_serverid() for ss in servers] |
---|
6938 | for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
6939 | if i_serverid in serverids: |
---|
6940 | self.corrupt_share((i_shnum, i_serverid, i_sharefp), corruptor_func) |
---|
6941 | hunk ./src/allmydata/test/test_hung_server.py 64 |
---|
6942 | |
---|
6943 | def _copy_all_shares_from(self, from_servers, to_server): |
---|
6944 | - serverids = [ss.get_serverid() for ss in from_servers] |
---|
6945 | + serverids = [ss.original.get_serverid() for ss in from_servers] |
---|
6946 | for (i_shnum, i_serverid, i_sharefp) in self.shares: |
---|
6947 | if i_serverid in serverids: |
---|
6948 | self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server) |
---|
6949 | hunk ./src/allmydata/test/test_mutable.py 2990 |
---|
6950 | fso = debug.FindSharesOptions() |
---|
6951 | storage_index = base32.b2a(n.get_storage_index()) |
---|
6952 | fso.si_s = storage_index |
---|
6953 | - fso.nodedirs = [unicode(os.path.dirname(os.path.abspath(storedir))) |
---|
6954 | + fso.nodedirs = [unicode(storedir.parent().path) |
---|
6955 | for (i,ss,storedir) |
---|
6956 | in self.iterate_servers()] |
---|
6957 | fso.stdout = StringIO() |
---|
6958 | hunk ./src/allmydata/test/test_upload.py 818 |
---|
6959 | if share_number is not None: |
---|
6960 | self._copy_share_to_server(share_number, server_number) |
---|
6961 | |
---|
6962 | - |
---|
6963 | def _copy_share_to_server(self, share_number, server_number): |
---|
6964 | ss = self.g.servers_by_number[server_number] |
---|
6965 | hunk ./src/allmydata/test/test_upload.py 820 |
---|
6966 | - self.copy_share(self.shares[share_number], ss) |
---|
6967 | + self.copy_share(self.shares[share_number], self.uri, ss) |
---|
6968 | |
---|
6969 | def _setup_grid(self): |
---|
6970 | """ |
---|
6971 | } |
---|
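
Editor's sketch: the comment block added above describes the directory-pruning scheme used when an immutable share is closed -- move it out of incoming/, then opportunistically rmdir the bucket and prefix directories, relying on rmdir's refusal to delete a non-empty directory. A minimal standalone illustration of that idea under stated assumptions (plain os calls instead of the FilePath/fileutil helpers the patch actually uses; all names here are illustrative, not the patch's API):

    import os, errno

    def finalize_share(incoming_path, final_path):
        # incoming_path is like .../storage/shares/incoming/ab/abcde/4
        final_dir = os.path.dirname(final_path)
        if not os.path.isdir(final_dir):
            os.makedirs(final_dir)
        os.rename(incoming_path, final_path)

        bucket_dir = os.path.dirname(incoming_path)   # .../incoming/ab/abcde
        prefix_dir = os.path.dirname(bucket_dir)      # .../incoming/ab
        for d in (bucket_dir, prefix_dir):
            try:
                os.rmdir(d)   # refuses to delete a non-empty directory
            except OSError, e:
                if e.errno not in (errno.ENOTEMPTY, errno.EEXIST):
                    raise
                break         # another share is still in flight; stop pruning
        # the incoming/ directory itself is left in place
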
[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
david-sarah@jacaranda.org**20110920171737
 Ignore-this: 5947e864682a43cb04e557334cda7c19
] {
adddir ./docs/backends
addfile ./docs/backends/S3.rst
hunk ./docs/backends/S3.rst 1
+====================================================
+Storing Shares in Amazon Simple Storage Service (S3)
+====================================================
+
+S3 is a commercial storage service provided by Amazon, described at
+`<https://aws.amazon.com/s3/>`_.
+
+The Tahoe-LAFS storage server can be configured to store its shares in
+an S3 bucket, rather than on the local filesystem. To enable this, add the
6988 | +following keys to the server's ``tahoe.cfg`` file: |
---|
6989 | + |
---|
6990 | +``[storage]`` |
---|
6991 | + |
---|
6992 | +``backend = s3`` |
---|
6993 | + |
---|
6994 | + This turns off the local filesystem backend and enables use of S3. |
---|
6995 | + |
---|
6996 | +``s3.access_key_id = (string, required)`` |
---|
6997 | +``s3.secret_access_key = (string, required)`` |
---|
6998 | + |
---|
+ These two keys give the storage server permission to access your Amazon
+ Web Services account, allowing it to upload and download shares
+ from S3.
+
7002 | + |
---|
7003 | +``s3.bucket = (string, required)`` |
---|
7004 | + |
---|
7005 | + This controls which bucket will be used to hold shares. The Tahoe-LAFS |
---|
7006 | + storage server will only modify and access objects in the configured S3 |
---|
7007 | + bucket. |
---|
7008 | + |
---|
7009 | +``s3.url = (URL string, optional)`` |
---|
7010 | + |
---|
7011 | + This URL tells the storage server how to access the S3 service. It |
---|
7012 | + defaults to ``http://s3.amazonaws.com``, but by setting it to something |
---|
7013 | + else, you may be able to use some other S3-like service if it is |
---|
7014 | + sufficiently compatible. |
---|
7015 | + |
---|
7016 | +``s3.max_space = (str, optional)`` |
---|
7017 | + |
---|
7018 | + This tells the server to limit how much space can be used in the S3 |
---|
7019 | + bucket. Before each share is uploaded, the server will ask S3 for the |
---|
7020 | + current bucket usage, and will only accept the share if it does not cause |
---|
7021 | + the usage to grow above this limit. |
---|
7022 | + |
---|
7023 | + The string contains a number, with an optional case-insensitive scale |
---|
7024 | + suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So |
---|
7025 | + "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the |
---|
7026 | + same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same |
---|
7027 | + thing. |
---|
7028 | + |
---|
7029 | + If ``s3.max_space`` is omitted, the default behavior is to allow |
---|
7030 | + unlimited usage. |
---|
7031 | + |
---|
7032 | + |
---|
7033 | +Once configured, the WUI "storage server" page will provide information about |
---|
7034 | +how much space is being used and how many shares are being stored. |
---|
7035 | + |
---|
7036 | + |
---|
7037 | +Issues |
---|
7038 | +------ |
---|
7039 | + |
---|
7040 | +Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS |
---|
7041 | +is configured to store shares in S3 rather than on local disk, some common |
---|
7042 | +operations may behave differently: |
---|
7043 | + |
---|
7044 | +* Lease crawling/expiration is not yet implemented. As a result, shares will |
---|
7045 | + be retained forever, and the Storage Server status web page will not show |
---|
7046 | + information about the number of mutable/immutable shares present. |
---|
7047 | + |
---|
7048 | +* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for |
---|
7049 | + each share upload, causing the upload process to run slightly slower and |
---|
7050 | + incur more S3 request charges. |
---|
7051 | addfile ./docs/backends/disk.rst |
---|
7052 | hunk ./docs/backends/disk.rst 1 |
---|
7053 | +==================================== |
---|
7054 | +Storing Shares on a Local Filesystem |
---|
7055 | +==================================== |
---|
7056 | + |
---|
7057 | +The "disk" backend stores shares on the local filesystem. Versions of |
---|
7058 | +Tahoe-LAFS <= 1.9.0 always stored shares in this way. |
---|
7059 | + |
---|
7060 | +``[storage]`` |
---|
7061 | + |
---|
7062 | +``backend = disk`` |
---|
7063 | + |
---|
7064 | + This enables use of the disk backend, and is the default. |
---|
7065 | + |
---|
7066 | +``reserved_space = (str, optional)`` |
---|
7067 | + |
---|
7068 | + If provided, this value defines how much disk space is reserved: the |
---|
7069 | + storage server will not accept any share that causes the amount of free |
---|
7070 | + disk space to drop below this value. (The free space is measured by a |
---|
7071 | + call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the |
---|
7072 | + space available to the user account under which the storage server runs.) |
---|
7073 | + |
---|
7074 | + This string contains a number, with an optional case-insensitive scale |
---|
7075 | + suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So |
---|
7076 | + "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the |
---|
7077 | + same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same |
---|
7078 | + thing. |
---|
7079 | + |
---|
7080 | + "``tahoe create-node``" generates a tahoe.cfg with |
---|
7081 | + "``reserved_space=1G``", but you may wish to raise, lower, or remove the |
---|
7082 | + reservation to suit your needs. |
---|
7083 | + |
---|
7084 | +``expire.enabled =`` |
---|
7085 | + |
---|
7086 | +``expire.mode =`` |
---|
7087 | + |
---|
7088 | +``expire.override_lease_duration =`` |
---|
7089 | + |
---|
7090 | +``expire.cutoff_date =`` |
---|
7091 | + |
---|
7092 | +``expire.immutable =`` |
---|
7093 | + |
---|
7094 | +``expire.mutable =`` |
---|
7095 | + |
---|
7096 | + These settings control garbage collection, causing the server to |
---|
7097 | + delete shares that no longer have an up-to-date lease on them. Please |
---|
7098 | + see `<garbage-collection.rst>`_ for full details. |
---|
7099 | hunk ./docs/configuration.rst 436 |
---|
7100 | <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/390>`_ for the current |
---|
7101 | status of this bug. The default value is ``False``. |
---|
7102 | |
---|
7103 | -``reserved_space = (str, optional)`` |
---|
7104 | +``backend = (string, optional)`` |
---|
7105 | |
---|
7106 | hunk ./docs/configuration.rst 438 |
---|
7107 | - If provided, this value defines how much disk space is reserved: the |
---|
7108 | - storage server will not accept any share that causes the amount of free |
---|
7109 | - disk space to drop below this value. (The free space is measured by a |
---|
7110 | - call to statvfs(2) on Unix, or GetDiskFreeSpaceEx on Windows, and is the |
---|
7111 | - space available to the user account under which the storage server runs.) |
---|
+ Storage servers can store their data in different "backends". Clients
+ need not be aware of which backend is used by a server. The default
+ value is ``disk``.

hunk ./docs/configuration.rst 442
- This string contains a number, with an optional case-insensitive scale
- suffix like "K" or "M" or "G", and an optional "B" or "iB" suffix. So
- "100MB", "100M", "100000000B", "100000000", and "100000kb" all mean the
- same thing. Likewise, "1MiB", "1024KiB", and "1048576B" all mean the same
- thing.
+``backend = disk``

hunk ./docs/configuration.rst 444
- "``tahoe create-node``" generates a tahoe.cfg with
- "``reserved_space=1G``", but you may wish to raise, lower, or remove the
- reservation to suit your needs.
+ The default is to store shares on the local filesystem (in
+ BASEDIR/storage/shares/). For configuration details (including how to
+ reserve a minimum amount of free space), see `<backends/disk.rst>`_.

hunk ./docs/configuration.rst 448
-``expire.enabled =``
+``backend = S3``

hunk ./docs/configuration.rst 450
-``expire.mode =``
-
-``expire.override_lease_duration =``
-
-``expire.cutoff_date =``
-
-``expire.immutable =``
-
-``expire.mutable =``
-
- These settings control garbage collection, in which the server will
- delete shares that no longer have an up-to-date lease on them. Please see
- `<garbage-collection.rst>`_ for full details.
+ The storage server can store all shares to an Amazon Simple Storage
+ Service (S3) bucket. For configuration details, see `<backends/S3.rst>`_.


Running A Helper
}
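
Editor's sketch: both ``s3.max_space`` and ``reserved_space`` take the "quantity of space" strings described in the docs added above (decimal K/M/G scale suffixes, switching to binary units when "iB" is present). The real parser lives elsewhere in the allmydata codebase; this hypothetical re-implementation only makes the documented suffix rules concrete:

    import re

    _SPACE = re.compile(r"^(\d+)([kKmMgG]?)([iI]?[bB])?$")

    def parse_space(s):
        # "100MB" -> 100000000, "1MiB" -> 1048576, "100000kb" -> 100000000
        m = _SPACE.match(s.strip())
        if not m:
            raise ValueError("invalid quantity of space: %r" % (s,))
        digits, scale, bsuffix = m.groups()
        base = 1024 if (bsuffix or "").lower() == "ib" else 1000
        power = {"": 0, "k": 1, "m": 2, "g": 3}[scale.lower()]
        return int(digits) * (base ** power)

    # parse_space("100MB") == parse_space("100M") == 100000000
    # parse_space("1MiB") == parse_space("1024KiB") == 1048576
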
[Fix some incorrect attribute accesses. refs #999
david-sarah@jacaranda.org**20110921031207
 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
] {
hunk ./src/allmydata/client.py 258

backend = DiskBackend(storedir, readonly=readonly, reserved_space=reserved,
discard_storage=discard)
- ss = StorageServer(nodeid, backend, storedir,
+ ss = StorageServer(self.nodeid, backend, storedir,
stats_provider=self.stats_provider,
expiration_policy=expiration_policy)
self.add_service(ss)
hunk ./src/allmydata/interfaces.py 449
Returns the storage index.
"""

+ def get_storage_index_string():
+ """
+ Returns the base32-encoded storage index.
+ """
+
def get_shnum():
"""
Returns the share number.
hunk ./src/allmydata/storage/backends/disk/immutable.py 138
def get_storage_index(self):
return self._storageindex

+ def get_storage_index_string(self):
+ return si_b2a(self._storageindex)
+
def get_shnum(self):
return self._shnum

hunk ./src/allmydata/storage/backends/disk/mutable.py 119
def get_storage_index(self):
return self._storageindex

+ def get_storage_index_string(self):
+ return si_b2a(self._storageindex)
+
def get_shnum(self):
return self._shnum

hunk ./src/allmydata/storage/bucket.py 86
def __init__(self, ss, share):
self.ss = ss
self._share = share
- self.storageindex = share.storageindex
- self.shnum = share.shnum
+ self.storageindex = share.get_storage_index()
+ self.shnum = share.get_shnum()

def __repr__(self):
return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/expirer.py 6
from twisted.python import log as twlog

from allmydata.storage.crawler import ShareCrawler
-from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \
+from allmydata.storage.common import UnknownMutableContainerVersionError, \
UnknownImmutableContainerVersionError


hunk ./src/allmydata/storage/expirer.py 124
struct.error):
twlog.msg("lease-checker error processing %r" % (share,))
twlog.err()
- which = (si_b2a(share.storageindex), share.get_shnum())
+ which = (share.get_storage_index_string(), share.get_shnum())
self.state["cycle-to-date"]["corrupt-shares"].append(which)
wks = (1, 1, 1, "unknown")
would_keep_shares.append(wks)
hunk ./src/allmydata/storage/server.py 221
alreadygot = set()
for share in shareset.get_shares():
share.add_or_renew_lease(lease_info)
- alreadygot.add(share.shnum)
+ alreadygot.add(share.get_shnum())

for shnum in sharenums - alreadygot:
if shareset.has_incoming(shnum):
hunk ./src/allmydata/storage/server.py 324

try:
shareset = self.backend.get_shareset(storageindex)
- return shareset.readv(self, shares, readv)
+ return shareset.readv(shares, readv)
finally:
self.add_latency("readv", time.time() - start)

hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
- f = open(filename, "rb")
- prefix = f.read(32)
- f.close()
- if prefix == MutableShareFile.MAGIC:
- return MutableShareFile(filename)
- # otherwise assume it's immutable
- return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/no_network.py 387
si = tahoe_uri.from_string(uri).get_storage_index()
(i_shnum, i_serverid, i_sharefp) = from_share
shares_dir = to_server.backend.get_shareset(si)._sharehomedir
+ fileutil.fp_make_dirs(shares_dir)
i_sharefp.copyTo(shares_dir.child(str(i_shnum)))

def restore_all_shares(self, shares):
hunk ./src/allmydata/test/no_network.py 391
- for share, data in shares.items():
- share.home.setContent(data)
+ for sharepath, data in shares.items():
+ FilePath(sharepath).setContent(data)

def delete_share(self, (shnum, serverid, sharefp)):
sharefp.remove()
hunk ./src/allmydata/test/test_upload.py 744
servertoshnums = {} # k: server, v: set(shnum)

for i, c in self.g.servers_by_number.iteritems():
- for (dirp, dirns, fns) in os.walk(c.sharedir):
+ for (dirp, dirns, fns) in os.walk(c.backend._sharedir.path):
for fn in fns:
try:
sharenum = int(fn)
}
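
Editor's sketch: the deleted src/allmydata/storage/shares.py chose a share class by reading a 32-byte magic prefix, and the patches assume the new disk backend's get_share() performs the same dispatch. A sketch of that dispatch in terms of the new classes (the ``MutableDiskShare.MAGIC`` constant and the ``("", 0, fp)`` constructor pattern follow the debug.py hunks above; the function name here is hypothetical):

    from allmydata.storage.backends.disk.mutable import MutableDiskShare
    from allmydata.storage.backends.disk.immutable import ImmutableDiskShare

    def sketch_get_share(storageindex, shnum, fp):
        # read the container magic from the start of the share file
        f = fp.open("rb")
        try:
            prefix = f.read(32)
        finally:
            f.close()
        if prefix == MutableDiskShare.MAGIC:
            return MutableDiskShare(storageindex, shnum, fp)
        # otherwise assume the share is immutable, as the old code did
        return ImmutableDiskShare(storageindex, shnum, fp)
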
[docs/backends/S3.rst: remove Issues section. refs #999
david-sarah@jacaranda.org**20110921031625
 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
] hunk ./docs/backends/S3.rst 57

Once configured, the WUI "storage server" page will provide information about
how much space is being used and how many shares are being stored.
-
-
-Issues
-------
-
-Objects in an S3 bucket cannot be read for free. As a result, when Tahoe-LAFS
-is configured to store shares in S3 rather than on local disk, some common
-operations may behave differently:
-
-* Lease crawling/expiration is not yet implemented. As a result, shares will
- be retained forever, and the Storage Server status web page will not show
- information about the number of mutable/immutable shares present.
-
-* Enabling ``s3.max_space`` causes an extra S3 usage query to be sent for
- each share upload, causing the upload process to run slightly slower and
- incur more S3 request charges.
[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
david-sarah@jacaranda.org**20110921031705
 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
] {
hunk ./docs/backends/S3.rst 38
else, you may be able to use some other S3-like service if it is
sufficiently compatible.

-``s3.max_space = (str, optional)``
+``s3.max_space = (quantity of space, optional)``

This tells the server to limit how much space can be used in the S3
bucket. Before each share is uploaded, the server will ask S3 for the
hunk ./docs/backends/disk.rst 14

This enables use of the disk backend, and is the default.

-``reserved_space = (str, optional)``
+``reserved_space = (quantity of space, optional)``

If provided, this value defines how much disk space is reserved: the
storage server will not accept any share that causes the amount of free
}
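
Editor's sketch: the test hunks below exercise ``reserved_space`` values written into tahoe.cfg. For reference, a hypothetical ``[storage]`` section using the settings documented above, read back here with the stdlib ConfigParser (the node itself uses its own config machinery, so this is illustration only):

    from ConfigParser import SafeConfigParser
    from StringIO import StringIO

    TAHOE_CFG = ("[storage]\n"
                 "enabled = true\n"
                 "backend = disk\n"
                 "reserved_space = 1G\n")

    parser = SafeConfigParser()
    parser.readfp(StringIO(TAHOE_CFG))
    print parser.get("storage", "backend")         # disk
    print parser.get("storage", "reserved_space")  # 1G (parsed as a quantity of space)
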
[More fixes to tests needed for pluggable backends. refs #999
david-sarah@jacaranda.org**20110921184649
 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
] {
hunk ./src/allmydata/scripts/debug.py 8
from twisted.python import usage, failure
from twisted.internet import defer
from twisted.scripts import trial as twisted_trial
+from twisted.python.filepath import FilePath


class DumpOptions(usage.Options):
hunk ./src/allmydata/scripts/debug.py 38
self['filename'] = argv_to_abspath(filename)

def dump_share(options):
- from allmydata.storage.mutable import MutableShareFile
+ from allmydata.storage.backends.disk.disk_backend import get_share
from allmydata.util.encodingutil import quote_output

out = options.stdout
hunk ./src/allmydata/scripts/debug.py 46
# check the version, to see if we have a mutable or immutable share
print >>out, "share filename: %s" % quote_output(options['filename'])

- f = open(options['filename'], "rb")
- prefix = f.read(32)
- f.close()
- if prefix == MutableShareFile.MAGIC:
- return dump_mutable_share(options)
- # otherwise assume it's immutable
- return dump_immutable_share(options)
-
-def dump_immutable_share(options):
- from allmydata.storage.immutable import ShareFile
+ fp = FilePath(options['filename'])
+ share = get_share("", 0, fp)
+ if share.sharetype == "mutable":
+ return dump_mutable_share(options, share)
+ else:
+ assert share.sharetype == "immutable", share.sharetype
+ return dump_immutable_share(options, share)
7377 | |
---|
7378 | hunk ./src/allmydata/scripts/debug.py 53 |
---|
7379 | +def dump_immutable_share(options, share): |
---|
7380 | out = options.stdout |
---|
7381 | hunk ./src/allmydata/scripts/debug.py 55 |
---|
7382 | - f = ShareFile(options['filename']) |
---|
7383 | if not options["leases-only"]: |
---|
7384 | hunk ./src/allmydata/scripts/debug.py 56 |
---|
7385 | - dump_immutable_chk_share(f, out, options) |
---|
7386 | - dump_immutable_lease_info(f, out) |
---|
7387 | + dump_immutable_chk_share(share, out, options) |
---|
7388 | + dump_immutable_lease_info(share, out) |
---|
7389 | print >>out |
---|
7390 | return 0 |
---|
7391 | |
---|
7392 | hunk ./src/allmydata/scripts/debug.py 166 |
---|
7393 | return when |
---|
7394 | |
---|
7395 | |
---|
7396 | -def dump_mutable_share(options): |
---|
7397 | - from allmydata.storage.mutable import MutableShareFile |
---|
7398 | +def dump_mutable_share(options, m): |
---|
7399 | from allmydata.util import base32, idlib |
---|
7400 | out = options.stdout |
---|
7401 | hunk ./src/allmydata/scripts/debug.py 169 |
---|
7402 | - m = MutableShareFile(options['filename']) |
---|
7403 | f = open(options['filename'], "rb") |
---|
7404 | WE, nodeid = m._read_write_enabler_and_nodeid(f) |
---|
7405 | num_extra_leases = m._read_num_extra_leases(f) |
---|
7406 | hunk ./src/allmydata/scripts/debug.py 641 |
---|
7407 | /home/warner/testnet/node-1/storage/shares/44k/44kai1tui348689nrw8fjegc8c/9 |
---|
7408 | /home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2 |
---|
7409 | """ |
---|
7410 | - from allmydata.storage.server import si_a2b, storage_index_to_dir |
---|
7411 | - from allmydata.util.encodingutil import listdir_unicode |
---|
7412 | + from allmydata.storage.server import si_a2b |
---|
7413 | + from allmydata.storage.backends.disk_backend import si_si2dir |
---|
7414 | + from allmydata.util.encodingutil import quote_filepath |
---|
7415 | |
---|
7416 | out = options.stdout |
---|
7417 | hunk ./src/allmydata/scripts/debug.py 646 |
---|
7418 | - sharedir = storage_index_to_dir(si_a2b(options.si_s)) |
---|
7419 | - for d in options.nodedirs: |
---|
7420 | - d = os.path.join(d, "storage/shares", sharedir) |
---|
7421 | - if os.path.exists(d): |
---|
7422 | - for shnum in listdir_unicode(d): |
---|
7423 | - print >>out, os.path.join(d, shnum) |
---|
7424 | + si = si_a2b(options.si_s) |
---|
7425 | + for nodedir in options.nodedirs: |
---|
7426 | + sharedir = si_si2dir(nodedir.child("storage").child("shares"), si) |
---|
7427 | + if sharedir.exists(): |
---|
7428 | + for sharefp in sharedir.children(): |
---|
7429 | + print >>out, quote_filepath(sharefp, quotemarks=False) |
---|
7430 | |
---|
7431 | return 0 |
---|
7432 | |
---|
7433 | hunk ./src/allmydata/scripts/debug.py 878 |
---|
7434 | print >>err, "Error processing %s" % quote_output(si_dir) |
---|
7435 | failure.Failure().printTraceback(err) |
---|
7436 | |
---|
7437 | + |
---|
7438 | class CorruptShareOptions(usage.Options): |
---|
7439 | def getSynopsis(self): |
---|
7440 | return "Usage: tahoe debug corrupt-share SHARE_FILENAME" |
---|
7441 | hunk ./src/allmydata/scripts/debug.py 902 |
---|
7442 | Obviously, this command should not be used in normal operation. |
---|
7443 | """ |
---|
7444 | return t |
---|
7445 | + |
---|
7446 | def parseArgs(self, filename): |
---|
7447 | self['filename'] = filename |
---|
7448 | |
---|
7449 | hunk ./src/allmydata/scripts/debug.py 907 |
---|
7450 | def corrupt_share(options): |
---|
7451 | + do_corrupt_share(options.stdout, FilePath(options['filename']), options['offset']) |
---|
7452 | + |
---|
7453 | +def do_corrupt_share(out, fp, offset="block-random"): |
---|
7454 | import random |
---|
7455 | hunk ./src/allmydata/scripts/debug.py 911 |
---|
7456 | - from allmydata.storage.mutable import MutableShareFile |
---|
7457 | - from allmydata.storage.immutable import ShareFile |
---|
7458 | + from allmydata.storage.backends.disk.mutable import MutableDiskShare |
---|
7459 | + from allmydata.storage.backends.disk.immutable import ImmutableDiskShare |
---|
7460 | from allmydata.mutable.layout import unpack_header |
---|
7461 | from allmydata.immutable.layout import ReadBucketProxy |
---|
7462 | hunk ./src/allmydata/scripts/debug.py 915 |
---|
7463 | - out = options.stdout |
---|
7464 | - fn = options['filename'] |
---|
7465 | - assert options["offset"] == "block-random", "other offsets not implemented" |
---|
7466 | + |
---|
7467 | + assert offset == "block-random", "other offsets not implemented" |
---|
7468 | + |
---|
7469 | # first, what kind of share is it? |
---|
7470 | |
---|
7471 | def flip_bit(start, end): |
---|
7472 | hunk ./src/allmydata/scripts/debug.py 924 |
---|
7473 | offset = random.randrange(start, end) |
---|
7474 | bit = random.randrange(0, 8) |
---|
7475 | print >>out, "[%d..%d): %d.b%d" % (start, end, offset, bit) |
---|
7476 | - f = open(fn, "rb+") |
---|
7477 | - f.seek(offset) |
---|
7478 | - d = f.read(1) |
---|
7479 | - d = chr(ord(d) ^ 0x01) |
---|
7480 | - f.seek(offset) |
---|
7481 | - f.write(d) |
---|
7482 | - f.close() |
---|
7483 | + f = fp.open("rb+") |
---|
7484 | + try: |
---|
7485 | + f.seek(offset) |
---|
7486 | + d = f.read(1) |
---|
7487 | + d = chr(ord(d) ^ 0x01) |
---|
7488 | + f.seek(offset) |
---|
7489 | + f.write(d) |
---|
7490 | + finally: |
---|
7491 | + f.close() |
---|
7492 | |
---|
7493 | hunk ./src/allmydata/scripts/debug.py 934 |
---|
7494 | - f = open(fn, "rb") |
---|
7495 | - prefix = f.read(32) |
---|
7496 | - f.close() |
---|
7497 | - if prefix == MutableShareFile.MAGIC: |
---|
7498 | - # mutable |
---|
7499 | - m = MutableShareFile(fn) |
---|
7500 | - f = open(fn, "rb") |
---|
7501 | - f.seek(m.DATA_OFFSET) |
---|
7502 | - data = f.read(2000) |
---|
7503 | - # make sure this slot contains an SMDF share |
---|
7504 | - assert data[0] == "\x00", "non-SDMF mutable shares not supported" |
---|
7505 | + f = fp.open("rb") |
---|
7506 | + try: |
---|
7507 | + prefix = f.read(32) |
---|
7508 | + finally: |
---|
7509 | f.close() |
---|
7510 | hunk ./src/allmydata/scripts/debug.py 939 |
---|
7511 | + if prefix == MutableDiskShare.MAGIC: |
---|
7512 | + # mutable |
---|
7513 | + m = MutableDiskShare("", 0, fp) |
---|
7514 | + f = fp.open("rb") |
---|
7515 | + try: |
---|
7516 | + f.seek(m.DATA_OFFSET) |
---|
7517 | + data = f.read(2000) |
---|
+ # make sure this slot contains an SDMF share
+ assert data[0] == "\x00", "non-SDMF mutable shares not supported"
+ finally:
+ f.close()

(version, ig_seqnum, ig_roothash, ig_IV, ig_k, ig_N, ig_segsize,
ig_datalen, offsets) = unpack_header(data)
hunk ./src/allmydata/scripts/debug.py 960
flip_bit(start, end)
else:
# otherwise assume it's immutable
- f = ShareFile(fn)
+ f = ImmutableDiskShare("", 0, fp)
bp = ReadBucketProxy(None, None, '')
offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
start = f._data_offset + offsets["data"]
hunk ./src/allmydata/storage/backends/base.py 92
(testv, datav, new_length) = test_and_write_vectors[sharenum]
if sharenum in shares:
if not shares[sharenum].check_testv(testv):
- self.log("testv failed: [%d]: %r" % (sharenum, testv))
+ storageserver.log("testv failed: [%d]: %r" % (sharenum, testv))
testv_is_good = False
break
else:
hunk ./src/allmydata/storage/backends/base.py 99
# compare the vectors against an empty share, in which all
# reads return empty strings
if not EmptyShare().check_testv(testv):
- self.log("testv failed (empty): [%d] %r" % (sharenum,
- testv))
+ storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv))
testv_is_good = False
break

hunk ./src/allmydata/test/test_cli.py 2892
# delete one, corrupt a second
shares = self.find_uri_shares(self.uri)
self.failUnlessReallyEqual(len(shares), 10)
- os.unlink(shares[0][2])
- cso = debug.CorruptShareOptions()
- cso.stdout = StringIO()
- cso.parseOptions([shares[1][2]])
+ shares[0][2].remove()
+ stdout = StringIO()
+ sharefile = shares[1][2]
storage_index = uri.from_string(self.uri).get_storage_index()
self._corrupt_share_line = " server %s, SI %s, shnum %d" % \
(base32.b2a(shares[1][1]),
hunk ./src/allmydata/test/test_cli.py 2900
base32.b2a(storage_index),
shares[1][0])
- debug.corrupt_share(cso)
+ debug.do_corrupt_share(stdout, sharefile)
d.addCallback(_clobber_shares)

d.addCallback(lambda ign: self.do_cli("check", "--verify", self.uri))
hunk ./src/allmydata/test/test_cli.py 3017
def _clobber_shares(ignored):
shares = self.find_uri_shares(self.uris[u"g\u00F6\u00F6d"])
self.failUnlessReallyEqual(len(shares), 10)
- os.unlink(shares[0][2])
+ shares[0][2].remove()

shares = self.find_uri_shares(self.uris["mutable"])
hunk ./src/allmydata/test/test_cli.py 3020
- cso = debug.CorruptShareOptions()
- cso.stdout = StringIO()
- cso.parseOptions([shares[1][2]])
+ stdout = StringIO()
+ sharefile = shares[1][2]
storage_index = uri.from_string(self.uris["mutable"]).get_storage_index()
self._corrupt_share_line = " corrupt: server %s, SI %s, shnum %d" % \
(base32.b2a(shares[1][1]),
hunk ./src/allmydata/test/test_cli.py 3027
base32.b2a(storage_index),
shares[1][0])
- debug.corrupt_share(cso)
+ debug.do_corrupt_share(stdout, sharefile)
d.addCallback(_clobber_shares)

# root
hunk ./src/allmydata/test/test_client.py 90
"enabled = true\n" + \
"reserved_space = 1000\n")
c = client.Client(basedir)
- self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 1000)
+ self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 1000)

def test_reserved_2(self):
basedir = "client.Basic.test_reserved_2"
hunk ./src/allmydata/test/test_client.py 101
"enabled = true\n" + \
"reserved_space = 10K\n")
c = client.Client(basedir)
- self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 10*1000)
+ self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 10*1000)

def test_reserved_3(self):
basedir = "client.Basic.test_reserved_3"
hunk ./src/allmydata/test/test_client.py 112
"enabled = true\n" + \
"reserved_space = 5mB\n")
c = client.Client(basedir)
- self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
+ self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
5*1000*1000)

def test_reserved_4(self):
hunk ./src/allmydata/test/test_client.py 124
"enabled = true\n" + \
"reserved_space = 78Gb\n")
c = client.Client(basedir)
- self.failUnlessEqual(c.getServiceNamed("storage").reserved_space,
+ self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space,
78*1000*1000*1000)

def test_reserved_bad(self):
hunk ./src/allmydata/test/test_client.py 136
"enabled = true\n" + \
"reserved_space = bogus\n")
c = client.Client(basedir)
- self.failUnlessEqual(c.getServiceNamed("storage").reserved_space, 0)
+ self.failUnlessEqual(c.getServiceNamed("storage").backend._reserved_space, 0)

def _permute(self, sb, key):
return [ s.get_serverid() for s in sb.get_servers_for_psi(key) ]
hunk ./src/allmydata/test/test_crawler.py 7
from twisted.trial import unittest
from twisted.application import service
from twisted.internet import defer
+from twisted.python.filepath import FilePath
from foolscap.api import eventually, fireEventually

from allmydata.util import fileutil, hashutil, pollmixin
hunk ./src/allmydata/test/test_crawler.py 13
from allmydata.storage.server import StorageServer, si_b2a
from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded
+from allmydata.storage.backends.disk.disk_backend import DiskBackend

from allmydata.test.test_storage import FakeCanary
from allmydata.test.common_util import StallMixin
hunk ./src/allmydata/test/test_crawler.py 115

def test_immediate(self):
self.basedir = "crawler/Basic/immediate"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 116
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 122
- statefile = os.path.join(self.basedir, "statefile")
+ statefp = fp.child("statefile")

hunk ./src/allmydata/test/test_crawler.py 124
- c = BucketEnumeratingCrawler(ss, statefile, allowed_cpu_percentage=.1)
+ c = BucketEnumeratingCrawler(backend, statefp, allowed_cpu_percentage=.1)
c.load_state()

c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 137
self.failUnlessEqual(sorted(sis), sorted(c.all_buckets))

# check that a new crawler picks up on the state file properly
- c2 = BucketEnumeratingCrawler(ss, statefile)
+ c2 = BucketEnumeratingCrawler(backend, statefp)
c2.load_state()

c2.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 145

def test_service(self):
self.basedir = "crawler/Basic/service"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 146
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 153

- statefile = os.path.join(self.basedir, "statefile")
- c = BucketEnumeratingCrawler(ss, statefile)
+ statefp = fp.child("statefile")
+ c = BucketEnumeratingCrawler(backend, statefp)
c.setServiceParent(self.s)

# it should be legal to call get_state() and get_progress() right
hunk ./src/allmydata/test/test_crawler.py 174

def test_paced(self):
self.basedir = "crawler/Basic/paced"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 175
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

# put four buckets in each prefixdir
hunk ./src/allmydata/test/test_crawler.py 186
for tail in range(4):
sis.append(self.write(i, ss, serverid, tail))

- statefile = os.path.join(self.basedir, "statefile")
+ statefp = fp.child("statefile")

hunk ./src/allmydata/test/test_crawler.py 188
- c = PacedCrawler(ss, statefile)
+ c = PacedCrawler(backend, statefp)
c.load_state()
try:
c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 213
del c

# start a new crawler, it should start from the beginning
- c = PacedCrawler(ss, statefile)
+ c = PacedCrawler(backend, statefp)
c.load_state()
try:
c.start_current_prefix(time.time())
hunk ./src/allmydata/test/test_crawler.py 226
c.cpu_slice = PacedCrawler.cpu_slice

# a third crawler should pick up from where it left off
- c2 = PacedCrawler(ss, statefile)
+ c2 = PacedCrawler(backend, statefp)
c2.all_buckets = c.all_buckets[:]
c2.load_state()
c2.countdown = -1
hunk ./src/allmydata/test/test_crawler.py 237

# now stop it at the end of a bucket (countdown=4), to exercise a
# different place that checks the time
- c = PacedCrawler(ss, statefile)
+ c = PacedCrawler(backend, statefp)
c.load_state()
c.countdown = 4
try:
hunk ./src/allmydata/test/test_crawler.py 256

# stop it again at the end of the bucket, check that a new checker
# picks up correctly
- c = PacedCrawler(ss, statefile)
+ c = PacedCrawler(backend, statefp)
c.load_state()
c.countdown = 4
try:
hunk ./src/allmydata/test/test_crawler.py 266
# that should stop at the end of one of the buckets.
c.save_state()

- c2 = PacedCrawler(ss, statefile)
+ c2 = PacedCrawler(backend, statefp)
c2.all_buckets = c.all_buckets[:]
c2.load_state()
c2.countdown = -1
hunk ./src/allmydata/test/test_crawler.py 277

def test_paced_service(self):
self.basedir = "crawler/Basic/paced_service"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 278
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

sis = [self.write(i, ss, serverid) for i in range(10)]
hunk ./src/allmydata/test/test_crawler.py 285

- statefile = os.path.join(self.basedir, "statefile")
- c = PacedCrawler(ss, statefile)
+ statefp = fp.child("statefile")
+ c = PacedCrawler(backend, statefp)

did_check_progress = [False]
def check_progress():
hunk ./src/allmydata/test/test_crawler.py 345
# and read the stdout when it runs.

self.basedir = "crawler/Basic/cpu_usage"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 346
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

for i in range(10):
hunk ./src/allmydata/test/test_crawler.py 354
self.write(i, ss, serverid)

- statefile = os.path.join(self.basedir, "statefile")
- c = ConsumingCrawler(ss, statefile)
+ statefp = fp.child("statefile")
+ c = ConsumingCrawler(backend, statefp)
c.setServiceParent(self.s)

# this will run as fast as it can, consuming about 50ms per call to
hunk ./src/allmydata/test/test_crawler.py 391

def test_empty_subclass(self):
self.basedir = "crawler/Basic/empty_subclass"
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 392
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

for i in range(10):
hunk ./src/allmydata/test/test_crawler.py 400
self.write(i, ss, serverid)

- statefile = os.path.join(self.basedir, "statefile")
- c = ShareCrawler(ss, statefile)
+ statefp = fp.child("statefile")
+ c = ShareCrawler(backend, statefp)
c.slow_start = 0
c.setServiceParent(self.s)

hunk ./src/allmydata/test/test_crawler.py 417
d.addCallback(_done)
return d

-
def test_oneshot(self):
self.basedir = "crawler/Basic/oneshot"
hunk ./src/allmydata/test/test_crawler.py 419
- fileutil.make_dirs(self.basedir)
serverid = "\x00" * 20
hunk ./src/allmydata/test/test_crawler.py 420
- ss = StorageServer(self.basedir, serverid)
+ fp = FilePath(self.basedir)
+ backend = DiskBackend(fp)
+ ss = StorageServer(serverid, backend, fp)
ss.setServiceParent(self.s)

for i in range(30):
hunk ./src/allmydata/test/test_crawler.py 428
self.write(i, ss, serverid)

- statefile = os.path.join(self.basedir, "statefile")
- c = OneShotCrawler(ss, statefile)
+ statefp = fp.child("statefile")
+ c = OneShotCrawler(backend, statefp)
c.setServiceParent(self.s)

d = c.finished_d
hunk ./src/allmydata/test/test_crawler.py 447
self.failUnlessEqual(s["current-cycle"], None)
d.addCallback(_check)
return d
-
hunk ./src/allmydata/test/test_deepcheck.py 23
ShouldFailMixin
from allmydata.test.common_util import StallMixin
from allmydata.test.no_network import GridTestMixin
+from allmydata.scripts import debug
+

timeout = 2400 # One of these took 1046.091s on Zandr's ARM box.

hunk ./src/allmydata/test/test_deepcheck.py 905
d.addErrback(self.explain_error)
return d

-
-
def set_up_damaged_tree(self):
# 6.4s

hunk ./src/allmydata/test/test_deepcheck.py 989

return d

- def _run_cli(self, argv):
- stdout, stderr = StringIO(), StringIO()
- # this can only do synchronous operations
- assert argv[0] == "debug"
- runner.runner(argv, run_by_human=False, stdout=stdout, stderr=stderr)
- return stdout.getvalue()
-
def _delete_some_shares(self, node):
self.delete_shares_numbered(node.get_uri(), [0,1])

hunk ./src/allmydata/test/test_deepcheck.py 995
def _corrupt_some_shares(self, node):
for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
if shnum in (0,1):
- self._run_cli(["debug", "corrupt-share", sharefile])
+ debug.do_corrupt_share(StringIO(), sharefile)

def _delete_most_shares(self, node):
self.delete_shares_numbered(node.get_uri(), range(1,10))
hunk ./src/allmydata/test/test_deepcheck.py 1000

-
def check_is_healthy(self, cr, where):
try:
self.failUnless(ICheckResults.providedBy(cr), (cr, type(cr), where))
hunk ./src/allmydata/test/test_download.py 134
for shnum in shares_for_server:
share_dir = self.get_server(i).backend.get_shareset(si)._sharehomedir
fileutil.fp_make_dirs(share_dir)
- share_dir.child(str(shnum)).setContent(shares[shnum])
+ share_dir.child(str(shnum)).setContent(shares_for_server[shnum])

def load_shares(self, ignored=None):
# this uses the data generated by create_shares() to populate the
hunk ./src/allmydata/test/test_hung_server.py 32

def _break(self, servers):
for ss in servers:
- self.g.break_server(ss.get_serverid())
+ self.g.break_server(ss.original.get_serverid())

def _hang(self, servers, **kwargs):
for ss in servers:
hunk ./src/allmydata/test/test_hung_server.py 67
serverids = [ss.original.get_serverid() for ss in from_servers]
for (i_shnum, i_serverid, i_sharefp) in self.shares:
if i_serverid in serverids:
- self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server)
+ self.copy_share((i_shnum, i_serverid, i_sharefp), self.uri, to_server.original)

self.shares = self.find_uri_shares(self.uri)

hunk ./src/allmydata/test/test_mutable.py 3669
# Now execute each assignment by writing the storage.
for (share, servernum) in assignments:
sharedata = base64.b64decode(self.sdmf_old_shares[share])
- storage_dir = self.get_server(servernum).backend.get_shareset(si).sharehomedir
+ storage_dir = self.get_server(servernum).backend.get_shareset(si)._sharehomedir
fileutil.fp_make_dirs(storage_dir)
storage_dir.child("%d" % share).setContent(sharedata)
# ...and verify that the shares are there.
hunk ./src/allmydata/test/test_no_network.py 10
from allmydata.immutable.upload import Data
from allmydata.util.consumer import download_to_data

+
class Harness(unittest.TestCase):
def setUp(self):
self.s = service.MultiService()
hunk ./src/allmydata/test/test_storage.py 1
-import time, os.path, platform, stat, re, simplejson, struct, shutil
+import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools

import mock

hunk ./src/allmydata/test/test_storage.py 6
from twisted.trial import unittest
-
from twisted.internet import defer
from twisted.application import service
hunk ./src/allmydata/test/test_storage.py 8
+from twisted.python.filepath import FilePath
from foolscap.api import fireEventually
hunk ./src/allmydata/test/test_storage.py 10
-import itertools
+
from allmydata import interfaces
from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format
from allmydata.storage.server import StorageServer
hunk ./src/allmydata/test/test_storage.py 14
+from allmydata.storage.backends.disk.disk_backend import DiskBackend
from allmydata.storage.backends.disk.mutable import MutableDiskShare
from allmydata.storage.bucket import BucketWriter, BucketReader
from allmydata.storage.common import DataTooLargeError, \
hunk ./src/allmydata/test/test_storage.py 310
return self.sparent.stopService()

def workdir(self, name):
- basedir = os.path.join("storage", "Server", name)
- return basedir
+ return FilePath("storage").child("Server").child(name)

def create(self, name, reserved_space=0, klass=StorageServer):
workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 314
- ss = klass(workdir, "\x00" * 20, reserved_space=reserved_space,
+ backend = DiskBackend(workdir, readonly=False, reserved_space=reserved_space)
+ ss = klass("\x00" * 20, backend, workdir,
stats_provider=FakeStatsProvider())
ss.setServiceParent(self.sparent)
return ss
hunk ./src/allmydata/test/test_storage.py 1386

def tearDown(self):
self.sparent.stopService()
- shutil.rmtree(self.workdir("MDMFProxies storage test server"))
+ fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))


def write_enabler(self, we_tag):
hunk ./src/allmydata/test/test_storage.py 2781
return self.sparent.stopService()

def workdir(self, name):
- basedir = os.path.join("storage", "Server", name)
- return basedir
+ return FilePath("storage").child("Server").child(name)

def create(self, name):
workdir = self.workdir(name)
hunk ./src/allmydata/test/test_storage.py 2785
- ss = StorageServer(workdir, "\x00" * 20)
+ backend = DiskBackend(workdir)
+ ss = StorageServer("\x00" * 20, backend, workdir)
ss.setServiceParent(self.sparent)
return ss

hunk ./src/allmydata/test/test_storage.py 4061
}

basedir = "storage/WebStatus/status_right_disk_stats"
- fileutil.make_dirs(basedir)
- ss = StorageServer(basedir, "\x00" * 20, reserved_space=reserved_space)
- expecteddir = ss.sharedir
+ fp = FilePath(basedir)
+ backend = DiskBackend(fp, readonly=False, reserved_space=reserved_space)
+ ss = StorageServer("\x00" * 20, backend, fp)
+ expecteddir = backend._sharedir
ss.setServiceParent(self.s)
w = StorageStatus(ss)
html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4084

def test_readonly(self):
---|
8068 | basedir = "storage/WebStatus/readonly" |
---|
8069 | - fileutil.make_dirs(basedir) |
---|
8070 | - ss = StorageServer(basedir, "\x00" * 20, readonly_storage=True) |
---|
8071 | + fp = FilePath(basedir) |
---|
8072 | + backend = DiskBackend(fp, readonly=True) |
---|
8073 | + ss = StorageServer("\x00" * 20, backend, fp) |
---|
8074 | ss.setServiceParent(self.s) |
---|
8075 | w = StorageStatus(ss) |
---|
8076 | html = w.renderSynchronously() |
---|
8077 | hunk ./src/allmydata/test/test_storage.py 4096 |
---|
8078 | |
---|
8079 | def test_reserved(self): |
---|
8080 | basedir = "storage/WebStatus/reserved" |
---|
8081 | - fileutil.make_dirs(basedir) |
---|
8082 | - ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6) |
---|
8083 | - ss.setServiceParent(self.s) |
---|
8084 | - w = StorageStatus(ss) |
---|
8085 | - html = w.renderSynchronously() |
---|
8086 | - self.failUnlessIn("<h1>Storage Server Status</h1>", html) |
---|
8087 | - s = remove_tags(html) |
---|
8088 | - self.failUnlessIn("Reserved space: - 10.00 MB (10000000)", s) |
---|
8089 | - |
---|
8090 | - def test_huge_reserved(self): |
---|
8091 | - basedir = "storage/WebStatus/reserved" |
---|
8092 | - fileutil.make_dirs(basedir) |
---|
8093 | - ss = StorageServer(basedir, "\x00" * 20, reserved_space=10e6) |
---|
8094 | + fp = FilePath(basedir) |
---|
8095 | + backend = DiskBackend(fp, readonly=False, reserved_space=10e6) |
---|
8096 | + ss = StorageServer("\x00" * 20, backend, fp) |
---|
8097 | ss.setServiceParent(self.s) |
---|
8098 | w = StorageStatus(ss) |
---|
8099 | html = w.renderSynchronously() |
---|
8100 | hunk ./src/allmydata/test/test_upload.py 3 |
---|
8101 | # -*- coding: utf-8 -*- |
---|
8102 | |
---|
8103 | -import os, shutil |
---|
8104 | +import os |
---|
8105 | from cStringIO import StringIO |
---|
8106 | from twisted.trial import unittest |
---|
8107 | from twisted.python.failure import Failure |
---|
8108 | hunk ./src/allmydata/test/test_upload.py 14 |
---|
8109 | from allmydata import uri, monitor, client |
---|
8110 | from allmydata.immutable import upload, encode |
---|
8111 | from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError |
---|
8112 | -from allmydata.util import log |
---|
8113 | +from allmydata.util import log, fileutil |
---|
8114 | from allmydata.util.assertutil import precondition |
---|
8115 | from allmydata.util.deferredutil import DeferredListShouldSucceed |
---|
8116 | from allmydata.test.no_network import GridTestMixin |
---|
8117 | hunk ./src/allmydata/test/test_upload.py 972 |
---|
8118 | readonly=True)) |
---|
8119 | # Remove the first share from server 0. |
---|
8120 | def _remove_share_0_from_server_0(): |
---|
8121 | - share_location = self.shares[0][2] |
---|
8122 | - os.remove(share_location) |
---|
8123 | + self.shares[0][2].remove() |
---|
8124 | d.addCallback(lambda ign: |
---|
8125 | _remove_share_0_from_server_0()) |
---|
8126 | # Set happy = 4 in the client. |
---|
8127 | hunk ./src/allmydata/test/test_upload.py 1847 |
---|
8128 | self._copy_share_to_server(3, 1) |
---|
8129 | storedir = self.get_serverdir(0) |
---|
8130 | # remove the storedir, wiping out any existing shares |
---|
8131 | - shutil.rmtree(storedir) |
---|
8132 | + fileutil.fp_remove(storedir) |
---|
8133 | # create an empty storedir to replace the one we just removed |
---|
8134 | hunk ./src/allmydata/test/test_upload.py 1849 |
---|
8135 | - os.mkdir(storedir) |
---|
8136 | + storedir.mkdir() |
---|
8137 | client = self.g.clients[0] |
---|
8138 | client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4 |
---|
8139 | return client |
---|
8140 | hunk ./src/allmydata/test/test_upload.py 1888 |
---|
8141 | self._copy_share_to_server(3, 1) |
---|
8142 | storedir = self.get_serverdir(0) |
---|
8143 | # remove the storedir, wiping out any existing shares |
---|
8144 | - shutil.rmtree(storedir) |
---|
8145 | + fileutil.fp_remove(storedir) |
---|
8146 | # create an empty storedir to replace the one we just removed |
---|
8147 | hunk ./src/allmydata/test/test_upload.py 1890 |
---|
8148 | - os.mkdir(storedir) |
---|
8149 | + storedir.mkdir() |
---|
8150 | client = self.g.clients[0] |
---|
8151 | client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4 |
---|
8152 | return client |
---|
8153 | hunk ./src/allmydata/test/test_web.py 4870 |
---|
8154 | d.addErrback(self.explain_web_error) |
---|
8155 | return d |
---|
8156 | |
---|
8157 | - def _assert_leasecount(self, ignored, which, expected): |
---|
8158 | + def _assert_leasecount(self, which, expected): |
---|
8159 | lease_counts = self.count_leases(self.uris[which]) |
---|
8160 | for (fn, num_leases) in lease_counts: |
---|
8161 | if num_leases != expected: |
---|
8162 | hunk ./src/allmydata/test/test_web.py 4903 |
---|
8163 | self.fileurls[which] = "uri/" + urllib.quote(self.uris[which]) |
---|
8164 | d.addCallback(_compute_fileurls) |
---|
8165 | |
---|
8166 | - d.addCallback(self._assert_leasecount, "one", 1) |
---|
8167 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8168 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8169 | + d.addCallback(lambda ign: self._assert_leasecount("one", 1)) |
---|
8170 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8171 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8172 | |
---|
8173 | d.addCallback(self.CHECK, "one", "t=check") # no add-lease |
---|
8174 | def _got_html_good(res): |
---|
8175 | hunk ./src/allmydata/test/test_web.py 4913 |
---|
8176 | self.failIf("Not Healthy" in res, res) |
---|
8177 | d.addCallback(_got_html_good) |
---|
8178 | |
---|
8179 | - d.addCallback(self._assert_leasecount, "one", 1) |
---|
8180 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8181 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8182 | + d.addCallback(lambda ign: self._assert_leasecount("one", 1)) |
---|
8183 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8184 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8185 | |
---|
8186 | # this CHECK uses the original client, which uses the same |
---|
8187 | # lease-secrets, so it will just renew the original lease |
---|
8188 | hunk ./src/allmydata/test/test_web.py 4922 |
---|
8189 | d.addCallback(self.CHECK, "one", "t=check&add-lease=true") |
---|
8190 | d.addCallback(_got_html_good) |
---|
8191 | |
---|
8192 | - d.addCallback(self._assert_leasecount, "one", 1) |
---|
8193 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8194 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8195 | + d.addCallback(lambda ign: self._assert_leasecount("one", 1)) |
---|
8196 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8197 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8198 | |
---|
8199 | # this CHECK uses an alternate client, which adds a second lease |
---|
8200 | d.addCallback(self.CHECK, "one", "t=check&add-lease=true", clientnum=1) |
---|
8201 | hunk ./src/allmydata/test/test_web.py 4930 |
---|
8202 | d.addCallback(_got_html_good) |
---|
8203 | |
---|
8204 | - d.addCallback(self._assert_leasecount, "one", 2) |
---|
8205 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8206 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8207 | + d.addCallback(lambda ign: self._assert_leasecount("one", 2)) |
---|
8208 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8209 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8210 | |
---|
8211 | d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true") |
---|
8212 | d.addCallback(_got_html_good) |
---|
8213 | hunk ./src/allmydata/test/test_web.py 4937 |
---|
8214 | |
---|
8215 | - d.addCallback(self._assert_leasecount, "one", 2) |
---|
8216 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8217 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8218 | + d.addCallback(lambda ign: self._assert_leasecount("one", 2)) |
---|
8219 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8220 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8221 | |
---|
8222 | d.addCallback(self.CHECK, "mutable", "t=check&add-lease=true", |
---|
8223 | clientnum=1) |
---|
8224 | hunk ./src/allmydata/test/test_web.py 4945 |
---|
8225 | d.addCallback(_got_html_good) |
---|
8226 | |
---|
8227 | - d.addCallback(self._assert_leasecount, "one", 2) |
---|
8228 | - d.addCallback(self._assert_leasecount, "two", 1) |
---|
8229 | - d.addCallback(self._assert_leasecount, "mutable", 2) |
---|
8230 | + d.addCallback(lambda ign: self._assert_leasecount("one", 2)) |
---|
8231 | + d.addCallback(lambda ign: self._assert_leasecount("two", 1)) |
---|
8232 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 2)) |
---|
8233 | |
---|
8234 | d.addErrback(self.explain_web_error) |
---|
8235 | return d |
---|
8236 | hunk ./src/allmydata/test/test_web.py 4989 |
---|
8237 | self.failUnlessReallyEqual(len(units), 4+1) |
---|
8238 | d.addCallback(_done) |
---|
8239 | |
---|
8240 | - d.addCallback(self._assert_leasecount, "root", 1) |
---|
8241 | - d.addCallback(self._assert_leasecount, "one", 1) |
---|
8242 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8243 | + d.addCallback(lambda ign: self._assert_leasecount("root", 1)) |
---|
8244 | + d.addCallback(lambda ign: self._assert_leasecount("one", 1)) |
---|
8245 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8246 | |
---|
8247 | d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true") |
---|
8248 | d.addCallback(_done) |
---|
8249 | hunk ./src/allmydata/test/test_web.py 4996 |
---|
8250 | |
---|
8251 | - d.addCallback(self._assert_leasecount, "root", 1) |
---|
8252 | - d.addCallback(self._assert_leasecount, "one", 1) |
---|
8253 | - d.addCallback(self._assert_leasecount, "mutable", 1) |
---|
8254 | + d.addCallback(lambda ign: self._assert_leasecount("root", 1)) |
---|
8255 | + d.addCallback(lambda ign: self._assert_leasecount("one", 1)) |
---|
8256 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 1)) |
---|
8257 | |
---|
8258 | d.addCallback(self.CHECK, "root", "t=stream-deep-check&add-lease=true", |
---|
8259 | clientnum=1) |
---|
8260 | hunk ./src/allmydata/test/test_web.py 5004 |
---|
8261 | d.addCallback(_done) |
---|
8262 | |
---|
8263 | - d.addCallback(self._assert_leasecount, "root", 2) |
---|
8264 | - d.addCallback(self._assert_leasecount, "one", 2) |
---|
8265 | - d.addCallback(self._assert_leasecount, "mutable", 2) |
---|
8266 | + d.addCallback(lambda ign: self._assert_leasecount("root", 2)) |
---|
8267 | + d.addCallback(lambda ign: self._assert_leasecount("one", 2)) |
---|
8268 | + d.addCallback(lambda ign: self._assert_leasecount("mutable", 2)) |
---|
8269 | |
---|
8270 | d.addErrback(self.explain_web_error) |
---|
8271 | return d |
---|
8272 | } |
---|
[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
david-sarah@jacaranda.org**20110921221421
Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
] {
hunk ./src/allmydata/scripts/debug.py 642
/home/warner/testnet/node-2/storage/shares/44k/44kai1tui348689nrw8fjegc8c/2
"""
from allmydata.storage.server import si_a2b
- from allmydata.storage.backends.disk_backend import si_si2dir
+ from allmydata.storage.backends.disk.disk_backend import si_si2dir
from allmydata.util.encodingutil import quote_filepath

out = options.stdout
hunk ./src/allmydata/scripts/debug.py 648
si = si_a2b(options.si_s)
for nodedir in options.nodedirs:
- sharedir = si_si2dir(nodedir.child("storage").child("shares"), si)
+ sharedir = si_si2dir(FilePath(nodedir).child("storage").child("shares"), si)
if sharedir.exists():
for sharefp in sharedir.children():
print >>out, quote_filepath(sharefp, quotemarks=False)
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 189
incominghome = self._incominghomedir.child(str(shnum))
immsh = ImmutableDiskShare(self.get_storage_index(), shnum, incominghome, finalhome,
max_size=max_space_per_bucket)
- bw = BucketWriter(storageserver, immsh, max_space_per_bucket, lease_info, canary)
+ bw = BucketWriter(storageserver, immsh, lease_info, canary)
if self._discard_storage:
bw.throw_out_all_data = True
return bw
hunk ./src/allmydata/storage/backends/disk/immutable.py 147
def unlink(self):
self._home.remove()

+ def get_allocated_size(self):
+ return self._max_size
+
def get_size(self):
return self._home.getsize()

hunk ./src/allmydata/storage/bucket.py 15
class BucketWriter(Referenceable):
implements(RIBucketWriter)

- def __init__(self, ss, immutableshare, max_size, lease_info, canary):
+ def __init__(self, ss, immutableshare, lease_info, canary):
self.ss = ss
hunk ./src/allmydata/storage/bucket.py 17
- self._max_size = max_size # don't allow the client to write more than this
self._canary = canary
self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
self.closed = False
hunk ./src/allmydata/storage/bucket.py 27
self._share.add_lease(lease_info)

def allocated_size(self):
- return self._max_size
+ return self._share.get_allocated_size()

def remote_write(self, offset, data):
start = time.time()
hunk ./src/allmydata/storage/crawler.py 480
self.state["bucket-counts"][cycle] = {}
self.state["bucket-counts"][cycle][prefix] = len(sharesets)
if prefix in self.prefixes[:self.num_sample_prefixes]:
- self.state["storage-index-samples"][prefix] = (cycle, sharesets)
+ si_strings = [shareset.get_storage_index_string() for shareset in sharesets]
+ self.state["storage-index-samples"][prefix] = (cycle, si_strings)

def finished_cycle(self, cycle):
last_counts = self.state["bucket-counts"].get(cycle, [])
hunk ./src/allmydata/storage/expirer.py 281
# copy() needs to become a deepcopy
h["space-recovered"] = s["space-recovered"].copy()

- history = pickle.load(self.historyfp.getContent())
+ history = pickle.loads(self.historyfp.getContent())
history[cycle] = h
while len(history) > 10:
oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 355
progress = self.get_progress()

state = ShareCrawler.get_state(self) # does a shallow copy
- history = pickle.load(self.historyfp.getContent())
+ history = pickle.loads(self.historyfp.getContent())
state["history"] = history

if not progress["cycle-in-progress"]:
hunk ./src/allmydata/test/test_download.py 199
for shnum in immutable_shares[clientnum]:
if s._shnum == shnum:
share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
- share_dir.child(str(shnum)).remove()
+ fileutil.fp_remove(share_dir.child(str(shnum)))
d.addCallback(_clobber_some_shares)
d.addCallback(lambda ign: download_to_data(n))
d.addCallback(_got_data)
hunk ./src/allmydata/test/test_download.py 224
for clientnum in immutable_shares:
for shnum in immutable_shares[clientnum]:
share_dir = self.get_server(clientnum).backend.get_shareset(si)._sharehomedir
- share_dir.child(str(shnum)).remove()
+ fileutil.fp_remove(share_dir.child(str(shnum)))
# now a new download should fail with NoSharesError. We want a
# new ImmutableFileNode so it will forget about the old shares.
# If we merely called create_node_from_uri() without first
hunk ./src/allmydata/test/test_repairer.py 415
def _test_corrupt(ignored):
olddata = {}
shares = self.find_uri_shares(self.uri)
- for (shnum, serverid, sharefile) in shares:
- olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
+ for (shnum, serverid, sharefp) in shares:
+ olddata[ (shnum, serverid) ] = sharefp.getContent()
for sh in shares:
self.corrupt_share(sh, common._corrupt_uri_extension)
hunk ./src/allmydata/test/test_repairer.py 419
- for (shnum, serverid, sharefile) in shares:
- newdata = open(sharefile, "rb").read()
+ for (shnum, serverid, sharefp) in shares:
+ newdata = sharefp.getContent()
self.failIfEqual(olddata[ (shnum, serverid) ], newdata)
d.addCallback(_test_corrupt)

hunk ./src/allmydata/test/test_storage.py 63

class Bucket(unittest.TestCase):
def make_workdir(self, name):
- basedir = os.path.join("storage", "Bucket", name)
- incoming = os.path.join(basedir, "tmp", "bucket")
- final = os.path.join(basedir, "bucket")
- fileutil.make_dirs(basedir)
- fileutil.make_dirs(os.path.join(basedir, "tmp"))
+ basedir = FilePath("storage").child("Bucket").child(name)
+ tmpdir = basedir.child("tmp")
+ tmpdir.makedirs()
+ incoming = tmpdir.child("bucket")
+ final = basedir.child("bucket")
return incoming, final

def bucket_writer_closed(self, bw, consumed):
hunk ./src/allmydata/test/test_storage.py 87

def test_create(self):
incoming, final = self.make_workdir("test_create")
- bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
- FakeCanary())
+ share = ImmutableDiskShare("", 0, incoming, final, 200)
+ bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
bw.remote_write(0, "a"*25)
bw.remote_write(25, "b"*25)
bw.remote_write(50, "c"*25)
hunk ./src/allmydata/test/test_storage.py 97

def test_readwrite(self):
incoming, final = self.make_workdir("test_readwrite")
- bw = BucketWriter(self, incoming, final, 200, self.make_lease(),
- FakeCanary())
+ share = ImmutableDiskShare("", 0, incoming, 200)
+ bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
bw.remote_write(0, "a"*25)
bw.remote_write(25, "b"*25)
bw.remote_write(50, "c"*7) # last block may be short
hunk ./src/allmydata/test/test_storage.py 140

incoming, final = self.make_workdir("test_read_past_end_of_share_data")

- fileutil.write(final, share_file_data)
+ final.setContent(share_file_data)

mockstorageserver = mock.Mock()

hunk ./src/allmydata/test/test_storage.py 179

class BucketProxy(unittest.TestCase):
def make_bucket(self, name, size):
- basedir = os.path.join("storage", "BucketProxy", name)
- incoming = os.path.join(basedir, "tmp", "bucket")
- final = os.path.join(basedir, "bucket")
- fileutil.make_dirs(basedir)
- fileutil.make_dirs(os.path.join(basedir, "tmp"))
- bw = BucketWriter(self, incoming, final, size, self.make_lease(),
- FakeCanary())
+ basedir = FilePath("storage").child("BucketProxy").child(name)
+ tmpdir = basedir.child("tmp")
+ tmpdir.makedirs()
+ incoming = tmpdir.child("bucket")
+ final = basedir.child("bucket")
+ share = ImmutableDiskShare("", 0, incoming, final, size)
+ bw = BucketWriter(self, share, self.make_lease(), FakeCanary())
rb = RemoteBucket()
rb.target = bw
return bw, rb, final
hunk ./src/allmydata/test/test_storage.py 206
pass

def test_create(self):
- bw, rb, sharefname = self.make_bucket("test_create", 500)
+ bw, rb, sharefp = self.make_bucket("test_create", 500)
bp = WriteBucketProxy(rb, None,
data_size=300,
block_size=10,
hunk ./src/allmydata/test/test_storage.py 237
for i in (1,9,13)]
uri_extension = "s" + "E"*498 + "e"

- bw, rb, sharefname = self.make_bucket(name, sharesize)
+ bw, rb, sharefp = self.make_bucket(name, sharesize)
bp = wbp_class(rb, None,
data_size=95,
block_size=25,
hunk ./src/allmydata/test/test_storage.py 258

# now read everything back
def _start_reading(res):
- br = BucketReader(self, sharefname)
+ br = BucketReader(self, sharefp)
rb = RemoteBucket()
rb.target = br
server = NoNetworkServer("abc", None)
hunk ./src/allmydata/test/test_storage.py 373
for i, wb in writers.items():
wb.remote_write(0, "%10d" % i)
wb.remote_close()
- storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
- "shares")
- children_of_storedir = set(os.listdir(storedir))
+ storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+ children_of_storedir = sorted([child.basename() for child in storedir.children()])

# Now store another one under another storageindex that has leading
# chars the same as the first storageindex.
hunk ./src/allmydata/test/test_storage.py 382
for i, wb in writers.items():
wb.remote_write(0, "%10d" % i)
wb.remote_close()
- storedir = os.path.join(self.workdir("test_dont_overfill_dirs"),
- "shares")
- new_children_of_storedir = set(os.listdir(storedir))
+ storedir = self.workdir("test_dont_overfill_dirs").child("shares")
+ new_children_of_storedir = sorted([child.basename() for child in storedir.children()])
self.failUnlessEqual(children_of_storedir, new_children_of_storedir)

def test_remove_incoming(self):
hunk ./src/allmydata/test/test_storage.py 390
ss = self.create("test_remove_incoming")
already, writers = self.allocate(ss, "vid", range(3), 10)
for i,wb in writers.items():
+ incoming_share_home = wb._share._home
wb.remote_write(0, "%10d" % i)
wb.remote_close()
hunk ./src/allmydata/test/test_storage.py 393
- incoming_share_dir = wb.incominghome
- incoming_bucket_dir = os.path.dirname(incoming_share_dir)
- incoming_prefix_dir = os.path.dirname(incoming_bucket_dir)
- incoming_dir = os.path.dirname(incoming_prefix_dir)
- self.failIf(os.path.exists(incoming_bucket_dir), incoming_bucket_dir)
- self.failIf(os.path.exists(incoming_prefix_dir), incoming_prefix_dir)
- self.failUnless(os.path.exists(incoming_dir), incoming_dir)
+ incoming_bucket_dir = incoming_share_home.parent()
+ incoming_prefix_dir = incoming_bucket_dir.parent()
+ incoming_dir = incoming_prefix_dir.parent()
+ self.failIf(incoming_bucket_dir.exists(), incoming_bucket_dir)
+ self.failIf(incoming_prefix_dir.exists(), incoming_prefix_dir)
+ self.failUnless(incoming_dir.exists(), incoming_dir)

def test_abort(self):
# remote_abort, when called on a writer, should make sure that
hunk ./src/allmydata/test/test_upload.py 1849
# remove the storedir, wiping out any existing shares
fileutil.fp_remove(storedir)
# create an empty storedir to replace the one we just removed
- storedir.mkdir()
+ storedir.makedirs()
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
return client
hunk ./src/allmydata/test/test_upload.py 1890
# remove the storedir, wiping out any existing shares
fileutil.fp_remove(storedir)
# create an empty storedir to replace the one we just removed
- storedir.mkdir()
+ storedir.makedirs()
client = self.g.clients[0]
client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
return client
}
[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
david-sarah@jacaranda.org**20110921222038
Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
] {
hunk ./src/allmydata/uri.py 829
def is_mutable(self):
return False

+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+
+
class DirectoryURIVerifier(_DirectoryBaseURI):
implements(IVerifierURI)

hunk ./src/allmydata/uri.py 855
def is_mutable(self):
return False

+ def is_readonly(self):
+ return True
+
+ def get_readonly(self):
+ return self
+

class ImmutableDirectoryURIVerifier(DirectoryURIVerifier):
implements(IVerifierURI)
}
8593 | [Fix some more test failures. refs #999 |
---|
8594 | david-sarah@jacaranda.org**20110922045451 |
---|
8595 | Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7 |
---|
8596 | ] { |
---|
8597 | hunk ./src/allmydata/scripts/debug.py 42 |
---|
8598 | from allmydata.util.encodingutil import quote_output |
---|
8599 | |
---|
8600 | out = options.stdout |
---|
8601 | + filename = options['filename'] |
---|
8602 | |
---|
8603 | # check the version, to see if we have a mutable or immutable share |
---|
8604 | hunk ./src/allmydata/scripts/debug.py 45 |
---|
8605 | - print >>out, "share filename: %s" % quote_output(options['filename']) |
---|
8606 | + print >>out, "share filename: %s" % quote_output(filename) |
---|
8607 | |
---|
8608 | hunk ./src/allmydata/scripts/debug.py 47 |
---|
8609 | - share = get_share("", 0, fp) |
---|
8610 | + share = get_share("", 0, FilePath(filename)) |
---|
8611 | if share.sharetype == "mutable": |
---|
8612 | return dump_mutable_share(options, share) |
---|
8613 | else: |
---|
8614 | hunk ./src/allmydata/storage/backends/disk/mutable.py 85 |
---|
8615 | self.parent = parent # for logging |
---|
8616 | |
---|
8617 | def log(self, *args, **kwargs): |
---|
8618 | - return self.parent.log(*args, **kwargs) |
---|
8619 | + if self.parent: |
---|
8620 | + return self.parent.log(*args, **kwargs) |
---|
8621 | |
---|
8622 | def create(self, serverid, write_enabler): |
---|
8623 | assert not self._home.exists() |
---|
8624 | hunk ./src/allmydata/storage/common.py 6 |
---|
8625 | class DataTooLargeError(Exception): |
---|
8626 | pass |
---|
8627 | |
---|
8628 | -class UnknownMutableContainerVersionError(Exception): |
---|
8629 | +class UnknownContainerVersionError(Exception): |
---|
8630 | pass |
---|
8631 | |
---|
8632 | hunk ./src/allmydata/storage/common.py 9 |
---|
8633 | -class UnknownImmutableContainerVersionError(Exception): |
---|
8634 | +class UnknownMutableContainerVersionError(UnknownContainerVersionError): |
---|
8635 | + pass |
---|
8636 | + |
---|
8637 | +class UnknownImmutableContainerVersionError(UnknownContainerVersionError): |
---|
8638 | pass |
---|
8639 | |
---|
8640 | |
---|
8641 | hunk ./src/allmydata/storage/crawler.py 208 |
---|
8642 | try: |
---|
8643 | state = pickle.loads(self.statefp.getContent()) |
---|
8644 | except EnvironmentError: |
---|
8645 | + if self.statefp.exists(): |
---|
8646 | + raise |
---|
8647 | state = {"version": 1, |
---|
8648 | "last-cycle-finished": None, |
---|
8649 | "current-cycle": None, |
---|
8650 | hunk ./src/allmydata/storage/server.py 24 |
---|
8651 | |
---|
8652 | name = 'storage' |
---|
8653 | LeaseCheckerClass = LeaseCheckingCrawler |
---|
8654 | + BucketCounterClass = BucketCountingCrawler |
---|
8655 | DEFAULT_EXPIRATION_POLICY = { |
---|
8656 | 'enabled': False, |
---|
8657 | 'mode': 'age', |
---|
8658 | hunk ./src/allmydata/storage/server.py 70 |
---|
8659 | |
---|
8660 | def _setup_bucket_counter(self): |
---|
8661 | statefp = self._statedir.child("bucket_counter.state") |
---|
8662 | - self.bucket_counter = BucketCountingCrawler(self.backend, statefp) |
---|
8663 | + self.bucket_counter = self.BucketCounterClass(self.backend, statefp) |
---|
8664 | self.bucket_counter.setServiceParent(self) |
---|
8665 | |
---|
8666 | def _setup_lease_checker(self, expiration_policy): |
---|
8667 | hunk ./src/allmydata/storage/server.py 224 |
---|
8668 | share.add_or_renew_lease(lease_info) |
---|
8669 | alreadygot.add(share.get_shnum()) |
---|
8670 | |
---|
8671 | - for shnum in sharenums - alreadygot: |
---|
8672 | + for shnum in set(sharenums) - alreadygot: |
---|
8673 | if shareset.has_incoming(shnum): |
---|
8674 | # Note that we don't create BucketWriters for shnums that |
---|
8675 | # have a partial share (in incoming/), so if a second upload |
---|
8676 | hunk ./src/allmydata/storage/server.py 247 |
---|
8677 | |
---|
8678 | def remote_add_lease(self, storageindex, renew_secret, cancel_secret, |
---|
8679 | owner_num=1): |
---|
8680 | - # cancel_secret is no longer used. |
---|
8681 | start = time.time() |
---|
8682 | self.count("add-lease") |
---|
8683 | new_expire_time = time.time() + 31*24*60*60 |
---|
8684 | hunk ./src/allmydata/storage/server.py 250 |
---|
8685 | - lease_info = LeaseInfo(owner_num, renew_secret, |
---|
8686 | + lease_info = LeaseInfo(owner_num, renew_secret, cancel_secret, |
---|
8687 | new_expire_time, self._serverid) |
---|
8688 | |
---|
8689 | try: |
---|
8690 | hunk ./src/allmydata/storage/server.py 254 |
---|
8691 | - self.backend.add_or_renew_lease(lease_info) |
---|
8692 | + shareset = self.backend.get_shareset(storageindex) |
---|
8693 | + shareset.add_or_renew_lease(lease_info) |
---|
8694 | finally: |
---|
8695 | self.add_latency("add-lease", time.time() - start) |
---|
8696 | |
---|
8697 | hunk ./src/allmydata/test/test_crawler.py 3 |
---|
8698 | |
---|
8699 | import time |
---|
8700 | -import os.path |
---|
8701 | + |
---|
8702 | from twisted.trial import unittest |
---|
8703 | from twisted.application import service |
---|
8704 | from twisted.internet import defer |
---|
8705 | hunk ./src/allmydata/test/test_crawler.py 10 |
---|
8706 | from twisted.python.filepath import FilePath |
---|
8707 | from foolscap.api import eventually, fireEventually |
---|
8708 | |
---|
8709 | -from allmydata.util import fileutil, hashutil, pollmixin |
---|
8710 | +from allmydata.util import hashutil, pollmixin |
---|
8711 | from allmydata.storage.server import StorageServer, si_b2a |
---|
8712 | from allmydata.storage.crawler import ShareCrawler, TimeSliceExceeded |
---|
8713 | from allmydata.storage.backends.disk.disk_backend import DiskBackend |
---|
8714 | hunk ./src/allmydata/test/test_mutable.py 3024 |
---|
8715 | cso.stderr = StringIO() |
---|
8716 | debug.catalog_shares(cso) |
---|
8717 | shares = cso.stdout.getvalue().splitlines() |
---|
8718 | + self.failIf(len(shares) < 1, shares) |
---|
8719 | oneshare = shares[0] # all shares should be MDMF |
---|
8720 | self.failIf(oneshare.startswith("UNKNOWN"), oneshare) |
---|
8721 | self.failUnless(oneshare.startswith("MDMF"), oneshare) |
---|
8722 | hunk ./src/allmydata/test/test_storage.py 1 |
---|
8723 | -import time, os.path, platform, stat, re, simplejson, struct, shutil, itertools |
---|
8724 | +import time, os.path, platform, re, simplejson, struct, itertools |
---|
8725 | |
---|
8726 | import mock |
---|
8727 | |
---|
8728 | hunk ./src/allmydata/test/test_storage.py 15 |
---|
8729 | from allmydata.util import fileutil, hashutil, base32, pollmixin, time_format |
---|
8730 | from allmydata.storage.server import StorageServer |
---|
8731 | from allmydata.storage.backends.disk.disk_backend import DiskBackend |
---|
8732 | +from allmydata.storage.backends.disk.immutable import ImmutableDiskShare |
---|
8733 | from allmydata.storage.backends.disk.mutable import MutableDiskShare |
---|
8734 | from allmydata.storage.bucket import BucketWriter, BucketReader |
---|
8735 | hunk ./src/allmydata/test/test_storage.py 18 |
---|
8736 | -from allmydata.storage.common import DataTooLargeError, \ |
---|
8737 | +from allmydata.storage.common import DataTooLargeError, UnknownContainerVersionError, \ |
---|
8738 | UnknownMutableContainerVersionError, UnknownImmutableContainerVersionError |
---|
8739 | from allmydata.storage.lease import LeaseInfo |
---|
8740 | from allmydata.storage.crawler import BucketCountingCrawler |
---|
8741 | hunk ./src/allmydata/test/test_storage.py 88 |
---|
8742 | |
---|
8743 | def test_create(self): |
---|
8744 | incoming, final = self.make_workdir("test_create") |
---|
8745 | - share = ImmutableDiskShare("", 0, incoming, final, 200) |
---|
8746 | + share = ImmutableDiskShare("", 0, incoming, final, max_size=200) |
---|
8747 | bw = BucketWriter(self, share, self.make_lease(), FakeCanary()) |
---|
8748 | bw.remote_write(0, "a"*25) |
---|
8749 | bw.remote_write(25, "b"*25) |
---|
8750 | hunk ./src/allmydata/test/test_storage.py 98 |
---|
8751 | |
---|
8752 | def test_readwrite(self): |
---|
8753 | incoming, final = self.make_workdir("test_readwrite") |
---|
8754 | - share = ImmutableDiskShare("", 0, incoming, 200) |
---|
8755 | + share = ImmutableDiskShare("", 0, incoming, final, max_size=200) |
---|
8756 | bw = BucketWriter(self, share, self.make_lease(), FakeCanary()) |
---|
8757 | bw.remote_write(0, "a"*25) |
---|
8758 | bw.remote_write(25, "b"*25) |
---|
8759 | hunk ./src/allmydata/test/test_storage.py 106 |
---|
8760 | bw.remote_close() |
---|
8761 | |
---|
8762 | # now read from it |
---|
8763 | - br = BucketReader(self, bw.finalhome) |
---|
8764 | + br = BucketReader(self, share) |
---|
8765 | self.failUnlessEqual(br.remote_read(0, 25), "a"*25) |
---|
8766 | self.failUnlessEqual(br.remote_read(25, 25), "b"*25) |
---|
8767 | self.failUnlessEqual(br.remote_read(50, 7), "c"*7) |
---|
8768 | hunk ./src/allmydata/test/test_storage.py 131 |
---|
8769 | ownernumber = struct.pack('>L', 0) |
---|
8770 | renewsecret = 'THIS LETS ME RENEW YOUR FILE....' |
---|
8771 | assert len(renewsecret) == 32 |
---|
8772 | - cancelsecret = 'THIS LETS ME KILL YOUR FILE HAHA' |
---|
8773 | + cancelsecret = 'THIS USED TO LET ME KILL YR FILE' |
---|
8774 | assert len(cancelsecret) == 32 |
---|
8775 | expirationtime = struct.pack('>L', 60*60*24*31) # 31 days in seconds |
---|
8776 | |
---|
8777 | hunk ./src/allmydata/test/test_storage.py 142 |
---|
8778 | incoming, final = self.make_workdir("test_read_past_end_of_share_data") |
---|
8779 | |
---|
8780 | final.setContent(share_file_data) |
---|
8781 | + share = ImmutableDiskShare("", 0, final) |
---|
8782 | |
---|
8783 | mockstorageserver = mock.Mock() |
---|
8784 | |
---|
8785 | hunk ./src/allmydata/test/test_storage.py 147 |
---|
8786 | # Now read from it. |
---|
8787 | - br = BucketReader(mockstorageserver, final) |
---|
8788 | + br = BucketReader(mockstorageserver, share) |
---|
8789 | |
---|
8790 | self.failUnlessEqual(br.remote_read(0, len(share_data)), share_data) |
---|
8791 | |
---|
8792 | hunk ./src/allmydata/test/test_storage.py 260 |
---|
8793 | |
---|
8794 | # now read everything back |
---|
8795 | def _start_reading(res): |
---|
8796 | - br = BucketReader(self, sharefp) |
---|
8797 | + share = ImmutableDiskShare("", 0, sharefp) |
---|
8798 | + br = BucketReader(self, share) |
---|
8799 | rb = RemoteBucket() |
---|
8800 | rb.target = br |
---|
8801 | server = NoNetworkServer("abc", None) |
---|
8802 | hunk ./src/allmydata/test/test_storage.py 346 |
---|
8803 | if 'cygwin' in syslow or 'windows' in syslow or 'darwin' in syslow: |
---|
8804 | raise unittest.SkipTest("If your filesystem doesn't support efficient sparse files then it is very expensive (Mac OS X and Windows don't support efficient sparse files).") |
---|
8805 | |
---|
8806 | - avail = fileutil.get_available_space('.', 512*2**20) |
---|
8807 | + avail = fileutil.get_available_space(FilePath('.'), 512*2**20) |
---|
8808 | if avail <= 4*2**30: |
---|
8809 | raise unittest.SkipTest("This test will spuriously fail if you have less than 4 GiB free on your filesystem.") |
---|
8810 | |
---|
8811 | hunk ./src/allmydata/test/test_storage.py 476 |
---|
8812 | w[0].remote_write(0, "\xff"*10) |
---|
8813 | w[0].remote_close() |
---|
8814 | |
---|
8815 | - fp = ss.backend.get_shareset("si1").sharehomedir.child("0") |
---|
8816 | + fp = ss.backend.get_shareset("si1")._sharehomedir.child("0") |
---|
8817 | f = fp.open("rb+") |
---|
8818 | hunk ./src/allmydata/test/test_storage.py 478 |
---|
8819 | - f.seek(0) |
---|
8820 | - f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1 |
---|
8821 | - f.close() |
---|
8822 | + try: |
---|
8823 | + f.seek(0) |
---|
8824 | + f.write(struct.pack(">L", 0)) # this is invalid: minimum used is v1 |
---|
8825 | + finally: |
---|
8826 | + f.close() |
---|
8827 | |
---|
8828 | ss.remote_get_buckets("allocate") |
---|
8829 | |
---|
8830 | hunk ./src/allmydata/test/test_storage.py 575 |
---|
8831 | |
---|
8832 | def test_seek(self): |
---|
8833 | basedir = self.workdir("test_seek_behavior") |
---|
8834 | - fileutil.make_dirs(basedir) |
---|
8835 | - filename = os.path.join(basedir, "testfile") |
---|
8836 | - f = open(filename, "wb") |
---|
8837 | - f.write("start") |
---|
8838 | - f.close() |
---|
8839 | + basedir.makedirs() |
---|
8840 | + fp = basedir.child("testfile") |
---|
8841 | + fp.setContent("start") |
---|
8842 | + |
---|
8843 | # mode="w" allows seeking-to-create-holes, but truncates pre-existing |
---|
8844 | # files. mode="a" preserves previous contents but does not allow |
---|
8845 | # seeking-to-create-holes. mode="r+" allows both. |
---|
8846 | hunk ./src/allmydata/test/test_storage.py 582 |
---|
8847 | - f = open(filename, "rb+") |
---|
8848 | - f.seek(100) |
---|
8849 | - f.write("100") |
---|
8850 | - f.close() |
---|
8851 | - filelen = os.stat(filename)[stat.ST_SIZE] |
---|
8852 | + f = fp.open("rb+") |
---|
8853 | + try: |
---|
8854 | + f.seek(100) |
---|
8855 | + f.write("100") |
---|
8856 | + finally: |
---|
8857 | + f.close() |
---|
8858 | + fp.restat() |
---|
8859 | + filelen = fp.getsize() |
---|
8860 | self.failUnlessEqual(filelen, 100+3) |
---|
8861 | hunk ./src/allmydata/test/test_storage.py 591 |
---|
8862 | - f2 = open(filename, "rb") |
---|
8863 | - self.failUnlessEqual(f2.read(5), "start") |
---|
8864 | - |
---|
8865 | + f2 = fp.open("rb") |
---|
8866 | + try: |
---|
8867 | + self.failUnlessEqual(f2.read(5), "start") |
---|
8868 | + finally: |
---|
8869 | + f2.close() |
---|
8870 | |
---|
8871 | def test_leases(self): |
---|
8872 | ss = self.create("test_leases") |
---|
8873 | hunk ./src/allmydata/test/test_storage.py 693 |
---|
8874 | |
---|
8875 | def test_readonly(self): |
---|
8876 | workdir = self.workdir("test_readonly") |
---|
8877 | - ss = StorageServer(workdir, "\x00" * 20, readonly_storage=True) |
---|
8878 | + backend = DiskBackend(workdir, readonly=True) |
---|
8879 | + ss = StorageServer("\x00" * 20, backend, workdir) |
---|
8880 | ss.setServiceParent(self.sparent) |
---|
8881 | |
---|
8882 | already,writers = self.allocate(ss, "vid", [0,1,2], 75) |
---|
8883 | hunk ./src/allmydata/test/test_storage.py 710 |
---|
8884 | |
---|
8885 | def test_discard(self): |
---|
8886 | # discard is really only used for other tests, but we test it anyways |
---|
8887 | + # XXX replace this with a null backend test |
---|
8888 | workdir = self.workdir("test_discard") |
---|
8889 | hunk ./src/allmydata/test/test_storage.py 712 |
---|
8890 | - ss = StorageServer(workdir, "\x00" * 20, discard_storage=True) |
---|
8891 | + backend = DiskBackend(workdir, readonly=False, discard_storage=True) |
---|
8892 | + ss = StorageServer("\x00" * 20, backend, workdir) |
---|
8893 | ss.setServiceParent(self.sparent) |
---|
8894 | |
---|
8895 | already,writers = self.allocate(ss, "vid", [0,1,2], 75) |
---|
8896 | hunk ./src/allmydata/test/test_storage.py 731 |
---|
8897 | |
---|
8898 | def test_advise_corruption(self): |
---|
8899 | workdir = self.workdir("test_advise_corruption") |
---|
8900 | - ss = StorageServer(workdir, "\x00" * 20, discard_storage=True) |
---|
8901 | + backend = DiskBackend(workdir, readonly=False, discard_storage=True) |
---|
8902 | + ss = StorageServer("\x00" * 20, backend, workdir) |
---|
8903 | ss.setServiceParent(self.sparent) |
---|
8904 | |
---|
8905 | si0_s = base32.b2a("si0") |
---|
8906 | hunk ./src/allmydata/test/test_storage.py 738 |
---|
8907 | ss.remote_advise_corrupt_share("immutable", "si0", 0, |
---|
8908 | "This share smells funny.\n") |
---|
8909 | - reportdir = os.path.join(workdir, "corruption-advisories") |
---|
8910 | - reports = os.listdir(reportdir) |
---|
8911 | + reportdir = workdir.child("corruption-advisories") |
---|
8912 | + reports = [child.basename() for child in reportdir.children()] |
---|
8913 | self.failUnlessEqual(len(reports), 1) |
---|
8914 | report_si0 = reports[0] |
---|
8915 | hunk ./src/allmydata/test/test_storage.py 742 |
---|
8916 | - self.failUnlessIn(si0_s, report_si0) |
---|
8917 | - f = open(os.path.join(reportdir, report_si0), "r") |
---|
8918 | - report = f.read() |
---|
8919 | - f.close() |
---|
8920 | + self.failUnlessIn(si0_s, str(report_si0)) |
---|
8921 | + report = reportdir.child(report_si0).getContent() |
---|
8922 | + |
---|
8923 | self.failUnlessIn("type: immutable", report) |
---|
8924 | self.failUnlessIn("storage_index: %s" % si0_s, report) |
---|
8925 | self.failUnlessIn("share_number: 0", report) |
---|
8926 | hunk ./src/allmydata/test/test_storage.py 762 |
---|
8927 | self.failUnlessEqual(set(b.keys()), set([1])) |
---|
8928 | b[1].remote_advise_corrupt_share("This share tastes like dust.\n") |
---|
8929 | |
---|
8930 | - reports = os.listdir(reportdir) |
---|
8931 | + reports = [child.basename() for child in reportdir.children()] |
---|
8932 | self.failUnlessEqual(len(reports), 2) |
---|
8933 | hunk ./src/allmydata/test/test_storage.py 764 |
---|
8934 | - report_si1 = [r for r in reports if si1_s in r][0] |
---|
8935 | - f = open(os.path.join(reportdir, report_si1), "r") |
---|
8936 | - report = f.read() |
---|
8937 | - f.close() |
---|
8938 | + report_si1 = [r for r in reports if si1_s in str(r)][0] |
---|
8939 | + report = reportdir.child(report_si1).getContent() |
---|
8940 | + |
---|
8941 | self.failUnlessIn("type: immutable", report) |
---|
8942 | self.failUnlessIn("storage_index: %s" % si1_s, report) |
---|
8943 | self.failUnlessIn("share_number: 1", report) |
---|
8944 | hunk ./src/allmydata/test/test_storage.py 783 |
---|
8945 | return self.sparent.stopService() |
---|
8946 | |
---|
8947 | def workdir(self, name): |
---|
8948 | - basedir = os.path.join("storage", "MutableServer", name) |
---|
8949 | - return basedir |
---|
8950 | + return FilePath("storage").child("MutableServer").child(name) |
---|
8951 | |
---|
8952 | def create(self, name): |
---|
8953 | workdir = self.workdir(name) |
---|
8954 | hunk ./src/allmydata/test/test_storage.py 787 |
---|
8955 | - ss = StorageServer(workdir, "\x00" * 20) |
---|
8956 | + backend = DiskBackend(workdir) |
---|
8957 | + ss = StorageServer("\x00" * 20, backend, workdir) |
---|
8958 | ss.setServiceParent(self.sparent) |
---|
8959 | return ss |
---|
8960 | |
---|
8961 | hunk ./src/allmydata/test/test_storage.py 810 |
---|
8962 | cancel_secret = self.cancel_secret(lease_tag) |
---|
8963 | rstaraw = ss.remote_slot_testv_and_readv_and_writev |
---|
8964 | testandwritev = dict( [ (shnum, ([], [], None) ) |
---|
8965 | - for shnum in sharenums ] ) |
---|
8966 | + for shnum in sharenums ] ) |
---|
8967 | readv = [] |
---|
8968 | rc = rstaraw(storage_index, |
---|
8969 | (write_enabler, renew_secret, cancel_secret), |
---|
8970 | hunk ./src/allmydata/test/test_storage.py 824 |
---|
8971 | def test_bad_magic(self): |
---|
8972 | ss = self.create("test_bad_magic") |
---|
8973 | self.allocate(ss, "si1", "we1", self._lease_secret.next(), set([0]), 10) |
---|
8974 | - fp = ss.backend.get_shareset("si1").sharehomedir.child("0") |
---|
8975 | + fp = ss.backend.get_shareset("si1")._sharehomedir.child("0") |
---|
8976 | f = fp.open("rb+") |
---|
8977 | hunk ./src/allmydata/test/test_storage.py 826 |
---|
8978 | - f.seek(0) |
---|
8979 | - f.write("BAD MAGIC") |
---|
8980 | - f.close() |
---|
8981 | + try: |
---|
8982 | + f.seek(0) |
---|
8983 | + f.write("BAD MAGIC") |
---|
8984 | + finally: |
---|
8985 | + f.close() |
---|
8986 | read = ss.remote_slot_readv |
---|
8987 | hunk ./src/allmydata/test/test_storage.py 832 |
---|
8988 | - e = self.failUnlessRaises(UnknownMutableContainerVersionError, |
---|
8989 | + |
---|
8990 | + # This used to test for UnknownMutableContainerVersionError, |
---|
8991 | + # but the current code raises UnknownImmutableContainerVersionError. |
---|
8992 | + # (It changed because remote_slot_readv now works with either |
---|
8993 | + # mutable or immutable shares.) Since the share file doesn't have |
---|
8994 | + # the mutable magic, it's not clear that this is wrong. |
---|
8995 | + # For now, accept either exception. |
---|
8996 | + e = self.failUnlessRaises(UnknownContainerVersionError, |
---|
8997 | read, "si1", [0], [(0,10)]) |
---|
8998 | hunk ./src/allmydata/test/test_storage.py 841 |
---|
8999 | - self.failUnlessIn(" had magic ", str(e)) |
---|
9000 | + self.failUnlessIn(" had ", str(e)) |
---|
9001 | self.failUnlessIn(" but we wanted ", str(e)) |
---|
9002 | |
---|
9003 | def test_container_size(self): |
---|
9004 | hunk ./src/allmydata/test/test_storage.py 1248 |
---|
9005 | |
---|
9006 | # create a random non-numeric file in the bucket directory, to |
---|
9007 | # exercise the code that's supposed to ignore those. |
---|
9008 | - bucket_dir = ss.backend.get_shareset("si1").sharehomedir |
---|
9009 | + bucket_dir = ss.backend.get_shareset("si1")._sharehomedir |
---|
9010 | bucket_dir.child("ignore_me.txt").setContent("you ought to be ignoring me\n") |
---|
9011 | |
---|
9012 | hunk ./src/allmydata/test/test_storage.py 1251 |
---|
9013 | - s0 = MutableDiskShare(os.path.join(bucket_dir, "0")) |
---|
9014 | + s0 = MutableDiskShare("", 0, bucket_dir.child("0")) |
---|
9015 | self.failUnlessEqual(len(list(s0.get_leases())), 1) |
---|
9016 | |
---|
9017 | # add-lease on a missing storage index is silently ignored |
---|
9018 | hunk ./src/allmydata/test/test_storage.py 1365 |
---|
9019 | # note: this is a detail of the storage server implementation, and |
---|
9020 | # may change in the future |
---|
9021 | prefix = si[:2] |
---|
9022 | - prefixdir = os.path.join(self.workdir("test_remove"), "shares", prefix) |
---|
9023 | - bucketdir = os.path.join(prefixdir, si) |
---|
9024 | - self.failUnless(os.path.exists(prefixdir), prefixdir) |
---|
9025 | - self.failIf(os.path.exists(bucketdir), bucketdir) |
---|
9026 | + prefixdir = self.workdir("test_remove").child("shares").child(prefix) |
---|
9027 | + bucketdir = prefixdir.child(si) |
---|
9028 | + self.failUnless(prefixdir.exists(), prefixdir) |
---|
9029 | + self.failIf(bucketdir.exists(), bucketdir) |
---|
9030 | |
---|
9031 | |
---|
9032 | class MDMFProxies(unittest.TestCase, ShouldFailMixin): |
---|
9033 | hunk ./src/allmydata/test/test_storage.py 1420 |
---|
9034 | |
---|
9035 | |
---|
9036 | def workdir(self, name): |
---|
9037 | - basedir = os.path.join("storage", "MutableServer", name) |
---|
9038 | - return basedir |
---|
9039 | - |
---|
9040 | + return FilePath("storage").child("MDMFProxies").child(name) |
---|
9041 | |
---|
9042 | def create(self, name): |
---|
9043 | workdir = self.workdir(name) |
---|
9044 | hunk ./src/allmydata/test/test_storage.py 1424 |
---|
9045 | - ss = StorageServer(workdir, "\x00" * 20) |
---|
9046 | + backend = DiskBackend(workdir) |
---|
9047 | + ss = StorageServer("\x00" * 20, backend, workdir) |
---|
9048 | ss.setServiceParent(self.sparent) |
---|
9049 | return ss |
---|
9050 | |
---|
9051 | hunk ./src/allmydata/test/test_storage.py 2798 |
---|
9052 | return self.sparent.stopService() |
---|
9053 | |
---|
9054 | def workdir(self, name): |
---|
9055 | - return FilePath("storage").child("Server").child(name) |
---|
9056 | + return FilePath("storage").child("Stats").child(name) |
---|
9057 | |
---|
9058 | def create(self, name): |
---|
9059 | workdir = self.workdir(name) |
---|
9060 | hunk ./src/allmydata/test/test_storage.py 2886 |
---|
9061 | d.callback(None) |
---|
9062 | |
---|
9063 | class MyStorageServer(StorageServer): |
---|
9064 | - def add_bucket_counter(self): |
---|
9065 | - statefile = os.path.join(self.storedir, "bucket_counter.state") |
---|
9066 | - self.bucket_counter = MyBucketCountingCrawler(self, statefile) |
---|
9067 | - self.bucket_counter.setServiceParent(self) |
---|
9068 | + BucketCounterClass = MyBucketCountingCrawler |
---|
9069 | + |
---|
9070 | |
---|
9071 | class BucketCounter(unittest.TestCase, pollmixin.PollMixin): |
---|
9072 | |
hunk ./src/allmydata/test/test_storage.py 2899

    def test_bucket_counter(self):
        basedir = "storage/BucketCounter/bucket_counter"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # to make sure we capture the bucket-counting-crawler in the middle
        # of a cycle, we reach in and reduce its maximum slice time to 0. We
        # also make it start sooner than usual.
hunk ./src/allmydata/test/test_storage.py 2958

    def test_bucket_counter_cleanup(self):
        basedir = "storage/BucketCounter/bucket_counter_cleanup"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # to make sure we capture the bucket-counting-crawler in the middle
        # of a cycle, we reach in and reduce its maximum slice time to 0.
        ss.bucket_counter.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3002

    def test_bucket_counter_eta(self):
        basedir = "storage/BucketCounter/bucket_counter_eta"
-        fileutil.make_dirs(basedir)
-        ss = MyStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = MyStorageServer("\x00" * 20, backend, fp)
        ss.bucket_counter.slow_start = 0
        # these will be fired inside finished_prefix()
        hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)]
hunk ./src/allmydata/test/test_storage.py 3125

    def test_basic(self):
        basedir = "storage/LeaseCrawler/basic"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
+
        # make it start sooner than usual.
        lc = ss.lease_checker
        lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3141
        [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis

        # add a non-sharefile to exercise another code path
-        fp = ss.backend.get_shareset(immutable_si_0).sharehomedir.child("not-a-share")
+        fp = ss.backend.get_shareset(immutable_si_0)._sharehomedir.child("not-a-share")
        fp.setContent("I am not a share.\n")

        # this is before the crawl has started, so we're not in a cycle yet
hunk ./src/allmydata/test/test_storage.py 3264
        self.failUnlessEqual(rec["configured-sharebytes"], 0)

        def _get_sharefile(si):
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
        def count_leases(si):
            return len(list(_get_sharefile(si).get_leases()))
        self.failUnlessEqual(count_leases(immutable_si_0), 1)
hunk ./src/allmydata/test/test_storage.py 3296
        for i,lease in enumerate(sf.get_leases()):
            if lease.renew_secret == renew_secret:
                lease.expiration_time = new_expire_time
-                f = open(sf.home, 'rb+')
-                sf._write_lease_record(f, i, lease)
-                f.close()
+                f = sf._home.open('rb+')
+                try:
+                    sf._write_lease_record(f, i, lease)
+                finally:
+                    f.close()
                return
        raise IndexError("unable to renew non-existent lease")

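Besides moving from os paths to FilePath, the backdate_lease hunk above swaps
a bare open/write/close for try/finally, so the handle is closed even when
the write raises. The same pattern reduced to its skeleton (sf stands for any
disk share object carrying the private helpers named in the hunk):

f = sf._home.open('rb+')                  # sf._home is a Twisted FilePath
try:
    sf._write_lease_record(f, i, lease)   # may raise; close regardless
finally:
    f.close()
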
hunk ./src/allmydata/test/test_storage.py 3306
    def test_expire_age(self):
        basedir = "storage/LeaseCrawler/expire_age"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
        # setting 'override_lease_duration' to 2000 means that any lease that
        # is more than 2000 seconds old will be expired.
        expiration_policy = {
hunk ./src/allmydata/test/test_storage.py 3317
            'override_lease_duration': 2000,
            'sharetypes': ('mutable', 'immutable'),
        }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
        # make it start sooner than usual.
        lc = ss.lease_checker
        lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3330
        [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis

        def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
        def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3332
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
        def count_leases(si):
            return len(list(_get_sharefile(si).get_leases()))

hunk ./src/allmydata/test/test_storage.py 3355

        sf0 = _get_sharefile(immutable_si_0)
        self.backdate_lease(sf0, self.renew_secrets[0], now - 1000)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()

        # immutable_si_1 gets an extra lease
        sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3363

        sf2 = _get_sharefile(mutable_si_2)
        self.backdate_lease(sf2, self.renew_secrets[3], now - 1000)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()

        # mutable_si_3 gets an extra lease
        sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3450

    def test_expire_cutoff_date(self):
        basedir = "storage/LeaseCrawler/expire_cutoff_date"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
        # is more than 2000 seconds old will be expired.
        now = time.time()
hunk ./src/allmydata/test/test_storage.py 3463
            'cutoff_date': then,
            'sharetypes': ('mutable', 'immutable'),
        }
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
        # make it start sooner than usual.
        lc = ss.lease_checker
        lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3476
        [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis

        def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
        def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3478
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
        def count_leases(si):
            return len(list(_get_sharefile(si).get_leases()))

hunk ./src/allmydata/test/test_storage.py 3505

        sf0 = _get_sharefile(immutable_si_0)
        self.backdate_lease(sf0, self.renew_secrets[0], new_expiration_time)
-        sf0_size = os.stat(sf0.home).st_size
+        sf0_size = sf0.get_size()

        # immutable_si_1 gets an extra lease
        sf1 = _get_sharefile(immutable_si_1)
hunk ./src/allmydata/test/test_storage.py 3513

        sf2 = _get_sharefile(mutable_si_2)
        self.backdate_lease(sf2, self.renew_secrets[3], new_expiration_time)
-        sf2_size = os.stat(sf2.home).st_size
+        sf2_size = sf2.get_size()

        # mutable_si_3 gets an extra lease
        sf3 = _get_sharefile(mutable_si_3)
hunk ./src/allmydata/test/test_storage.py 3605

    def test_only_immutable(self):
        basedir = "storage/LeaseCrawler/only_immutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
        # is more than 2000 seconds old will be expired.
        now = time.time()
hunk ./src/allmydata/test/test_storage.py 3618
            'cutoff_date': then,
            'sharetypes': ('immutable',),
        }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
        lc = ss.lease_checker
        lc.slow_start = 0
        webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3629
        new_expiration_time = now - 3000 + 31*24*60*60

        def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
        def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3631
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
        def count_leases(si):
            return len(list(_get_sharefile(si).get_leases()))

hunk ./src/allmydata/test/test_storage.py 3668

    def test_only_mutable(self):
        basedir = "storage/LeaseCrawler/only_mutable"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
        # setting 'cutoff_date' to 2000 seconds ago means that any lease that
        # is more than 2000 seconds old will be expired.
        now = time.time()
hunk ./src/allmydata/test/test_storage.py 3681
            'cutoff_date': then,
            'sharetypes': ('mutable',),
        }
-        ss = StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
        lc = ss.lease_checker
        lc.slow_start = 0
        webstatus = StorageStatus(ss)
hunk ./src/allmydata/test/test_storage.py 3692
        new_expiration_time = now - 3000 + 31*24*60*60

        def count_shares(si):
-            return len(list(ss._iter_share_files(si)))
+            return len(list(ss.backend.get_shareset(si).get_shares()))
        def _get_sharefile(si):
hunk ./src/allmydata/test/test_storage.py 3694
-            return list(ss._iter_share_files(si))[0]
+            return list(ss.backend.get_shareset(si).get_shares())[0]
        def count_leases(si):
            return len(list(_get_sharefile(si).get_leases()))

hunk ./src/allmydata/test/test_storage.py 3731

    def test_bad_mode(self):
        basedir = "storage/LeaseCrawler/bad_mode"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
+        expiration_policy = {
+            'enabled': True,
+            'mode': 'bogus',
+            'override_lease_duration': None,
+            'cutoff_date': None,
+            'sharetypes': ('mutable', 'immutable'),
+        }
        e = self.failUnlessRaises(ValueError,
hunk ./src/allmydata/test/test_storage.py 3742
-                                  StorageServer, basedir, "\x00" * 20,
-                                  expiration_mode="bogus")
+                                  StorageServer, "\x00" * 20, backend, fp,
+                                  expiration_policy=expiration_policy)
        self.failUnlessIn("GC mode 'bogus' must be 'age' or 'cutoff-date'", str(e))

    def test_parse_duration(self):
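Across these tests the old loose keyword arguments (such as
expiration_mode="bogus") are folded into a single expiration_policy dict.
Collecting the fields the hunks use into one hedged sketch (the value types
are inferred from the tests; per test_bad_mode, the server rejects an unknown
'mode' with ValueError):

expiration_policy = {
    'enabled': True,                         # run the lease expirer at all
    'mode': 'age',                           # 'age' or 'cutoff-date'
    'override_lease_duration': 2000,         # seconds, used in 'age' mode
    'cutoff_date': None,                     # used in 'cutoff-date' mode
    'sharetypes': ('mutable', 'immutable'),  # which share types to expire
}
ss = StorageServer("\x00" * 20, backend, fp,
                   expiration_policy=expiration_policy)
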
hunk ./src/allmydata/test/test_storage.py 3767

    def test_limited_history(self):
        basedir = "storage/LeaseCrawler/limited_history"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # make it start sooner than usual.
        lc = ss.lease_checker
        lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3801

    def test_unpredictable_future(self):
        basedir = "storage/LeaseCrawler/unpredictable_future"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
+
        # make it start sooner than usual.
        lc = ss.lease_checker
        lc.slow_start = 0
hunk ./src/allmydata/test/test_storage.py 3866

    def test_no_st_blocks(self):
        basedir = "storage/LeaseCrawler/no_st_blocks"
-        fileutil.make_dirs(basedir)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+
        # A negative 'override_lease_duration' means that the "configured-"
        # space-recovered counts will be non-zero, since all shares will have
        # expired by then.
hunk ./src/allmydata/test/test_storage.py 3878
            'override_lease_duration': -1000,
            'sharetypes': ('mutable', 'immutable'),
        }
-        ss = No_ST_BLOCKS_StorageServer(basedir, "\x00" * 20, expiration_policy)
+        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)

        # make it start sooner than usual.
        lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3911
            UnknownImmutableContainerVersionError,
        ]
        basedir = "storage/LeaseCrawler/share_corruption"
-        fileutil.make_dirs(basedir)
-        ss = InstrumentedStorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = InstrumentedStorageServer("\x00" * 20, backend, fp)
        w = StorageStatus(ss)
        # make it start sooner than usual.
        lc = ss.lease_checker
hunk ./src/allmydata/test/test_storage.py 3928
        [immutable_si_0, immutable_si_1, mutable_si_2, mutable_si_3] = self.sis
        first = min(self.sis)
        first_b32 = base32.b2a(first)
-        fp = ss.backend.get_shareset(first).sharehomedir.child("0")
+        fp = ss.backend.get_shareset(first)._sharehomedir.child("0")
        f = fp.open("rb+")
hunk ./src/allmydata/test/test_storage.py 3930
-        f.seek(0)
-        f.write("BAD MAGIC")
-        f.close()
+        try:
+            f.seek(0)
+            f.write("BAD MAGIC")
+        finally:
+            f.close()
        # if get_share_file() doesn't see the correct mutable magic, it
        # assumes the file is an immutable share, and then
        # immutable.ShareFile sees a bad version. So regardless of which kind
hunk ./src/allmydata/test/test_storage.py 3943

        # also create an empty bucket
        empty_si = base32.b2a("\x04"*16)
-        empty_bucket_dir = ss.backend.get_shareset(empty_si).sharehomedir
+        empty_bucket_dir = ss.backend.get_shareset(empty_si)._sharehomedir
        fileutil.fp_make_dirs(empty_bucket_dir)

        ss.setServiceParent(self.s)
hunk ./src/allmydata/test/test_storage.py 4031

    def test_status(self):
        basedir = "storage/WebStatus/status"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
        ss.setServiceParent(self.s)
        w = StorageStatus(ss)
        d = self.render1(w)
hunk ./src/allmydata/test/test_storage.py 4065
        # Some platforms may have no disk stats API. Make sure the code can handle that
        # (test runs on all platforms).
        basedir = "storage/WebStatus/status_no_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
        ss.setServiceParent(self.s)
        w = StorageStatus(ss)
        html = w.renderSynchronously()
hunk ./src/allmydata/test/test_storage.py 4085
        # If the API to get disk stats exists but a call to it fails, then the status should
        # show that no shares will be accepted, and get_available_space() should be 0.
        basedir = "storage/WebStatus/status_bad_disk_stats"
-        fileutil.make_dirs(basedir)
-        ss = StorageServer(basedir, "\x00" * 20)
+        fp = FilePath(basedir)
+        backend = DiskBackend(fp)
+        ss = StorageServer("\x00" * 20, backend, fp)
        ss.setServiceParent(self.s)
        w = StorageStatus(ss)
        html = w.renderSynchronously()
}
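The edit the patch above repeats throughout test_storage.py is mechanical:
every StorageServer(basedir, nodeid) becomes the pluggable-backends form, in
which the caller builds the backend first and passes a FilePath rather than a
path string. A condensed sketch of the new construction sequence (the import
paths are assumptions based on the filenames in these hunks):

from twisted.python.filepath import FilePath
from allmydata.storage.backends.disk.disk_backend import DiskBackend
from allmydata.storage.server import StorageServer

fp = FilePath("storage/Example/basedir")   # state directory as a FilePath
backend = DiskBackend(fp)                  # the backend owns share storage
ss = StorageServer("\x00" * 20, backend, fp)   # (nodeid, backend, statedir)
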
[Fix most of the crawler tests. refs #999
david-sarah@jacaranda.org**20110922183008
 Ignore-this: 116c0848008f3989ba78d87c07ec783c
] {
hunk ./src/allmydata/storage/backends/disk/disk_backend.py 160
        self._discard_storage = discard_storage

    def get_overhead(self):
-        return (fileutil.get_disk_usage(self._sharehomedir) +
-                fileutil.get_disk_usage(self._incominghomedir))
+        return (fileutil.get_used_space(self._sharehomedir) +
+                fileutil.get_used_space(self._incominghomedir))

    def get_shares(self):
        """
hunk ./src/allmydata/storage/crawler.py 2

-import time, struct
-import cPickle as pickle
+import time, pickle, struct
from twisted.internet import reactor
from twisted.application import service

hunk ./src/allmydata/storage/crawler.py 205
        #                             shareset to be processed, or None if we
        #                             are sleeping between cycles
        try:
-            state = pickle.loads(self.statefp.getContent())
+            pickled = self.statefp.getContent()
        except EnvironmentError:
            if self.statefp.exists():
                raise
hunk ./src/allmydata/storage/crawler.py 215
                     "last-complete-prefix": None,
                     "last-complete-bucket": None,
                     }
+        else:
+            state = pickle.loads(pickled)
+
        state.setdefault("current-cycle-start-time", time.time()) # approximate
        self.state = state
        lcp = state["last-complete-prefix"]
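The two crawler.py hunks above restructure load_state() so that pickle.loads()
runs only in the else branch, after the state file has actually been read; a
missing file falls through to a freshly built default dict, while any other
read error still propagates. Condensed into one function (the default-state
fields are abbreviated here; statefp is a Twisted FilePath, per the hunks):

import pickle, time

def load_state(statefp):
    try:
        pickled = statefp.getContent()
    except EnvironmentError:
        if statefp.exists():
            raise                     # a real read error, not a missing file
        state = {"version": 1,
                 "last-complete-prefix": None,
                 "last-complete-bucket": None}
    else:
        state = pickle.loads(pickled)
    state.setdefault("current-cycle-start-time", time.time())  # approximate
    return state
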
hunk ./src/allmydata/storage/crawler.py 246
        else:
            last_complete_prefix = self.prefixes[lcpi]
        self.state["last-complete-prefix"] = last_complete_prefix
-        self.statefp.setContent(pickle.dumps(self.state))
+        pickled = pickle.dumps(self.state)
+        self.statefp.setContent(pickled)

    def startService(self):
        # arrange things to look like we were just sleeping, so
hunk ./src/allmydata/storage/expirer.py 86
        # initialize history
        if not self.historyfp.exists():
            history = {} # cyclenum -> dict
-            self.historyfp.setContent(pickle.dumps(history))
+            pickled = pickle.dumps(history)
+            self.historyfp.setContent(pickled)

    def create_empty_cycle_dict(self):
        recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/expirer.py 111
    def started_cycle(self, cycle):
        self.state["cycle-to-date"] = self.create_empty_cycle_dict()

-    def process_storage_index(self, cycle, prefix, container):
+    def process_shareset(self, cycle, prefix, shareset):
        would_keep_shares = []
        wks = None
hunk ./src/allmydata/storage/expirer.py 114
-        sharetype = None

hunk ./src/allmydata/storage/expirer.py 115
-        for share in container.get_shares():
-            sharetype = share.sharetype
+        for share in shareset.get_shares():
            try:
                wks = self.process_share(share)
            except (UnknownMutableContainerVersionError,
hunk ./src/allmydata/storage/expirer.py 128
                wks = (1, 1, 1, "unknown")
            would_keep_shares.append(wks)

-        container_type = None
+        shareset_type = None
        if wks:
hunk ./src/allmydata/storage/expirer.py 130
-            # use the last share's sharetype as the container type
-            container_type = wks[3]
+            # use the last share's type as the shareset type
+            shareset_type = wks[3]
        rec = self.state["cycle-to-date"]["space-recovered"]
        self.increment(rec, "examined-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 134
-        if sharetype:
-            self.increment(rec, "examined-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, "examined-buckets-"+shareset_type, 1)

hunk ./src/allmydata/storage/expirer.py 137
-        container_diskbytes = container.get_overhead()
+        shareset_diskbytes = shareset.get_overhead()

        if sum([wks[0] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 140
-            self.increment_container_space("original", container_diskbytes, sharetype)
+            self.increment_shareset_space("original", shareset_diskbytes, shareset_type)
        if sum([wks[1] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 142
-            self.increment_container_space("configured", container_diskbytes, sharetype)
+            self.increment_shareset_space("configured", shareset_diskbytes, shareset_type)
        if sum([wks[2] for wks in would_keep_shares]) == 0:
hunk ./src/allmydata/storage/expirer.py 144
-            self.increment_container_space("actual", container_diskbytes, sharetype)
+            self.increment_shareset_space("actual", shareset_diskbytes, shareset_type)

    def process_share(self, share):
        sharetype = share.sharetype
hunk ./src/allmydata/storage/expirer.py 189

        so_far = self.state["cycle-to-date"]
        self.increment(so_far["leases-per-share-histogram"], num_leases, 1)
-        self.increment_space("examined", diskbytes, sharetype)
+        self.increment_space("examined", sharebytes, diskbytes, sharetype)

        would_keep_share = [1, 1, 1, sharetype]

hunk ./src/allmydata/storage/expirer.py 220
            self.increment(so_far_sr, a+"-sharebytes-"+sharetype, sharebytes)
            self.increment(so_far_sr, a+"-diskbytes-"+sharetype, diskbytes)

-    def increment_container_space(self, a, container_diskbytes, container_type):
+    def increment_shareset_space(self, a, shareset_diskbytes, shareset_type):
        rec = self.state["cycle-to-date"]["space-recovered"]
hunk ./src/allmydata/storage/expirer.py 222
-        self.increment(rec, a+"-diskbytes", container_diskbytes)
+        self.increment(rec, a+"-diskbytes", shareset_diskbytes)
        self.increment(rec, a+"-buckets", 1)
hunk ./src/allmydata/storage/expirer.py 224
-        if container_type:
-            self.increment(rec, a+"-diskbytes-"+container_type, container_diskbytes)
-            self.increment(rec, a+"-buckets-"+container_type, 1)
+        if shareset_type:
+            self.increment(rec, a+"-diskbytes-"+shareset_type, shareset_diskbytes)
+            self.increment(rec, a+"-buckets-"+shareset_type, 1)

    def increment(self, d, k, delta=1):
        if k not in d:
hunk ./src/allmydata/storage/expirer.py 280
        # copy() needs to become a deepcopy
        h["space-recovered"] = s["space-recovered"].copy()

-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
        history[cycle] = h
        while len(history) > 10:
            oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/expirer.py 286
            del history[oldcycles[0]]
-        self.historyfp.setContent(pickle.dumps(history))
+        repickled = pickle.dumps(history)
+        self.historyfp.setContent(repickled)

    def get_state(self):
        """In addition to the crawler state described in
hunk ./src/allmydata/storage/expirer.py 356
        progress = self.get_progress()

        state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.loads(self.historyfp.getContent())
+        pickled = self.historyfp.getContent()
+        history = pickle.loads(pickled)
        state["history"] = history

        if not progress["cycle-in-progress"]:
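The expirer.py 280/286 hunks above keep the lease-expiry history bounded: the
finished cycle is added to a pickled dict and the oldest entries are pruned so
at most 10 cycles remain. The bookkeeping, isolated into a sketch (historyfp
is a FilePath, per the hunks; the helper name is illustrative):

import pickle

def record_cycle(historyfp, cycle, h):
    history = pickle.loads(historyfp.getContent())  # cyclenum -> dict
    history[cycle] = h
    while len(history) > 10:
        oldcycles = sorted(history.keys())
        del history[oldcycles[0]]                   # drop the oldest cycle
    historyfp.setContent(pickle.dumps(history))
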
hunk ./src/allmydata/test/test_crawler.py 25
        ShareCrawler.__init__(self, *args, **kwargs)
        self.all_buckets = []
        self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
+
    def finished_cycle(self, cycle):
        eventually(self.finished_d.callback, None)

hunk ./src/allmydata/test/test_crawler.py 41
        self.all_buckets = []
        self.finished_d = defer.Deferred()
        self.yield_cb = None
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
-        self.all_buckets.append(storage_index_b32)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        self.all_buckets.append(shareset.get_storage_index_string())
        self.countdown -= 1
        if self.countdown == 0:
            # force a timeout. We restore it in yielding()
hunk ./src/allmydata/test/test_crawler.py 66
        self.accumulated = 0.0
        self.cycles = 0
        self.last_yield = 0.0
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
        start = time.time()
        time.sleep(0.05)
        elapsed = time.time() - start
hunk ./src/allmydata/test/test_crawler.py 85
        ShareCrawler.__init__(self, *args, **kwargs)
        self.counter = 0
        self.finished_d = defer.Deferred()
-    def process_bucket(self, cycle, prefix, prefixdir, storage_index_b32):
+
+    def process_shareset(self, cycle, prefix, shareset):
        self.counter += 1
    def finished_cycle(self, cycle):
        self.finished_d.callback(None)
hunk ./src/allmydata/test/test_storage.py 3041

class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler):
    stop_after_first_bucket = False
-    def process_bucket(self, *args, **kwargs):
-        LeaseCheckingCrawler.process_bucket(self, *args, **kwargs)
+
+    def process_shareset(self, cycle, prefix, shareset):
+        LeaseCheckingCrawler.process_shareset(self, cycle, prefix, shareset)
        if self.stop_after_first_bucket:
            self.stop_after_first_bucket = False
            self.cpu_slice = -1.0
hunk ./src/allmydata/test/test_storage.py 3051
        if not self.stop_after_first_bucket:
            self.cpu_slice = 500

+class InstrumentedStorageServer(StorageServer):
+    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
+
+
class BrokenStatResults:
    pass
class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
hunk ./src/allmydata/test/test_storage.py 3069
            setattr(bsr, attrname, getattr(s, attrname))
        return bsr

-class InstrumentedStorageServer(StorageServer):
-    LeaseCheckerClass = InstrumentedLeaseCheckingCrawler
class No_ST_BLOCKS_StorageServer(StorageServer):
    LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler

}
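The crawler hook renamed by this patch changes arity as well as name:
process_bucket(cycle, prefix, prefixdir, storage_index_b32) becomes
process_shareset(cycle, prefix, shareset), and the storage index is now asked
of the shareset object rather than passed in. A minimal subclass in the style
of the test crawlers above (ShareCrawler and the shareset accessor are as
shown in the hunks; the class name and everything else are illustrative):

from allmydata.storage.crawler import ShareCrawler

class BucketListingCrawler(ShareCrawler):
    cpu_slice = 500   # large enough to finish in a single slice
    slow_start = 0

    def __init__(self, *args, **kwargs):
        ShareCrawler.__init__(self, *args, **kwargs)
        self.all_buckets = []

    def process_shareset(self, cycle, prefix, shareset):
        # the shareset knows its own storage index now
        self.all_buckets.append(shareset.get_storage_index_string())
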
[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
david-sarah@jacaranda.org**20110922183323
 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
] {
hunk ./src/allmydata/storage/backends/disk/immutable.py 260
        except IndexError:
            self.add_lease(lease_info)

+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i, lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+
+        space_freed = 0
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            if len(leases) > 0:
+                f = self._home.open('rb+')
+                try:
+                    for i, lease in enumerate(leases):
+                        self._write_lease_record(f, i, lease)
+                    self._write_num_leases(f, len(leases))
+                    self._truncate_leases(f, len(leases))
+                finally:
+                    f.close()
+                space_freed = self.LEASE_SIZE * num_leases_removed
+            else:
+                space_freed = fileutil.get_used_space(self._home)
+                self.unlink()
+        return space_freed
+
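The docstring above pins down the contract; a hypothetical caller would use
the reinstated method like this (share and cancel_secret are placeholders,
not names introduced by the patch):

try:
    freed = share.cancel_lease(cancel_secret)
    # freed is the number of bytes reclaimed; the share file itself is
    # gone if this cancelled the last lease
except IndexError:
    pass  # no lease matched the supplied cancel secret
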
hunk ./src/allmydata/storage/backends/disk/mutable.py 361
        except IndexError:
            self.add_lease(lease_info)

+    def cancel_lease(self, cancel_secret):
+        """Remove any leases with the given cancel_secret. If the last lease
+        is cancelled, the file will be removed. Return the number of bytes
+        that were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret."""
+
+        # XXX can this be more like ImmutableDiskShare.cancel_lease?
+
+        accepting_nodeids = set()
+        modified = 0
+        remaining = 0
+        blank_lease = LeaseInfo(owner_num=0,
+                                renew_secret="\x00"*32,
+                                cancel_secret="\x00"*32,
+                                expiration_time=0,
+                                nodeid="\x00"*20)
+        f = self._home.open('rb+')
+        try:
+            for (leasenum, lease) in self._enumerate_leases(f):
+                accepting_nodeids.add(lease.nodeid)
+                if constant_time_compare(lease.cancel_secret, cancel_secret):
+                    self._write_lease_record(f, leasenum, blank_lease)
+                    modified += 1
+                else:
+                    remaining += 1
+            if modified:
+                freed_space = self._pack_leases(f)
+        finally:
+            f.close()
+
+        if modified > 0:
+            if remaining == 0:
+                freed_space = fileutil.get_used_space(self._home)
+                self.unlink()
+            return freed_space
+
+        msg = ("Unable to cancel non-existent lease. I have leases "
+               "accepted by nodeids: ")
+        msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid))
+                         for anid in accepting_nodeids])
+        msg += " ."
+        raise IndexError(msg)
+
+    def _pack_leases(self, f):
+        # TODO: reclaim space from cancelled leases
+        return 0
+
    def _read_write_enabler_and_nodeid(self, f):
        f.seek(0)
        data = f.read(self.HEADER_SIZE)
}
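Both reinstated methods match secrets with constant_time_compare() rather
than ==. The point is that a naive comparison returns as soon as one byte
differs, which leaks through timing how long a prefix of the secret an
attacker has guessed correctly. A minimal constant-time version, shown only
to illustrate the idea (the real helper is whatever these modules import
under that name):

def constant_time_compare(a, b):
    if len(a) != len(b):
        return False
    result = 0
    for x, y in zip(a, b):
        result |= ord(x) ^ ord(y)   # accumulate differences, no early exit
    return result == 0
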
[Blank line cleanups.
david-sarah@jacaranda.org**20110923012044
 Ignore-this: 8e1c4ecb5b0c65673af35872876a8591
] {
hunk ./src/allmydata/interfaces.py 33
LeaseRenewSecret = Hash # used to protect lease renewal requests
LeaseCancelSecret = Hash # used to protect lease cancellation requests

+
class RIStubClient(RemoteInterface):
    """Each client publishes a service announcement for a dummy object called
    the StubClient. This object doesn't actually offer any services, but the
hunk ./src/allmydata/interfaces.py 42
    the grid and the client versions in use). This is the (empty)
    RemoteInterface for the StubClient."""

+
class RIBucketWriter(RemoteInterface):
    """ Objects of this kind live on the server side. """
    def write(offset=Offset, data=ShareData):
hunk ./src/allmydata/interfaces.py 61
        """
        return None

+
class RIBucketReader(RemoteInterface):
    def read(offset=Offset, length=ReadSize):
        return ShareData
hunk ./src/allmydata/interfaces.py 78
        documentation.
        """

+
TestVector = ListOf(TupleOf(Offset, ReadSize, str, str))
# elements are (offset, length, operator, specimen)
# operator is one of "lt, le, eq, ne, ge, gt"
hunk ./src/allmydata/interfaces.py 95
ReadData = ListOf(ShareData)
# returns data[offset:offset+length] for each element of TestVector

+
class RIStorageServer(RemoteInterface):
    __remote_name__ = "RIStorageServer.tahoe.allmydata.com"

hunk ./src/allmydata/interfaces.py 2255

    def get_storage_index():
        """Return a string with the (binary) storage index."""
+
    def get_storage_index_string():
        """Return a string with the (printable) abbreviated storage index."""
hunk ./src/allmydata/interfaces.py 2258
+
    def get_uri():
        """Return the (string) URI of the object that was checked."""

hunk ./src/allmydata/interfaces.py 2353
    def get_report():
        """Return a list of strings with more detailed results."""

+
class ICheckAndRepairResults(Interface):
    """I contain the detailed results of a check/verify/repair operation.

hunk ./src/allmydata/interfaces.py 2363

    def get_storage_index():
        """Return a string with the (binary) storage index."""
+
    def get_storage_index_string():
        """Return a string with the (printable) abbreviated storage index."""
hunk ./src/allmydata/interfaces.py 2366
+
    def get_repair_attempted():
        """Return a boolean, True if a repair was attempted. We might not
        attempt to repair the file because it was healthy, or healthy enough
hunk ./src/allmydata/interfaces.py 2372
        (i.e. some shares were missing but not enough to exceed some
        threshold), or because we don't know how to repair this object."""
+
    def get_repair_successful():
        """Return a boolean, True if repair was attempted and the file/dir
        was fully healthy afterwards. False if no repair was attempted or if
hunk ./src/allmydata/interfaces.py 2377
        a repair attempt failed."""
+
    def get_pre_repair_results():
        """Return an ICheckResults instance that describes the state of the
        file/dir before any repair was attempted."""
hunk ./src/allmydata/interfaces.py 2381
+
    def get_post_repair_results():
        """Return an ICheckResults instance that describes the state of the
        file/dir after any repair was attempted. If no repair was attempted,
hunk ./src/allmydata/interfaces.py 2615
        (childnode, metadata_dict) tuples), the directory will be populated
        with those children, otherwise it will be empty."""

+
class IClientStatus(Interface):
    def list_all_uploads():
        """Return a list of uploader objects, one for each upload that
hunk ./src/allmydata/interfaces.py 2621
        currently has an object available (tracked with weakrefs). This is
        intended for debugging purposes."""
+
    def list_active_uploads():
        """Return a list of active IUploadStatus objects."""
hunk ./src/allmydata/interfaces.py 2624
+
    def list_recent_uploads():
        """Return a list of IUploadStatus objects for the most recently
        started uploads."""
hunk ./src/allmydata/interfaces.py 2633
        """Return a list of downloader objects, one for each download that
        currently has an object available (tracked with weakrefs). This is
        intended for debugging purposes."""
+
    def list_active_downloads():
        """Return a list of active IDownloadStatus objects."""
hunk ./src/allmydata/interfaces.py 2636
+
    def list_recent_downloads():
        """Return a list of IDownloadStatus objects for the most recently
        started downloads."""
hunk ./src/allmydata/interfaces.py 2641

+
class IUploadStatus(Interface):
    def get_started():
        """Return a timestamp (float with seconds since epoch) indicating
hunk ./src/allmydata/interfaces.py 2646
        when the operation was started."""
+
    def get_storage_index():
        """Return a string with the (binary) storage index in use on this
        upload. Returns None if the storage index has not yet been
hunk ./src/allmydata/interfaces.py 2651
        calculated."""
+
    def get_size():
        """Return an integer with the number of bytes that will eventually
        be uploaded for this file. Returns None if the size is not yet known.
hunk ./src/allmydata/interfaces.py 2656
        """
+
    def using_helper():
        """Return True if this upload is using a Helper, False if not."""
hunk ./src/allmydata/interfaces.py 2659
+
    def get_status():
        """Return a string describing the current state of the upload
        process."""
hunk ./src/allmydata/interfaces.py 2663
+
    def get_progress():
        """Returns a tuple of floats, (chk, ciphertext, encode_and_push),
        each from 0.0 to 1.0 . 'chk' describes how much progress has been
hunk ./src/allmydata/interfaces.py 2675
        process has finished: for helper uploads this is dependent upon the
        helper providing progress reports. It might be reasonable to add all
        three numbers and report the sum to the user."""
+
    def get_active():
        """Return True if the upload is currently active, False if not."""
hunk ./src/allmydata/interfaces.py 2678
+
    def get_results():
        """Return an instance of UploadResults (which contains timing and
        sharemap information). Might return None if the upload is not yet
hunk ./src/allmydata/interfaces.py 2683
        finished."""
+
    def get_counter():
        """Each upload status gets a unique number: this method returns that
        number. This provides a handle to this particular upload, so a web
hunk ./src/allmydata/interfaces.py 2689
        page can generate a suitable hyperlink."""

+
class IDownloadStatus(Interface):
    def get_started():
        """Return a timestamp (float with seconds since epoch) indicating
hunk ./src/allmydata/interfaces.py 2694
        when the operation was started."""
+
    def get_storage_index():
        """Return a string with the (binary) storage index in use on this
        download. This may be None if there is no storage index (i.e. LIT
hunk ./src/allmydata/interfaces.py 2699
        files)."""
+
    def get_size():
        """Return an integer with the number of bytes that will eventually be
        retrieved for this file. Returns None if the size is not yet known.
hunk ./src/allmydata/interfaces.py 2704
        """
+
    def using_helper():
        """Return True if this download is using a Helper, False if not."""
hunk ./src/allmydata/interfaces.py 2707
+
    def get_status():
        """Return a string describing the current state of the download
        process."""
hunk ./src/allmydata/interfaces.py 2711
+
    def get_progress():
        """Returns a float (from 0.0 to 1.0) describing the amount of the
        download that has completed. This value will remain at 0.0 until the
hunk ./src/allmydata/interfaces.py 2716
        first byte of plaintext is pushed to the download target."""
+
    def get_active():
        """Return True if the download is currently active, False if not."""
hunk ./src/allmydata/interfaces.py 2719
+
    def get_counter():
        """Each download status gets a unique number: this method returns
        that number. This provides a handle to this particular download, so a
hunk ./src/allmydata/interfaces.py 2725
        web page can generate a suitable hyperlink."""

+
class IServermapUpdaterStatus(Interface):
    pass
hunk ./src/allmydata/interfaces.py 2728
+
+
class IPublishStatus(Interface):
    pass
hunk ./src/allmydata/interfaces.py 2732
+
+
class IRetrieveStatus(Interface):
    pass

hunk ./src/allmydata/interfaces.py 2737
+
class NotCapableError(Exception):
    """You have tried to write to a read-only node."""

hunk ./src/allmydata/interfaces.py 2741
+
class BadWriteEnablerError(Exception):
    pass

hunk ./src/allmydata/interfaces.py 2745
-class RIControlClient(RemoteInterface):

hunk ./src/allmydata/interfaces.py 2746
+class RIControlClient(RemoteInterface):
    def wait_for_client_connections(num_clients=int):
        """Do not return until we have connections to at least NUM_CLIENTS
        storage servers.
hunk ./src/allmydata/interfaces.py 2801

    return DictOf(str, float)

+
UploadResults = Any() #DictOf(str, str)

hunk ./src/allmydata/interfaces.py 2804
+
class RIEncryptedUploadable(RemoteInterface):
    __remote_name__ = "RIEncryptedUploadable.tahoe.allmydata.com"

hunk ./src/allmydata/interfaces.py 2877
    """
    return DictOf(str, DictOf(str, ChoiceOf(float, int, long, None)))

+
class RIStatsGatherer(RemoteInterface):
    __remote_name__ = "RIStatsGatherer.tahoe.allmydata.com"
    """
hunk ./src/allmydata/interfaces.py 2917
class FileTooLargeError(Exception):
    pass

+
class IValidatedThingProxy(Interface):
    def start():
        """ Acquire a thing and validate it. Return a deferred that is
hunk ./src/allmydata/interfaces.py 2924
        eventually fired with self if the thing is valid or errbacked if it
        can't be acquired or validated."""

+
class InsufficientVersionError(Exception):
    def __init__(self, needed, got):
        self.needed = needed
hunk ./src/allmydata/interfaces.py 2933
        return "InsufficientVersionError(need '%s', got %s)" % (self.needed,
                                                                self.got)

+
class EmptyPathnameComponentError(Exception):
    """The webapi disallows empty pathname components."""
hunk ./src/allmydata/test/test_crawler.py 21
class BucketEnumeratingCrawler(ShareCrawler):
    cpu_slice = 500 # make sure it can complete in a single slice
    slow_start = 0
+
    def __init__(self, *args, **kwargs):
        ShareCrawler.__init__(self, *args, **kwargs)
        self.all_buckets = []
hunk ./src/allmydata/test/test_crawler.py 33
    def finished_cycle(self, cycle):
        eventually(self.finished_d.callback, None)

+
class PacedCrawler(ShareCrawler):
    cpu_slice = 500 # make sure it can complete in a single slice
    slow_start = 0
+
    def __init__(self, *args, **kwargs):
        ShareCrawler.__init__(self, *args, **kwargs)
        self.countdown = 6
hunk ./src/allmydata/test/test_crawler.py 51
        if self.countdown == 0:
            # force a timeout. We restore it in yielding()
            self.cpu_slice = -1.0
+
    def yielding(self, sleep_time):
        self.cpu_slice = 500
        if self.yield_cb:
hunk ./src/allmydata/test/test_crawler.py 56
            self.yield_cb()
+
    def finished_cycle(self, cycle):
        eventually(self.finished_d.callback, None)

hunk ./src/allmydata/test/test_crawler.py 60
+
class ConsumingCrawler(ShareCrawler):
    cpu_slice = 0.5
    allowed_cpu_percentage = 0.5
hunk ./src/allmydata/test/test_crawler.py 79
        elapsed = time.time() - start
        self.accumulated += elapsed
        self.last_yield += elapsed
+
    def finished_cycle(self, cycle):
        self.cycles += 1
hunk ./src/allmydata/test/test_crawler.py 82
+
    def yielding(self, sleep_time):
        self.last_yield = 0.0

hunk ./src/allmydata/test/test_crawler.py 86
+
class OneShotCrawler(ShareCrawler):
    cpu_slice = 500 # make sure it can complete in a single slice
    slow_start = 0
+
    def __init__(self, *args, **kwargs):
        ShareCrawler.__init__(self, *args, **kwargs)
        self.counter = 0
hunk ./src/allmydata/test/test_crawler.py 98

    def process_shareset(self, cycle, prefix, shareset):
        self.counter += 1
+
    def finished_cycle(self, cycle):
        self.finished_d.callback(None)
        self.disownServiceParent()
hunk ./src/allmydata/test/test_crawler.py 103

+
class Basic(unittest.TestCase, StallMixin, pollmixin.PollMixin):
    def setUp(self):
        self.s = service.MultiService()
hunk ./src/allmydata/test/test_crawler.py 114

    def si(self, i):
        return hashutil.storage_index_hash(str(i))
+
    def rs(self, i, serverid):
        return hashutil.bucket_renewal_secret_hash(str(i), serverid)
hunk ./src/allmydata/test/test_crawler.py 117
+
    def cs(self, i, serverid):
        return hashutil.bucket_cancel_secret_hash(str(i), serverid)

hunk ./src/allmydata/test/test_storage.py 39
from allmydata.test.no_network import NoNetworkServer
from allmydata.web.storage import StorageStatus, remove_prefix

+
class Marker:
    pass
hunk ./src/allmydata/test/test_storage.py 42
+
+
class FakeCanary:
    def __init__(self, ignore_disconnectors=False):
        self.ignore = ignore_disconnectors
hunk ./src/allmydata/test/test_storage.py 59
            return
        del self.disconnectors[marker]

+
class FakeStatsProvider:
    def count(self, name, delta=1):
        pass
hunk ./src/allmydata/test/test_storage.py 66
    def register_producer(self, producer):
        pass

+
class Bucket(unittest.TestCase):
    def make_workdir(self, name):
        basedir = FilePath("storage").child("Bucket").child(name)
hunk ./src/allmydata/test/test_storage.py 165
        result_of_read = br.remote_read(0, len(share_data)+1)
        self.failUnlessEqual(result_of_read, share_data)

+
class RemoteBucket:

    def __init__(self):
hunk ./src/allmydata/test/test_storage.py 309
        return self._do_test_readwrite("test_readwrite_v2",
                                       0x44, WriteBucketProxy_v2, ReadBucketProxy)

+
class Server(unittest.TestCase):

    def setUp(self):
hunk ./src/allmydata/test/test_storage.py 780
        self.failUnlessIn("This share tastes like dust.", report)


-
class MutableServer(unittest.TestCase):

    def setUp(self):
hunk ./src/allmydata/test/test_storage.py 1407
        # header.
        self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:])

-
    def tearDown(self):
        self.sparent.stopService()
        fileutil.fp_remove(self.workdir("MDMFProxies storage test server"))
hunk ./src/allmydata/test/test_storage.py 1411

-
    def write_enabler(self, we_tag):
        return hashutil.tagged_hash("we_blah", we_tag)

hunk ./src/allmydata/test/test_storage.py 1414
-
    def renew_secret(self, tag):
        return hashutil.tagged_hash("renew_blah", str(tag))

hunk ./src/allmydata/test/test_storage.py 1417
-
    def cancel_secret(self, tag):
        return hashutil.tagged_hash("cancel_blah", str(tag))

hunk ./src/allmydata/test/test_storage.py 1420
-
    def workdir(self, name):
        return FilePath("storage").child("MDMFProxies").child(name)

hunk ./src/allmydata/test/test_storage.py 1430
        ss.setServiceParent(self.sparent)
        return ss

-
    def build_test_mdmf_share(self, tail_segment=False, empty=False):
        # Start with the checkstring
        data = struct.pack(">BQ32s",
hunk ./src/allmydata/test/test_storage.py 1527
        data += self.block_hash_tree_s
        return data

-
    def write_test_share_to_server(self,
                                   storage_index,
                                   tail_segment=False,
hunk ./src/allmydata/test/test_storage.py 1548
        results = write(storage_index, self.secrets, tws, readv)
        self.failUnless(results[0])

-
    def build_test_sdmf_share(self, empty=False):
        if empty:
            sharedata = ""
hunk ./src/allmydata/test/test_storage.py 1598
        self.offsets['EOF'] = eof_offset
        return final_share

-
    def write_sdmf_share_to_server(self,
                                   storage_index,
                                   empty=False):
hunk ./src/allmydata/test/test_storage.py 1613
        results = write(storage_index, self.secrets, tws, readv)
        self.failUnless(results[0])

-
    def test_read(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1682
            self.failUnlessEqual(checkstring, checkstring))
        return d

-
    def test_read_with_different_tail_segment_size(self):
        self.write_test_share_to_server("si1", tail_segment=True)
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1693
        d.addCallback(_check_tail_segment)
        return d

-
    def test_get_block_with_invalid_segnum(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1703
                            mr.get_block_and_salt, 7))
        return d

-
    def test_get_encoding_parameters_first(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1715
        d.addCallback(_check_encoding_parameters)
        return d

-
    def test_get_seqnum_first(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1723
            self.failUnlessEqual(seqnum, 0))
        return d

-
    def test_get_root_hash_first(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1731
            self.failUnlessEqual(root_hash, self.root_hash))
        return d

-
    def test_get_checkstring_first(self):
        self.write_test_share_to_server("si1")
        mr = MDMFSlotReadProxy(self.rref, "si1", 0)
hunk ./src/allmydata/test/test_storage.py 1739
            self.failUnlessEqual(checkstring, self.checkstring))
        return d

-
    def test_write_read_vectors(self):
        # When writing for us, the storage server will return to us a
        # read vector, along with its result. If a write fails because
hunk ./src/allmydata/test/test_storage.py 1777
        # The checkstring remains the same for the rest of the process.
        return d

-
    def test_private_key_after_share_hash_chain(self):
        mw = self._make_new_mw("si1", 0)
        d = defer.succeed(None)
hunk ./src/allmydata/test/test_storage.py 1795
                          mw.put_encprivkey, self.encprivkey))
        return d

-
    def test_signature_after_verification_key(self):
        mw = self._make_new_mw("si1", 0)
        d = defer.succeed(None)
hunk ./src/allmydata/test/test_storage.py 1821
                          mw.put_signature, self.signature))
        return d

-
    def test_uncoordinated_write(self):
        # Make two mutable writers, both pointing to the same storage
        # server, both at the same storage index, and try writing to the
hunk ./src/allmydata/test/test_storage.py 1853
        d.addCallback(_check_failure)
        return d

-
    def test_invalid_salt_size(self):
        # Salts need to be 16 bytes in size. Writes that attempt to
        # write more or less than this should be rejected.
hunk ./src/allmydata/test/test_storage.py 1871
                          another_invalid_salt))
---|
10428 | return d |
---|
10429 | |
---|
10430 | - |
---|
10431 | def test_write_test_vectors(self): |
---|
10432 | # If we give the write proxy a bogus test vector at |
---|
10433 | # any point during the process, it should fail to write when we |
---|
10434 | hunk ./src/allmydata/test/test_storage.py 1904 |
---|
10435 | d.addCallback(_check_success) |
---|
10436 | return d |
---|
10437 | |
---|
10438 | - |
---|
10439 | def serialize_blockhashes(self, blockhashes): |
---|
10440 | return "".join(blockhashes) |
---|
10441 | |
---|
10442 | hunk ./src/allmydata/test/test_storage.py 1907 |
---|
10443 | - |
---|
10444 | def serialize_sharehashes(self, sharehashes): |
---|
10445 | ret = "".join([struct.pack(">H32s", i, sharehashes[i]) |
---|
10446 | for i in sorted(sharehashes.keys())]) |
---|
10447 | hunk ./src/allmydata/test/test_storage.py 1912 |
---|
10448 | return ret |
---|
10449 | |
---|
10450 | - |
---|
10451 | def test_write(self): |
---|
10452 | # This translates to a file with 6 6-byte segments, and with 2-byte |
---|
10453 | # blocks. |
---|
10454 | hunk ./src/allmydata/test/test_storage.py 2043 |
---|
10455 | 6, datalength) |
---|
10456 | return mw |
---|
10457 | |
---|
10458 | - |
---|
10459 | def test_write_rejected_with_too_many_blocks(self): |
---|
10460 | mw = self._make_new_mw("si0", 0) |
---|
10461 | |
---|
10462 | hunk ./src/allmydata/test/test_storage.py 2059 |
---|
10463 | mw.put_block, self.block, 7, self.salt)) |
---|
10464 | return d |
---|
10465 | |
---|
10466 | - |
---|
10467 | def test_write_rejected_with_invalid_salt(self): |
---|
10468 | # Try writing an invalid salt. Salts are 16 bytes -- any more or |
---|
10469 | # less should cause an error. |
---|
10470 | hunk ./src/allmydata/test/test_storage.py 2070 |
---|
10471 | None, mw.put_block, self.block, 7, bad_salt)) |
---|
10472 | return d |
---|
10473 | |
---|
10474 | - |
---|
10475 | def test_write_rejected_with_invalid_root_hash(self): |
---|
10476 | # Try writing an invalid root hash. This should be SHA256d, and |
---|
10477 | # 32 bytes long as a result. |
---|
10478 | hunk ./src/allmydata/test/test_storage.py 2095 |
---|
10479 | None, mw.put_root_hash, invalid_root_hash)) |
---|
10480 | return d |
---|
10481 | |
---|
10482 | - |
---|
10483 | def test_write_rejected_with_invalid_blocksize(self): |
---|
10484 | # The blocksize implied by the writer that we get from |
---|
10485 | # _make_new_mw is 2bytes -- any more or any less than this |
---|
10486 | hunk ./src/allmydata/test/test_storage.py 2128 |
---|
10487 | mw.put_block(valid_block, 5, self.salt)) |
---|
10488 | return d |
---|
10489 | |
---|
10490 | - |
---|
10491 | def test_write_enforces_order_constraints(self): |
---|
10492 | # We require that the MDMFSlotWriteProxy be interacted with in a |
---|
10493 | # specific way. |
---|
10494 | hunk ./src/allmydata/test/test_storage.py 2213 |
---|
10495 | mw0.put_verification_key(self.verification_key)) |
---|
10496 | return d |
---|
10497 | |
---|
10498 | - |
---|
10499 | def test_end_to_end(self): |
---|
10500 | mw = self._make_new_mw("si1", 0) |
---|
10501 | # Write a share using the mutable writer, and make sure that the |
---|
10502 | hunk ./src/allmydata/test/test_storage.py 2378 |
---|
10503 | self.failUnlessEqual(root_hash, self.root_hash, root_hash)) |
---|
10504 | return d |
---|
10505 | |
---|
10506 | - |
---|
10507 | def test_only_reads_one_segment_sdmf(self): |
---|
10508 | # SDMF shares have only one segment, so it doesn't make sense to |
---|
10509 | # read more segments than that. The reader should know this and |
---|
10510 | hunk ./src/allmydata/test/test_storage.py 2395 |
---|
10511 | mr.get_block_and_salt, 1)) |
---|
10512 | return d |
---|
10513 | |
---|
10514 | - |
---|
10515 | def test_read_with_prefetched_mdmf_data(self): |
---|
10516 | # The MDMFSlotReadProxy will prefill certain fields if you pass |
---|
10517 | # it data that you have already fetched. This is useful for |
---|
10518 | hunk ./src/allmydata/test/test_storage.py 2459 |
---|
10519 | d.addCallback(_check_block_and_salt) |
---|
10520 | return d |
---|
10521 | |
---|
10522 | - |
---|
10523 | def test_read_with_prefetched_sdmf_data(self): |
---|
10524 | sdmf_data = self.build_test_sdmf_share() |
---|
10525 | self.write_sdmf_share_to_server("si1") |
---|
10526 | hunk ./src/allmydata/test/test_storage.py 2522 |
---|
10527 | d.addCallback(_check_block_and_salt) |
---|
10528 | return d |
---|
10529 | |
---|
10530 | - |
---|
10531 | def test_read_with_empty_mdmf_file(self): |
---|
10532 | # Some tests upload a file with no contents to test things |
---|
10533 | # unrelated to the actual handling of the content of the file. |
---|
10534 | hunk ./src/allmydata/test/test_storage.py 2550 |
---|
10535 | mr.get_block_and_salt, 0)) |
---|
10536 | return d |
---|
10537 | |
---|
10538 | - |
---|
10539 | def test_read_with_empty_sdmf_file(self): |
---|
10540 | self.write_sdmf_share_to_server("si1", empty=True) |
---|
10541 | mr = MDMFSlotReadProxy(self.rref, "si1", 0) |
---|
10542 | hunk ./src/allmydata/test/test_storage.py 2575 |
---|
10543 | mr.get_block_and_salt, 0)) |
---|
10544 | return d |
---|
10545 | |
---|
10546 | - |
---|
10547 | def test_verinfo_with_sdmf_file(self): |
---|
10548 | self.write_sdmf_share_to_server("si1") |
---|
10549 | mr = MDMFSlotReadProxy(self.rref, "si1", 0) |
---|
10550 | hunk ./src/allmydata/test/test_storage.py 2615 |
---|
10551 | d.addCallback(_check_verinfo) |
---|
10552 | return d |
---|
10553 | |
---|
10554 | - |
---|
10555 | def test_verinfo_with_mdmf_file(self): |
---|
10556 | self.write_test_share_to_server("si1") |
---|
10557 | mr = MDMFSlotReadProxy(self.rref, "si1", 0) |
---|
10558 | hunk ./src/allmydata/test/test_storage.py 2653 |
---|
10559 | d.addCallback(_check_verinfo) |
---|
10560 | return d |
---|
10561 | |
---|
10562 | - |
---|
10563 | def test_sdmf_writer(self): |
---|
10564 | # Go through the motions of writing an SDMF share to the storage |
---|
10565 | # server. Then read the storage server to see that the share got |
---|
10566 | hunk ./src/allmydata/test/test_storage.py 2696 |
---|
10567 | d.addCallback(_then) |
---|
10568 | return d |
---|
10569 | |
---|
10570 | - |
---|
10571 | def test_sdmf_writer_preexisting_share(self): |
---|
10572 | data = self.build_test_sdmf_share() |
---|
10573 | self.write_sdmf_share_to_server("si1") |
---|
10574 | hunk ./src/allmydata/test/test_storage.py 2839 |
---|
10575 | self.failUnless(output["get"]["99_0_percentile"] is None, output) |
---|
10576 | self.failUnless(output["get"]["99_9_percentile"] is None, output) |
---|
10577 | |
---|
10578 | + |
---|
10579 | def remove_tags(s): |
---|
10580 | s = re.sub(r'<[^>]*>', ' ', s) |
---|
10581 | s = re.sub(r'\s+', ' ', s) |
---|
10582 | hunk ./src/allmydata/test/test_storage.py 2845 |
---|
10583 | return s |
---|
10584 | |
---|
10585 | + |
---|
10586 | class MyBucketCountingCrawler(BucketCountingCrawler): |
---|
10587 | def finished_prefix(self, cycle, prefix): |
---|
10588 | BucketCountingCrawler.finished_prefix(self, cycle, prefix) |
---|
10589 | hunk ./src/allmydata/test/test_storage.py 2974 |
---|
10590 | backend = DiskBackend(fp) |
---|
10591 | ss = MyStorageServer("\x00" * 20, backend, fp) |
---|
10592 | ss.bucket_counter.slow_start = 0 |
---|
10593 | + |
---|
10594 | # these will be fired inside finished_prefix() |
---|
10595 | hooks = ss.bucket_counter.hook_ds = [defer.Deferred() for i in range(3)] |
---|
10596 | w = StorageStatus(ss) |
---|
10597 | hunk ./src/allmydata/test/test_storage.py 3008 |
---|
10598 | ss.setServiceParent(self.s) |
---|
10599 | return d |
---|
10600 | |
---|
10601 | + |
---|
10602 | class InstrumentedLeaseCheckingCrawler(LeaseCheckingCrawler): |
---|
10603 | stop_after_first_bucket = False |
---|
10604 | |
---|
10605 | hunk ./src/allmydata/test/test_storage.py 3017 |
---|
10606 | if self.stop_after_first_bucket: |
---|
10607 | self.stop_after_first_bucket = False |
---|
10608 | self.cpu_slice = -1.0 |
---|
10609 | + |
---|
10610 | def yielding(self, sleep_time): |
---|
10611 | if not self.stop_after_first_bucket: |
---|
10612 | self.cpu_slice = 500 |
---|
10613 | hunk ./src/allmydata/test/test_storage.py 3028 |
---|
10614 | |
---|
10615 | class BrokenStatResults: |
---|
10616 | pass |
---|
10617 | + |
---|
10618 | class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler): |
---|
10619 | def stat(self, fn): |
---|
10620 | s = os.stat(fn) |
---|
10621 | hunk ./src/allmydata/test/test_storage.py 3044 |
---|
10622 | class No_ST_BLOCKS_StorageServer(StorageServer): |
---|
10623 | LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler |
---|
10624 | |
---|
10625 | + |
---|
10626 | class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin): |
---|
10627 | |
---|
10628 | def setUp(self): |
---|
10629 | hunk ./src/allmydata/test/test_storage.py 3891 |
---|
10630 | backend = DiskBackend(fp) |
---|
10631 | ss = InstrumentedStorageServer("\x00" * 20, backend, fp) |
---|
10632 | w = StorageStatus(ss) |
---|
10633 | + |
---|
10634 | # make it start sooner than usual. |
---|
10635 | lc = ss.lease_checker |
---|
10636 | lc.stop_after_first_bucket = True |
---|
10637 | hunk ./src/allmydata/util/fileutil.py 460 |
---|
10638 | 'avail': avail, |
---|
10639 | } |
---|
10640 | |
---|
10641 | + |
---|
10642 | def get_available_space(whichdirfp, reserved_space): |
---|
10643 | """Returns available space for share storage in bytes, or None if no |
---|
10644 | API to get this information is available. |
---|
10645 | } |
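The hunk above touches fileutil.get_available_space, which the disk backend uses to decide how much share data it can accept. A minimal usage sketch, assuming a storage/shares directory and an arbitrary reservation figure; per the docstring, the helper returns None on platforms with no disk-space API:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    sharedir = FilePath("storage").child("shares")
    reserved = 10*1000*1000   # keep 10 MB free for the OS; illustrative figure
    avail = fileutil.get_available_space(sharedir, reserved)
    if avail is None:
        print "no platform API for measuring free disk space"
    else:
        print "can accept up to %d bytes of shares" % avail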
---|
10646 | [mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393 |
---|
10647 | david-sarah@jacaranda.org**20110923040825 |
---|
10648 | Ignore-this: 135da94bd344db6ccd59a576b54901c1 |
---|
10649 | ] { |
---|
10650 | hunk ./src/allmydata/mutable/publish.py 6 |
---|
10651 | import os, time |
---|
10652 | from StringIO import StringIO |
---|
10653 | from itertools import count |
---|
10654 | +from copy import copy |
---|
10655 | from zope.interface import implements |
---|
10656 | from twisted.internet import defer |
---|
10657 | from twisted.python import failure |
---|
10658 | merger 0.0 ( |
---|
10659 | hunk ./src/allmydata/mutable/publish.py 868 |
---|
10660 | - |
---|
10661 | - # TODO: Bad, since we remove from this same dict. We need to |
---|
10662 | - # make a copy, or just use a non-iterated value. |
---|
10663 | - for (shnum, writer) in self.writers.iteritems(): |
---|
10664 | + for (shnum, writer) in self.writers.copy().iteritems(): |
---|
10665 | hunk ./src/allmydata/mutable/publish.py 868 |
---|
10666 | - |
---|
10667 | - # TODO: Bad, since we remove from this same dict. We need to |
---|
10668 | - # make a copy, or just use a non-iterated value. |
---|
10669 | - for (shnum, writer) in self.writers.iteritems(): |
---|
10670 | + for (shnum, writer) in copy(self.writers).iteritems(): |
---|
10671 | ) |
---|
10672 | } |
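Both merger alternatives above make the same fix: iterate over a snapshot of self.writers so that entries can be removed from the original dict inside the loop. A standalone sketch of the failure mode and of the fix, with toy data (Python 2):

    writers = {0: "w0", 1: "w1", 2: "w2"}

    # Deleting from the dict being iterated over raises
    # "RuntimeError: dictionary changed size during iteration":
    #
    #   for shnum, writer in writers.iteritems():
    #       if shnum == 1:
    #           del writers[shnum]

    # Iterating over a shallow copy, as the patch does, is safe:
    for shnum, writer in writers.copy().iteritems():
        if shnum == 1:
            del writers[shnum]
    assert sorted(writers.keys()) == [0, 2]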
---|
10673 | [A few comment cleanups. refs #999 |
---|
10674 | david-sarah@jacaranda.org**20110923041003 |
---|
10675 | Ignore-this: f574b4a3954b6946016646011ad15edf |
---|
10676 | ] { |
---|
10677 | hunk ./src/allmydata/storage/backends/disk/disk_backend.py 17 |
---|
10678 | |
---|
10679 | # storage/ |
---|
10680 | # storage/shares/incoming |
---|
10681 | -# incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will |
---|
10682 | -# be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success |
---|
10683 | -# storage/shares/$START/$STORAGEINDEX |
---|
10684 | -# storage/shares/$START/$STORAGEINDEX/$SHARENUM |
---|
10685 | +# incoming/ holds temp dirs named $PREFIX/$STORAGEINDEX/$SHNUM which will |
---|
10686 | +# be moved to storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM upon success |
---|
10687 | +# storage/shares/$PREFIX/$STORAGEINDEX |
---|
10688 | +# storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM |
---|
10689 | |
---|
10690 | hunk ./src/allmydata/storage/backends/disk/disk_backend.py 22 |
---|
10691 | -# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2 |
---|
10692 | +# Where "$PREFIX" denotes the first 10 bits worth of $STORAGEINDEX (that's 2 |
---|
10693 | # base-32 chars). |
---|
10694 | # $SHARENUM matches this regex: |
---|
10695 | NUM_RE=re.compile("^[0-9]+$") |
---|
10696 | hunk ./src/allmydata/storage/backends/disk/immutable.py 16 |
---|
10697 | from allmydata.storage.lease import LeaseInfo |
---|
10698 | |
---|
10699 | |
---|
10700 | -# each share file (in storage/shares/$SI/$SHNUM) contains lease information |
---|
10701 | -# and share data. The share data is accessed by RIBucketWriter.write and |
---|
10702 | -# RIBucketReader.read . The lease information is not accessible through these |
---|
10703 | -# interfaces. |
---|
10704 | +# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains |
---|
10705 | +# lease information and share data. The share data is accessed by |
---|
10706 | +# RIBucketWriter.write and RIBucketReader.read . The lease information is not |
---|
10707 | +# accessible through these remote interfaces. |
---|
10708 | |
---|
10709 | # The share file has the following layout: |
---|
10710 | # 0x00: share file version number, four bytes, current version is 1 |
---|
10711 | hunk ./src/allmydata/storage/backends/disk/immutable.py 211 |
---|
10712 | |
---|
10713 | # These lease operations are intended for use by disk_backend.py. |
---|
10714 | # Other clients should not depend on the fact that the disk backend |
---|
10715 | - # stores leases in share files. XXX bucket.py also relies on this. |
---|
10716 | + # stores leases in share files. |
---|
10717 | + # XXX BucketWriter in bucket.py also relies on add_lease. |
---|
10718 | |
---|
10719 | def get_leases(self): |
---|
10720 | """Yields a LeaseInfo instance for all leases.""" |
---|
10721 | } |
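The $PREFIX convention documented above shards the shares directory by the first two base-32 characters of the storage index, i.e. its first 10 bits. A sketch of deriving that prefix with si_b2a from allmydata.storage.common (the shares_prefix helper name is ours, for illustration):

    from allmydata.storage.common import si_b2a

    def shares_prefix(storageindex):
        # storageindex is the binary SI; the first 2 characters of its
        # base-32 form name the storage/shares/$PREFIX/ directory.
        return si_b2a(storageindex)[:2]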
---|
10722 | [Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends. refs #999 |
---|
10723 | david-sarah@jacaranda.org**20110923041115 |
---|
10724 | Ignore-this: 782b49f243bd98fcb6c249f8e40fd9f |
---|
10725 | ] { |
---|
10726 | hunk ./src/allmydata/storage/backends/base.py 4 |
---|
10727 | |
---|
10728 | from twisted.application import service |
---|
10729 | |
---|
10730 | +from allmydata.util import fileutil, log, time_format |
---|
10731 | from allmydata.storage.common import si_b2a |
---|
10732 | from allmydata.storage.lease import LeaseInfo |
---|
10733 | from allmydata.storage.bucket import BucketReader |
---|
10734 | hunk ./src/allmydata/storage/backends/base.py 13 |
---|
10735 | class Backend(service.MultiService): |
---|
10736 | def __init__(self): |
---|
10737 | service.MultiService.__init__(self) |
---|
10738 | + self._corruption_advisory_dir = None |
---|
10739 | + |
---|
10740 | + def advise_corrupt_share(self, sharetype, storageindex, shnum, reason): |
---|
10741 | + if self._corruption_advisory_dir is not None: |
---|
10742 | + fileutil.fp_make_dirs(self._corruption_advisory_dir) |
---|
10743 | + now = time_format.iso_utc(sep="T") |
---|
10744 | + si_s = si_b2a(storageindex) |
---|
10745 | + |
---|
10746 | + # Windows can't handle colons in the filename. |
---|
10747 | + name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "") |
---|
10748 | + f = self._corruption_advisory_dir.child(name).open("w") |
---|
10749 | + try: |
---|
10750 | + f.write("report: Share Corruption\n") |
---|
10751 | + f.write("type: %s\n" % sharetype) |
---|
10752 | + f.write("storage_index: %s\n" % si_s) |
---|
10753 | + f.write("share_number: %d\n" % shnum) |
---|
10754 | + f.write("\n") |
---|
10755 | + f.write(reason) |
---|
10756 | + f.write("\n") |
---|
10757 | + finally: |
---|
10758 | + f.close() |
---|
10759 | + |
---|
10760 | + log.msg(format=("client claims corruption in (%(share_type)s) " + |
---|
10761 | + "%(si)s-%(shnum)d: %(reason)s"), |
---|
10762 | + share_type=sharetype, si=si_s, shnum=shnum, reason=reason, |
---|
10763 | + level=log.SCARY, umid="2fASGx") |
---|
10764 | |
---|
10765 | |
---|
10766 | class ShareSet(object): |
---|
10767 | hunk ./src/allmydata/storage/backends/disk/disk_backend.py 8 |
---|
10768 | |
---|
10769 | from zope.interface import implements |
---|
10770 | from allmydata.interfaces import IStorageBackend, IShareSet |
---|
10771 | -from allmydata.util import fileutil, log, time_format |
---|
10772 | +from allmydata.util import fileutil, log |
---|
10773 | from allmydata.storage.common import si_b2a, si_a2b |
---|
10774 | from allmydata.storage.bucket import BucketWriter |
---|
10775 | from allmydata.storage.backends.base import Backend, ShareSet |
---|
10776 | hunk ./src/allmydata/storage/backends/disk/disk_backend.py 125 |
---|
10777 | return 0 |
---|
10778 | return fileutil.get_available_space(self._sharedir, self._reserved_space) |
---|
10779 | |
---|
10780 | - def advise_corrupt_share(self, sharetype, storageindex, shnum, reason): |
---|
10781 | - fileutil.fp_make_dirs(self._corruption_advisory_dir) |
---|
10782 | - now = time_format.iso_utc(sep="T") |
---|
10783 | - si_s = si_b2a(storageindex) |
---|
10784 | - |
---|
10785 | - # Windows can't handle colons in the filename. |
---|
10786 | - name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "") |
---|
10787 | - f = self._corruption_advisory_dir.child(name).open("w") |
---|
10788 | - try: |
---|
10789 | - f.write("report: Share Corruption\n") |
---|
10790 | - f.write("type: %s\n" % sharetype) |
---|
10791 | - f.write("storage_index: %s\n" % si_s) |
---|
10792 | - f.write("share_number: %d\n" % shnum) |
---|
10793 | - f.write("\n") |
---|
10794 | - f.write(reason) |
---|
10795 | - f.write("\n") |
---|
10796 | - finally: |
---|
10797 | - f.close() |
---|
10798 | - |
---|
10799 | - log.msg(format=("client claims corruption in (%(share_type)s) " + |
---|
10800 | - "%(si)s-%(shnum)d: %(reason)s"), |
---|
10801 | - share_type=sharetype, si=si_s, shnum=shnum, reason=reason, |
---|
10802 | - level=log.SCARY, umid="SGx2fA") |
---|
10803 | - |
---|
10804 | |
---|
10805 | class DiskShareSet(ShareSet): |
---|
10806 | implements(IShareSet) |
---|
10807 | } |
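Backend.advise_corrupt_share, as moved above, names each advisory file after an ISO-8601 UTC timestamp plus the storage index and share number, stripping colons so the name is valid on Windows. A sketch of the naming scheme, with a made-up storage index string:

    from allmydata.util import time_format

    si_s = "whlgsqhs5e6y5o6gfi4hyhtq"   # hypothetical base-32 storage index
    shnum = 3
    now = time_format.iso_utc(sep="T")   # e.g. "2011-09-23T04:11:15.123456"
    name = ("%s--%s-%d" % (now, si_s, shnum)).replace(":", "")
    # e.g. "2011-09-23T041115.123456--whlgsqhs5e6y5o6gfi4hyhtq-3"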
---|
10808 | [Add incomplete S3 backend. refs #999 |
---|
10809 | david-sarah@jacaranda.org**20110923041314 |
---|
10810 | Ignore-this: b48df65699e3926dcbb87b5f755cdbf1 |
---|
10811 | ] { |
---|
10812 | adddir ./src/allmydata/storage/backends/s3 |
---|
10813 | addfile ./src/allmydata/storage/backends/s3/__init__.py |
---|
10814 | addfile ./src/allmydata/storage/backends/s3/immutable.py |
---|
10815 | hunk ./src/allmydata/storage/backends/s3/immutable.py 1 |
---|
10816 | + |
---|
10817 | +import struct |
---|
10818 | + |
---|
10819 | +from zope.interface import implements |
---|
10820 | + |
---|
10821 | +from allmydata.interfaces import IStoredShare |
---|
10822 | +from allmydata.util.assertutil import precondition |
---|
10823 | +from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError |
---|
10824 | + |
---|
10825 | + |
---|
10826 | +# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains |
---|
10827 | +# lease information [currently inaccessible] and share data. The share data is |
---|
10828 | +# accessed by RIBucketWriter.write and RIBucketReader.read . |
---|
10829 | + |
---|
10830 | +# The share file has the following layout: |
---|
10831 | +# 0x00: share file version number, four bytes, current version is 1 |
---|
10832 | +# 0x04: always zero (was share data length prior to Tahoe-LAFS v1.3.0) |
---|
10833 | +# 0x08: number of leases, four bytes big-endian |
---|
10834 | +# 0x0c: beginning of share data (see immutable.layout.WriteBucketProxy) |
---|
10835 | +# data_length+0x0c: first lease. Each lease record is 72 bytes. |
---|
10836 | + |
---|
10837 | + |
---|
10838 | +class ImmutableS3Share(object): |
---|
10839 | + implements(IStoredShare) |
---|
10840 | + |
---|
10841 | + sharetype = "immutable" |
---|
10842 | + LEASE_SIZE = struct.calcsize(">L32s32sL") # for compatibility |
---|
10843 | + |
---|
10844 | + |
---|
10845 | + def __init__(self, storageindex, shnum, s3bucket, create=False, max_size=None): |
---|
10846 | + """ |
---|
10847 | + If max_size is not None then I won't allow more than max_size to be written to me. |
---|
10848 | + """ |
---|
10849 | + precondition((max_size is not None) or not create, max_size, create) |
---|
10850 | + self._storageindex = storageindex |
---|
10851 | + self._max_size = max_size |
---|
10852 | + |
---|
10853 | + self._s3bucket = s3bucket |
---|
10854 | + si_s = si_b2a(storageindex) |
---|
10855 | + self._key = "storage/shares/%s/%s/%d" % (si_s[:2], si_s, shnum) |
---|
10856 | + self._shnum = shnum |
---|
10857 | + |
---|
10858 | + if create: |
---|
10859 | + # The second field, which was the four-byte share data length in |
---|
10860 | + # Tahoe-LAFS versions prior to 1.3.0, is not used; we always write 0. |
---|
10861 | + # We also write 0 for the number of leases. |
---|
10862 | + self._home.setContent(struct.pack(">LLL", 1, 0, 0)) |
---|
10863 | + self._end_offset = max_size + 0x0c |
---|
10864 | + |
---|
10865 | + # TODO: start write to S3. |
---|
10866 | + else: |
---|
10867 | + # TODO: get header |
---|
10868 | + header = "\x00"*12 |
---|
10869 | + (version, unused, num_leases) = struct.unpack(">LLL", header) |
---|
10870 | + |
---|
10871 | + if version != 1: |
---|
10872 | + msg = "sharefile %s had version %d but we wanted 1" % \ |
---|
10873 | + (self._key, version) |
---|
10874 | + raise UnknownImmutableContainerVersionError(msg) |
---|
10875 | + |
---|
10876 | + # We cannot write leases in share files, but allow them to be present |
---|
10877 | + # in case a share file is copied from a disk backend, or in case we |
---|
10878 | + # need them in future. |
---|
10879 | + # TODO: filesize = size of S3 object |
---|
10880 | + self._end_offset = filesize - (num_leases * self.LEASE_SIZE) |
---|
10881 | + self._data_offset = 0xc |
---|
10882 | + |
---|
10883 | + def __repr__(self): |
---|
10884 | + return ("<ImmutableS3Share %s:%r at %r>" |
---|
10885 | + % (si_b2a(self._storageindex), self._shnum, self._key)) |
---|
10886 | + |
---|
10887 | + def close(self): |
---|
10888 | + # TODO: finalize write to S3. |
---|
10889 | + pass |
---|
10890 | + |
---|
10891 | + def get_used_space(self): |
---|
10892 | + return self._size |
---|
10893 | + |
---|
10894 | + def get_storage_index(self): |
---|
10895 | + return self._storageindex |
---|
10896 | + |
---|
10897 | + def get_storage_index_string(self): |
---|
10898 | + return si_b2a(self._storageindex) |
---|
10899 | + |
---|
10900 | + def get_shnum(self): |
---|
10901 | + return self._shnum |
---|
10902 | + |
---|
10903 | + def unlink(self): |
---|
10904 | + # TODO: remove the S3 object. |
---|
10905 | + pass |
---|
10906 | + |
---|
10907 | + def get_allocated_size(self): |
---|
10908 | + return self._max_size |
---|
10909 | + |
---|
10910 | + def get_size(self): |
---|
10911 | + return self._size |
---|
10912 | + |
---|
10913 | + def get_data_length(self): |
---|
10914 | + return self._end_offset - self._data_offset |
---|
10915 | + |
---|
10916 | + def read_share_data(self, offset, length): |
---|
10917 | + precondition(offset >= 0) |
---|
10918 | + |
---|
10919 | + # Reads beyond the end of the data are truncated. Reads that start |
---|
10920 | + # beyond the end of the data return an empty string. |
---|
10921 | + seekpos = self._data_offset+offset |
---|
10922 | + actuallength = max(0, min(length, self._end_offset-seekpos)) |
---|
10923 | + if actuallength == 0: |
---|
10924 | + return "" |
---|
10925 | + |
---|
10926 | + # TODO: perform an S3 GET request, possibly with a Content-Range header. |
---|
10927 | + return "\x00"*actuallength |
---|
10928 | + |
---|
10929 | + def write_share_data(self, offset, data): |
---|
10930 | + assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size) |
---|
10931 | + |
---|
10932 | + # TODO: write data to S3. If offset > self._size, fill the space |
---|
10933 | + # between with zeroes. |
---|
10934 | + |
---|
10935 | + self._size = offset + len(data) |
---|
10936 | + |
---|
10937 | + def add_lease(self, lease_info): |
---|
10938 | + pass |
---|
10939 | addfile ./src/allmydata/storage/backends/s3/mutable.py |
---|
10940 | hunk ./src/allmydata/storage/backends/s3/mutable.py 1 |
---|
10941 | + |
---|
10942 | +import struct |
---|
10943 | + |
---|
10944 | +from zope.interface import implements |
---|
10945 | + |
---|
10946 | +from allmydata.interfaces import IStoredMutableShare, BadWriteEnablerError |
---|
10947 | +from allmydata.util import fileutil, idlib, log |
---|
10948 | +from allmydata.util.assertutil import precondition |
---|
10949 | +from allmydata.util.hashutil import constant_time_compare |
---|
10950 | +from allmydata.util.encodingutil import quote_filepath |
---|
10951 | +from allmydata.storage.common import si_b2a, UnknownMutableContainerVersionError, \ |
---|
10952 | + DataTooLargeError |
---|
10953 | +from allmydata.storage.lease import LeaseInfo |
---|
10954 | +from allmydata.storage.backends.base import testv_compare |
---|
10955 | + |
---|
10956 | + |
---|
10957 | +# The MutableS3Share is like the ImmutableS3Share, but used for mutable data. |
---|
10958 | +# It has a different layout. See docs/mutable.rst for more details. |
---|
10959 | + |
---|
10960 | +# # offset size name |
---|
10961 | +# 1 0 32 magic verstr "tahoe mutable container v1" plus binary |
---|
10962 | +# 2 32 20 write enabler's nodeid |
---|
10963 | +# 3 52 32 write enabler |
---|
10964 | +# 4 84 8 data size (actual share data present) (a) |
---|
10965 | +# 5 92 8 offset of (8) count of extra leases (after data) |
---|
10966 | +# 6 100 368 four leases, 92 bytes each |
---|
10967 | +# 0 4 ownerid (0 means "no lease here") |
---|
10968 | +# 4 4 expiration timestamp |
---|
10969 | +# 8 32 renewal token |
---|
10970 | +# 40 32 cancel token |
---|
10971 | +# 72 20 nodeid that accepted the tokens |
---|
10972 | +# 7 468 (a) data |
---|
10973 | +# 8 ?? 4 count of extra leases |
---|
10974 | +# 9 ?? n*92 extra leases |
---|
10975 | + |
---|
10976 | + |
---|
10977 | +# The struct module doc says that L's are 4 bytes in size, and that Q's are |
---|
10978 | +# 8 bytes in size. Since compatibility depends upon this, double-check it. |
---|
10979 | +assert struct.calcsize(">L") == 4, struct.calcsize(">L") |
---|
10980 | +assert struct.calcsize(">Q") == 8, struct.calcsize(">Q") |
---|
10981 | + |
---|
10982 | + |
---|
10983 | +class MutableS3Share(object): |
---|
10984 | + implements(IStoredMutableShare) |
---|
10985 | + |
---|
10986 | + sharetype = "mutable" |
---|
10987 | + DATA_LENGTH_OFFSET = struct.calcsize(">32s20s32s") |
---|
10988 | + EXTRA_LEASE_OFFSET = DATA_LENGTH_OFFSET + 8 |
---|
10989 | + HEADER_SIZE = struct.calcsize(">32s20s32sQQ") # doesn't include leases |
---|
10990 | + LEASE_SIZE = struct.calcsize(">LL32s32s20s") |
---|
10991 | + assert LEASE_SIZE == 92 |
---|
10992 | + DATA_OFFSET = HEADER_SIZE + 4*LEASE_SIZE |
---|
10993 | + assert DATA_OFFSET == 468, DATA_OFFSET |
---|
10994 | + |
---|
10995 | + # our sharefiles start with a recognizable string, plus some random |
---|
10996 | + # binary data to reduce the chance that a regular text file will look |
---|
10997 | + # like a sharefile. |
---|
10998 | + MAGIC = "Tahoe mutable container v1\n" + "\x75\x09\x44\x03\x8e" |
---|
10999 | + assert len(MAGIC) == 32 |
---|
11000 | + MAX_SIZE = 2*1000*1000*1000 # 2GB, kind of arbitrary |
---|
11001 | + # TODO: decide upon a policy for max share size |
---|
11002 | + |
---|
11003 | + def __init__(self, storageindex, shnum, home, parent=None): |
---|
11004 | + self._storageindex = storageindex |
---|
11005 | + self._shnum = shnum |
---|
11006 | + self._home = home |
---|
11007 | + if self._home.exists(): |
---|
11008 | + # we don't cache anything, just check the magic |
---|
11009 | + f = self._home.open('rb') |
---|
11010 | + try: |
---|
11011 | + data = f.read(self.HEADER_SIZE) |
---|
11012 | + (magic, |
---|
11013 | + write_enabler_nodeid, write_enabler, |
---|
11014 | + data_length, extra_lease_offset) = \ |
---|
11015 | + struct.unpack(">32s20s32sQQ", data) |
---|
11016 | + if magic != self.MAGIC: |
---|
11017 | + msg = "sharefile %s had magic '%r' but we wanted '%r'" % \ |
---|
11018 | + (quote_filepath(self._home), magic, self.MAGIC) |
---|
11019 | + raise UnknownMutableContainerVersionError(msg) |
---|
11020 | + finally: |
---|
11021 | + f.close() |
---|
11022 | + self.parent = parent # for logging |
---|
11023 | + |
---|
11024 | + def log(self, *args, **kwargs): |
---|
11025 | + if self.parent: |
---|
11026 | + return self.parent.log(*args, **kwargs) |
---|
11027 | + |
---|
11028 | + def create(self, serverid, write_enabler): |
---|
11029 | + assert not self._home.exists() |
---|
11030 | + data_length = 0 |
---|
11031 | + extra_lease_offset = (self.HEADER_SIZE |
---|
11032 | + + 4 * self.LEASE_SIZE |
---|
11033 | + + data_length) |
---|
11034 | + assert extra_lease_offset == self.DATA_OFFSET # true at creation |
---|
11035 | + num_extra_leases = 0 |
---|
11036 | + f = self._home.open('wb') |
---|
11037 | + try: |
---|
11038 | + header = struct.pack(">32s20s32sQQ", |
---|
11039 | + self.MAGIC, serverid, write_enabler, |
---|
11040 | + data_length, extra_lease_offset, |
---|
11041 | + ) |
---|
11042 | + leases = ("\x00"*self.LEASE_SIZE) * 4 |
---|
11043 | + f.write(header + leases) |
---|
11044 | + # data goes here, empty after creation |
---|
11045 | + f.write(struct.pack(">L", num_extra_leases)) |
---|
11046 | + # extra leases go here, none at creation |
---|
11047 | + finally: |
---|
11048 | + f.close() |
---|
11049 | + |
---|
11050 | + def __repr__(self): |
---|
11051 | + return ("<MutableS3Share %s:%r at %s>" |
---|
11052 | + % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home))) |
---|
11053 | + |
---|
11054 | + def get_used_space(self): |
---|
11055 | + return fileutil.get_used_space(self._home) |
---|
11056 | + |
---|
11057 | + def get_storage_index(self): |
---|
11058 | + return self._storageindex |
---|
11059 | + |
---|
11060 | + def get_storage_index_string(self): |
---|
11061 | + return si_b2a(self._storageindex) |
---|
11062 | + |
---|
11063 | + def get_shnum(self): |
---|
11064 | + return self._shnum |
---|
11065 | + |
---|
11066 | + def unlink(self): |
---|
11067 | + self._home.remove() |
---|
11068 | + |
---|
11069 | + def _read_data_length(self, f): |
---|
11070 | + f.seek(self.DATA_LENGTH_OFFSET) |
---|
11071 | + (data_length,) = struct.unpack(">Q", f.read(8)) |
---|
11072 | + return data_length |
---|
11073 | + |
---|
11074 | + def _write_data_length(self, f, data_length): |
---|
11075 | + f.seek(self.DATA_LENGTH_OFFSET) |
---|
11076 | + f.write(struct.pack(">Q", data_length)) |
---|
11077 | + |
---|
11078 | + def _read_share_data(self, f, offset, length): |
---|
11079 | + precondition(offset >= 0) |
---|
11080 | + data_length = self._read_data_length(f) |
---|
11081 | + if offset+length > data_length: |
---|
11082 | + # reads beyond the end of the data are truncated. Reads that |
---|
11083 | + # start beyond the end of the data return an empty string. |
---|
11084 | + length = max(0, data_length-offset) |
---|
11085 | + if length == 0: |
---|
11086 | + return "" |
---|
11087 | + precondition(offset+length <= data_length) |
---|
11088 | + f.seek(self.DATA_OFFSET+offset) |
---|
11089 | + data = f.read(length) |
---|
11090 | + return data |
---|
11091 | + |
---|
11092 | + def _read_extra_lease_offset(self, f): |
---|
11093 | + f.seek(self.EXTRA_LEASE_OFFSET) |
---|
11094 | + (extra_lease_offset,) = struct.unpack(">Q", f.read(8)) |
---|
11095 | + return extra_lease_offset |
---|
11096 | + |
---|
11097 | + def _write_extra_lease_offset(self, f, offset): |
---|
11098 | + f.seek(self.EXTRA_LEASE_OFFSET) |
---|
11099 | + f.write(struct.pack(">Q", offset)) |
---|
11100 | + |
---|
11101 | + def _read_num_extra_leases(self, f): |
---|
11102 | + offset = self._read_extra_lease_offset(f) |
---|
11103 | + f.seek(offset) |
---|
11104 | + (num_extra_leases,) = struct.unpack(">L", f.read(4)) |
---|
11105 | + return num_extra_leases |
---|
11106 | + |
---|
11107 | + def _write_num_extra_leases(self, f, num_leases): |
---|
11108 | + extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11109 | + f.seek(extra_lease_offset) |
---|
11110 | + f.write(struct.pack(">L", num_leases)) |
---|
11111 | + |
---|
11112 | + def _change_container_size(self, f, new_container_size): |
---|
11113 | + if new_container_size > self.MAX_SIZE: |
---|
11114 | + raise DataTooLargeError() |
---|
11115 | + old_extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11116 | + new_extra_lease_offset = self.DATA_OFFSET + new_container_size |
---|
11117 | + if new_extra_lease_offset < old_extra_lease_offset: |
---|
11118 | + # TODO: allow containers to shrink. For now they remain large. |
---|
11119 | + return |
---|
11120 | + num_extra_leases = self._read_num_extra_leases(f) |
---|
11121 | + f.seek(old_extra_lease_offset) |
---|
11122 | + leases_size = 4 + num_extra_leases * self.LEASE_SIZE |
---|
11123 | + extra_lease_data = f.read(leases_size) |
---|
11124 | + |
---|
11125 | + # Zero out the old lease info (in order to minimize the chance that |
---|
11126 | + # it could accidentally be exposed to a reader later, re #1528). |
---|
11127 | + f.seek(old_extra_lease_offset) |
---|
11128 | + f.write('\x00' * leases_size) |
---|
11129 | + f.flush() |
---|
11130 | + |
---|
11131 | + # An interrupt here will corrupt the leases. |
---|
11132 | + |
---|
11133 | + f.seek(new_extra_lease_offset) |
---|
11134 | + f.write(extra_lease_data) |
---|
11135 | + self._write_extra_lease_offset(f, new_extra_lease_offset) |
---|
11136 | + |
---|
11137 | + def _write_share_data(self, f, offset, data): |
---|
11138 | + length = len(data) |
---|
11139 | + precondition(offset >= 0) |
---|
11140 | + data_length = self._read_data_length(f) |
---|
11141 | + extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11142 | + |
---|
11143 | + if offset+length >= data_length: |
---|
11144 | + # They are expanding their data size. |
---|
11145 | + |
---|
11146 | + if self.DATA_OFFSET+offset+length > extra_lease_offset: |
---|
11147 | + # TODO: allow containers to shrink. For now, they remain |
---|
11148 | + # large. |
---|
11149 | + |
---|
11150 | + # Their new data won't fit in the current container, so we |
---|
11151 | + # have to move the leases. With luck, they're expanding it |
---|
11152 | + # more than the size of the extra lease block, which will |
---|
11153 | + # minimize the corrupt-the-share window |
---|
11154 | + self._change_container_size(f, offset+length) |
---|
11155 | + extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11156 | + |
---|
11157 | + # an interrupt here is ok.. the container has been enlarged |
---|
11158 | + # but the data remains untouched |
---|
11159 | + |
---|
11160 | + assert self.DATA_OFFSET+offset+length <= extra_lease_offset |
---|
11161 | + # Their data now fits in the current container. We must write |
---|
11162 | + # their new data and modify the recorded data size. |
---|
11163 | + |
---|
11164 | + # Fill any newly exposed empty space with 0's. |
---|
11165 | + if offset > data_length: |
---|
11166 | + f.seek(self.DATA_OFFSET+data_length) |
---|
11167 | + f.write('\x00'*(offset - data_length)) |
---|
11168 | + f.flush() |
---|
11169 | + |
---|
11170 | + new_data_length = offset+length |
---|
11171 | + self._write_data_length(f, new_data_length) |
---|
11172 | + # an interrupt here will result in a corrupted share |
---|
11173 | + |
---|
11174 | + # now all that's left to do is write out their data |
---|
11175 | + f.seek(self.DATA_OFFSET+offset) |
---|
11176 | + f.write(data) |
---|
11177 | + return |
---|
11178 | + |
---|
11179 | + def _write_lease_record(self, f, lease_number, lease_info): |
---|
11180 | + extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11181 | + num_extra_leases = self._read_num_extra_leases(f) |
---|
11182 | + if lease_number < 4: |
---|
11183 | + offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE |
---|
11184 | + elif (lease_number-4) < num_extra_leases: |
---|
11185 | + offset = (extra_lease_offset |
---|
11186 | + + 4 |
---|
11187 | + + (lease_number-4)*self.LEASE_SIZE) |
---|
11188 | + else: |
---|
11189 | + # must add an extra lease record |
---|
11190 | + self._write_num_extra_leases(f, num_extra_leases+1) |
---|
11191 | + offset = (extra_lease_offset |
---|
11192 | + + 4 |
---|
11193 | + + (lease_number-4)*self.LEASE_SIZE) |
---|
11194 | + f.seek(offset) |
---|
11195 | + assert f.tell() == offset |
---|
11196 | + f.write(lease_info.to_mutable_data()) |
---|
11197 | + |
---|
11198 | + def _read_lease_record(self, f, lease_number): |
---|
11199 | + # returns a LeaseInfo instance, or None |
---|
11200 | + extra_lease_offset = self._read_extra_lease_offset(f) |
---|
11201 | + num_extra_leases = self._read_num_extra_leases(f) |
---|
11202 | + if lease_number < 4: |
---|
11203 | + offset = self.HEADER_SIZE + lease_number * self.LEASE_SIZE |
---|
11204 | + elif (lease_number-4) < num_extra_leases: |
---|
11205 | + offset = (extra_lease_offset |
---|
11206 | + + 4 |
---|
11207 | + + (lease_number-4)*self.LEASE_SIZE) |
---|
11208 | + else: |
---|
11209 | + raise IndexError("No such lease number %d" % lease_number) |
---|
11210 | + f.seek(offset) |
---|
11211 | + assert f.tell() == offset |
---|
11212 | + data = f.read(self.LEASE_SIZE) |
---|
11213 | + lease_info = LeaseInfo().from_mutable_data(data) |
---|
11214 | + if lease_info.owner_num == 0: |
---|
11215 | + return None |
---|
11216 | + return lease_info |
---|
11217 | + |
---|
11218 | + def _get_num_lease_slots(self, f): |
---|
11219 | + # how many places do we have allocated for leases? Not all of them |
---|
11220 | + # are filled. |
---|
11221 | + num_extra_leases = self._read_num_extra_leases(f) |
---|
11222 | + return 4+num_extra_leases |
---|
11223 | + |
---|
11224 | + def _get_first_empty_lease_slot(self, f): |
---|
11225 | + # return an int with the index of an empty slot, or None if we do not |
---|
11226 | + # currently have an empty slot |
---|
11227 | + |
---|
11228 | + for i in range(self._get_num_lease_slots(f)): |
---|
11229 | + if self._read_lease_record(f, i) is None: |
---|
11230 | + return i |
---|
11231 | + return None |
---|
11232 | + |
---|
11233 | + def get_leases(self): |
---|
11234 | + """Yields a LeaseInfo instance for all leases.""" |
---|
11235 | + f = self._home.open('rb') |
---|
11236 | + try: |
---|
11237 | + for i, lease in self._enumerate_leases(f): |
---|
11238 | + yield lease |
---|
11239 | + finally: |
---|
11240 | + f.close() |
---|
11241 | + |
---|
11242 | + def _enumerate_leases(self, f): |
---|
11243 | + for i in range(self._get_num_lease_slots(f)): |
---|
11244 | + try: |
---|
11245 | + data = self._read_lease_record(f, i) |
---|
11246 | + if data is not None: |
---|
11247 | + yield i, data |
---|
11248 | + except IndexError: |
---|
11249 | + return |
---|
11250 | + |
---|
11251 | + # These lease operations are intended for use by disk_backend.py. |
---|
11252 | + # Other non-test clients should not depend on the fact that the disk |
---|
11253 | + # backend stores leases in share files. |
---|
11254 | + |
---|
11255 | + def add_lease(self, lease_info): |
---|
11256 | + precondition(lease_info.owner_num != 0) # 0 means "no lease here" |
---|
11257 | + f = self._home.open('rb+') |
---|
11258 | + try: |
---|
11259 | + num_lease_slots = self._get_num_lease_slots(f) |
---|
11260 | + empty_slot = self._get_first_empty_lease_slot(f) |
---|
11261 | + if empty_slot is not None: |
---|
11262 | + self._write_lease_record(f, empty_slot, lease_info) |
---|
11263 | + else: |
---|
11264 | + self._write_lease_record(f, num_lease_slots, lease_info) |
---|
11265 | + finally: |
---|
11266 | + f.close() |
---|
11267 | + |
---|
11268 | + def renew_lease(self, renew_secret, new_expire_time): |
---|
11269 | + accepting_nodeids = set() |
---|
11270 | + f = self._home.open('rb+') |
---|
11271 | + try: |
---|
11272 | + for (leasenum, lease) in self._enumerate_leases(f): |
---|
11273 | + if constant_time_compare(lease.renew_secret, renew_secret): |
---|
11274 | + # yup. See if we need to update the owner time. |
---|
11275 | + if new_expire_time > lease.expiration_time: |
---|
11276 | + # yes |
---|
11277 | + lease.expiration_time = new_expire_time |
---|
11278 | + self._write_lease_record(f, leasenum, lease) |
---|
11279 | + return |
---|
11280 | + accepting_nodeids.add(lease.nodeid) |
---|
11281 | + finally: |
---|
11282 | + f.close() |
---|
11283 | + # Return the accepting_nodeids set, to give the client a chance to |
---|
11284 | + # update the leases on a share that has been migrated from its |
---|
11285 | + # original server to a new one. |
---|
11286 | + msg = ("Unable to renew non-existent lease. I have leases accepted by" |
---|
11287 | + " nodeids: ") |
---|
11288 | + msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid)) |
---|
11289 | + for anid in accepting_nodeids]) |
---|
11290 | + msg += " ." |
---|
11291 | + raise IndexError(msg) |
---|
11292 | + |
---|
11293 | + def add_or_renew_lease(self, lease_info): |
---|
11294 | + precondition(lease_info.owner_num != 0) # 0 means "no lease here" |
---|
11295 | + try: |
---|
11296 | + self.renew_lease(lease_info.renew_secret, |
---|
11297 | + lease_info.expiration_time) |
---|
11298 | + except IndexError: |
---|
11299 | + self.add_lease(lease_info) |
---|
11300 | + |
---|
11301 | + def cancel_lease(self, cancel_secret): |
---|
11302 | + """Remove any leases with the given cancel_secret. If the last lease |
---|
11303 | + is cancelled, the file will be removed. Return the number of bytes |
---|
11304 | + that were freed (by truncating the list of leases, and possibly by |
---|
11305 | + deleting the file). Raise IndexError if there was no lease with the |
---|
11306 | + given cancel_secret.""" |
---|
11307 | + |
---|
11308 | + # XXX can this be more like ImmutableDiskShare.cancel_lease? |
---|
11309 | + |
---|
11310 | + accepting_nodeids = set() |
---|
11311 | + modified = 0 |
---|
11312 | + remaining = 0 |
---|
11313 | + blank_lease = LeaseInfo(owner_num=0, |
---|
11314 | + renew_secret="\x00"*32, |
---|
11315 | + cancel_secret="\x00"*32, |
---|
11316 | + expiration_time=0, |
---|
11317 | + nodeid="\x00"*20) |
---|
11318 | + f = self._home.open('rb+') |
---|
11319 | + try: |
---|
11320 | + for (leasenum, lease) in self._enumerate_leases(f): |
---|
11321 | + accepting_nodeids.add(lease.nodeid) |
---|
11322 | + if constant_time_compare(lease.cancel_secret, cancel_secret): |
---|
11323 | + self._write_lease_record(f, leasenum, blank_lease) |
---|
11324 | + modified += 1 |
---|
11325 | + else: |
---|
11326 | + remaining += 1 |
---|
11327 | + if modified: |
---|
11328 | + freed_space = self._pack_leases(f) |
---|
11329 | + finally: |
---|
11330 | + f.close() |
---|
11331 | + |
---|
11332 | + if modified > 0: |
---|
11333 | + if remaining == 0: |
---|
11334 | + freed_space = fileutil.get_used_space(self._home) |
---|
11335 | + self.unlink() |
---|
11336 | + return freed_space |
---|
11337 | + |
---|
11338 | + msg = ("Unable to cancel non-existent lease. I have leases " |
---|
11339 | + "accepted by nodeids: ") |
---|
11340 | + msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid)) |
---|
11341 | + for anid in accepting_nodeids]) |
---|
11342 | + msg += " ." |
---|
11343 | + raise IndexError(msg) |
---|
11344 | + |
---|
11345 | + def _pack_leases(self, f): |
---|
11346 | + # TODO: reclaim space from cancelled leases |
---|
11347 | + return 0 |
---|
11348 | + |
---|
11349 | + def _read_write_enabler_and_nodeid(self, f): |
---|
11350 | + f.seek(0) |
---|
11351 | + data = f.read(self.HEADER_SIZE) |
---|
11352 | + (magic, |
---|
11353 | + write_enabler_nodeid, write_enabler, |
---|
11354 | + data_length, extra_lease_offset) = \ |
---|
11355 | + struct.unpack(">32s20s32sQQ", data) |
---|
11356 | + assert magic == self.MAGIC |
---|
11357 | + return (write_enabler, write_enabler_nodeid) |
---|
11358 | + |
---|
11359 | + def readv(self, readv): |
---|
11360 | + datav = [] |
---|
11361 | + f = self._home.open('rb') |
---|
11362 | + try: |
---|
11363 | + for (offset, length) in readv: |
---|
11364 | + datav.append(self._read_share_data(f, offset, length)) |
---|
11365 | + finally: |
---|
11366 | + f.close() |
---|
11367 | + return datav |
---|
11368 | + |
---|
11369 | + def get_size(self): |
---|
11370 | + return self._home.getsize() |
---|
11371 | + |
---|
11372 | + def get_data_length(self): |
---|
11373 | + f = self._home.open('rb') |
---|
11374 | + try: |
---|
11375 | + data_length = self._read_data_length(f) |
---|
11376 | + finally: |
---|
11377 | + f.close() |
---|
11378 | + return data_length |
---|
11379 | + |
---|
11380 | + def check_write_enabler(self, write_enabler, si_s): |
---|
11381 | + f = self._home.open('rb+') |
---|
11382 | + try: |
---|
11383 | + (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f) |
---|
11384 | + finally: |
---|
11385 | + f.close() |
---|
11386 | + # avoid a timing attack |
---|
11387 | + #if write_enabler != real_write_enabler: |
---|
11388 | + if not constant_time_compare(write_enabler, real_write_enabler): |
---|
11389 | + # accommodate share migration by reporting the nodeid used for the |
---|
11390 | + # old write enabler. |
---|
11391 | + self.log(format="bad write enabler on SI %(si)s," |
---|
11392 | + " recorded by nodeid %(nodeid)s", |
---|
11393 | + facility="tahoe.storage", |
---|
11394 | + level=log.WEIRD, umid="cE1eBQ", |
---|
11395 | + si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid)) |
---|
11396 | + msg = "The write enabler was recorded by nodeid '%s'." % \ |
---|
11397 | + (idlib.nodeid_b2a(write_enabler_nodeid),) |
---|
11398 | + raise BadWriteEnablerError(msg) |
---|
11399 | + |
---|
11400 | + def check_testv(self, testv): |
---|
11401 | + test_good = True |
---|
11402 | + f = self._home.open('rb+') |
---|
11403 | + try: |
---|
11404 | + for (offset, length, operator, specimen) in testv: |
---|
11405 | + data = self._read_share_data(f, offset, length) |
---|
11406 | + if not testv_compare(data, operator, specimen): |
---|
11407 | + test_good = False |
---|
11408 | + break |
---|
11409 | + finally: |
---|
11410 | + f.close() |
---|
11411 | + return test_good |
---|
11412 | + |
---|
11413 | + def writev(self, datav, new_length): |
---|
11414 | + f = self._home.open('rb+') |
---|
11415 | + try: |
---|
11416 | + for (offset, data) in datav: |
---|
11417 | + self._write_share_data(f, offset, data) |
---|
11418 | + if new_length is not None: |
---|
11419 | + cur_length = self._read_data_length(f) |
---|
11420 | + if new_length < cur_length: |
---|
11421 | + self._write_data_length(f, new_length) |
---|
11422 | + # TODO: if we're going to shrink the share file when the |
---|
11423 | + # share data has shrunk, then call |
---|
11424 | + # self._change_container_size() here. |
---|
11425 | + finally: |
---|
11426 | + f.close() |
---|
11427 | + |
---|
11428 | + def close(self): |
---|
11429 | + pass |
---|
11430 | + |
---|
11431 | + |
---|
11432 | +def create_mutable_s3_share(storageindex, shnum, fp, serverid, write_enabler, parent): |
---|
11433 | + ms = MutableS3Share(storageindex, shnum, fp, parent) |
---|
11434 | + ms.create(serverid, write_enabler) |
---|
11435 | + del ms |
---|
11436 | + return MutableS3Share(storageindex, shnum, fp, parent) |
---|
11437 | addfile ./src/allmydata/storage/backends/s3/s3_backend.py |
---|
11438 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 1 |
---|
11439 | + |
---|
11440 | +from zope.interface import implements |
---|
11441 | +from allmydata.interfaces import IStorageBackend, IShareSet |
---|
11442 | +from allmydata.storage.common import si_b2a, si_a2b |
---|
11443 | +from allmydata.storage.bucket import BucketWriter |
---|
11444 | +from allmydata.storage.backends.base import Backend, ShareSet |
---|
11445 | +from allmydata.storage.backends.s3.immutable import ImmutableS3Share |
---|
11446 | +from allmydata.storage.backends.s3.mutable import MutableS3Share |
---|
11447 | + |
---|
11448 | +# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM |
---|
11449 | + |
---|
11450 | + |
---|
11451 | +class S3Backend(Backend): |
---|
11452 | + implements(IStorageBackend) |
---|
11453 | + |
---|
11454 | + def __init__(self, s3bucket, readonly=False, max_space=None, corruption_advisory_dir=None): |
---|
11455 | + Backend.__init__(self) |
---|
11456 | + self._s3bucket = s3bucket |
---|
11457 | + self._readonly = readonly |
---|
11458 | + if max_space is None: |
---|
11459 | + self._max_space = 2**64 |
---|
11460 | + else: |
---|
11461 | + self._max_space = int(max_space) |
---|
11462 | + |
---|
11463 | + # TODO: any set-up for S3? |
---|
11464 | + |
---|
11465 | + # we don't actually create the corruption-advisory dir until necessary |
---|
11466 | + self._corruption_advisory_dir = corruption_advisory_dir |
---|
11467 | + |
---|
11468 | + def get_sharesets_for_prefix(self, prefix): |
---|
11469 | + # TODO: query S3 for keys matching prefix |
---|
11470 | + return [] |
---|
11471 | + |
---|
11472 | + def get_shareset(self, storageindex): |
---|
11473 | + return S3ShareSet(storageindex, self._s3bucket) |
---|
11474 | + |
---|
11475 | + def fill_in_space_stats(self, stats): |
---|
11476 | + stats['storage_server.max_space'] = self._max_space |
---|
11477 | + |
---|
11478 | + # TODO: query space usage of S3 bucket |
---|
11479 | + stats['storage_server.accepting_immutable_shares'] = int(not self._readonly) |
---|
11480 | + |
---|
11481 | + def get_available_space(self): |
---|
11482 | + if self._readonly: |
---|
11483 | + return 0 |
---|
11484 | + # TODO: query space usage of S3 bucket |
---|
11485 | + return self._max_space |
---|
11486 | + |
---|
11487 | + |
---|
11488 | +class S3ShareSet(ShareSet): |
---|
11489 | + implements(IShareSet) |
---|
11490 | + |
---|
11491 | + def __init__(self, storageindex, s3bucket): |
---|
11492 | + ShareSet.__init__(self, storageindex) |
---|
11493 | + self._s3bucket = s3bucket |
---|
11494 | + |
---|
11495 | + def get_overhead(self): |
---|
11496 | + return 0 |
---|
11497 | + |
---|
11498 | + def get_shares(self): |
---|
11499 | + """ |
---|
11500 | + Generate IStorageBackendShare objects for shares we have for this storage index. |
---|
11501 | + ("Shares we have" means completed ones, excluding incoming ones.) |
---|
11502 | + """ |
---|
11503 | + pass |
---|
11504 | + |
---|
11505 | + def has_incoming(self, shnum): |
---|
11506 | + # TODO: this might need to be more like the disk backend; review callers |
---|
11507 | + return False |
---|
11508 | + |
---|
11509 | + def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary): |
---|
11510 | + immsh = ImmutableS3Share(self.get_storage_index(), shnum, self._s3bucket, |
---|
11511 | + max_size=max_space_per_bucket) |
---|
11512 | + bw = BucketWriter(storageserver, immsh, lease_info, canary) |
---|
11513 | + return bw |
---|
11514 | + |
---|
11515 | + def _create_mutable_share(self, storageserver, shnum, write_enabler): |
---|
11516 | + # TODO |
---|
11517 | + serverid = storageserver.get_serverid() |
---|
11518 | + return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, write_enabler, storageserver) |
---|
11519 | + |
---|
11520 | + def _clean_up_after_unlink(self): |
---|
11521 | + pass |
---|
11522 | + |
---|
11523 | } |
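The mutable container layout in the patch above reserves four lease slots between the header and the data, with any further leases appended after the data behind a 4-byte count. A sketch of the offset arithmetic shared by _read_lease_record and _write_lease_record (lease_offset is a hypothetical helper; the constants repeat the class attributes):

    HEADER_SIZE = 100   # struct.calcsize(">32s20s32sQQ")
    LEASE_SIZE = 92     # struct.calcsize(">LL32s32s20s")

    def lease_offset(lease_number, extra_lease_offset):
        if lease_number < 4:
            # the four fixed slots sit between the header and the data
            return HEADER_SIZE + lease_number * LEASE_SIZE
        # later slots follow the 4-byte extra-lease count after the data
        return extra_lease_offset + 4 + (lease_number - 4) * LEASE_SIZE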
---|
11524 | [interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999 |
---|
11525 | david-sarah@jacaranda.org**20110923203723 |
---|
11526 | Ignore-this: 59371c150532055939794fed6c77dcb6 |
---|
11527 | ] { |
---|
11528 | hunk ./src/allmydata/interfaces.py 304 |
---|
11529 | def get_sharesets_for_prefix(prefix): |
---|
11530 | """ |
---|
11531 | Generates IShareSet objects for all storage indices matching the |
---|
11532 | - given prefix for which this backend holds shares. |
---|
11533 | + given base-32 prefix for which this backend holds shares. |
---|
11534 | """ |
---|
11535 | |
---|
11536 | def get_shareset(storageindex): |
---|
11537 | hunk ./src/allmydata/interfaces.py 312 |
---|
11538 | Get an IShareSet object for the given storage index. |
---|
11539 | """ |
---|
11540 | |
---|
11541 | + def fill_in_space_stats(stats): |
---|
11542 | + """ |
---|
11543 | + Fill in the 'stats' dict with space statistics for this backend, in |
---|
11544 | + 'storage_server.*' keys. |
---|
11545 | + """ |
---|
11546 | + |
---|
11547 | def advise_corrupt_share(storageindex, sharetype, shnum, reason): |
---|
11548 | """ |
---|
11549 | Clients who discover hash failures in shares that they have |
---|
11550 | } |
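fill_in_space_stats gives each backend a uniform way to report capacity under 'storage_server.*' keys, as S3Backend does above with max_space and accepting_immutable_shares. A toy in-memory sketch of a conforming backend (the RamBackend class and its fields are hypothetical):

    class RamBackend(object):
        def __init__(self, max_space):
            self._max_space = max_space
            self._used = 0

        def fill_in_space_stats(self, stats):
            stats['storage_server.max_space'] = self._max_space
            stats['storage_server.disk_avail'] = self._max_space - self._used
            stats['storage_server.accepting_immutable_shares'] = 1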
---|
11551 | [Remove redundant si_s argument from check_write_enabler. refs #999 |
---|
11552 | david-sarah@jacaranda.org**20110923204425 |
---|
11553 | Ignore-this: 25be760118dbce2eb661137f7d46dd20 |
---|
11554 | ] { |
---|
11555 | hunk ./src/allmydata/interfaces.py 500 |
---|
11556 | |
---|
11557 | |
---|
11558 | class IStoredMutableShare(IStoredShare): |
---|
11559 | - def check_write_enabler(write_enabler, si_s): |
---|
11560 | + def check_write_enabler(write_enabler): |
---|
11561 | """ |
---|
11562 | XXX |
---|
11563 | """ |
---|
11564 | hunk ./src/allmydata/storage/backends/base.py 102 |
---|
11565 | if len(secrets) > 2: |
---|
11566 | cancel_secret = secrets[2] |
---|
11567 | |
---|
11568 | - si_s = self.get_storage_index_string() |
---|
11569 | shares = {} |
---|
11570 | for share in self.get_shares(): |
---|
11571 | # XXX is it correct to ignore immutable shares? Maybe get_shares should |
---|
11572 | hunk ./src/allmydata/storage/backends/base.py 107 |
---|
11573 | # have a parameter saying what type it's expecting. |
---|
11574 | if share.sharetype == "mutable": |
---|
11575 | - share.check_write_enabler(write_enabler, si_s) |
---|
11576 | + share.check_write_enabler(write_enabler) |
---|
11577 | shares[share.get_shnum()] = share |
---|
11578 | |
---|
11579 | # write_enabler is good for all existing shares |
---|
11580 | hunk ./src/allmydata/storage/backends/disk/mutable.py 440 |
---|
11581 | f.close() |
---|
11582 | return data_length |
---|
11583 | |
---|
11584 | - def check_write_enabler(self, write_enabler, si_s): |
---|
11585 | + def check_write_enabler(self, write_enabler): |
---|
11586 | f = self._home.open('rb+') |
---|
11587 | try: |
---|
11588 | (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f) |
---|
11589 | hunk ./src/allmydata/storage/backends/disk/mutable.py 447 |
---|
11590 | finally: |
---|
11591 | f.close() |
---|
11592 | # avoid a timing attack |
---|
11593 | - #if write_enabler != real_write_enabler: |
---|
11594 | if not constant_time_compare(write_enabler, real_write_enabler): |
---|
11595 | # accommodate share migration by reporting the nodeid used for the |
---|
11596 | # old write enabler. |
---|
11597 | hunk ./src/allmydata/storage/backends/disk/mutable.py 454 |
---|
11598 | " recorded by nodeid %(nodeid)s", |
---|
11599 | facility="tahoe.storage", |
---|
11600 | level=log.WEIRD, umid="cE1eBQ", |
---|
11601 | - si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid)) |
---|
11602 | + si=self.get_storage_index_string(), |
---|
11603 | + nodeid=idlib.nodeid_b2a(write_enabler_nodeid)) |
---|
11604 | msg = "The write enabler was recorded by nodeid '%s'." % \ |
---|
11605 | (idlib.nodeid_b2a(write_enabler_nodeid),) |
---|
11606 | raise BadWriteEnablerError(msg) |
---|
11607 | hunk ./src/allmydata/storage/backends/s3/mutable.py 440 |
---|
11608 | f.close() |
---|
11609 | return data_length |
---|
11610 | |
---|
11611 | - def check_write_enabler(self, write_enabler, si_s): |
---|
11612 | + def check_write_enabler(self, write_enabler): |
---|
11613 | f = self._home.open('rb+') |
---|
11614 | try: |
---|
11615 | (real_write_enabler, write_enabler_nodeid) = self._read_write_enabler_and_nodeid(f) |
---|
11616 | hunk ./src/allmydata/storage/backends/s3/mutable.py 447 |
---|
11617 | finally: |
---|
11618 | f.close() |
---|
11619 | # avoid a timing attack |
---|
11620 | - #if write_enabler != real_write_enabler: |
---|
11621 | if not constant_time_compare(write_enabler, real_write_enabler): |
---|
11622 | # accommodate share migration by reporting the nodeid used for the |
---|
11623 | # old write enabler. |
---|
11624 | hunk ./src/allmydata/storage/backends/s3/mutable.py 454 |
---|
11625 | " recorded by nodeid %(nodeid)s", |
---|
11626 | facility="tahoe.storage", |
---|
11627 | level=log.WEIRD, umid="cE1eBQ", |
---|
11628 | - si=si_s, nodeid=idlib.nodeid_b2a(write_enabler_nodeid)) |
---|
11629 | + si=self.get_storage_index_string(), |
---|
11630 | + nodeid=idlib.nodeid_b2a(write_enabler_nodeid)) |
---|
11631 | msg = "The write enabler was recorded by nodeid '%s'." % \ |
---|
11632 | (idlib.nodeid_b2a(write_enabler_nodeid),) |
---|
11633 | raise BadWriteEnablerError(msg) |
---|
11634 | } |
---|
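The hunks above depend on constant_time_compare so that a wrong write enabler is rejected in time that does not reveal how many leading bytes of the guess were correct. A minimal sketch of the technique, not necessarily hashutil's exact implementation:

    def constant_time_compare(a, b):
        # XOR each byte pair and OR the results together, so the running
        # time is independent of the position of the first mismatch.
        if len(a) != len(b):
            return False
        result = 0
        for (x, y) in zip(a, b):
            result |= ord(x) ^ ord(y)
        return result == 0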
11635 | [Implement readv for immutable shares. refs #999 |
---|
11636 | david-sarah@jacaranda.org**20110923204611 |
---|
11637 | Ignore-this: 24f14b663051169d66293020e40c5a05 |
---|
11638 | ] { |
---|
11639 | hunk ./src/allmydata/storage/backends/disk/immutable.py 156 |
---|
11640 | def get_data_length(self): |
---|
11641 | return self._lease_offset - self._data_offset |
---|
11642 | |
---|
11643 | - #def readv(self, read_vector): |
---|
11644 | - # ... |
---|
11645 | + def readv(self, readv): |
---|
11646 | + datav = [] |
---|
11647 | + f = self._home.open('rb') |
---|
11648 | + try: |
---|
11649 | + for (offset, length) in readv: |
---|
11650 | + datav.append(self._read_share_data(f, offset, length)) |
---|
11651 | + finally: |
---|
11652 | + f.close() |
---|
11653 | + return datav |
---|
11654 | |
---|
11655 | hunk ./src/allmydata/storage/backends/disk/immutable.py 166 |
---|
11656 | - def read_share_data(self, offset, length): |
---|
11657 | + def _read_share_data(self, f, offset, length): |
---|
11658 | precondition(offset >= 0) |
---|
11659 | |
---|
11660 | # Reads beyond the end of the data are truncated. Reads that start |
---|
11661 | hunk ./src/allmydata/storage/backends/disk/immutable.py 175 |
---|
11662 | actuallength = max(0, min(length, self._lease_offset-seekpos)) |
---|
11663 | if actuallength == 0: |
---|
11664 | return "" |
---|
11665 | + f.seek(seekpos) |
---|
11666 | + return f.read(actuallength) |
---|
11667 | + |
---|
11668 | + def read_share_data(self, offset, length): |
---|
11669 | f = self._home.open(mode='rb') |
---|
11670 | try: |
---|
11671 | hunk ./src/allmydata/storage/backends/disk/immutable.py 181 |
---|
11672 | - f.seek(seekpos) |
---|
11673 | - sharedata = f.read(actuallength) |
---|
11674 | + return self._read_share_data(f, offset, length) |
---|
11675 | finally: |
---|
11676 | f.close() |
---|
11677 | hunk ./src/allmydata/storage/backends/disk/immutable.py 184 |
---|
11678 | - return sharedata |
---|
11679 | |
---|
11680 | def write_share_data(self, offset, data): |
---|
11681 | length = len(data) |
---|
11682 | hunk ./src/allmydata/storage/backends/null/null_backend.py 89 |
---|
11683 | return self.shnum |
---|
11684 | |
---|
11685 | def unlink(self): |
---|
11686 | - os.unlink(self.fname) |
---|
11687 | + pass |
---|
11688 | + |
---|
11689 | + def readv(self, readv): |
---|
11690 | + datav = [] |
---|
11691 | + for (offset, length) in readv: |
---|
11692 | + datav.append("") |
---|
11693 | + return datav |
---|
11694 | |
---|
11695 | def read_share_data(self, offset, length): |
---|
11696 | precondition(offset >= 0) |
---|
11697 | hunk ./src/allmydata/storage/backends/s3/immutable.py 101 |
---|
11698 | def get_data_length(self): |
---|
11699 | return self._end_offset - self._data_offset |
---|
11700 | |
---|
11701 | + def readv(self, readv): |
---|
11702 | + datav = [] |
---|
11703 | + for (offset, length) in readv: |
---|
11704 | + datav.append(self.read_share_data(offset, length)) |
---|
11705 | + return datav |
---|
11706 | + |
---|
11707 | def read_share_data(self, offset, length): |
---|
11708 | precondition(offset >= 0) |
---|
11709 | |
---|
11710 | } |
---|
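For reference, a read vector is a list of (offset, length) pairs, and readv returns the corresponding list of byte strings, truncating any read that extends past the end of the share data. A hypothetical call:

    # Read the first 32 bytes, plus 10 bytes starting at offset 100, in one call.
    datav = share.readv([(0, 32), (100, 10)])
    assert len(datav) == 2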
11711 | [The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999 |
---|
11712 | david-sarah@jacaranda.org**20110923204914 |
---|
11713 | Ignore-this: 6c44bb908dd4c0cdc59506b2d87a47b0 |
---|
11714 | ] { |
---|
11715 | hunk ./src/allmydata/storage/backends/base.py 98 |
---|
11716 | |
---|
11717 | write_enabler = secrets[0] |
---|
11718 | renew_secret = secrets[1] |
---|
11719 | - cancel_secret = '\x00'*32 |
---|
11720 | if len(secrets) > 2: |
---|
11721 | cancel_secret = secrets[2] |
---|
11722 | hunk ./src/allmydata/storage/backends/base.py 100 |
---|
11723 | + else: |
---|
11724 | + cancel_secret = renew_secret |
---|
11725 | |
---|
11726 | shares = {} |
---|
11727 | for share in self.get_shares(): |
---|
11728 | } |
---|
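Reusing the renew secret is enough here because renew secrets are already specific to the client and storage index, so the default cancel secret is no longer the all-zero constant shared by every caller. The new default is equivalent to this one-line sketch:

    cancel_secret = secrets[2] if len(secrets) > 2 else renew_secret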
11729 | [Make EmptyShare.check_testv a simple function. refs #999 |
---|
11730 | david-sarah@jacaranda.org**20110923204945 |
---|
11731 | Ignore-this: d0132c085f40c39815fa920b77fc39ab |
---|
11732 | ] { |
---|
11733 | hunk ./src/allmydata/storage/backends/base.py 125 |
---|
11734 | else: |
---|
11735 | # compare the vectors against an empty share, in which all |
---|
11736 | # reads return empty strings |
---|
11737 | - if not EmptyShare().check_testv(testv): |
---|
11738 | + if not empty_check_testv(testv): |
---|
11739 | storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv)) |
---|
11740 | testv_is_good = False |
---|
11741 | break |
---|
11742 | hunk ./src/allmydata/storage/backends/base.py 195 |
---|
11743 | # never reached |
---|
11744 | |
---|
11745 | |
---|
11746 | -class EmptyShare: |
---|
11747 | - def check_testv(self, testv): |
---|
11748 | - test_good = True |
---|
11749 | - for (offset, length, operator, specimen) in testv: |
---|
11750 | - data = "" |
---|
11751 | - if not testv_compare(data, operator, specimen): |
---|
11752 | - test_good = False |
---|
11753 | - break |
---|
11754 | - return test_good |
---|
11755 | +def empty_check_testv(testv): |
---|
11756 | + test_good = True |
---|
11757 | + for (offset, length, operator, specimen) in testv: |
---|
11758 | + data = "" |
---|
11759 | + if not testv_compare(data, operator, specimen): |
---|
11760 | + test_good = False |
---|
11761 | + break |
---|
11762 | + return test_good |
---|
11763 | |
---|
11764 | } |
---|
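Each test vector is an (offset, length, operator, specimen) tuple, evaluated here against a share in which every read returns the empty string. Two hypothetical examples, assuming the usual 'eq' operator accepted by testv_compare:

    empty_check_testv([(0, 0, "eq", "")])       # True: "" == ""
    empty_check_testv([(0, 5, "eq", "hello")])  # False: "" != "hello"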
11765 | [Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999 |
---|
11766 | david-sarah@jacaranda.org**20110923205219 |
---|
11767 | Ignore-this: 42a23d7e253255003dc63facea783251 |
---|
11768 | ] { |
---|
11769 | hunk ./src/allmydata/storage/backends/null/null_backend.py 2 |
---|
11770 | |
---|
11771 | -import os, struct |
---|
11772 | - |
---|
11773 | from zope.interface import implements |
---|
11774 | |
---|
11775 | from allmydata.interfaces import IStorageBackend, IShareSet, IStoredShare, IStoredMutableShare |
---|
11776 | hunk ./src/allmydata/storage/backends/null/null_backend.py 6 |
---|
11777 | from allmydata.util.assertutil import precondition |
---|
11778 | -from allmydata.util.hashutil import constant_time_compare |
---|
11779 | -from allmydata.storage.backends.base import Backend, ShareSet |
---|
11780 | -from allmydata.storage.bucket import BucketWriter |
---|
11781 | +from allmydata.storage.backends.base import Backend, empty_check_testv |
---|
11782 | +from allmydata.storage.bucket import BucketWriter, BucketReader |
---|
11783 | from allmydata.storage.common import si_b2a |
---|
11784 | hunk ./src/allmydata/storage/backends/null/null_backend.py 9 |
---|
11785 | -from allmydata.storage.lease import LeaseInfo |
---|
11786 | |
---|
11787 | |
---|
11788 | class NullBackend(Backend): |
---|
11789 | hunk ./src/allmydata/storage/backends/null/null_backend.py 13 |
---|
11790 | implements(IStorageBackend) |
---|
11791 | + """ |
---|
11792 | + I am a test backend that records (in memory) which shares exist, but not their contents, leases, |
---|
11793 | + or write-enablers. |
---|
11794 | + """ |
---|
11795 | |
---|
11796 | def __init__(self): |
---|
11797 | Backend.__init__(self) |
---|
11798 | hunk ./src/allmydata/storage/backends/null/null_backend.py 20 |
---|
11799 | + # mapping from storageindex to NullShareSet |
---|
11800 | + self._sharesets = {} |
---|
11801 | |
---|
11802 | hunk ./src/allmydata/storage/backends/null/null_backend.py 23 |
---|
11803 | - def get_available_space(self, reserved_space): |
---|
11804 | + def get_available_space(self): |
---|
11805 | return None |
---|
11806 | |
---|
11807 | def get_sharesets_for_prefix(self, prefix): |
---|
11808 | hunk ./src/allmydata/storage/backends/null/null_backend.py 27 |
---|
11809 | - pass |
---|
11810 | + sharesets = [] |
---|
11811 | + for (si, shareset) in self._sharesets.iteritems(): |
---|
11812 | + if si_b2a(si).startswith(prefix): |
---|
11813 | + sharesets.append(shareset) |
---|
11814 | + |
---|
11815 | + def _by_base32si(b): |
---|
11816 | + return b.get_storage_index_string() |
---|
11817 | + sharesets.sort(key=_by_base32si) |
---|
11818 | + return sharesets |
---|
11819 | |
---|
11820 | def get_shareset(self, storageindex): |
---|
11821 | hunk ./src/allmydata/storage/backends/null/null_backend.py 38 |
---|
11822 | - return NullShareSet(storageindex) |
---|
11823 | + shareset = self._sharesets.get(storageindex, None) |
---|
11824 | + if shareset is None: |
---|
11825 | + shareset = NullShareSet(storageindex) |
---|
11826 | + self._sharesets[storageindex] = shareset |
---|
11827 | + return shareset |
---|
11828 | |
---|
11829 | def fill_in_space_stats(self, stats): |
---|
11830 | pass |
---|
11831 | hunk ./src/allmydata/storage/backends/null/null_backend.py 47 |
---|
11832 | |
---|
11833 | - def set_storage_server(self, ss): |
---|
11834 | - self.ss = ss |
---|
11835 | |
---|
11836 | hunk ./src/allmydata/storage/backends/null/null_backend.py 48 |
---|
11837 | - def advise_corrupt_share(self, sharetype, storageindex, shnum, reason): |
---|
11838 | - pass |
---|
11839 | - |
---|
11840 | - |
---|
11841 | -class NullShareSet(ShareSet): |
---|
11842 | +class NullShareSet(object): |
---|
11843 | implements(IShareSet) |
---|
11844 | |
---|
11845 | def __init__(self, storageindex): |
---|
11846 | hunk ./src/allmydata/storage/backends/null/null_backend.py 53 |
---|
11847 | self.storageindex = storageindex |
---|
11848 | + self._incoming_shnums = set() |
---|
11849 | + self._immutable_shnums = set() |
---|
11850 | + self._mutable_shnums = set() |
---|
11851 | + |
---|
11852 | + def close_shnum(self, shnum): |
---|
11853 | + self._incoming_shnums.remove(shnum) |
---|
11854 | + self._immutable_shnums.add(shnum) |
---|
11855 | |
---|
11856 | def get_overhead(self): |
---|
11857 | return 0 |
---|
11858 | hunk ./src/allmydata/storage/backends/null/null_backend.py 64 |
---|
11859 | |
---|
11860 | - def get_incoming_shnums(self): |
---|
11861 | - return frozenset() |
---|
11862 | - |
---|
11863 | def get_shares(self): |
---|
11864 | hunk ./src/allmydata/storage/backends/null/null_backend.py 65 |
---|
11865 | + for shnum in self._immutable_shnums: |
---|
11866 | + yield ImmutableNullShare(self, shnum) |
---|
11867 | + for shnum in self._mutable_shnums: |
---|
11868 | + yield MutableNullShare(self, shnum) |
---|
11869 | + |
---|
11870 | + def renew_lease(self, renew_secret, new_expiration_time): |
---|
11871 | + raise IndexError("no such lease to renew") |
---|
11872 | + |
---|
11873 | + def get_leases(self): |
---|
11874 | pass |
---|
11875 | |
---|
11876 | hunk ./src/allmydata/storage/backends/null/null_backend.py 76 |
---|
11877 | - def get_share(self, shnum): |
---|
11878 | - return None |
---|
11879 | + def add_or_renew_lease(self, lease_info): |
---|
11880 | + pass |
---|
11881 | + |
---|
11882 | + def has_incoming(self, shnum): |
---|
11883 | + return shnum in self._incoming_shnums |
---|
11884 | |
---|
11885 | def get_storage_index(self): |
---|
11886 | return self.storageindex |
---|
11887 | hunk ./src/allmydata/storage/backends/null/null_backend.py 89 |
---|
11888 | return si_b2a(self.storageindex) |
---|
11889 | |
---|
11890 | def make_bucket_writer(self, storageserver, shnum, max_space_per_bucket, lease_info, canary): |
---|
11891 | - immutableshare = ImmutableNullShare() |
---|
11892 | - return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary) |
---|
11893 | + self._incoming_shnums.add(shnum) |
---|
11894 | + immutableshare = ImmutableNullShare(self, shnum) |
---|
11895 | + bw = BucketWriter(storageserver, immutableshare, lease_info, canary) |
---|
11896 | + bw.throw_out_all_data = True |
---|
11897 | + return bw |
---|
11898 | |
---|
11899 | hunk ./src/allmydata/storage/backends/null/null_backend.py 95 |
---|
11900 | - def _create_mutable_share(self, storageserver, shnum, write_enabler): |
---|
11901 | - return MutableNullShare() |
---|
11902 | + def make_bucket_reader(self, storageserver, share): |
---|
11903 | + return BucketReader(storageserver, share) |
---|
11904 | |
---|
11905 | hunk ./src/allmydata/storage/backends/null/null_backend.py 98 |
---|
11906 | - def _clean_up_after_unlink(self): |
---|
11907 | - pass |
---|
11908 | + def testv_and_readv_and_writev(self, storageserver, secrets, |
---|
11909 | + test_and_write_vectors, read_vector, |
---|
11910 | + expiration_time): |
---|
11911 | + # evaluate test vectors |
---|
11912 | + testv_is_good = True |
---|
11913 | + for sharenum in test_and_write_vectors: |
---|
11914 | + # compare the vectors against an empty share, in which all |
---|
11915 | + # reads return empty strings |
---|
11916 | + (testv, datav, new_length) = test_and_write_vectors[sharenum] |
---|
11917 | + if not empty_check_testv(testv): |
---|
11918 | + storageserver.log("testv failed (empty): [%d] %r" % (sharenum, testv)) |
---|
11919 | + testv_is_good = False |
---|
11920 | + break |
---|
11921 | |
---|
11922 | hunk ./src/allmydata/storage/backends/null/null_backend.py 112 |
---|
11923 | + # gather the read vectors |
---|
11924 | + read_data = {} |
---|
11925 | + for shnum in self._mutable_shnums: |
---|
11926 | + read_data[shnum] = "" |
---|
11927 | |
---|
11928 | hunk ./src/allmydata/storage/backends/null/null_backend.py 117 |
---|
11929 | -class ImmutableNullShare: |
---|
11930 | - implements(IStoredShare) |
---|
11931 | - sharetype = "immutable" |
---|
11932 | + if testv_is_good: |
---|
11933 | + # now apply the write vectors |
---|
11934 | + for shnum in test_and_write_vectors: |
---|
11935 | + (testv, datav, new_length) = test_and_write_vectors[shnum] |
---|
11936 | + if new_length == 0: |
---|
11937 | + self._mutable_shnums.remove(shnum) |
---|
11938 | + else: |
---|
11939 | + self._mutable_shnums.add(shnum) |
---|
11940 | |
---|
11941 | hunk ./src/allmydata/storage/backends/null/null_backend.py 126 |
---|
11942 | - def __init__(self): |
---|
11943 | - """ If max_size is not None then I won't allow more than |
---|
11944 | - max_size to be written to me. If create=True then max_size |
---|
11945 | - must not be None. """ |
---|
11946 | - pass |
---|
11947 | + return (testv_is_good, read_data) |
---|
11948 | + |
---|
11949 | + def readv(self, wanted_shnums, read_vector): |
---|
11950 | + return {} |
---|
11951 | + |
---|
11952 | + |
---|
11953 | +class NullShareBase(object): |
---|
11954 | + def __init__(self, shareset, shnum): |
---|
11955 | + self.shareset = shareset |
---|
11956 | + self.shnum = shnum |
---|
11957 | + |
---|
11958 | + def get_storage_index(self): |
---|
11959 | + return self.shareset.get_storage_index() |
---|
11960 | + |
---|
11961 | + def get_storage_index_string(self): |
---|
11962 | + return self.shareset.get_storage_index_string() |
---|
11963 | |
---|
11964 | def get_shnum(self): |
---|
11965 | return self.shnum |
---|
11966 | hunk ./src/allmydata/storage/backends/null/null_backend.py 146 |
---|
11967 | |
---|
11968 | + def get_data_length(self): |
---|
11969 | + return 0 |
---|
11970 | + |
---|
11971 | + def get_size(self): |
---|
11972 | + return 0 |
---|
11973 | + |
---|
11974 | + def get_used_space(self): |
---|
11975 | + return 0 |
---|
11976 | + |
---|
11977 | def unlink(self): |
---|
11978 | pass |
---|
11979 | |
---|
11980 | hunk ./src/allmydata/storage/backends/null/null_backend.py 166 |
---|
11981 | |
---|
11982 | def read_share_data(self, offset, length): |
---|
11983 | precondition(offset >= 0) |
---|
11984 | - # Reads beyond the end of the data are truncated. Reads that start |
---|
11985 | - # beyond the end of the data return an empty string. |
---|
11986 | - seekpos = self._data_offset+offset |
---|
11987 | - fsize = os.path.getsize(self.fname) |
---|
11988 | - actuallength = max(0, min(length, fsize-seekpos)) # XXX #1528 |
---|
11989 | - if actuallength == 0: |
---|
11990 | - return "" |
---|
11991 | - f = open(self.fname, 'rb') |
---|
11992 | - f.seek(seekpos) |
---|
11993 | - return f.read(actuallength) |
---|
11994 | + return "" |
---|
11995 | |
---|
11996 | def write_share_data(self, offset, data): |
---|
11997 | pass |
---|
11998 | hunk ./src/allmydata/storage/backends/null/null_backend.py 171 |
---|
11999 | |
---|
12000 | - def _write_lease_record(self, f, lease_number, lease_info): |
---|
12001 | - offset = self._lease_offset + lease_number * self.LEASE_SIZE |
---|
12002 | - f.seek(offset) |
---|
12003 | - assert f.tell() == offset |
---|
12004 | - f.write(lease_info.to_immutable_data()) |
---|
12005 | - |
---|
12006 | - def _read_num_leases(self, f): |
---|
12007 | - f.seek(0x08) |
---|
12008 | - (num_leases,) = struct.unpack(">L", f.read(4)) |
---|
12009 | - return num_leases |
---|
12010 | - |
---|
12011 | - def _write_num_leases(self, f, num_leases): |
---|
12012 | - f.seek(0x08) |
---|
12013 | - f.write(struct.pack(">L", num_leases)) |
---|
12014 | - |
---|
12015 | - def _truncate_leases(self, f, num_leases): |
---|
12016 | - f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE) |
---|
12017 | - |
---|
12018 | def get_leases(self): |
---|
12019 | hunk ./src/allmydata/storage/backends/null/null_backend.py 172 |
---|
12020 | - """Yields a LeaseInfo instance for all leases.""" |
---|
12021 | - f = open(self.fname, 'rb') |
---|
12022 | - (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc)) |
---|
12023 | - f.seek(self._lease_offset) |
---|
12024 | - for i in range(num_leases): |
---|
12025 | - data = f.read(self.LEASE_SIZE) |
---|
12026 | - if data: |
---|
12027 | - yield LeaseInfo().from_immutable_data(data) |
---|
12028 | + pass |
---|
12029 | |
---|
12030 | def add_lease(self, lease): |
---|
12031 | pass |
---|
12032 | hunk ./src/allmydata/storage/backends/null/null_backend.py 178 |
---|
12033 | |
---|
12034 | def renew_lease(self, renew_secret, new_expire_time): |
---|
12035 | - for i,lease in enumerate(self.get_leases()): |
---|
12036 | - if constant_time_compare(lease.renew_secret, renew_secret): |
---|
12037 | - # yup. See if we need to update the owner time. |
---|
12038 | - if new_expire_time > lease.expiration_time: |
---|
12039 | - # yes |
---|
12040 | - lease.expiration_time = new_expire_time |
---|
12041 | - f = open(self.fname, 'rb+') |
---|
12042 | - self._write_lease_record(f, i, lease) |
---|
12043 | - f.close() |
---|
12044 | - return |
---|
12045 | raise IndexError("unable to renew non-existent lease") |
---|
12046 | |
---|
12047 | def add_or_renew_lease(self, lease_info): |
---|
12048 | hunk ./src/allmydata/storage/backends/null/null_backend.py 181 |
---|
12049 | - try: |
---|
12050 | - self.renew_lease(lease_info.renew_secret, |
---|
12051 | - lease_info.expiration_time) |
---|
12052 | - except IndexError: |
---|
12053 | - self.add_lease(lease_info) |
---|
12054 | + pass |
---|
12055 | |
---|
12056 | |
---|
12057 | hunk ./src/allmydata/storage/backends/null/null_backend.py 184 |
---|
12058 | -class MutableNullShare: |
---|
12059 | +class ImmutableNullShare(NullShareBase): |
---|
12060 | + implements(IStoredShare) |
---|
12061 | + sharetype = "immutable" |
---|
12062 | + |
---|
12063 | + def close(self): |
---|
12064 | + self.shareset.close_shnum(self.shnum) |
---|
12065 | + |
---|
12066 | + |
---|
12067 | +class MutableNullShare(NullShareBase): |
---|
12068 | implements(IStoredMutableShare) |
---|
12069 | sharetype = "mutable" |
---|
12070 | hunk ./src/allmydata/storage/backends/null/null_backend.py 195 |
---|
12071 | + |
---|
12072 | + def check_write_enabler(self, write_enabler): |
---|
12073 | + # Null backend doesn't check write enablers. |
---|
12074 | + pass |
---|
12075 | + |
---|
12076 | + def check_testv(self, testv): |
---|
12077 | + return empty_check_testv(testv) |
---|
12078 | + |
---|
12079 | + def writev(self, datav, new_length): |
---|
12080 | + pass |
---|
12081 | + |
---|
12082 | + def close(self): |
---|
12083 | + pass |
---|
12084 | |
---|
12085 | hunk ./src/allmydata/storage/backends/null/null_backend.py 209 |
---|
12086 | - """ XXX: TODO """ |
---|
12087 | } |
---|
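Because the null backend discards all share contents, it is chiefly useful for exercising storage-server logic without touching stable storage. A hypothetical test setup, using the constructor signature that appears in the test changes later in this bundle:

    from twisted.python.filepath import FilePath

    backend = NullBackend()
    ss = StorageServer("\x00" * 20, backend, FilePath("teststorage"))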
12088 | [Update the S3 backend. refs #999 |
---|
12089 | david-sarah@jacaranda.org**20110923205345 |
---|
12090 | Ignore-this: 5ca623a17e09ddad4cab2f51b49aec0a |
---|
12091 | ] { |
---|
12092 | hunk ./src/allmydata/storage/backends/s3/immutable.py 11 |
---|
12093 | from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError |
---|
12094 | |
---|
12095 | |
---|
12096 | -# Each share file (in storage/shares/$PREFIX/$STORAGEINDEX/$SHNUM) contains |
---|
12097 | +# Each share file (with key 'shares/$PREFIX/$STORAGEINDEX/$SHNUM') contains |
---|
12098 | # lease information [currently inaccessible] and share data. The share data is |
---|
12099 | # accessed by RIBucketWriter.write and RIBucketReader.read . |
---|
12100 | |
---|
12101 | hunk ./src/allmydata/storage/backends/s3/immutable.py 65 |
---|
12102 | # in case a share file is copied from a disk backend, or in case we |
---|
12103 | # need them in future. |
---|
12104 | # TODO: filesize = size of S3 object |
---|
12105 | + filesize = 0 |
---|
12106 | self._end_offset = filesize - (num_leases * self.LEASE_SIZE) |
---|
12107 | self._data_offset = 0xc |
---|
12108 | |
---|
12109 | hunk ./src/allmydata/storage/backends/s3/immutable.py 122 |
---|
12110 | return "\x00"*actuallength |
---|
12111 | |
---|
12112 | def write_share_data(self, offset, data): |
---|
12113 | - assert offset >= self._size, "offset = %r, size = %r" % (offset, self._size) |
---|
12114 | + length = len(data) |
---|
12115 | + precondition(offset >= self._size, "offset = %r, size = %r" % (offset, self._size)) |
---|
12116 | + if self._max_size is not None and offset+length > self._max_size: |
---|
12117 | + raise DataTooLargeError(self._max_size, offset, length) |
---|
12118 | |
---|
12119 | # TODO: write data to S3. If offset > self._size, fill the space |
---|
12120 | # between with zeroes. |
---|
12121 | hunk ./src/allmydata/storage/backends/s3/mutable.py 17 |
---|
12122 | from allmydata.storage.backends.base import testv_compare |
---|
12123 | |
---|
12124 | |
---|
12125 | -# The MutableDiskShare is like the ImmutableDiskShare, but used for mutable data. |
---|
12126 | +# The MutableS3Share is like the ImmutableS3Share, but used for mutable data. |
---|
12127 | # It has a different layout. See docs/mutable.rst for more details. |
---|
12128 | |
---|
12129 | # # offset size name |
---|
12130 | hunk ./src/allmydata/storage/backends/s3/mutable.py 43 |
---|
12131 | assert struct.calcsize(">Q") == 8, struct.calcsize(">Q") |
---|
12132 | |
---|
12133 | |
---|
12134 | -class MutableDiskShare(object): |
---|
12135 | +class MutableS3Share(object): |
---|
12136 | implements(IStoredMutableShare) |
---|
12137 | |
---|
12138 | sharetype = "mutable" |
---|
12139 | hunk ./src/allmydata/storage/backends/s3/mutable.py 111 |
---|
12140 | f.close() |
---|
12141 | |
---|
12142 | def __repr__(self): |
---|
12143 | - return ("<MutableDiskShare %s:%r at %s>" |
---|
12144 | + return ("<MutableS3Share %s:%r at %s>" |
---|
12145 | % (si_b2a(self._storageindex), self._shnum, quote_filepath(self._home))) |
---|
12146 | |
---|
12147 | def get_used_space(self): |
---|
12148 | hunk ./src/allmydata/storage/backends/s3/mutable.py 311 |
---|
12149 | except IndexError: |
---|
12150 | return |
---|
12151 | |
---|
12152 | - # These lease operations are intended for use by disk_backend.py. |
---|
12153 | - # Other non-test clients should not depend on the fact that the disk |
---|
12154 | - # backend stores leases in share files. |
---|
12155 | - |
---|
12156 | - def add_lease(self, lease_info): |
---|
12157 | - precondition(lease_info.owner_num != 0) # 0 means "no lease here" |
---|
12158 | - f = self._home.open('rb+') |
---|
12159 | - try: |
---|
12160 | - num_lease_slots = self._get_num_lease_slots(f) |
---|
12161 | - empty_slot = self._get_first_empty_lease_slot(f) |
---|
12162 | - if empty_slot is not None: |
---|
12163 | - self._write_lease_record(f, empty_slot, lease_info) |
---|
12164 | - else: |
---|
12165 | - self._write_lease_record(f, num_lease_slots, lease_info) |
---|
12166 | - finally: |
---|
12167 | - f.close() |
---|
12168 | - |
---|
12169 | - def renew_lease(self, renew_secret, new_expire_time): |
---|
12170 | - accepting_nodeids = set() |
---|
12171 | - f = self._home.open('rb+') |
---|
12172 | - try: |
---|
12173 | - for (leasenum, lease) in self._enumerate_leases(f): |
---|
12174 | - if constant_time_compare(lease.renew_secret, renew_secret): |
---|
12175 | - # yup. See if we need to update the owner time. |
---|
12176 | - if new_expire_time > lease.expiration_time: |
---|
12177 | - # yes |
---|
12178 | - lease.expiration_time = new_expire_time |
---|
12179 | - self._write_lease_record(f, leasenum, lease) |
---|
12180 | - return |
---|
12181 | - accepting_nodeids.add(lease.nodeid) |
---|
12182 | - finally: |
---|
12183 | - f.close() |
---|
12184 | - # Return the accepting_nodeids set, to give the client a chance to |
---|
12185 | - # update the leases on a share that has been migrated from its |
---|
12186 | - # original server to a new one. |
---|
12187 | - msg = ("Unable to renew non-existent lease. I have leases accepted by" |
---|
12188 | - " nodeids: ") |
---|
12189 | - msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid)) |
---|
12190 | - for anid in accepting_nodeids]) |
---|
12191 | - msg += " ." |
---|
12192 | - raise IndexError(msg) |
---|
12193 | - |
---|
12194 | - def add_or_renew_lease(self, lease_info): |
---|
12195 | - precondition(lease_info.owner_num != 0) # 0 means "no lease here" |
---|
12196 | - try: |
---|
12197 | - self.renew_lease(lease_info.renew_secret, |
---|
12198 | - lease_info.expiration_time) |
---|
12199 | - except IndexError: |
---|
12200 | - self.add_lease(lease_info) |
---|
12201 | - |
---|
12202 | - def cancel_lease(self, cancel_secret): |
---|
12203 | - """Remove any leases with the given cancel_secret. If the last lease |
---|
12204 | - is cancelled, the file will be removed. Return the number of bytes |
---|
12205 | - that were freed (by truncating the list of leases, and possibly by |
---|
12206 | - deleting the file). Raise IndexError if there was no lease with the |
---|
12207 | - given cancel_secret.""" |
---|
12208 | - |
---|
12209 | - # XXX can this be more like ImmutableDiskShare.cancel_lease? |
---|
12210 | - |
---|
12211 | - accepting_nodeids = set() |
---|
12212 | - modified = 0 |
---|
12213 | - remaining = 0 |
---|
12214 | - blank_lease = LeaseInfo(owner_num=0, |
---|
12215 | - renew_secret="\x00"*32, |
---|
12216 | - cancel_secret="\x00"*32, |
---|
12217 | - expiration_time=0, |
---|
12218 | - nodeid="\x00"*20) |
---|
12219 | - f = self._home.open('rb+') |
---|
12220 | - try: |
---|
12221 | - for (leasenum, lease) in self._enumerate_leases(f): |
---|
12222 | - accepting_nodeids.add(lease.nodeid) |
---|
12223 | - if constant_time_compare(lease.cancel_secret, cancel_secret): |
---|
12224 | - self._write_lease_record(f, leasenum, blank_lease) |
---|
12225 | - modified += 1 |
---|
12226 | - else: |
---|
12227 | - remaining += 1 |
---|
12228 | - if modified: |
---|
12229 | - freed_space = self._pack_leases(f) |
---|
12230 | - finally: |
---|
12231 | - f.close() |
---|
12232 | - |
---|
12233 | - if modified > 0: |
---|
12234 | - if remaining == 0: |
---|
12235 | - freed_space = fileutil.get_used_space(self._home) |
---|
12236 | - self.unlink() |
---|
12237 | - return freed_space |
---|
12238 | - |
---|
12239 | - msg = ("Unable to cancel non-existent lease. I have leases " |
---|
12240 | - "accepted by nodeids: ") |
---|
12241 | - msg += ",".join([("'%s'" % idlib.nodeid_b2a(anid)) |
---|
12242 | - for anid in accepting_nodeids]) |
---|
12243 | - msg += " ." |
---|
12244 | - raise IndexError(msg) |
---|
12245 | - |
---|
12246 | - def _pack_leases(self, f): |
---|
12247 | - # TODO: reclaim space from cancelled leases |
---|
12248 | - return 0 |
---|
12249 | - |
---|
12250 | def _read_write_enabler_and_nodeid(self, f): |
---|
12251 | f.seek(0) |
---|
12252 | data = f.read(self.HEADER_SIZE) |
---|
12253 | hunk ./src/allmydata/storage/backends/s3/mutable.py 394 |
---|
12254 | pass |
---|
12255 | |
---|
12256 | |
---|
12257 | -def create_mutable_disk_share(storageindex, shnum, fp, serverid, write_enabler, parent): |
---|
12258 | - ms = MutableDiskShare(storageindex, shnum, fp, parent) |
---|
12259 | +def create_mutable_s3_share(storageindex, shnum, fp, serverid, write_enabler, parent): |
---|
12260 | + ms = MutableS3Share(storageindex, shnum, fp, parent) |
---|
12261 | ms.create(serverid, write_enabler) |
---|
12262 | del ms |
---|
12263 | hunk ./src/allmydata/storage/backends/s3/mutable.py 398 |
---|
12264 | - return MutableDiskShare(storageindex, shnum, fp, parent) |
---|
12265 | + return MutableS3Share(storageindex, shnum, fp, parent) |
---|
12266 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 10 |
---|
12267 | from allmydata.storage.backends.s3.immutable import ImmutableS3Share |
---|
12268 | from allmydata.storage.backends.s3.mutable import MutableS3Share |
---|
12269 | |
---|
12270 | -# The S3 bucket has keys of the form shares/$STORAGEINDEX/$SHARENUM |
---|
12271 | - |
---|
12272 | +# The S3 bucket has keys of the form shares/$PREFIX/$STORAGEINDEX/$SHNUM . |
---|
12273 | |
---|
12274 | class S3Backend(Backend): |
---|
12275 | implements(IStorageBackend) |
---|
12276 | } |
---|
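For reference, the key scheme in the comment above maps each share to a single S3 object. With a made-up base-32 storage index:

    si_s = "b32sbqfh3cmkqkl4fnyku7ejgq"              # hypothetical value
    key = "shares/%s/%s/%d" % (si_s[:2], si_s, 3)
    # => "shares/b3/b32sbqfh3cmkqkl4fnyku7ejgq/3"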
12277 | [Minor cleanup to disk backend. refs #999 |
---|
12278 | david-sarah@jacaranda.org**20110923205510 |
---|
12279 | Ignore-this: 79f92d7c2edb14cfedb167247c3f0d08 |
---|
12280 | ] { |
---|
12281 | hunk ./src/allmydata/storage/backends/disk/immutable.py 87 |
---|
12282 | (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc)) |
---|
12283 | finally: |
---|
12284 | f.close() |
---|
12285 | - filesize = self._home.getsize() |
---|
12286 | if version != 1: |
---|
12287 | msg = "sharefile %s had version %d but we wanted 1" % \ |
---|
12288 | (self._home, version) |
---|
12289 | hunk ./src/allmydata/storage/backends/disk/immutable.py 91 |
---|
12290 | raise UnknownImmutableContainerVersionError(msg) |
---|
12291 | + |
---|
12292 | + filesize = self._home.getsize() |
---|
12293 | self._num_leases = num_leases |
---|
12294 | self._lease_offset = filesize - (num_leases * self.LEASE_SIZE) |
---|
12295 | self._data_offset = 0xc |
---|
12296 | } |
---|
12297 | [Add 'has-immutable-readv' to server version information. refs #999 |
---|
12298 | david-sarah@jacaranda.org**20110923220935 |
---|
12299 | Ignore-this: c3c4358f2ab8ac503f99c968ace8efcf |
---|
12300 | ] { |
---|
12301 | hunk ./src/allmydata/storage/server.py 174 |
---|
12302 | "delete-mutable-shares-with-zero-length-writev": True, |
---|
12303 | "fills-holes-with-zero-bytes": True, |
---|
12304 | "prevents-read-past-end-of-share-data": True, |
---|
12305 | + "has-immutable-readv": True, |
---|
12306 | }, |
---|
12307 | "application-version": str(allmydata.__full_version__), |
---|
12308 | } |
---|
12309 | hunk ./src/allmydata/test/test_storage.py 339 |
---|
12310 | sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1'] |
---|
12311 | self.failUnless(sv1.get('prevents-read-past-end-of-share-data'), sv1) |
---|
12312 | |
---|
12313 | + def test_has_immutable_readv(self): |
---|
12314 | + ss = self.create("test_has_immutable_readv") |
---|
12315 | + ver = ss.remote_get_version() |
---|
12316 | + sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1'] |
---|
12317 | + self.failUnless(sv1.get('has-immutable-readv'), sv1) |
---|
12318 | + |
---|
12319 | + # TODO: test that we actually support it |
---|
12320 | + |
---|
12321 | def allocate(self, ss, storage_index, sharenums, size, canary=None): |
---|
12322 | renew_secret = hashutil.tagged_hash("blah", "%d" % self._lease_secret.next()) |
---|
12323 | cancel_secret = hashutil.tagged_hash("blah", "%d" % self._lease_secret.next()) |
---|
12324 | } |
---|
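A client that wants to rely on the new capability can test the version dictionary before issuing immutable readv calls, mirroring the check in the test above:

    ver = ss.remote_get_version()
    sv1 = ver['http://allmydata.org/tahoe/protocols/storage/v1']
    supports_readv = bool(sv1.get('has-immutable-readv'))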
12325 | [util/deferredutil.py: add some utilities for asynchronous iteration. refs #999 |
---|
12326 | david-sarah@jacaranda.org**20110927070947 |
---|
12327 | Ignore-this: ac4946c1e5779ea64b85a1a420d34c9e |
---|
12328 | ] { |
---|
12329 | hunk ./src/allmydata/util/deferredutil.py 1 |
---|
12330 | + |
---|
12331 | +from foolscap.api import fireEventually |
---|
12332 | from twisted.internet import defer |
---|
12333 | |
---|
12334 | # utility wrapper for DeferredList |
---|
12335 | hunk ./src/allmydata/util/deferredutil.py 38 |
---|
12336 | d.addCallbacks(_parseDListResult, _unwrapFirstError) |
---|
12337 | return d |
---|
12338 | |
---|
12339 | + |
---|
12340 | +def async_accumulate(accumulator, body): |
---|
12341 | + """ |
---|
12342 | + I execute an asynchronous loop in which, for each iteration, I eventually |
---|
12343 | + call 'body' with the current value of an accumulator. 'body' should return a |
---|
12344 | + (possibly deferred) pair: (result, should_continue). If should_continue is |
---|
12345 | + a (possibly deferred) True value, the loop will continue with result as the |
---|
12346 | + new accumulator, otherwise it will terminate. |
---|
12347 | + |
---|
12348 | + I return a Deferred that fires with the final result, or that fails with |
---|
12349 | + the first failure of 'body'. |
---|
12350 | + """ |
---|
12351 | + d = defer.succeed(accumulator) |
---|
12352 | + d.addCallback(body) |
---|
12353 | + def _iterate((result, should_continue)): |
---|
12354 | + if not should_continue: |
---|
12355 | + return result |
---|
12356 | + d2 = fireEventually(result) |
---|
12357 | + d2.addCallback(async_accumulate, body) |
---|
12358 | + return d2 |
---|
12359 | + d.addCallback(_iterate) |
---|
12360 | + return d |
---|
12361 | + |
---|
12362 | +def async_iterate(process, iterable): |
---|
12363 | + """ |
---|
12364 | + I iterate over the elements of 'iterable' (which may be deferred), eventually |
---|
12365 | + applying 'process' to each one. 'process' should return a (possibly deferred) |
---|
12366 | + boolean: True to continue the iteration, False to stop. |
---|
12367 | + |
---|
12368 | + I return a Deferred that fires with True if all elements of the iterable |
---|
12369 | + were processed (i.e. 'process' only returned True values); with False if |
---|
12370 | + the iteration was stopped by 'process' returning False; or that fails with |
---|
12371 | + the first failure of either 'process' or the iterator. |
---|
12372 | + """ |
---|
12373 | + iterator = iter(iterable) |
---|
12374 | + |
---|
12375 | + def _body(accumulator): |
---|
12376 | + d = defer.maybeDeferred(iterator.next) |
---|
12377 | + def _cb(item): |
---|
12378 | + d2 = defer.maybeDeferred(process, item) |
---|
12379 | + d2.addCallback(lambda res: (res, res)) |
---|
12380 | + return d2 |
---|
12381 | + def _eb(f): |
---|
12382 | + if f.trap(StopIteration): |
---|
12383 | + return (True, False) |
---|
12384 | + d.addCallbacks(_cb, _eb) |
---|
12385 | + return d |
---|
12386 | + |
---|
12387 | + return async_accumulate(False, _body) |
---|
12388 | + |
---|
12389 | +def async_foldl(process, unit, iterable): |
---|
12390 | + """ |
---|
12391 | + I perform an asynchronous left fold, similar to Haskell 'foldl process unit iterable'. |
---|
12392 | + Each call to process is eventual. |
---|
12393 | + |
---|
12394 | + I return a Deferred that fires with the result of the fold, or that fails with |
---|
12395 | + the first failure of either 'process' or the iterator. |
---|
12396 | + """ |
---|
12397 | + iterator = iter(iterable) |
---|
12398 | + |
---|
12399 | + def _body(accumulator): |
---|
12400 | + d = defer.maybeDeferred(iterator.next) |
---|
12401 | + def _cb(item): |
---|
12402 | + d2 = defer.maybeDeferred(process, accumulator, item) |
---|
12403 | + d2.addCallback(lambda res: (res, True)) |
---|
12404 | + return d2 |
---|
12405 | + def _eb(f): |
---|
12406 | + if f.trap(StopIteration): |
---|
12407 | + return (accumulator, False) |
---|
12408 | + d.addCallbacks(_cb, _eb) |
---|
12409 | + return d |
---|
12410 | + |
---|
12411 | + return async_accumulate(unit, _body) |
---|
12412 | } |
---|
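A hypothetical illustration of the new helpers: async_iterate stops as soon as 'process' returns a false value, while async_foldl threads an accumulator through every element of the iterable.

    from twisted.internet import defer

    def small_enough(x):
        return defer.succeed(x < 3)

    d1 = async_iterate(small_enough, [1, 2, 3, 4])
    # d1 eventually fires with False: iteration stopped at the element 3

    d2 = async_foldl(lambda acc, x: acc + x, 0, [1, 2, 3])
    # d2 eventually fires with 6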
12413 | [test_storage.py: fix test_status_bad_disk_stats. refs #999 |
---|
12414 | david-sarah@jacaranda.org**20110927071403 |
---|
12415 | Ignore-this: 6108fee69a60962be2df2ad11b483a11 |
---|
12416 | ] hunk ./src/allmydata/storage/backends/disk/disk_backend.py 123 |
---|
12417 | def get_available_space(self): |
---|
12418 | if self._readonly: |
---|
12419 | return 0 |
---|
12420 | - return fileutil.get_available_space(self._sharedir, self._reserved_space) |
---|
12421 | + try: |
---|
12422 | + return fileutil.get_available_space(self._sharedir, self._reserved_space) |
---|
12423 | + except EnvironmentError: |
---|
12424 | + return 0 |
---|
12425 | |
---|
12426 | |
---|
12427 | class DiskShareSet(ShareSet): |
---|
12428 | [Cleanups to disk backend. refs #999 |
---|
12429 | david-sarah@jacaranda.org**20110927071544 |
---|
12430 | Ignore-this: e9d3fd0e85aaf301c04342fffdc8f26 |
---|
12431 | ] { |
---|
12432 | hunk ./src/allmydata/storage/backends/disk/immutable.py 46 |
---|
12433 | |
---|
12434 | sharetype = "immutable" |
---|
12435 | LEASE_SIZE = struct.calcsize(">L32s32sL") |
---|
12436 | - |
---|
12437 | + HEADER = ">LLL" |
---|
12438 | + HEADER_SIZE = struct.calcsize(HEADER) |
---|
12439 | |
---|
12440 | def __init__(self, storageindex, shnum, home, finalhome=None, max_size=None): |
---|
12441 | """ |
---|
12442 | hunk ./src/allmydata/storage/backends/disk/immutable.py 79 |
---|
12443 | # the largest length that can fit into the field. That way, even |
---|
12444 | # if this does happen, the old < v1.3.0 server will still allow |
---|
12445 | # clients to read the first part of the share. |
---|
12446 | - self._home.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) ) |
---|
12447 | - self._lease_offset = max_size + 0x0c |
---|
12448 | + self._home.setContent(struct.pack(self.HEADER, 1, min(2**32-1, max_size), 0) ) |
---|
12449 | + self._lease_offset = self.HEADER_SIZE + max_size |
---|
12450 | self._num_leases = 0 |
---|
12451 | else: |
---|
12452 | f = self._home.open(mode='rb') |
---|
12453 | hunk ./src/allmydata/storage/backends/disk/immutable.py 85 |
---|
12454 | try: |
---|
12455 | - (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc)) |
---|
12456 | + (version, unused, num_leases) = struct.unpack(self.HEADER, f.read(self.HEADER_SIZE)) |
---|
12457 | finally: |
---|
12458 | f.close() |
---|
12459 | if version != 1: |
---|
12460 | hunk ./src/allmydata/storage/backends/disk/immutable.py 229 |
---|
12461 | """Yields a LeaseInfo instance for all leases.""" |
---|
12462 | f = self._home.open(mode='rb') |
---|
12463 | try: |
---|
12464 | - (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc)) |
---|
12465 | + (version, unused, num_leases) = struct.unpack(self.HEADER, f.read(self.HEADER_SIZE)) |
---|
12466 | f.seek(self._lease_offset) |
---|
12467 | for i in range(num_leases): |
---|
12468 | data = f.read(self.LEASE_SIZE) |
---|
12469 | } |
---|
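For reference, the v1 immutable container header that HEADER describes consists of three big-endian 32-bit fields: the container version, a length field kept only for compatibility with servers before v1.3.0, and the number of leases:

    import struct

    HEADER = ">LLL"
    header = struct.pack(HEADER, 1, 0, 0)
    (version, unused, num_leases) = struct.unpack(HEADER, header)
    assert (version, num_leases) == (1, 0)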
12470 | [Cleanups to S3 backend (not including Deferred changes). refs #999 |
---|
12471 | david-sarah@jacaranda.org**20110927071855 |
---|
12472 | Ignore-this: f0dca788190d92b1edb1ee1498fb34dc |
---|
12473 | ] { |
---|
12474 | hunk ./src/allmydata/storage/backends/s3/immutable.py 7 |
---|
12475 | from zope.interface import implements |
---|
12476 | |
---|
12477 | from allmydata.interfaces import IStoredShare |
---|
12478 | + |
---|
12479 | from allmydata.util.assertutil import precondition |
---|
12480 | from allmydata.storage.common import si_b2a, UnknownImmutableContainerVersionError, DataTooLargeError |
---|
12481 | |
---|
12482 | hunk ./src/allmydata/storage/backends/s3/immutable.py 29 |
---|
12483 | |
---|
12484 | sharetype = "immutable" |
---|
12485 | LEASE_SIZE = struct.calcsize(">L32s32sL") # for compatibility |
---|
12486 | + HEADER = ">LLL" |
---|
12487 | + HEADER_SIZE = struct.calcsize(HEADER) |
---|
12488 | |
---|
12489 | hunk ./src/allmydata/storage/backends/s3/immutable.py 32 |
---|
12490 | - |
---|
12491 | - def __init__(self, storageindex, shnum, s3bucket, create=False, max_size=None): |
---|
12492 | + def __init__(self, storageindex, shnum, s3bucket, max_size=None, data=None): |
---|
12493 | """ |
---|
12494 | If max_size is not None then I won't allow more than max_size to be written to me. |
---|
12495 | """ |
---|
12496 | hunk ./src/allmydata/storage/backends/s3/immutable.py 36 |
---|
12497 | - precondition((max_size is not None) or not create, max_size, create) |
---|
12498 | + precondition((max_size is not None) or (data is not None), max_size, data) |
---|
12499 | self._storageindex = storageindex |
---|
12500 | hunk ./src/allmydata/storage/backends/s3/immutable.py 38 |
---|
12501 | + self._shnum = shnum |
---|
12502 | + self._s3bucket = s3bucket |
---|
12503 | self._max_size = max_size |
---|
12504 | hunk ./src/allmydata/storage/backends/s3/immutable.py 41 |
---|
12505 | + self._data = data |
---|
12506 | |
---|
12507 | hunk ./src/allmydata/storage/backends/s3/immutable.py 43 |
---|
12508 | - self._s3bucket = s3bucket |
---|
12509 | - si_s = si_b2a(storageindex) |
---|
12510 | - self._key = "storage/shares/%s/%s/%d" % (si_s[:2], si_s, shnum) |
---|
12511 | - self._shnum = shnum |
---|
12512 | + sistr = self.get_storage_index_string() |
---|
12513 | + self._key = "shares/%s/%s/%d" % (sistr[:2], sistr, shnum) |
---|
12514 | |
---|
12515 | hunk ./src/allmydata/storage/backends/s3/immutable.py 46 |
---|
12516 | - if create: |
---|
12517 | + if data is None: # creating share |
---|
12518 | # The second field, which was the four-byte share data length in |
---|
12519 | # Tahoe-LAFS versions prior to 1.3.0, is not used; we always write 0. |
---|
12520 | # We also write 0 for the number of leases. |
---|
12521 | hunk ./src/allmydata/storage/backends/s3/immutable.py 50 |
---|
12522 | - self._home.setContent(struct.pack(">LLL", 1, 0, 0) ) |
---|
12523 | - self._end_offset = max_size + 0x0c |
---|
12524 | - |
---|
12525 | - # TODO: start write to S3. |
---|
12526 | + # an S3 share has no local file; buffer the container header for upload |
---|
12527 | + self._end_offset = self.HEADER_SIZE + max_size |
---|
12528 | + self._size = self.HEADER_SIZE |
---|
12529 | + self._writes = [struct.pack(self.HEADER, 1, 0, 0)] |
---|
12530 | else: |
---|
12531 | hunk ./src/allmydata/storage/backends/s3/immutable.py 55 |
---|
12532 | - # TODO: get header |
---|
12533 | - header = "\x00"*12 |
---|
12534 | - (version, unused, num_leases) = struct.unpack(">LLL", header) |
---|
12535 | + (version, unused, num_leases) = struct.unpack(self.HEADER, data[:self.HEADER_SIZE]) |
---|
12536 | |
---|
12537 | if version != 1: |
---|
12538 | hunk ./src/allmydata/storage/backends/s3/immutable.py 58 |
---|
12539 | - msg = "sharefile %s had version %d but we wanted 1" % \ |
---|
12540 | - (self._home, version) |
---|
12541 | + msg = "%r had version %d but we wanted 1" % (self, version) |
---|
12542 | raise UnknownImmutableContainerVersionError(msg) |
---|
12543 | |
---|
12544 | # We cannot write leases in share files, but allow them to be present |
---|
12545 | hunk ./src/allmydata/storage/backends/s3/immutable.py 64 |
---|
12546 | # in case a share file is copied from a disk backend, or in case we |
---|
12547 | # need them in future. |
---|
12548 | - # TODO: filesize = size of S3 object |
---|
12549 | - filesize = 0 |
---|
12550 | - self._end_offset = filesize - (num_leases * self.LEASE_SIZE) |
---|
12551 | - self._data_offset = 0xc |
---|
12552 | + self._size = len(data) |
---|
12553 | + self._end_offset = self._size - (num_leases * self.LEASE_SIZE) |
---|
12554 | + self._data_offset = self.HEADER_SIZE |
---|
12555 | |
---|
12556 | def __repr__(self): |
---|
12557 | hunk ./src/allmydata/storage/backends/s3/immutable.py 69 |
---|
12558 | - return ("<ImmutableS3Share %s:%r at %r>" |
---|
12559 | - % (si_b2a(self._storageindex), self._shnum, self._key)) |
---|
12560 | + return ("<ImmutableS3Share at %r>" % (self._key,)) |
---|
12561 | |
---|
12562 | def close(self): |
---|
12563 | # TODO: finalize write to S3. |
---|
12564 | hunk ./src/allmydata/storage/backends/s3/immutable.py 88 |
---|
12565 | return self._shnum |
---|
12566 | |
---|
12567 | def unlink(self): |
---|
12568 | - # TODO: remove the S3 object. |
---|
12569 | - pass |
---|
12570 | + self._data = None |
---|
12571 | + self._writes = None |
---|
12572 | + return self._s3bucket.delete_object(self._key) |
---|
12573 | |
---|
12574 | def get_allocated_size(self): |
---|
12575 | return self._max_size |
---|
12576 | hunk ./src/allmydata/storage/backends/s3/immutable.py 126 |
---|
12577 | if self._max_size is not None and offset+length > self._max_size: |
---|
12578 | raise DataTooLargeError(self._max_size, offset, length) |
---|
12579 | |
---|
12580 | - # TODO: write data to S3. If offset > self._size, fill the space |
---|
12581 | - # between with zeroes. |
---|
12582 | - |
---|
12583 | + if offset > self._size: |
---|
12584 | + self._writes.append("\x00" * (offset - self._size)) |
---|
12585 | + self._writes.append(data) |
---|
12586 | self._size = offset + len(data) |
---|
12587 | |
---|
12588 | def add_lease(self, lease_info): |
---|
12589 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 2 |
---|
12590 | |
---|
12591 | -from zope.interface import implements |
---|
12592 | +import re |
---|
12593 | + |
---|
12594 | +from zope.interface import implements, Interface |
---|
12595 | from allmydata.interfaces import IStorageBackend, IShareSet |
---|
12596 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 6 |
---|
12597 | -from allmydata.storage.common import si_b2a, si_a2b |
---|
12598 | + |
---|
12599 | +from allmydata.storage.common import si_a2b |
---|
12600 | from allmydata.storage.bucket import BucketWriter |
---|
12601 | from allmydata.storage.backends.base import Backend, ShareSet |
---|
12602 | from allmydata.storage.backends.s3.immutable import ImmutableS3Share |
---|
12603 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 15 |
---|
12604 | |
---|
12605 | # The S3 bucket has keys of the form shares/$PREFIX/$STORAGEINDEX/$SHNUM . |
---|
12606 | |
---|
12607 | +NUM_RE = re.compile("^[0-9]+$") |
---|
12608 | + |
---|
12609 | + |
---|
12610 | +class IS3Bucket(Interface): |
---|
12611 | + """ |
---|
12612 | + I represent an S3 bucket. |
---|
12613 | + """ |
---|
12614 | + def create(self): |
---|
12615 | + """ |
---|
12616 | + Create this bucket. |
---|
12617 | + """ |
---|
12618 | + |
---|
12619 | + def delete(self): |
---|
12620 | + """ |
---|
12621 | + Delete this bucket. |
---|
12622 | + The bucket must be empty before it can be deleted. |
---|
12623 | + """ |
---|
12624 | + |
---|
12625 | + def list_objects(self, prefix=""): |
---|
12626 | + """ |
---|
12627 | + Get a list of all the objects in this bucket whose object names start with |
---|
12628 | + the given prefix. |
---|
12629 | + """ |
---|
12630 | + |
---|
12631 | + def put_object(self, object_name, data, content_type=None, metadata={}): |
---|
12632 | + """ |
---|
12633 | + Put an object in this bucket. |
---|
12634 | + Any existing object of the same name will be replaced. |
---|
12635 | + """ |
---|
12636 | + |
---|
12637 | + def get_object(self, object_name): |
---|
12638 | + """ |
---|
12639 | + Get an object from this bucket. |
---|
12640 | + """ |
---|
12641 | + |
---|
12642 | + def head_object(self, object_name): |
---|
12643 | + """ |
---|
12644 | + Retrieve object metadata only. |
---|
12645 | + """ |
---|
12646 | + |
---|
12647 | + def delete_object(self, object_name): |
---|
12648 | + """ |
---|
12649 | + Delete an object from this bucket. |
---|
12650 | + Once deleted, there is no method to restore or undelete an object. |
---|
12651 | + """ |
---|
12652 | + |
---|
12653 | + |
---|
12654 | class S3Backend(Backend): |
---|
12655 | implements(IStorageBackend) |
---|
12656 | |
---|
12657 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 74 |
---|
12658 | else: |
---|
12659 | self._max_space = int(max_space) |
---|
12660 | |
---|
12661 | - # TODO: any set-up for S3? |
---|
12662 | - |
---|
12663 | # we don't actually create the corruption-advisory dir until necessary |
---|
12664 | self._corruption_advisory_dir = corruption_advisory_dir |
---|
12665 | |
---|
12666 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 103 |
---|
12667 | def __init__(self, storageindex, s3bucket): |
---|
12668 | ShareSet.__init__(self, storageindex) |
---|
12669 | self._s3bucket = s3bucket |
---|
12670 | + sistr = self.get_storage_index_string() |
---|
12671 | + self._key = 'shares/%s/%s/' % (sistr[:2], sistr) |
---|
12672 | |
---|
12673 | def get_overhead(self): |
---|
12674 | return 0 |
---|
12675 | hunk ./src/allmydata/storage/backends/s3/s3_backend.py 129 |
---|
12676 | def _create_mutable_share(self, storageserver, shnum, write_enabler): |
---|
12677 | # TODO |
---|
12678 | serverid = storageserver.get_serverid() |
---|
12679 | - return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, write_enabler, storageserver) |
---|
12680 | + return MutableS3Share(self.get_storage_index(), shnum, self._s3bucket, serverid, |
---|
12681 | + write_enabler, storageserver) |
---|
12682 | |
---|
12683 | def _clean_up_after_unlink(self): |
---|
12684 | pass |
---|
12685 | } |
---|
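ImmutableS3Share.close is still marked '# TODO: finalize write to S3.' above. Given the write buffering introduced in this patch and the IS3Bucket interface, finalization might look roughly like the following sketch; the single-object upload strategy is an assumption, not something these patches commit to:

    # Hedged sketch only: upload the accumulated writes as one S3 object.
    def close(self):
        data = "".join(self._writes)
        self._writes = None
        return self._s3bucket.put_object(self._key, data)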
12686 | [test_storage.py: fix test_no_st_blocks. refs #999 |
---|
12687 | david-sarah@jacaranda.org**20110927072848 |
---|
12688 | Ignore-this: 5f12b784920f87d09c97c676d0afa6f8 |
---|
12689 | ] { |
---|
12690 | hunk ./src/allmydata/test/test_storage.py 3034 |
---|
12691 | LeaseCheckerClass = InstrumentedLeaseCheckingCrawler |
---|
12692 | |
---|
12693 | |
---|
12694 | -class BrokenStatResults: |
---|
12695 | - pass |
---|
-
-class No_ST_BLOCKS_LeaseCheckingCrawler(LeaseCheckingCrawler):
-    def stat(self, fn):
-        s = os.stat(fn)
-        bsr = BrokenStatResults()
-        for attrname in dir(s):
-            if attrname.startswith("_"):
-                continue
-            if attrname == "st_blocks":
-                continue
-            setattr(bsr, attrname, getattr(s, attrname))
-        return bsr
-
-class No_ST_BLOCKS_StorageServer(StorageServer):
-    LeaseCheckerClass = No_ST_BLOCKS_LeaseCheckingCrawler
-
-
 class LeaseCrawler(unittest.TestCase, pollmixin.PollMixin, WebRenderingMixin):

     def setUp(self):
hunk ./src/allmydata/test/test_storage.py 3830
         return d

     def test_no_st_blocks(self):
-        basedir = "storage/LeaseCrawler/no_st_blocks"
-        fp = FilePath(basedir)
-        backend = DiskBackend(fp)
+        # TODO: replace with @patch that supports Deferreds.

hunk ./src/allmydata/test/test_storage.py 3832
-        # A negative 'override_lease_duration' means that the "configured-"
-        # space-recovered counts will be non-zero, since all shares will have
-        # expired by then.
-        expiration_policy = {
-            'enabled': True,
-            'mode': 'age',
-            'override_lease_duration': -1000,
-            'sharetypes': ('mutable', 'immutable'),
-        }
-        ss = No_ST_BLOCKS_StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+        class BrokenStatResults:
+            pass

hunk ./src/allmydata/test/test_storage.py 3835
-        # make it start sooner than usual.
-        lc = ss.lease_checker
-        lc.slow_start = 0
+        def call_stat(fn):
+            s = self.old_os_stat(fn)
+            bsr = BrokenStatResults()
+            for attrname in dir(s):
+                if attrname.startswith("_"):
+                    continue
+                if attrname == "st_blocks":
+                    continue
+                setattr(bsr, attrname, getattr(s, attrname))
+            return bsr

hunk ./src/allmydata/test/test_storage.py 3846
-        self.make_shares(ss)
-        ss.setServiceParent(self.s)
-        def _wait():
-            return bool(lc.get_state()["last-cycle-finished"] is not None)
-        d = self.poll(_wait)
+        def _cleanup(res):
+            os.stat = self.old_os_stat
+            return res

hunk ./src/allmydata/test/test_storage.py 3850
-        def _check(ignored):
-            s = lc.get_state()
-            last = s["history"][0]
-            rec = last["space-recovered"]
-            self.failUnlessEqual(rec["configured-buckets"], 4)
-            self.failUnlessEqual(rec["configured-shares"], 4)
-            self.failUnless(rec["configured-sharebytes"] > 0,
-                            rec["configured-sharebytes"])
-            # without the .st_blocks field in os.stat() results, we should be
-            # reporting diskbytes==sharebytes
-            self.failUnlessEqual(rec["configured-sharebytes"],
-                                 rec["configured-diskbytes"])
-        d.addCallback(_check)
-        return d
+        self.old_os_stat = os.stat
+        try:
+            os.stat = call_stat
+
+            basedir = "storage/LeaseCrawler/no_st_blocks"
+            fp = FilePath(basedir)
+            backend = DiskBackend(fp)
+
+            # A negative 'override_lease_duration' means that the "configured-"
+            # space-recovered counts will be non-zero, since all shares will have
+            # expired by then.
+            expiration_policy = {
+                'enabled': True,
+                'mode': 'age',
+                'override_lease_duration': -1000,
+                'sharetypes': ('mutable', 'immutable'),
+            }
+            ss = StorageServer("\x00" * 20, backend, fp, expiration_policy=expiration_policy)
+
+            # make it start sooner than usual.
+            lc = ss.lease_checker
+            lc.slow_start = 0
+
+            d = defer.succeed(None)
+            d.addCallback(lambda ign: self.make_shares(ss))
+            d.addCallback(lambda ign: ss.setServiceParent(self.s))
+            def _wait():
+                return bool(lc.get_state()["last-cycle-finished"] is not None)
+            d.addCallback(lambda ign: self.poll(_wait))
+
+            def _check(ignored):
+                s = lc.get_state()
+                last = s["history"][0]
+                rec = last["space-recovered"]
+                self.failUnlessEqual(rec["configured-buckets"], 4)
+                self.failUnlessEqual(rec["configured-shares"], 4)
+                self.failUnless(rec["configured-sharebytes"] > 0,
+                                rec["configured-sharebytes"])
+                # without the .st_blocks field in os.stat() results, we should be
+                # reporting diskbytes==sharebytes
+                self.failUnlessEqual(rec["configured-sharebytes"],
+                                     rec["configured-diskbytes"])
+            d.addCallback(_check)
+            d.addBoth(_cleanup)
+            return d
+        finally:
+            _cleanup(None)

     def test_share_corruption(self):
         self._poll_should_ignore_these_errors = [
}
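The patch above replaces the No_ST_BLOCKS_* helper classes with direct
monkeypatching of os.stat inside the test. As a minimal, standalone sketch
of that idiom -- the names here are illustrative, not Tahoe-LAFS APIs:

    import os

    class BrokenStatResults:
        pass

    _real_stat = os.stat

    def stat_without_st_blocks(path):
        # copy every public attribute of the real stat result except
        # st_blocks, simulating a platform that does not report it
        s = _real_stat(path)
        bsr = BrokenStatResults()
        for name in dir(s):
            if not name.startswith("_") and name != "st_blocks":
                setattr(bsr, name, getattr(s, name))
        return bsr

    os.stat = stat_without_st_blocks
    try:
        assert not hasattr(os.stat("."), "st_blocks")
    finally:
        os.stat = _real_stat  # always restore the real function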
[mutable/publish.py: resolve conflicting patches. refs #999
david-sarah@jacaranda.org**20110927073530
 Ignore-this: 6154a113723dc93148151288bd032439
] {
hunk ./src/allmydata/mutable/publish.py 6
 import os, time
 from StringIO import StringIO
 from itertools import count
-from copy import copy
 from zope.interface import implements
 from twisted.internet import defer
 from twisted.python import failure
hunk ./src/allmydata/mutable/publish.py 867
         ds = []
         verification_key = self._pubkey.serialize()

-
-        # TODO: Bad, since we remove from this same dict. We need to
-        # make a copy, or just use a non-iterated value.
-        for (shnum, writer) in self.writers.iteritems():
+        for (shnum, writer) in self.writers.copy().iteritems():
             writer.put_verification_key(verification_key)
             self.num_outstanding += 1
             def _no_longer_outstanding(res):
}
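The hunk above is the heart of the #393 fix: callbacks fired while the loop
runs can delete entries from self.writers, and mutating a dict during
iteration raises "dictionary changed size during iteration". Iterating over
a snapshot sidesteps that; a minimal illustration with made-up values:

    writers = {0: "w0", 1: "w1", 2: "w2"}
    for shnum, writer in writers.copy().items():
        # safe: the loop walks a snapshot, so deleting from the live
        # dict cannot change the size of what is being iterated
        del writers[shnum]
    assert writers == {}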

Context:

[docs/configuration.rst: add section about the types of node, and clarify when setting web.port enables web-API service. fixes #1444
zooko@zooko.com**20110926203801
 Ignore-this: ab94d470c68e720101a7ff3c207a719e
]
[TAG allmydata-tahoe-1.9.0a2
warner@lothar.com**20110925234811
 Ignore-this: e9649c58f9c9017a7d55008938dba64f
]
[NEWS: tidy up a little bit, reprioritize some items, hide some non-user-visible items
warner@lothar.com**20110925233529
 Ignore-this: 61f334cc3fa2539742c3e5d2801aee81
]
[docs: fix some broken .rst links. refs #1542
david-sarah@jacaranda.org**20110925051001
 Ignore-this: 5714ee650abfcaab0914537e1f206972
]
[mutable/publish.py: fix an unused import. refs #1542
david-sarah@jacaranda.org**20110925052206
 Ignore-this: 2d69ac9e605e789c0aedfecb8877b7d7
]
[NEWS: fix .rst formatting.
david-sarah@jacaranda.org**20110925050119
 Ignore-this: aa1d20acd23bdb8f8f6d0fa048ea0277
]
[NEWS: updates for 1.9alpha2.
david-sarah@jacaranda.org**20110925045343
 Ignore-this: d2c44e4e05d2ed662b7adfd2e43928bc
]
[mutable/layout.py: make unpack_sdmf_checkstring and unpack_mdmf_checkstring more similar, and change an assert to give a more useful message if it fails. refs #1540
david-sarah@jacaranda.org**20110925023651
 Ignore-this: 977aaa8cb16e06a6dcc3e27cb6e23956
]
[mutable/publish: handle unknown mutable share formats when handling errors
kevan@isnotajoke.com**20110925004305
 Ignore-this: 4d5fa44ef7d777c432eb10c9584ad51f
]
[mutable/layout: break unpack_checkstring into unpack_mdmf_checkstring and unpack_sdmf_checkstring, add distinguisher function for checkstrings
kevan@isnotajoke.com**20110925004134
 Ignore-this: 57f49ed5a72e418a69c7286a225cc8fb
]
[test/test_mutable: reenable mdmf publish surprise test
kevan@isnotajoke.com**20110924235415
 Ignore-this: f752e47a703684491305cc83d16248fb
]
[mutable/publish: use unpack_mdmf_checkstring and unpack_sdmf_checkstring instead of unpack_checkstring. fixes #1540
kevan@isnotajoke.com**20110924235137
 Ignore-this: 52ca3d9627b8b0ba758367b2bd6c7085
]
[mutable/publish.py: copy the self.writers dict before iterating over it, since we remove elements from it during the iteration. refs #393
david-sarah@jacaranda.org**20110924211208
 Ignore-this: 76d4066b55d50ace2a34b87443b39094
]
[mutable/publish.py: simplify by refactoring self.outstanding to self.num_outstanding. refs #393
david-sarah@jacaranda.org**20110924205004
 Ignore-this: 902768cfc529ae13ae0b7f67768a3643
]
[test_mutable.py: update SkipTest message for test_publish_surprise_mdmf to reference the right ticket number. refs #1540.
david-sarah@jacaranda.org**20110923211622
 Ignore-this: 44f16a6817a6b75930bbba18b0a516be
]
[control.py: unbreak speed-test: overwrite() wants a MutableData, not str
Brian Warner <warner@lothar.com>**20110923073748
 Ignore-this: 7dad7aff3d66165868a64ae22d225fa3

 Really, all the upload/modify APIs should take a string or a filehandle, and
 internally wrap it as needed. Callers should not need to be aware of
 Uploadable() or MutableData() classes.
]
[test_mutable.py: skip test_publish_surprise_mdmf, which is causing an error. refs #1534, #393
david-sarah@jacaranda.org**20110920183319
 Ignore-this: 6fb020e09e8de437cbcc2c9f57835b31
]
[test/test_mutable: write publish surprise test for MDMF, rename existing test_publish_surprise to clarify that it is for SDMF
kevan@isnotajoke.com**20110918003657
 Ignore-this: 722c507e8f5b537ff920e0555951059a
]
[test/test_mutable: refactor publish surprise test into common test fixture, rewrite test_publish_surprise to use test fixture
kevan@isnotajoke.com**20110918003533
 Ignore-this: 6f135888d400a99a09b5f9a4be443b6e
]
[mutable/publish: add errback immediately after write, don't consume errors from other parts of the publisher
kevan@isnotajoke.com**20110917234708
 Ignore-this: 12bf6b0918a5dc5ffc30ece669fad51d
]
[.darcs-boringfile: minor cleanups.
david-sarah@jacaranda.org**20110920154918
 Ignore-this: cab78e30d293da7e2832207dbee2ffeb
]
[uri.py: fix two interface violations in verifier URI classes. refs #1474
david-sarah@jacaranda.org**20110920030156
 Ignore-this: 454ddd1419556cb1d7576d914cb19598
]
[misc/coding_tools/check_interfaces.py: report all violations rather than only one for a given class, by including a forked version of verifyClass. refs #1474
david-sarah@jacaranda.org**20110916223450
 Ignore-this: 927efeecf4d12588316826a4b3479aa9
]
[misc/coding_tools/check_interfaces.py: use os.walk instead of FilePath, since this script shouldn't really depend on Twisted. refs #1474
david-sarah@jacaranda.org**20110916212633
 Ignore-this: 46eeb4236b34375227dac71ef53f5428
]
[misc/coding_tools/check-interfaces.py: reduce false-positives by adding Dummy* to the set of excluded classnames, and bench-* to the set of excluded basenames. refs #1474
david-sarah@jacaranda.org**20110916212624
 Ignore-this: 4e78f6e6fe6c0e9be9df826a0e206804
]
[Add a script 'misc/coding_tools/check-interfaces.py' that checks whether zope interfaces are enforced. Also add 'check-interfaces', 'version-and-path', and 'code-checks' targets to the Makefile. fixes #1474
david-sarah@jacaranda.org**20110915161532
 Ignore-this: 32d9bdc5bc4a86d21e927724560ad4b4
]
[Make platform-detection code tolerate linux-3.0, patch by zooko.
Brian Warner <warner@lothar.com>**20110915202620
 Ignore-this: af63cf9177ae531984dea7a1cad03762

 Otherwise address-autodetection can't find ifconfig. refs #1536
]
[test_web.py: fix a bug in _count_leases that was causing us to check only the lease count of one share file, not of all share files as intended.
david-sarah@jacaranda.org**20110915185126
 Ignore-this: d96632bc48d770b9b577cda1bbd8ff94
]
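The _count_leases fix is a reminder of the accumulate-versus-return shape:
the broken helper reported a count after inspecting one share file instead
of totalling across all of them. A hypothetical sketch of the corrected
shape (count_leases and get_leases are illustrative names):

    def count_leases(sharefiles):
        # accumulate over every share file; a 'return' inside the
        # loop would report only the first file's lease count
        total = 0
        for sf in sharefiles:
            total += len(list(sf.get_leases()))
        return total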
[docs: insert a newline at the beginning of known_issues.rst to see if this makes it render more nicely in trac
zooko@zooko.com**20110914064728
 Ignore-this: aca15190fa22083c5d4114d3965f5d65
]
[docs: remove the coding: utf-8 declaration at the top of known_issues.rst, since the trac rendering doesn't hide it
zooko@zooko.com**20110914055713
 Ignore-this: 941ed32f83ead377171aa7a6bd198fcf
]
[docs: more cleanup of known_issues.rst -- now it passes "rst2html --verbose" without comment
zooko@zooko.com**20110914055419
 Ignore-this: 5505b3d76934bd97d0312cc59ed53879
]
[docs: more formatting improvements to known_issues.rst
zooko@zooko.com**20110914051639
 Ignore-this: 9ae9230ec9a38a312cbacaf370826691
]
[docs: reformatting of known_issues.rst
zooko@zooko.com**20110914050240
 Ignore-this: b8be0375079fb478be9d07500f9aaa87
]
[docs: fix formatting error in docs/known_issues.rst
zooko@zooko.com**20110914045909
 Ignore-this: f73fe74ad2b9e655aa0c6075acced15a
]
[merge Tahoe-LAFS v1.8.3 release announcement with trunk
zooko@zooko.com**20110913210544
 Ignore-this: 163f2c3ddacca387d7308e4b9332516e
]
[docs: release notes for Tahoe-LAFS v1.8.3
zooko@zooko.com**20110913165826
 Ignore-this: 84223604985b14733a956d2fbaeb4e9f
]
[tests: bump up the timeout in this test that fails on FreeStorm's CentOS in order to see if it is just very slow
zooko@zooko.com**20110913024255
 Ignore-this: 6a86d691e878cec583722faad06fb8e4
]
[interfaces: document that the 'fills-holes-with-zero-bytes' key should be used to detect whether a storage server has that behavior. refs #1528
david-sarah@jacaranda.org**20110913002843
 Ignore-this: 1a00a6029d40f6792af48c5578c1fd69
]
[CREDITS: more CREDITS for Kevan and David-Sarah
zooko@zooko.com**20110912223357
 Ignore-this: 4ea8f0d6f2918171d2f5359c25ad1ada
]
[merge NEWS about the mutable file bounds fixes with NEWS about work-in-progress
zooko@zooko.com**20110913205521
 Ignore-this: 4289a4225f848d6ae6860dd39bc92fa8
]
[doc: add NEWS item about fixes to potential palimpsest issues in mutable files
zooko@zooko.com**20110912223329
 Ignore-this: 9d63c95ddf95c7d5453c94a1ba4d406a
 ref. #1528
]
[merge the NEWS about the security fix (#1528) with the work-in-progress NEWS
zooko@zooko.com**20110913205153
 Ignore-this: 88e88a2ad140238c62010cf7c66953fc
]
[doc: add NEWS entry about the issue which allows unauthorized deletion of shares
zooko@zooko.com**20110912223246
 Ignore-this: 77e06d09103d2ef6bb51ea3e5d6e80b0
 ref. #1528
]
[doc: add entry in known_issues.rst about the issue which allows unauthorized deletion of shares
zooko@zooko.com**20110912223135
 Ignore-this: b26c6ea96b6c8740b93da1f602b5a4cd
 ref. #1528
]
[storage: more paranoid handling of bounds and palimpsests in mutable share files
zooko@zooko.com**20110912222655
 Ignore-this: a20782fa423779ee851ea086901e1507
 * storage server ignores requests to extend shares by sending a new_length
 * storage server fills exposed holes (created by sending a write vector whose offset begins after the end of the current data) with 0 to avoid "palimpsest" exposure of previous contents
 * storage server zeroes out lease info at the old location when moving it to a new location
 ref. #1528
]
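The hole-filling rule in the second bullet can be pictured with a small
sketch. This is an in-memory model of a share's data, offered as
illustration only, not the server's actual share-file code:

    def apply_write_vector(share_data, offset, new_data):
        # if the write begins past the current end of data, fill the
        # exposed gap with zero bytes so that whatever previously
        # occupied that region on disk can never be read back
        if offset > len(share_data):
            share_data += b"\x00" * (offset - len(share_data))
        return (share_data[:offset] + new_data +
                share_data[offset + len(new_data):])

    assert apply_write_vector(b"AB", 4, b"CD") == b"AB\x00\x00CD"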
[storage: test that the storage server ignores requests to extend shares by sending a new_length, and that the storage server fills exposed holes with 0 to avoid "palimpsest" exposure of previous contents
zooko@zooko.com**20110912222554
 Ignore-this: 61ebd7b11250963efdf5b1734a35271
 ref. #1528
]
[immutable: prevent clients from reading past the end of share data, which would allow them to learn the cancellation secret
zooko@zooko.com**20110912222458
 Ignore-this: da1ebd31433ea052087b75b2e3480c25
 Declare explicitly that we prevent this problem in the server's version dict.
 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
]
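The read-past-the-end fix amounts to clamping every client read to the
share's data segment, because the lease info (which contains the
cancellation secret) is stored just past the data. A hypothetical sketch,
not the server's actual code:

    def read_share_data(data_length, offset, length, read_at):
        # clamp the requested range to [0, data_length) so bytes
        # stored after the data, such as lease info, are unreachable
        if offset >= data_length:
            return b""
        length = min(length, data_length - offset)
        return read_at(offset, length)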
[storage: remove the storage server's "remote_cancel_lease" function
zooko@zooko.com**20110912222331
 Ignore-this: 1c32dee50e0981408576daffad648c50
 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
 fixes #1528 (there are two patches that are each a sufficient fix to #1528 and this is one of them)
]
[storage: test that the storage server does *not* have a "remote_cancel_lease" function
zooko@zooko.com**20110912222324
 Ignore-this: 21c652009704652d35f34651f98dd403
 We're removing this function because it is currently unused, because it is dangerous, and because the bug described in #1528 leaks the cancellation secret, which allows anyone who knows a file's storage index to abuse this function to delete shares of that file.
 ref. #1528
]
[immutable: test whether the server allows clients to read past the end of share data, which would allow them to learn the cancellation secret
zooko@zooko.com**20110912221201
 Ignore-this: 376e47b346c713d37096531491176349
 Also test whether the server explicitly declares that it prevents this problem.
 ref #1528
]
[Retrieve._activate_enough_peers: rewrite Verify logic
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 9367c11e1eacbf025f75ce034030d717
]
[Retrieve: implement/test stopProducing
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 47b2c3df7dc69835e0a066ca12e3c178
]
[move DownloadStopped from download.common to interfaces
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 8572acd3bb16e50341dbed8eb1d90a50
]
[retrieve.py: remove vestigial self._validated_readers
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: faab2ec14e314a53a2ffb714de626e2d
]
[Retrieve: rewrite flow-control: use a top-level loop() to catch all errors
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: e162d2cd53b3d3144fc6bc757e2c7714

 This ought to close the potential for dropped errors and hanging downloads.
 Verify needs to be examined, I may have broken it, although all tests pass.
]
[Retrieve: merge _validate_active_prefixes into _add_active_peers
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: d3ead31e17e69394ae7058eeb5beaf4c
]
[Retrieve: remove the initial prefix-is-still-good check
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: da66ee51c894eaa4e862e2dffb458acc

 This check needs to be done with each fetch from the storage server, to
 detect when someone has changed the share (i.e. our servermap goes stale).
 Doing it just once at the beginning of retrieve isn't enough: a write might
 occur after the first segment but before the second, etc.

 _try_to_validate_prefix() was not removed: it will be used by the future
 check-with-each-fetch code.

 test_mutable.Roundtrip.test_corrupt_all_seqnum_late was disabled, since it
 fails until this check is brought back. (the corruption it applies only
 touches the prefix, not the block data, so the check-less retrieve actually
 tolerates it). Don't forget to re-enable it once the check is brought back.
]
[MDMFSlotReadProxy: remove the queue
Brian Warner <warner@lothar.com>**20110909181150
 Ignore-this: 96673cb8dda7a87a423de2f4897d66d2

 This is a neat trick to reduce Foolscap overhead, but the need for an
 explicit flush() complicates the Retrieve path and makes it prone to
 lost-progress bugs.

 Also change test_mutable.FakeStorageServer to tolerate multiple reads of the
 same share in a row, a limitation exposed by turning off the queue.
]
[rearrange Retrieve: first step, shouldn't change order of execution
Brian Warner <warner@lothar.com>**20110909181149
 Ignore-this: e3006368bfd2802b82ea45c52409e8d6
]
[CLI: test_cli.py -- remove an unnecessary call in test_mkdir_mutable_type. refs #1527
david-sarah@jacaranda.org**20110906183730
 Ignore-this: 122e2ffbee84861c32eda766a57759cf
]
[CLI: improve test for 'tahoe mkdir --mutable-type='. refs #1527
david-sarah@jacaranda.org**20110906183020
 Ignore-this: f1d4598e6c536f0a2b15050b3bc0ef9d
]
[CLI: make the --mutable-type option value for 'tahoe put' and 'tahoe mkdir' case-insensitive, and change --help for these commands accordingly. fixes #1527
david-sarah@jacaranda.org**20110905020922
 Ignore-this: 75a6df0a2df9c467d8c010579e9a024e
]
[cli: make --mutable-type imply --mutable in 'tahoe put'
Kevan Carstensen <kevan@isnotajoke.com>**20110903190920
 Ignore-this: 23336d3c43b2a9554e40c2a11c675e93
]
[SFTP: add a comment about a subtle interaction between OverwriteableFileConsumer and GeneralSFTPFile, and test the case it is commenting on.
david-sarah@jacaranda.org**20110903222304
 Ignore-this: 980c61d4dd0119337f1463a69aeebaf0
]
[improve the storage/mutable.py asserts even more
warner@lothar.com**20110901160543
 Ignore-this: 5b2b13c49bc4034f96e6e3aaaa9a9946
]
[storage/mutable.py: special characters in struct.foo arguments indicate standard as opposed to native sizes, we should be using these characters in these asserts
wilcoxjg@gmail.com**20110901084144
 Ignore-this: 28ace2b2678642e4d7269ddab8c67f30
]
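The struct point is easy to demonstrate: a bare format character is
interpreted with the platform's native size and alignment, while prefixing
the format with a byte-order character such as '>' forces the standard
size, which is what an on-disk share layout needs. A quick sketch:

    import struct

    assert struct.calcsize(">L") == 4   # standard size: always 4 bytes
    assert struct.calcsize(">Q") == 8   # standard size: always 8 bytes
    native = struct.calcsize("L")       # native size: often 8 on 64-bit platforms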
[docs/write_coordination.rst: fix formatting and add more specific warning about access via sshfs.
david-sarah@jacaranda.org**20110831232148
 Ignore-this: cd9c851d3eb4e0a1e088f337c291586c
]
[test_mutable.Version: consolidate some tests, reduce runtime from 19s to 15s
warner@lothar.com**20110831050451
 Ignore-this: 64815284d9e536f8f3798b5f44cf580c
]
[mutable/retrieve: handle the case where self._read_length is 0.
Kevan Carstensen <kevan@isnotajoke.com>**20110830210141
 Ignore-this: fceafbe485851ca53f2774e5a4fd8d30

 Note that the downloader will still fetch a segment for a zero-length
 read, which is wasteful. Fixing that isn't specifically required to fix
 #1512, but it should probably be fixed before 1.9.
]
[NEWS: added summary of all changes since 1.8.2. Needs editing.
Brian Warner <warner@lothar.com>**20110830163205
 Ignore-this: 273899b37a899fc6919b74572454b8b2
]
[test_mutable.Update: only upload the files needed for each test. refs #1500
Brian Warner <warner@lothar.com>**20110829072717
 Ignore-this: 4d2ab4c7523af9054af7ecca9c3d9dc7

 This first step shaves 15% off the runtime: from 139s to 119s on my laptop.
 It also fixes a couple of places where a Deferred was being dropped, which
 would cause two tests to run in parallel and also confuse error reporting.
]
[Let Uploader retain History instead of passing it into upload(). Fixes #1079.
Brian Warner <warner@lothar.com>**20110829063246
 Ignore-this: 3902c58ec12bd4b2d876806248e19f17

 This consistently records all immutable uploads in the Recent Uploads And
 Downloads page, regardless of code path. Previously, certain webapi upload
 operations (like PUT /uri/$DIRCAP/newchildname) failed to pass the History
 object and were left out.
]
[Fix mutable publish/retrieve timing status displays. Fixes #1505.
Brian Warner <warner@lothar.com>**20110828232221
 Ignore-this: 4080ce065cf481b2180fd711c9772dd6

 publish:
 * encrypt and encode times are cumulative, not just current-segment

 retrieve:
 * same for decrypt and decode times
 * update "current status" to include segment number
 * set status to Finished/Failed when download is complete
 * set progress to 1.0 when complete

 More improvements to consider:
 * progress is currently 0% or 100%: should calculate how many segments are
   involved (remembering retrieve can be less than the whole file) and set it
   to a fraction
 * "fetch" time is fuzzy: what we want is to know how much of the delay is not
   our own fault, but since we do decode/decrypt work while waiting for more
   shares, it's not straightforward
]
[Teach 'tahoe debug catalog-shares' about MDMF. Closes #1507.
Brian Warner <warner@lothar.com>**20110828080931
 Ignore-this: 56ef2951db1a648353d7daac6a04c7d1
]
[debug.py: remove some dead comments
Brian Warner <warner@lothar.com>**20110828074556
 Ignore-this: 40e74040dd4d14fd2f4e4baaae506b31
]
[hush pyflakes
Brian Warner <warner@lothar.com>**20110828074254
 Ignore-this: bef9d537a969fa82fe4decc4ba2acb09
]
[MutableFileNode.set_downloader_hints: never depend upon order of dict.values()
Brian Warner <warner@lothar.com>**20110828074103
 Ignore-this: caaf1aa518dbdde4d797b7f335230faa

 The old code was calculating the "extension parameters" (a list) from the
 downloader hints (a dictionary) with hints.values(), which is not stable, and
 would result in corrupted filecaps (with the 'k' and 'segsize' hints
 occasionally swapped). The new code always uses [k,segsize].
]
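The instability is easy to see in spirit: a dict's value order is arbitrary
(particularly in the Python versions of this era), so two hints can
serialize in either order, while indexing by key pins the order down. A
minimal illustration with made-up numbers:

    hints = {"segsize": 131072, "k": 3}
    # fragile: may come out as [131072, 3] or [3, 131072]
    unstable = list(hints.values())
    # stable: the order is fixed by construction
    params = [hints["k"], hints["segsize"]]
    assert params == [3, 131072]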
[layout.py: fix MDMF share layout documentation
Brian Warner <warner@lothar.com>**20110828073921
 Ignore-this: 3f13366fed75b5e31b51ae895450a225
]
[teach 'tahoe debug dump-share' about MDMF and offsets. refs #1507
Brian Warner <warner@lothar.com>**20110828073834
 Ignore-this: 3a9d2ef9c47a72bf1506ba41199a1dea
]
[test_mutable.Version.test_debug: use splitlines() to fix buildslaves
Brian Warner <warner@lothar.com>**20110828064728
 Ignore-this: c7f6245426fc80b9d1ae901d5218246a

 Any slave running in a directory with spaces in the name was miscounting
 shares, causing the test to fail.
]
[test_mutable.Version: exercise 'tahoe debug find-shares' on MDMF. refs #1507
Brian Warner <warner@lothar.com>**20110828005542
 Ignore-this: cb20bea1c28bfa50a72317d70e109672

 Also changes NoNetworkGrid to put shares in storage/shares/ .
]
[test_mutable.py: oops, missed a .todo
Brian Warner <warner@lothar.com>**20110828002118
 Ignore-this: fda09ae86481352b7a627c278d2a3940
]
[test_mutable: merge davidsarah's patch with my Version refactorings
warner@lothar.com**20110827235707
 Ignore-this: b5aaf481c90d99e33827273b5d118fd0
]
[Make the immutable/read-only constraint checking for MDMF URIs identical to that for SSK URIs. refs #393
david-sarah@jacaranda.org**20110823012720
 Ignore-this: e1f59d7ff2007c81dbef2aeb14abd721
]
[Additional tests for MDMF URIs and for zero-length files. refs #393
david-sarah@jacaranda.org**20110823011532
 Ignore-this: a7cc0c09d1d2d72413f9cd227c47a9d5
]
[Additional tests for zero-length partial reads and updates to mutable versions. refs #393
david-sarah@jacaranda.org**20110822014111
 Ignore-this: 5fc6f4d06e11910124e4a277ec8a43ea
]
[test_mutable.Version: factor out some expensive uploads, save 25% runtime
Brian Warner <warner@lothar.com>**20110827232737
 Ignore-this: ea37383eb85ea0894b254fe4dfb45544
]
[SDMF: update filenode with correct k/N after Retrieve. Fixes #1510.
Brian Warner <warner@lothar.com>**20110827225031
 Ignore-this: b50ae6e1045818c400079f118b4ef48

 Without this, we get a regression when modifying a mutable file that was
 created with more shares (larger N) than our current tahoe.cfg. The
 modification attempt creates new versions of the (0,1,..,newN-1) shares, but
 leaves the old versions of the (newN,..,oldN-1) shares alone (and throws an
 assertion error in SDMFSlotWriteProxy.finish_publishing in the process).

 The mixed versions that result (some shares with e.g. N=10, some with N=20,
 such that both versions are recoverable) cause problems for the Publish code,
 even before MDMF landed. Might be related to refs #1390 and refs #1042.
]
[layout.py: annotate assertion to figure out 'tahoe backup' failure
Brian Warner <warner@lothar.com>**20110827195253
 Ignore-this: 9b92b954e3ed0d0f80154fff1ff674e5
]
[Add 'tahoe debug dump-cap' support for MDMF, DIR2-CHK, DIR2-MDMF. refs #1507.
Brian Warner <warner@lothar.com>**20110827195048
 Ignore-this: 61c6af5e33fc88e0251e697a50addb2c

 This also adds tests for all those cases, and fixes an omission in uri.py
 that broke parsing of DIR2-MDMF-Verifier and DIR2-CHK-Verifier.
]
[MDMF: more writable/writeable consistentifications
warner@lothar.com**20110827190602
 Ignore-this: 22492a9e20c1819ddb12091062888b55
]
[MDMF: s/Writable/Writeable/g, for consistency with existing SDMF code
warner@lothar.com**20110827183357
 Ignore-this: 9dd312acedbdb2fc2f7bef0d0fb17c0b
]
[setup.cfg: remove no-longer-supported test_mac_diskimage alias. refs #1479
david-sarah@jacaranda.org**20110826230345
 Ignore-this: 40e908b8937322a290fb8012bfcad02a
]
[test_mutable.Update: increase timeout from 120s to 400s, slaves are failing
Brian Warner <warner@lothar.com>**20110825230140
 Ignore-this: 101b1924a30cdbda9b2e419e95ca15ec
]
[tests: fix check_memory test
zooko@zooko.com**20110825201116
 Ignore-this: 4d66299fa8cb61d2ca04b3f45344d835
 fixes #1503
]
[TAG allmydata-tahoe-1.9.0a1
warner@lothar.com**20110825161122
 Ignore-this: 3cbf49f00dbda58189f893c427f65605
]
Patch bundle hash:
e35abd897ebb14917bfa89262b71dfd6c6556f8b