Ticket #999: work-in-progress-2011-07-14_21_23.darcs.patch

File work-in-progress-2011-07-14_21_23.darcs.patch, 229.5 KB (added by zooko at 2011-07-14T21:24:15Z)
25 patches for repository /home/zooko/playground/tahoe-lafs/pristine:

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The NullBackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests; work done while pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass

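The first patch in this series replaces real filesystem access with mocks, so the code under test never touches the disk. A minimal, hypothetical Python 3 sketch of that technique (the patches themselves use the Python 2 `mock` package and patch `__builtin__.open`; the `read_state` helper and the file names here are illustrative stand-ins, not Tahoe-LAFS code):

```python
# Illustrative sketch only: patching open() so code under test never
# touches a real filesystem. fake_open() plays the role of the
# call_open() side_effect functions in the patch below.
import io
from unittest import mock

def read_state(path):
    # Hypothetical stand-in for code under test that reads a state file.
    with open(path) as f:
        return f.read()

def fake_open(fname, mode='r'):
    # Simulate a missing state file and a present history file.
    if fname.endswith('bucket_counter.state'):
        raise IOError(2, "No such file or directory: %r" % fname)
    return io.StringIO('cycle: 0')

with mock.patch('builtins.open', side_effect=fake_open):
    state = read_state('testdir/lease_checker.history')
    try:
        read_state('testdir/bucket_counter.state')
        missing_raised = False
    except IOError:
        missing_raised = True
```

The recorded tests attach such a `side_effect` function to the mock created by the `@mock.patch` decorator instead of using a context manager; the effect is the same.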
New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests that writing a share produces the expected share file contents. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
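The MockFile class added in the hunk above is a small in-memory file whose write() zero-pads any gap between the end of its buffer and the current seek position, so sparse writes behave as they would on a real share file. A standalone Python 3 rendering of the same idea (bytes instead of Python 2 strings; a sketch, not the recorded code):

```python
class MockFile:
    """In-memory stand-in for a share file (Python 3, bytes-based)."""
    def __init__(self):
        self.buffer = b''
        self.pos = 0
    def seek(self, pos):
        self.pos = pos
    def write(self, data):
        # Zero-fill any gap between the end of the buffer and self.pos,
        # then splice the new data in place, as the patch's MockFile does.
        padlen = self.pos - len(self.buffer)
        if padlen > 0:
            self.buffer += b'\x00' * padlen
        end = self.pos + len(data)
        self.buffer = self.buffer[:self.pos] + data + self.buffer[end:]
        self.pos = end
    def read(self, n):
        return self.buffer[self.pos:self.pos + n]
    def tell(self):
        return self.pos

f = MockFile()
f.seek(4)
f.write(b'abc')   # bytes 0-3 are zero-filled
f.seek(0)
f.write(b'Z')     # overwrite in place without truncating
```

This is why test_write_share can compare `sharefile.buffer` directly against `share_file_data`: the buffer accumulates exactly the bytes a real file would hold.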
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checker()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checker(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
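The hunks in this patch replace StorageServer's direct filesystem calls with delegation to a backend object, and NullBackend is a null-object implementation of that interface: it satisfies every call while touching nothing on disk, which is what lets the tests assert that no filesystem function was ever invoked. A hypothetical Python 3 sketch of the pattern (class and method names mirror the patch, but this is not the recorded code):

```python
# Sketch of the backend-delegation / null-object pattern used above.
class Backend:
    """Abstract backend interface (illustrative)."""
    def get_available_space(self):
        raise NotImplementedError
    def get_bucket_shares(self, storage_index):
        raise NotImplementedError

class NullBackend(Backend):
    """Null object: no shares, no disk access."""
    def get_available_space(self):
        return None   # None means "no disk-stats API / no limit"
    def get_bucket_shares(self, storage_index):
        return set()

class Server:
    """Stand-in for StorageServer: all storage decisions go
    through self.backend rather than os.* calls."""
    def __init__(self, backend):
        self.backend = backend
    def remote_get_version(self):
        remaining = self.backend.get_available_space()
        if remaining is None:
            # No API to get disk stats: advertise effectively unlimited space.
            remaining = 2**64
        return remaining

s = Server(NullBackend())
version_space = s.remote_get_version()
```

Swapping in an FSBackend-style object with real statvfs-backed accounting requires no change to the server itself; that is the point of the plugin refactoring.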
696hunk ./src/allmydata/storage/server.py 444
697         si_s = si_b2a(storage_index)
698         log.msg("storage: get_buckets %s" % si_s)
699         bucketreaders = {} # k: sharenum, v: BucketReader
700-        for shnum, filename in self._get_bucket_shares(storage_index):
701+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
702             bucketreaders[shnum] = BucketReader(self, filename,
703                                                 storage_index, shnum)
704         self.add_latency("get", time.time() - start)
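The `_get_bucket_shares` logic removed above now lives behind the backend as `get_bucket_shares`. A minimal standalone sketch of that contract (function body reconstructed from the removed code; not part of this patch):

```python
import os
import re

NUM_RE = re.compile("^[0-9]+$")

def get_bucket_shares(storagedir):
    """Yield (shnum, pathname) tuples for files that hold shares under
    storagedir; 'shnum' is the integer form of the file's name."""
    try:
        names = os.listdir(storagedir)
    except OSError:
        # Commonly caused by there being no buckets at all.
        return
    for name in names:
        if NUM_RE.match(name):
            yield (int(name), os.path.join(storagedir, name))
```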
705hunk ./src/allmydata/test/test_backends.py 10
706 import mock
707 
708 # This is the code that we're going to be testing.
709-from allmydata.storage.server import StorageServer
710+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
711 
712 # The following share file contents was generated with
713 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
714hunk ./src/allmydata/test/test_backends.py 21
715 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
716 
717 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
718+    @mock.patch('time.time')
719+    @mock.patch('os.mkdir')
720+    @mock.patch('__builtin__.open')
721+    @mock.patch('os.listdir')
722+    @mock.patch('os.path.isdir')
723+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
724+        """ This tests whether a server instance can be constructed
725+        with a null backend. The test fails if the server instance
726+        tries to read from or write to the filesystem. """
727+
728+        # Now begin the test.
729+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
730+
731+        self.failIf(mockisdir.called)
732+        self.failIf(mocklistdir.called)
733+        self.failIf(mockopen.called)
734+        self.failIf(mockmkdir.called)
735+
736+        # You passed!
737+
738+    @mock.patch('time.time')
739+    @mock.patch('os.mkdir')
740     @mock.patch('__builtin__.open')
741hunk ./src/allmydata/test/test_backends.py 44
742-    def test_create_server(self, mockopen):
743-        """ This tests whether a server instance can be constructed. """
744+    @mock.patch('os.listdir')
745+    @mock.patch('os.path.isdir')
746+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
747+        """ This tests whether a server instance can be constructed
748+        with a filesystem backend. To pass the test, it has to use the
749+        filesystem in only the prescribed ways. """
750 
751         def call_open(fname, mode):
752             if fname == 'testdir/bucket_counter.state':
753hunk ./src/allmydata/test/test_backends.py 58
754                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
755             elif fname == 'testdir/lease_checker.history':
756                 return StringIO()
757+            else:
758+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
759         mockopen.side_effect = call_open
760 
761         # Now begin the test.
762hunk ./src/allmydata/test/test_backends.py 63
763-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
764+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
765+
766+        self.failIf(mockisdir.called)
767+        self.failIf(mocklistdir.called)
768+        self.failIf(mockopen.called)
769+        self.failIf(mockmkdir.called)
770+        self.failIf(mocktime.called)
771 
772         # You passed!
773 
774hunk ./src/allmydata/test/test_backends.py 73
775-class TestServer(unittest.TestCase, ReallyEqualMixin):
776+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
777+    def setUp(self):
778+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
779+
780+    @mock.patch('os.mkdir')
781+    @mock.patch('__builtin__.open')
782+    @mock.patch('os.listdir')
783+    @mock.patch('os.path.isdir')
784+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
785+        """ Write a new share. """
786+
787+        # Now begin the test.
788+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
789+        bs[0].remote_write(0, 'a')
790+        self.failIf(mockisdir.called)
791+        self.failIf(mocklistdir.called)
792+        self.failIf(mockopen.called)
793+        self.failIf(mockmkdir.called)
794+
795+    @mock.patch('os.path.exists')
796+    @mock.patch('os.path.getsize')
797+    @mock.patch('__builtin__.open')
798+    @mock.patch('os.listdir')
799+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
800+        """ This tests reading shares via a server with a null
801+        backend. Since the null backend holds no shares,
802+        remote_get_buckets must return an empty set of buckets, and
803+        it must not read from or write to the filesystem. """
806+
807+        # Now begin the test.
808+        bs = self.s.remote_get_buckets('teststorage_index')
809+
810+        self.failUnlessEqual(len(bs), 0)
811+        self.failIf(mocklistdir.called)
812+        self.failIf(mockopen.called)
813+        self.failIf(mockgetsize.called)
814+        self.failIf(mockexists.called)
815+
816+
817+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
818     @mock.patch('__builtin__.open')
819     def setUp(self, mockopen):
820         def call_open(fname, mode):
821hunk ./src/allmydata/test/test_backends.py 126
822                 return StringIO()
823         mockopen.side_effect = call_open
824 
825-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
826-
827+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
828 
829     @mock.patch('time.time')
830     @mock.patch('os.mkdir')
831hunk ./src/allmydata/test/test_backends.py 134
832     @mock.patch('os.listdir')
833     @mock.patch('os.path.isdir')
834     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
835-        """Handle a report of corruption."""
836+        """ Write a new share. """
837 
838         def call_listdir(dirname):
839             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
840hunk ./src/allmydata/test/test_backends.py 173
841         mockopen.side_effect = call_open
842         # Now begin the test.
843         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
844-        print bs
845         bs[0].remote_write(0, 'a')
846         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
847 
848hunk ./src/allmydata/test/test_backends.py 176
849-
850     @mock.patch('os.path.exists')
851     @mock.patch('os.path.getsize')
852     @mock.patch('__builtin__.open')
853hunk ./src/allmydata/test/test_backends.py 218
854 
855         self.failUnlessEqual(len(bs), 1)
856         b = bs[0]
857+        # This read should match by definition; the next two cases cover reads whose behavior is less obvious.
858         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
859         # If you try to read past the end you get as much data as is there.
860         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
861hunk ./src/allmydata/test/test_backends.py 224
862         # If you start reading past the end of the file you get the empty string.
863         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
864+
865+
866}
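The last two assertions above pin down the read-past-EOF semantics. A sketch of those semantics in isolation (hypothetical helper, not part of the patch):

```python
def read_share_slice(data, offset, length):
    """Model of remote_read's boundary behavior: reads beyond the end of
    the data are truncated, and reads that start past the end return ''."""
    if offset >= len(data):
        return ''
    return data[offset:offset + length]
```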
867[snapshot of progress on backend implementation (not suitable for trunk)
868wilcoxjg@gmail.com**20110626053244
869 Ignore-this: 50c764af791c2b99ada8289546806a0a
870] {
871adddir ./src/allmydata/storage/backends
872adddir ./src/allmydata/storage/backends/das
873move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
874adddir ./src/allmydata/storage/backends/null
875hunk ./src/allmydata/interfaces.py 270
876         store that on disk.
877         """
878 
879+class IStorageBackend(Interface):
880+    """
881+    Objects of this kind live on the server side and are used by the
882+    storage server object.
883+    """
884+    def get_available_space(self, reserved_space):
885+        """ Returns available space for share storage in bytes, or
886+        None if this information is not available or if the available
887+        space is unlimited.
888+
889+        If the backend is configured for read-only mode then this will
890+        return 0.
891+
892+        reserved_space is how many bytes to subtract from the answer, so
893+        you can pass how many bytes you would like to leave unused on this
894+        filesystem as reserved_space. """
895+
896+    def get_bucket_shares(self, storage_index):
897+        """ Return an iterable of (shnum, pathname) tuples for the share files held for the given storage_index. """
898+
899+    def get_share(self, storage_index, sharenum):
900+        """ Return the share object for the given storage_index and sharenum, or None if there is no such share. """
901+
902+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
903+        """ Create and return a bucket writer for a new share. """
904+
905+class IStorageBackendShare(Interface):
906+    """
907+    This object provides access to all of the data of a single share.  It
908+    is intended to be evaluated lazily, since in many use cases
909+    substantially less than all of the share data will be accessed.
910+    """
911+    def is_complete(self):
912+        """
913+        Returns the share state, or None if the share does not exist.
914+        """
915+
916 class IStorageBucketWriter(Interface):
917     """
918     Objects of this kind live on the client side.
919hunk ./src/allmydata/interfaces.py 2492
920 
921 class EmptyPathnameComponentError(Exception):
922     """The webapi disallows empty pathname components."""
923+
924+class IShareStore(Interface):
925+    pass
926+
927addfile ./src/allmydata/storage/backends/__init__.py
928addfile ./src/allmydata/storage/backends/das/__init__.py
929addfile ./src/allmydata/storage/backends/das/core.py
930hunk ./src/allmydata/storage/backends/das/core.py 1
931+from allmydata.interfaces import IStorageBackend
932+from allmydata.storage.backends.base import Backend
933+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
934+from allmydata.util.assertutil import precondition
935+
936+import os, re, weakref, struct, time
937+
938+from foolscap.api import Referenceable
939+from twisted.application import service
940+
941+from zope.interface import implements
942+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
943+from allmydata.util import fileutil, idlib, log, time_format
944+import allmydata # for __full_version__
945+
946+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
947+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
948+from allmydata.storage.lease import LeaseInfo
949+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
950+     create_mutable_sharefile
951+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
952+from allmydata.storage.crawler import FSBucketCountingCrawler
953+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
954+
955+from zope.interface import implements
956+
957+class DASCore(Backend):
958+    implements(IStorageBackend)
959+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
960+        Backend.__init__(self)
961+
962+        self._setup_storage(storedir, readonly, reserved_space)
963+        self._setup_corruption_advisory()
964+        self._setup_bucket_counter()
965+        self._setup_lease_checkerf(expiration_policy)
966+
967+    def _setup_storage(self, storedir, readonly, reserved_space):
968+        self.storedir = storedir
969+        self.readonly = readonly
970+        self.reserved_space = int(reserved_space)
971+        if self.reserved_space:
972+            if self.get_available_space() is None:
973+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
974+                        umid="0wZ27w", level=log.UNUSUAL)
975+
976+        self.sharedir = os.path.join(self.storedir, "shares")
977+        fileutil.make_dirs(self.sharedir)
978+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
979+        self._clean_incomplete()
980+
981+    def _clean_incomplete(self):
982+        fileutil.rm_dir(self.incomingdir)
983+        fileutil.make_dirs(self.incomingdir)
984+
985+    def _setup_corruption_advisory(self):
986+        # we don't actually create the corruption-advisory dir until necessary
987+        self.corruption_advisory_dir = os.path.join(self.storedir,
988+                                                    "corruption-advisories")
989+
990+    def _setup_bucket_counter(self):
991+        statefname = os.path.join(self.storedir, "bucket_counter.state")
992+        self.bucket_counter = FSBucketCountingCrawler(statefname)
993+        self.bucket_counter.setServiceParent(self)
994+
995+    def _setup_lease_checkerf(self, expiration_policy):
996+        statefile = os.path.join(self.storedir, "lease_checker.state")
997+        historyfile = os.path.join(self.storedir, "lease_checker.history")
998+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
999+        self.lease_checker.setServiceParent(self)
1000+
1001+    def get_available_space(self):
1002+        if self.readonly:
1003+            return 0
1004+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1005+
1006+    def get_shares(self, storage_index):
1007+        """Yield the FSBShare objects that correspond to the passed storage_index."""
1008+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1009+        try:
1010+            for f in os.listdir(finalstoragedir):
1011+                if NUM_RE.match(f):
1012+                    filename = os.path.join(finalstoragedir, f)
1013+                    yield FSBShare(filename, int(f))
1014+        except OSError:
1015+            # Commonly caused by there being no buckets at all.
1016+            pass
1017+       
1018+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1019+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1020+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1021+        return bw
1022+       
1023+
1024+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1025+# and share data. The share data is accessed by RIBucketWriter.write and
1026+# RIBucketReader.read . The lease information is not accessible through these
1027+# interfaces.
1028+
1029+# The share file has the following layout:
1030+#  0x00: share file version number, four bytes, current version is 1
1031+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1032+#  0x08: number of leases, four bytes big-endian
1033+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1034+#  A+0x0c = B: first lease. Lease format is:
1035+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1036+#   B+0x04: renew secret, 32 bytes (SHA256)
1037+#   B+0x24: cancel secret, 32 bytes (SHA256)
1038+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1039+#   B+0x48: next lease, or end of record
1040+
1041+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1042+# but it is still filled in by storage servers in case the storage server
1043+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1044+# share file is moved from one storage server to another. The value stored in
1045+# this field is truncated, so if the actual share data length is >= 2**32,
1046+# then the value stored in this field will be the actual share data length
1047+# modulo 2**32.
1048+
1049+class ImmutableShare:
1050+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1051+    sharetype = "immutable"
1052+
1053+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1054+        """ If max_size is not None then I won't allow more than
1055+        max_size to be written to me. If create=True then max_size
1056+        must not be None. """
1057+        precondition((max_size is not None) or (not create), max_size, create)
1058+        self.shnum = shnum
1059+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1060+        self._max_size = max_size
1061+        if create:
1062+            # touch the file, so later callers will see that we're working on
1063+            # it. Also construct the metadata.
1064+            assert not os.path.exists(self.fname)
1065+            fileutil.make_dirs(os.path.dirname(self.fname))
1066+            f = open(self.fname, 'wb')
1067+            # The second field -- the four-byte share data length -- is no
1068+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1069+            # there in case someone downgrades a storage server from >=
1070+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1071+            # server to another, etc. We do saturation -- a share data length
1072+            # larger than 2**32-1 (what can fit into the field) is marked as
1073+            # the largest length that can fit into the field. That way, even
1074+            # if this does happen, the old < v1.3.0 server will still allow
1075+            # clients to read the first part of the share.
1076+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1077+            f.close()
1078+            self._lease_offset = max_size + 0x0c
1079+            self._num_leases = 0
1080+        else:
1081+            f = open(self.fname, 'rb')
1082+            filesize = os.path.getsize(self.fname)
1083+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1084+            f.close()
1085+            if version != 1:
1086+                msg = "sharefile %s had version %d but we wanted 1" % \
1087+                      (self.fname, version)
1088+                raise UnknownImmutableContainerVersionError(msg)
1089+            self._num_leases = num_leases
1090+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1091+        self._data_offset = 0xc
1092+
1093+    def unlink(self):
1094+        os.unlink(self.fname)
1095+
1096+    def read_share_data(self, offset, length):
1097+        precondition(offset >= 0)
1098+        # Reads beyond the end of the data are truncated. Reads that start
1099+        # beyond the end of the data return an empty string.
1100+        seekpos = self._data_offset+offset
1101+        fsize = os.path.getsize(self.fname)
1102+        actuallength = max(0, min(length, fsize-seekpos))
1103+        if actuallength == 0:
1104+            return ""
1105+        f = open(self.fname, 'rb')
1106+        f.seek(seekpos)
1107+        sharedata = f.read(actuallength)
1108+        f.close()
1109+        return sharedata
1108+
1109+    def write_share_data(self, offset, data):
1110+        length = len(data)
1111+        precondition(offset >= 0, offset)
1112+        if self._max_size is not None and offset+length > self._max_size:
1113+            raise DataTooLargeError(self._max_size, offset, length)
1114+        f = open(self.fname, 'rb+')
1115+        real_offset = self._data_offset+offset
1116+        f.seek(real_offset)
1117+        assert f.tell() == real_offset
1118+        f.write(data)
1119+        f.close()
1120+
1121+    def _write_lease_record(self, f, lease_number, lease_info):
1122+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1123+        f.seek(offset)
1124+        assert f.tell() == offset
1125+        f.write(lease_info.to_immutable_data())
1126+
1127+    def _read_num_leases(self, f):
1128+        f.seek(0x08)
1129+        (num_leases,) = struct.unpack(">L", f.read(4))
1130+        return num_leases
1131+
1132+    def _write_num_leases(self, f, num_leases):
1133+        f.seek(0x08)
1134+        f.write(struct.pack(">L", num_leases))
1135+
1136+    def _truncate_leases(self, f, num_leases):
1137+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1138+
1139+    def get_leases(self):
1140+        """Yields a LeaseInfo instance for all leases."""
1141+        f = open(self.fname, 'rb')
1142+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1143+        f.seek(self._lease_offset)
1144+        for i in range(num_leases):
1145+            data = f.read(self.LEASE_SIZE)
1146+            if data:
1147+                yield LeaseInfo().from_immutable_data(data)
1148+
1149+    def add_lease(self, lease_info):
1150+        f = open(self.fname, 'rb+')
1151+        num_leases = self._read_num_leases(f)
1152+        self._write_lease_record(f, num_leases, lease_info)
1153+        self._write_num_leases(f, num_leases+1)
1154+        f.close()
1155+
1156+    def renew_lease(self, renew_secret, new_expire_time):
1157+        for i,lease in enumerate(self.get_leases()):
1158+            if constant_time_compare(lease.renew_secret, renew_secret):
1159+                # yup. See if we need to update the owner time.
1160+                if new_expire_time > lease.expiration_time:
1161+                    # yes
1162+                    lease.expiration_time = new_expire_time
1163+                    f = open(self.fname, 'rb+')
1164+                    self._write_lease_record(f, i, lease)
1165+                    f.close()
1166+                return
1167+        raise IndexError("unable to renew non-existent lease")
1168+
1169+    def add_or_renew_lease(self, lease_info):
1170+        try:
1171+            self.renew_lease(lease_info.renew_secret,
1172+                             lease_info.expiration_time)
1173+        except IndexError:
1174+            self.add_lease(lease_info)
1175+
1176+
1177+    def cancel_lease(self, cancel_secret):
1178+        """Remove a lease with the given cancel_secret. If the last lease is
1179+        cancelled, the file will be removed. Return the number of bytes that
1180+        were freed (by truncating the list of leases, and possibly by
1181+        deleting the file). Raise IndexError if there was no lease with the
1182+        given cancel_secret.
1183+        """
1184+
1185+        leases = list(self.get_leases())
1186+        num_leases_removed = 0
1187+        for i,lease in enumerate(leases):
1188+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1189+                leases[i] = None
1190+                num_leases_removed += 1
1191+        if not num_leases_removed:
1192+            raise IndexError("unable to find matching lease to cancel")
1193+        if num_leases_removed:
1194+            # pack and write out the remaining leases. We write these out in
1195+            # the same order as they were added, so that if we crash while
1196+            # doing this, we won't lose any non-cancelled leases.
1197+            leases = [l for l in leases if l] # remove the cancelled leases
1198+            f = open(self.fname, 'rb+')
1199+            for i,lease in enumerate(leases):
1200+                self._write_lease_record(f, i, lease)
1201+            self._write_num_leases(f, len(leases))
1202+            self._truncate_leases(f, len(leases))
1203+            f.close()
1204+        space_freed = self.LEASE_SIZE * num_leases_removed
1205+        if not len(leases):
1206+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1207+            self.unlink()
1208+        return space_freed
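The lease bookkeeping above relies on fixed-size records appended back-to-back after the share data; a sketch of the offset arithmetic (helper name assumed, not part of the patch):

```python
import struct

# owner number (4) + renew secret (32) + cancel secret (32) + expiration (4)
LEASE_SIZE = struct.calcsize(">L32s32sL")

def lease_record_offset(lease_offset, lease_number):
    # Leases are stored back-to-back starting at lease_offset
    # (data offset 0x0c + max_size for a newly created share).
    return lease_offset + lease_number * LEASE_SIZE
```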
1209hunk ./src/allmydata/storage/backends/das/expirer.py 2
1210 import time, os, pickle, struct
1211-from allmydata.storage.crawler import ShareCrawler
1212-from allmydata.storage.shares import get_share_file
1213+from allmydata.storage.crawler import FSShareCrawler
1214 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1215      UnknownImmutableContainerVersionError
1216 from twisted.python import log as twlog
1217hunk ./src/allmydata/storage/backends/das/expirer.py 7
1218 
1219-class LeaseCheckingCrawler(ShareCrawler):
1220+class FSLeaseCheckingCrawler(FSShareCrawler):
1221     """I examine the leases on all shares, determining which are still valid
1222     and which have expired. I can remove the expired leases (if so
1223     configured), and the share will be deleted when the last lease is
1224hunk ./src/allmydata/storage/backends/das/expirer.py 50
1225     slow_start = 360 # wait 6 minutes after startup
1226     minimum_cycle_time = 12*60*60 # not more than twice per day
1227 
1228-    def __init__(self, statefile, historyfile,
1229-                 expiration_enabled, mode,
1230-                 override_lease_duration, # used if expiration_mode=="age"
1231-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1232-                 sharetypes):
1233+    def __init__(self, statefile, historyfile, expiration_policy):
1234         self.historyfile = historyfile
1235hunk ./src/allmydata/storage/backends/das/expirer.py 52
1236-        self.expiration_enabled = expiration_enabled
1237-        self.mode = mode
1238+        self.expiration_enabled = expiration_policy['enabled']
1239+        self.mode = expiration_policy['mode']
1240         self.override_lease_duration = None
1241         self.cutoff_date = None
1242         if self.mode == "age":
1243hunk ./src/allmydata/storage/backends/das/expirer.py 57
1244-            assert isinstance(override_lease_duration, (int, type(None)))
1245-            self.override_lease_duration = override_lease_duration # seconds
1246+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1247+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1248         elif self.mode == "cutoff-date":
1249hunk ./src/allmydata/storage/backends/das/expirer.py 60
1250-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1251-            assert cutoff_date is not None
1251+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1252+            assert expiration_policy['cutoff_date'] is not None
1253hunk ./src/allmydata/storage/backends/das/expirer.py 62
1254-            self.cutoff_date = cutoff_date
1255+            self.cutoff_date = expiration_policy['cutoff_date']
1256         else:
1257hunk ./src/allmydata/storage/backends/das/expirer.py 64
1258-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1259-        self.sharetypes_to_expire = sharetypes
1260-        ShareCrawler.__init__(self, statefile)
1261+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1262+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1263+        FSShareCrawler.__init__(self, statefile)
1264 
1265     def add_initial_state(self):
1266         # we fill ["cycle-to-date"] here (even though they will be reset in
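The constructor above replaces five separate arguments with a single expiration_policy dict; a hypothetical example of its shape, with only the key names taken from the patch:

```python
# Hypothetical expiration_policy dict as read by FSLeaseCheckingCrawler;
# the key names come from the patch, the values are illustrative.
expiration_policy = {
    'enabled': True,
    'mode': 'age',                                 # 'age' or 'cutoff-date'
    'override_lease_duration': 31 * 24 * 60 * 60,  # seconds; used if mode == 'age'
    'cutoff_date': None,                           # seconds-since-epoch; used if mode == 'cutoff-date'
    'sharetypes': ('mutable', 'immutable'),
}
```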
1267hunk ./src/allmydata/storage/backends/das/expirer.py 156
1268 
1269     def process_share(self, sharefilename):
1270         # first, find out what kind of a share it is
1271-        sf = get_share_file(sharefilename)
1272+        f = open(sharefilename, "rb")
1273+        prefix = f.read(32)
1274+        f.close()
1275+        if prefix == MutableShareFile.MAGIC:
1276+            sf = MutableShareFile(sharefilename)
1277+        else:
1278+            # otherwise assume it's immutable
1279+            sf = FSBShare(sharefilename)
1280         sharetype = sf.sharetype
1281         now = time.time()
1282         s = self.stat(sharefilename)
1283addfile ./src/allmydata/storage/backends/null/__init__.py
1284addfile ./src/allmydata/storage/backends/null/core.py
1285hunk ./src/allmydata/storage/backends/null/core.py 1
1286+from allmydata.storage.backends.base import Backend
1287+
1288+class NullCore(Backend):
1289+    def __init__(self):
1290+        Backend.__init__(self)
1291+
1292+    def get_available_space(self):
1293+        return None
1294+
1295+    def get_shares(self, storage_index):
1296+        return set()
1297+
1298+    def get_share(self, storage_index, sharenum):
1299+        return None
1300+
1301+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1302+        return NullBucketWriter()
1303hunk ./src/allmydata/storage/crawler.py 12
1304 class TimeSliceExceeded(Exception):
1305     pass
1306 
1307-class ShareCrawler(service.MultiService):
1308+class FSShareCrawler(service.MultiService):
1309     """A subclass of FSShareCrawler is attached to a StorageServer, and
1310     periodically walks all of its shares, processing each one in some
1311     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1312hunk ./src/allmydata/storage/crawler.py 68
1313     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1314     minimum_cycle_time = 300 # don't run a cycle faster than this
1315 
1316-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1317+    def __init__(self, statefname, allowed_cpu_percentage=None):
1318         service.MultiService.__init__(self)
1319         if allowed_cpu_percentage is not None:
1320             self.allowed_cpu_percentage = allowed_cpu_percentage
1321hunk ./src/allmydata/storage/crawler.py 72
1322-        self.backend = backend
1323+        self.statefname = statefname
1324         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1325                          for i in range(2**10)]
1326         self.prefixes.sort()
1327hunk ./src/allmydata/storage/crawler.py 192
1328         #                            of the last bucket to be processed, or
1329         #                            None if we are sleeping between cycles
1330         try:
1331-            f = open(self.statefile, "rb")
1332+            f = open(self.statefname, "rb")
1333             state = pickle.load(f)
1334             f.close()
1335         except EnvironmentError:
1336hunk ./src/allmydata/storage/crawler.py 230
1337         else:
1338             last_complete_prefix = self.prefixes[lcpi]
1339         self.state["last-complete-prefix"] = last_complete_prefix
1340-        tmpfile = self.statefile + ".tmp"
1341+        tmpfile = self.statefname + ".tmp"
1342         f = open(tmpfile, "wb")
1343         pickle.dump(self.state, f)
1344         f.close()
1345hunk ./src/allmydata/storage/crawler.py 433
1346         pass
1347 
1348 
1349-class BucketCountingCrawler(ShareCrawler):
1350+class FSBucketCountingCrawler(FSShareCrawler):
1351     """I keep track of how many buckets are being managed by this server.
1352     This is equivalent to the number of distributed files and directories for
1353     which I am providing storage. The actual number of files+directories in
1354hunk ./src/allmydata/storage/crawler.py 446
1355 
1356     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1357 
1358-    def __init__(self, statefile, num_sample_prefixes=1):
1359-        ShareCrawler.__init__(self, statefile)
1360+    def __init__(self, statefname, num_sample_prefixes=1):
1361+        FSShareCrawler.__init__(self, statefname)
1362         self.num_sample_prefixes = num_sample_prefixes
1363 
1364     def add_initial_state(self):
1365hunk ./src/allmydata/storage/immutable.py 14
1366 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1367      DataTooLargeError
1368 
1369-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1370-# and share data. The share data is accessed by RIBucketWriter.write and
1371-# RIBucketReader.read . The lease information is not accessible through these
1372-# interfaces.
1373-
1374-# The share file has the following layout:
1375-#  0x00: share file version number, four bytes, current version is 1
1376-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1377-#  0x08: number of leases, four bytes big-endian
1378-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1379-#  A+0x0c = B: first lease. Lease format is:
1380-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1381-#   B+0x04: renew secret, 32 bytes (SHA256)
1382-#   B+0x24: cancel secret, 32 bytes (SHA256)
1383-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1384-#   B+0x48: next lease, or end of record
1385-
1386-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1387-# but it is still filled in by storage servers in case the storage server
1388-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1389-# share file is moved from one storage server to another. The value stored in
1390-# this field is truncated, so if the actual share data length is >= 2**32,
1391-# then the value stored in this field will be the actual share data length
1392-# modulo 2**32.
1393-
1394-class ShareFile:
1395-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1396-    sharetype = "immutable"
1397-
1398-    def __init__(self, filename, max_size=None, create=False):
1399-        """ If max_size is not None then I won't allow more than
1400-        max_size to be written to me. If create=True then max_size
1401-        must not be None. """
1402-        precondition((max_size is not None) or (not create), max_size, create)
1403-        self.home = filename
1404-        self._max_size = max_size
1405-        if create:
1406-            # touch the file, so later callers will see that we're working on
1407-            # it. Also construct the metadata.
1408-            assert not os.path.exists(self.home)
1409-            fileutil.make_dirs(os.path.dirname(self.home))
1410-            f = open(self.home, 'wb')
1411-            # The second field -- the four-byte share data length -- is no
1412-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1413-            # there in case someone downgrades a storage server from >=
1414-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1415-            # server to another, etc. We do saturation -- a share data length
1416-            # larger than 2**32-1 (what can fit into the field) is marked as
1417-            # the largest length that can fit into the field. That way, even
1418-            # if this does happen, the old < v1.3.0 server will still allow
1419-            # clients to read the first part of the share.
1420-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1421-            f.close()
1422-            self._lease_offset = max_size + 0x0c
1423-            self._num_leases = 0
1424-        else:
1425-            f = open(self.home, 'rb')
1426-            filesize = os.path.getsize(self.home)
1427-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1428-            f.close()
1429-            if version != 1:
1430-                msg = "sharefile %s had version %d but we wanted 1" % \
1431-                      (filename, version)
1432-                raise UnknownImmutableContainerVersionError(msg)
1433-            self._num_leases = num_leases
1434-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1435-        self._data_offset = 0xc
1436-
1437-    def unlink(self):
1438-        os.unlink(self.home)
1439-
1440-    def read_share_data(self, offset, length):
1441-        precondition(offset >= 0)
1442-        # Reads beyond the end of the data are truncated. Reads that start
1443-        # beyond the end of the data return an empty string.
1444-        seekpos = self._data_offset+offset
1445-        fsize = os.path.getsize(self.home)
1446-        actuallength = max(0, min(length, fsize-seekpos))
1447-        if actuallength == 0:
1448-            return ""
1449-        f = open(self.home, 'rb')
1450-        f.seek(seekpos)
1451-        return f.read(actuallength)
1452-
1453-    def write_share_data(self, offset, data):
1454-        length = len(data)
1455-        precondition(offset >= 0, offset)
1456-        if self._max_size is not None and offset+length > self._max_size:
1457-            raise DataTooLargeError(self._max_size, offset, length)
1458-        f = open(self.home, 'rb+')
1459-        real_offset = self._data_offset+offset
1460-        f.seek(real_offset)
1461-        assert f.tell() == real_offset
1462-        f.write(data)
1463-        f.close()
1464-
1465-    def _write_lease_record(self, f, lease_number, lease_info):
1466-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1467-        f.seek(offset)
1468-        assert f.tell() == offset
1469-        f.write(lease_info.to_immutable_data())
1470-
1471-    def _read_num_leases(self, f):
1472-        f.seek(0x08)
1473-        (num_leases,) = struct.unpack(">L", f.read(4))
1474-        return num_leases
1475-
1476-    def _write_num_leases(self, f, num_leases):
1477-        f.seek(0x08)
1478-        f.write(struct.pack(">L", num_leases))
1479-
1480-    def _truncate_leases(self, f, num_leases):
1481-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1482-
1483-    def get_leases(self):
1484-        """Yields a LeaseInfo instance for all leases."""
1485-        f = open(self.home, 'rb')
1486-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1487-        f.seek(self._lease_offset)
1488-        for i in range(num_leases):
1489-            data = f.read(self.LEASE_SIZE)
1490-            if data:
1491-                yield LeaseInfo().from_immutable_data(data)
1492-
1493-    def add_lease(self, lease_info):
1494-        f = open(self.home, 'rb+')
1495-        num_leases = self._read_num_leases(f)
1496-        self._write_lease_record(f, num_leases, lease_info)
1497-        self._write_num_leases(f, num_leases+1)
1498-        f.close()
1499-
1500-    def renew_lease(self, renew_secret, new_expire_time):
1501-        for i,lease in enumerate(self.get_leases()):
1502-            if constant_time_compare(lease.renew_secret, renew_secret):
1503-                # yup. See if we need to update the owner time.
1504-                if new_expire_time > lease.expiration_time:
1505-                    # yes
1506-                    lease.expiration_time = new_expire_time
1507-                    f = open(self.home, 'rb+')
1508-                    self._write_lease_record(f, i, lease)
1509-                    f.close()
1510-                return
1511-        raise IndexError("unable to renew non-existent lease")
1512-
1513-    def add_or_renew_lease(self, lease_info):
1514-        try:
1515-            self.renew_lease(lease_info.renew_secret,
1516-                             lease_info.expiration_time)
1517-        except IndexError:
1518-            self.add_lease(lease_info)
1519-
1520-
1521-    def cancel_lease(self, cancel_secret):
1522-        """Remove a lease with the given cancel_secret. If the last lease is
1523-        cancelled, the file will be removed. Return the number of bytes that
1524-        were freed (by truncating the list of leases, and possibly by
1525-        deleting the file. Raise IndexError if there was no lease with the
1526-        given cancel_secret.
1527-        """
1528-
1529-        leases = list(self.get_leases())
1530-        num_leases_removed = 0
1531-        for i,lease in enumerate(leases):
1532-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1533-                leases[i] = None
1534-                num_leases_removed += 1
1535-        if not num_leases_removed:
1536-            raise IndexError("unable to find matching lease to cancel")
1537-        if num_leases_removed:
1538-            # pack and write out the remaining leases. We write these out in
1539-            # the same order as they were added, so that if we crash while
1540-            # doing this, we won't lose any non-cancelled leases.
1541-            leases = [l for l in leases if l] # remove the cancelled leases
1542-            f = open(self.home, 'rb+')
1543-            for i,lease in enumerate(leases):
1544-                self._write_lease_record(f, i, lease)
1545-            self._write_num_leases(f, len(leases))
1546-            self._truncate_leases(f, len(leases))
1547-            f.close()
1548-        space_freed = self.LEASE_SIZE * num_leases_removed
1549-        if not len(leases):
1550-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1551-            self.unlink()
1552-        return space_freed
1553-class NullBucketWriter(Referenceable):
1554-    implements(RIBucketWriter)
1555-
1556-    def remote_write(self, offset, data):
1557-        return
1558-
1559 class BucketWriter(Referenceable):
1560     implements(RIBucketWriter)
1561 
1562hunk ./src/allmydata/storage/immutable.py 17
1563-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1564+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1565         self.ss = ss
1566hunk ./src/allmydata/storage/immutable.py 19
1567-        self.incominghome = incominghome
1568-        self.finalhome = finalhome
1569         self._max_size = max_size # don't allow the client to write more than this
1570         self._canary = canary
1571         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1572hunk ./src/allmydata/storage/immutable.py 24
1573         self.closed = False
1574         self.throw_out_all_data = False
1575-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1576+        self._sharefile = immutableshare
1577         # also, add our lease to the file now, so that other ones can be
1578         # added by simultaneous uploaders
1579         self._sharefile.add_lease(lease_info)
1580hunk ./src/allmydata/storage/server.py 16
1581 from allmydata.storage.lease import LeaseInfo
1582 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1583      create_mutable_sharefile
1584-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1585-from allmydata.storage.crawler import BucketCountingCrawler
1586-from allmydata.storage.expirer import LeaseCheckingCrawler
1587 
1588 from zope.interface import implements
1589 
1590hunk ./src/allmydata/storage/server.py 19
1591-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1592-# be started and stopped.
1593-class Backend(service.MultiService):
1594-    implements(IStatsProducer)
1595-    def __init__(self):
1596-        service.MultiService.__init__(self)
1597-
1598-    def get_bucket_shares(self):
1599-        """XXX"""
1600-        raise NotImplementedError
1601-
1602-    def get_share(self):
1603-        """XXX"""
1604-        raise NotImplementedError
1605-
1606-    def make_bucket_writer(self):
1607-        """XXX"""
1608-        raise NotImplementedError
1609-
1610-class NullBackend(Backend):
1611-    def __init__(self):
1612-        Backend.__init__(self)
1613-
1614-    def get_available_space(self):
1615-        return None
1616-
1617-    def get_bucket_shares(self, storage_index):
1618-        return set()
1619-
1620-    def get_share(self, storage_index, sharenum):
1621-        return None
1622-
1623-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1624-        return NullBucketWriter()
1625-
1626-class FSBackend(Backend):
1627-    def __init__(self, storedir, readonly=False, reserved_space=0):
1628-        Backend.__init__(self)
1629-
1630-        self._setup_storage(storedir, readonly, reserved_space)
1631-        self._setup_corruption_advisory()
1632-        self._setup_bucket_counter()
1633-        self._setup_lease_checkerf()
1634-
1635-    def _setup_storage(self, storedir, readonly, reserved_space):
1636-        self.storedir = storedir
1637-        self.readonly = readonly
1638-        self.reserved_space = int(reserved_space)
1639-        if self.reserved_space:
1640-            if self.get_available_space() is None:
1641-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1642-                        umid="0wZ27w", level=log.UNUSUAL)
1643-
1644-        self.sharedir = os.path.join(self.storedir, "shares")
1645-        fileutil.make_dirs(self.sharedir)
1646-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1647-        self._clean_incomplete()
1648-
1649-    def _clean_incomplete(self):
1650-        fileutil.rm_dir(self.incomingdir)
1651-        fileutil.make_dirs(self.incomingdir)
1652-
1653-    def _setup_corruption_advisory(self):
1654-        # we don't actually create the corruption-advisory dir until necessary
1655-        self.corruption_advisory_dir = os.path.join(self.storedir,
1656-                                                    "corruption-advisories")
1657-
1658-    def _setup_bucket_counter(self):
1659-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1660-        self.bucket_counter = BucketCountingCrawler(statefile)
1661-        self.bucket_counter.setServiceParent(self)
1662-
1663-    def _setup_lease_checkerf(self):
1664-        statefile = os.path.join(self.storedir, "lease_checker.state")
1665-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1666-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1667-                                   expiration_enabled, expiration_mode,
1668-                                   expiration_override_lease_duration,
1669-                                   expiration_cutoff_date,
1670-                                   expiration_sharetypes)
1671-        self.lease_checker.setServiceParent(self)
1672-
1673-    def get_available_space(self):
1674-        if self.readonly:
1675-            return 0
1676-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1677-
1678-    def get_bucket_shares(self, storage_index):
1679-        """Return a list of (shnum, pathname) tuples for files that hold
1680-        shares for this storage_index. In each tuple, 'shnum' will always be
1681-        the integer form of the last component of 'pathname'."""
1682-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1683-        try:
1684-            for f in os.listdir(storagedir):
1685-                if NUM_RE.match(f):
1686-                    filename = os.path.join(storagedir, f)
1687-                    yield (int(f), filename)
1688-        except OSError:
1689-            # Commonly caused by there being no buckets at all.
1690-            pass
1691-
1692 # storage/
1693 # storage/shares/incoming
1694 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1695hunk ./src/allmydata/storage/server.py 32
1696 # $SHARENUM matches this regex:
1697 NUM_RE=re.compile("^[0-9]+$")
1698 
1699-
1700-
1701 class StorageServer(service.MultiService, Referenceable):
1702     implements(RIStorageServer, IStatsProducer)
1703     name = 'storage'
1704hunk ./src/allmydata/storage/server.py 35
1705-    LeaseCheckerClass = LeaseCheckingCrawler
1706 
1707     def __init__(self, nodeid, backend, reserved_space=0,
1708                  readonly_storage=False,
1709hunk ./src/allmydata/storage/server.py 38
1710-                 stats_provider=None,
1711-                 expiration_enabled=False,
1712-                 expiration_mode="age",
1713-                 expiration_override_lease_duration=None,
1714-                 expiration_cutoff_date=None,
1715-                 expiration_sharetypes=("mutable", "immutable")):
1716+                 stats_provider=None ):
1717         service.MultiService.__init__(self)
1718         assert isinstance(nodeid, str)
1719         assert len(nodeid) == 20
1720hunk ./src/allmydata/storage/server.py 217
1721         # they asked about: this will save them a lot of work. Add or update
1722         # leases for all of them: if they want us to hold shares for this
1723         # file, they'll want us to hold leases for this file.
1724-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1725-            alreadygot.add(shnum)
1726-            sf = ShareFile(fn)
1727-            sf.add_or_renew_lease(lease_info)
1728-
1729-        for shnum in sharenums:
1730-            share = self.backend.get_share(storage_index, shnum)
1731+        for share in self.backend.get_shares(storage_index):
1732+            alreadygot.add(share.shnum)
1733+            share.add_or_renew_lease(lease_info)
1734 
1735hunk ./src/allmydata/storage/server.py 221
1736-            if not share:
1737-                if (not limited) or (remaining_space >= max_space_per_bucket):
1738-                    # ok! we need to create the new share file.
1739-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1740-                                      max_space_per_bucket, lease_info, canary)
1741-                    bucketwriters[shnum] = bw
1742-                    self._active_writers[bw] = 1
1743-                    if limited:
1744-                        remaining_space -= max_space_per_bucket
1745-                else:
1746-                    # bummer! not enough space to accept this bucket
1747-                    pass
1748+        for shnum in (sharenums - alreadygot):
1749+            if (not limited) or (remaining_space >= max_space_per_bucket):
1750+                #XXX Or should the following line occur in the storage server constructor? OK, we need to create the new share file.
1751+                self.backend.set_storage_server(self)
1752+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1753+                                                     max_space_per_bucket, lease_info, canary)
1754+                bucketwriters[shnum] = bw
1755+                self._active_writers[bw] = 1
1756+                if limited:
1757+                    remaining_space -= max_space_per_bucket
1758 
1759hunk ./src/allmydata/storage/server.py 232
1760-            elif share.is_complete():
1761-                # great! we already have it. easy.
1762-                pass
1763-            elif not share.is_complete():
1764-                # Note that we don't create BucketWriters for shnums that
1765-                # have a partial share (in incoming/), so if a second upload
1766-                # occurs while the first is still in progress, the second
1767-                # uploader will use different storage servers.
1768-                pass
1769+        #XXX We should document this later.
1770 
1771         self.add_latency("allocate", time.time() - start)
1772         return alreadygot, bucketwriters
1773hunk ./src/allmydata/storage/server.py 238
1774 
1775     def _iter_share_files(self, storage_index):
1776-        for shnum, filename in self._get_bucket_shares(storage_index):
1777+        for shnum, filename in self._get_shares(storage_index):
1778             f = open(filename, 'rb')
1779             header = f.read(32)
1780             f.close()
1781hunk ./src/allmydata/storage/server.py 318
1782         si_s = si_b2a(storage_index)
1783         log.msg("storage: get_buckets %s" % si_s)
1784         bucketreaders = {} # k: sharenum, v: BucketReader
1785-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1786+        for shnum, filename in self.backend.get_shares(storage_index):
1787             bucketreaders[shnum] = BucketReader(self, filename,
1788                                                 storage_index, shnum)
1789         self.add_latency("get", time.time() - start)
1790hunk ./src/allmydata/storage/server.py 334
1791         # since all shares get the same lease data, we just grab the leases
1792         # from the first share
1793         try:
1794-            shnum, filename = self._get_bucket_shares(storage_index).next()
1795+            shnum, filename = self._get_shares(storage_index).next()
1796             sf = ShareFile(filename)
1797             return sf.get_leases()
1798         except StopIteration:
1799hunk ./src/allmydata/storage/shares.py 1
1800-#! /usr/bin/python
1801-
1802-from allmydata.storage.mutable import MutableShareFile
1803-from allmydata.storage.immutable import ShareFile
1804-
1805-def get_share_file(filename):
1806-    f = open(filename, "rb")
1807-    prefix = f.read(32)
1808-    f.close()
1809-    if prefix == MutableShareFile.MAGIC:
1810-        return MutableShareFile(filename)
1811-    # otherwise assume it's immutable
1812-    return ShareFile(filename)
1813-
1814rmfile ./src/allmydata/storage/shares.py
1815hunk ./src/allmydata/test/common_util.py 20
1816 
1817 def flip_one_bit(s, offset=0, size=None):
1818     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1819-    than offset+size. """
1820+    than offset+size. Return the new string. """
1821     if size is None:
1822         size=len(s)-offset
1823     i = randrange(offset, offset+size)
1824hunk ./src/allmydata/test/test_backends.py 7
1825 
1826 from allmydata.test.common_util import ReallyEqualMixin
1827 
1828-import mock
1829+import mock, os
1830 
1831 # This is the code that we're going to be testing.
1832hunk ./src/allmydata/test/test_backends.py 10
1833-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1834+from allmydata.storage.server import StorageServer
1835+
1836+from allmydata.storage.backends.das.core import DASCore
1837+from allmydata.storage.backends.null.core import NullCore
1838+
1839 
1840 # The following share file contents was generated with
1841 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1842hunk ./src/allmydata/test/test_backends.py 22
1843 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1844 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1845 
1846-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1847+tempdir = 'teststoredir'
1848+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1849+sharefname = os.path.join(sharedirname, '0')
1850 
1851 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1852     @mock.patch('time.time')
1853hunk ./src/allmydata/test/test_backends.py 58
1854         filesystem in only the prescribed ways. """
1855 
1856         def call_open(fname, mode):
1857-            if fname == 'testdir/bucket_counter.state':
1858-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1859-            elif fname == 'testdir/lease_checker.state':
1860-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1861-            elif fname == 'testdir/lease_checker.history':
1862+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1863+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1864+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1865+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1866+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1867                 return StringIO()
1868             else:
1869                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1870hunk ./src/allmydata/test/test_backends.py 124
1871     @mock.patch('__builtin__.open')
1872     def setUp(self, mockopen):
1873         def call_open(fname, mode):
1874-            if fname == 'testdir/bucket_counter.state':
1875-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1876-            elif fname == 'testdir/lease_checker.state':
1877-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1878-            elif fname == 'testdir/lease_checker.history':
1879+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1880+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1881+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1882+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1883+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1884                 return StringIO()
1885         mockopen.side_effect = call_open
1886hunk ./src/allmydata/test/test_backends.py 131
1887-
1888-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1889+        expiration_policy = {'enabled' : False,
1890+                             'mode' : 'age',
1891+                             'override_lease_duration' : None,
1892+                             'cutoff_date' : None,
1893+                             'sharetypes' : None}
1894+        testbackend = DASCore(tempdir, expiration_policy)
1895+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1896 
1897     @mock.patch('time.time')
1898     @mock.patch('os.mkdir')
1899hunk ./src/allmydata/test/test_backends.py 148
1900         """ Write a new share. """
1901 
1902         def call_listdir(dirname):
1903-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1904-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1905+            self.failUnlessReallyEqual(dirname, sharedirname)
1906+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1907 
1908         mocklistdir.side_effect = call_listdir
1909 
1910hunk ./src/allmydata/test/test_backends.py 178
1911 
1912         sharefile = MockFile()
1913         def call_open(fname, mode):
1914-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1915+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1916             return sharefile
1917 
1918         mockopen.side_effect = call_open
1919hunk ./src/allmydata/test/test_backends.py 200
1920         StorageServer object. """
1921 
1922         def call_listdir(dirname):
1923-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1924+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1925             return ['0']
1926 
1927         mocklistdir.side_effect = call_listdir
1928}
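(Editorial aside, not part of the patch: the immutable share-file container handled by `ImmutableShare` above uses a fixed 12-byte header, written as `struct.pack(">LLL", 1, min(2**32-1, max_size), 0)` in the hunk. A minimal sketch of packing and parsing that header, with hypothetical helper names:)

```python
import struct

# Header layout of an immutable share file (all fields big-endian, 4 bytes each):
#   version (currently 1), share data length (saturated at 2**32-1), lease count.
HEADER = ">LLL"
HEADER_SIZE = 0xc  # 12 bytes; share data begins at this offset

def pack_header(data_length, num_leases=0):
    # Saturate the length field, as the patch does, so a pre-v1.3.0 server
    # that still reads this field can serve at least the first part of the share.
    return struct.pack(HEADER, 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(header_bytes):
    # Returns (version, stored_data_length, num_leases).
    return struct.unpack(HEADER, header_bytes[:HEADER_SIZE])
```

Note the stored length is informational only as of Tahoe v1.3.0; the real lease offset is computed from the file size and lease count, as `ImmutableShare.__init__` does.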
1929[checkpoint patch
1930wilcoxjg@gmail.com**20110626165715
1931 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1932] {
1933hunk ./src/allmydata/storage/backends/das/core.py 21
1934 from allmydata.storage.lease import LeaseInfo
1935 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1936      create_mutable_sharefile
1937-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1938+from allmydata.storage.immutable import BucketWriter, BucketReader
1939 from allmydata.storage.crawler import FSBucketCountingCrawler
1940 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1941 
1942hunk ./src/allmydata/storage/backends/das/core.py 27
1943 from zope.interface import implements
1944 
1945+# $SHARENUM matches this regex:
1946+NUM_RE=re.compile("^[0-9]+$")
1947+
1948 class DASCore(Backend):
1949     implements(IStorageBackend)
1950     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1951hunk ./src/allmydata/storage/backends/das/core.py 80
1952         return fileutil.get_available_space(self.storedir, self.reserved_space)
1953 
1954     def get_shares(self, storage_index):
1955-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1956+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1957         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1958         try:
1959             for f in os.listdir(finalstoragedir):
1960hunk ./src/allmydata/storage/backends/das/core.py 86
1961                 if NUM_RE.match(f):
1962                     filename = os.path.join(finalstoragedir, f)
1963-                    yield FSBShare(filename, int(f))
1964+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1965         except OSError:
1966             # Commonly caused by there being no buckets at all.
1967             pass
1968hunk ./src/allmydata/storage/backends/das/core.py 95
1969         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1970         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1971         return bw
1972+
1973+    def set_storage_server(self, ss):
1974+        self.ss = ss
1975         
1976 
1977 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1978hunk ./src/allmydata/storage/server.py 29
1979 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1980 # base-32 chars).
1981 
1982-# $SHARENUM matches this regex:
1983-NUM_RE=re.compile("^[0-9]+$")
1984 
1985 class StorageServer(service.MultiService, Referenceable):
1986     implements(RIStorageServer, IStatsProducer)
1987}
1988[checkpoint4
1989wilcoxjg@gmail.com**20110628202202
1990 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1991] {
1992hunk ./src/allmydata/storage/backends/das/core.py 96
1993         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1994         return bw
1995 
1996+    def make_bucket_reader(self, share):
1997+        return BucketReader(self.ss, share)
1998+
1999     def set_storage_server(self, ss):
2000         self.ss = ss
2001         
2002hunk ./src/allmydata/storage/backends/das/core.py 138
2003         must not be None. """
2004         precondition((max_size is not None) or (not create), max_size, create)
2005         self.shnum = shnum
2006+        self.storage_index = storageindex
2007         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2008         self._max_size = max_size
2009         if create:
2010hunk ./src/allmydata/storage/backends/das/core.py 173
2011             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2012         self._data_offset = 0xc
2013 
2014+    def get_shnum(self):
2015+        return self.shnum
2016+
2017     def unlink(self):
2018         os.unlink(self.fname)
2019 
2020hunk ./src/allmydata/storage/backends/null/core.py 2
2021 from allmydata.storage.backends.base import Backend
2022+from allmydata.storage.immutable import BucketWriter, BucketReader
2023 
2024 class NullCore(Backend):
2025     def __init__(self):
2026hunk ./src/allmydata/storage/backends/null/core.py 17
2027     def get_share(self, storage_index, sharenum):
2028         return None
2029 
2030-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2031-        return NullBucketWriter()
2032+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2033+       
2034+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2035+
2036+    def set_storage_server(self, ss):
2037+        self.ss = ss
2038+
2039+class ImmutableShare:
2040+    sharetype = "immutable"
2041+
2042+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2043+        """ If max_size is not None then I won't allow more than
2044+        max_size to be written to me. If create=True then max_size
2045+        must not be None. """
2046+        precondition((max_size is not None) or (not create), max_size, create)
2047+        self.shnum = shnum
2048+        self.storage_index = storageindex
2049+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2050+        self._max_size = max_size
2051+        if create:
2052+            # touch the file, so later callers will see that we're working on
2053+            # it. Also construct the metadata.
2054+            assert not os.path.exists(self.fname)
2055+            fileutil.make_dirs(os.path.dirname(self.fname))
2056+            f = open(self.fname, 'wb')
2057+            # The second field -- the four-byte share data length -- is no
2058+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2059+            # there in case someone downgrades a storage server from >=
2060+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2061+            # server to another, etc. We do saturation -- a share data length
2062+            # larger than 2**32-1 (what can fit into the field) is marked as
2063+            # the largest length that can fit into the field. That way, even
2064+            # if this does happen, the old < v1.3.0 server will still allow
2065+            # clients to read the first part of the share.
2066+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2067+            f.close()
2068+            self._lease_offset = max_size + 0x0c
2069+            self._num_leases = 0
2070+        else:
2071+            f = open(self.fname, 'rb')
2072+            filesize = os.path.getsize(self.fname)
2073+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2074+            f.close()
2075+            if version != 1:
2076+                msg = "sharefile %s had version %d but we wanted 1" % \
2077+                      (self.fname, version)
2078+                raise UnknownImmutableContainerVersionError(msg)
2079+            self._num_leases = num_leases
2080+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2081+        self._data_offset = 0xc
2082+
2083+    def get_shnum(self):
2084+        return self.shnum
2085+
2086+    def unlink(self):
2087+        os.unlink(self.fname)
2088+
2089+    def read_share_data(self, offset, length):
2090+        precondition(offset >= 0)
2091+        # Reads beyond the end of the data are truncated. Reads that start
2092+        # beyond the end of the data return an empty string.
2093+        seekpos = self._data_offset+offset
2094+        fsize = os.path.getsize(self.fname)
2095+        actuallength = max(0, min(length, fsize-seekpos))
2096+        if actuallength == 0:
2097+            return ""
2098+        f = open(self.fname, 'rb')
2099+        f.seek(seekpos)
2100+        return f.read(actuallength)
2101+
2102+    def write_share_data(self, offset, data):
2103+        length = len(data)
2104+        precondition(offset >= 0, offset)
2105+        if self._max_size is not None and offset+length > self._max_size:
2106+            raise DataTooLargeError(self._max_size, offset, length)
2107+        f = open(self.fname, 'rb+')
2108+        real_offset = self._data_offset+offset
2109+        f.seek(real_offset)
2110+        assert f.tell() == real_offset
2111+        f.write(data)
2112+        f.close()
2113+
2114+    def _write_lease_record(self, f, lease_number, lease_info):
2115+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2116+        f.seek(offset)
2117+        assert f.tell() == offset
2118+        f.write(lease_info.to_immutable_data())
2119+
2120+    def _read_num_leases(self, f):
2121+        f.seek(0x08)
2122+        (num_leases,) = struct.unpack(">L", f.read(4))
2123+        return num_leases
2124+
2125+    def _write_num_leases(self, f, num_leases):
2126+        f.seek(0x08)
2127+        f.write(struct.pack(">L", num_leases))
2128+
2129+    def _truncate_leases(self, f, num_leases):
2130+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2131+
2132+    def get_leases(self):
2133+        """Yields a LeaseInfo instance for all leases."""
2134+        f = open(self.fname, 'rb')
2135+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2136+        f.seek(self._lease_offset)
2137+        for i in range(num_leases):
2138+            data = f.read(self.LEASE_SIZE)
2139+            if data:
2140+                yield LeaseInfo().from_immutable_data(data)
2141+
2142+    def add_lease(self, lease_info):
2143+        f = open(self.fname, 'rb+')
2144+        num_leases = self._read_num_leases(f)
2145+        self._write_lease_record(f, num_leases, lease_info)
2146+        self._write_num_leases(f, num_leases+1)
2147+        f.close()
2148+
2149+    def renew_lease(self, renew_secret, new_expire_time):
2150+        for i,lease in enumerate(self.get_leases()):
2151+            if constant_time_compare(lease.renew_secret, renew_secret):
2152+                # yup. See if we need to update the owner time.
2153+                if new_expire_time > lease.expiration_time:
2154+                    # yes
2155+                    lease.expiration_time = new_expire_time
2156+                    f = open(self.fname, 'rb+')
2157+                    self._write_lease_record(f, i, lease)
2158+                    f.close()
2159+                return
2160+        raise IndexError("unable to renew non-existent lease")
2161+
2162+    def add_or_renew_lease(self, lease_info):
2163+        try:
2164+            self.renew_lease(lease_info.renew_secret,
2165+                             lease_info.expiration_time)
2166+        except IndexError:
2167+            self.add_lease(lease_info)
2168+
2169+
2170+    def cancel_lease(self, cancel_secret):
2171+        """Remove a lease with the given cancel_secret. If the last lease is
2172+        cancelled, the file will be removed. Return the number of bytes that
2173+        were freed (by truncating the list of leases, and possibly by
2174+        deleting the file). Raise IndexError if there was no lease with the
2175+        given cancel_secret.
2176+        """
2177+
2178+        leases = list(self.get_leases())
2179+        num_leases_removed = 0
2180+        for i,lease in enumerate(leases):
2181+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2182+                leases[i] = None
2183+                num_leases_removed += 1
2184+        if not num_leases_removed:
2185+            raise IndexError("unable to find matching lease to cancel")
2186+        if num_leases_removed:
2187+            # pack and write out the remaining leases. We write these out in
2188+            # the same order as they were added, so that if we crash while
2189+            # doing this, we won't lose any non-cancelled leases.
2190+            leases = [l for l in leases if l] # remove the cancelled leases
2191+            f = open(self.fname, 'rb+')
2192+            for i,lease in enumerate(leases):
2193+                self._write_lease_record(f, i, lease)
2194+            self._write_num_leases(f, len(leases))
2195+            self._truncate_leases(f, len(leases))
2196+            f.close()
2197+        space_freed = self.LEASE_SIZE * num_leases_removed
2198+        if not len(leases):
2199+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2200+            self.unlink()
2201+        return space_freed
2202hunk ./src/allmydata/storage/immutable.py 114
2203 class BucketReader(Referenceable):
2204     implements(RIBucketReader)
2205 
2206-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2207+    def __init__(self, ss, share):
2208         self.ss = ss
2209hunk ./src/allmydata/storage/immutable.py 116
2210-        self._share_file = ShareFile(sharefname)
2211-        self.storage_index = storage_index
2212-        self.shnum = shnum
2213+        self._share_file = share
2214+        self.storage_index = share.storage_index
2215+        self.shnum = share.shnum
2216 
2217     def __repr__(self):
2218         return "<%s %s %s>" % (self.__class__.__name__,
2219hunk ./src/allmydata/storage/server.py 316
2220         si_s = si_b2a(storage_index)
2221         log.msg("storage: get_buckets %s" % si_s)
2222         bucketreaders = {} # k: sharenum, v: BucketReader
2223-        for shnum, filename in self.backend.get_shares(storage_index):
2224-            bucketreaders[shnum] = BucketReader(self, filename,
2225-                                                storage_index, shnum)
2226+        self.backend.set_storage_server(self)
2227+        for share in self.backend.get_shares(storage_index):
2228+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2229         self.add_latency("get", time.time() - start)
2230         return bucketreaders
2231 
2232hunk ./src/allmydata/test/test_backends.py 25
2233 tempdir = 'teststoredir'
2234 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2235 sharefname = os.path.join(sharedirname, '0')
2236+expiration_policy = {'enabled' : False,
2237+                     'mode' : 'age',
2238+                     'override_lease_duration' : None,
2239+                     'cutoff_date' : None,
2240+                     'sharetypes' : None}
2241 
2242 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2243     @mock.patch('time.time')
2244hunk ./src/allmydata/test/test_backends.py 43
2245         tries to read or write to the file system. """
2246 
2247         # Now begin the test.
2248-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2249+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2250 
2251         self.failIf(mockisdir.called)
2252         self.failIf(mocklistdir.called)
2253hunk ./src/allmydata/test/test_backends.py 74
2254         mockopen.side_effect = call_open
2255 
2256         # Now begin the test.
2257-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2258+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2259 
2260         self.failIf(mockisdir.called)
2261         self.failIf(mocklistdir.called)
2262hunk ./src/allmydata/test/test_backends.py 86
2263 
2264 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2265     def setUp(self):
2266-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2267+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2268 
2269     @mock.patch('os.mkdir')
2270     @mock.patch('__builtin__.open')
2271hunk ./src/allmydata/test/test_backends.py 136
2272             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2273                 return StringIO()
2274         mockopen.side_effect = call_open
2275-        expiration_policy = {'enabled' : False,
2276-                             'mode' : 'age',
2277-                             'override_lease_duration' : None,
2278-                             'cutoff_date' : None,
2279-                             'sharetypes' : None}
2280         testbackend = DASCore(tempdir, expiration_policy)
2281         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2282 
2283}
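The ImmutableShare code in the hunks above writes a 12-byte container header (version, a legacy share-data-length field saturated at 2**32-1, and the lease count) followed by share data and lease records. A minimal sketch of just that header layout, assuming only the `struct.pack(">LLL", ...)` format shown in the patch:

```python
import struct

# Immutable share container header, as written by ImmutableShare.__init__:
#   4 bytes: container version (always 1)
#   4 bytes: share data length, saturated at 2**32-1 (unused as of Tahoe v1.3.0)
#   4 bytes: number of leases
HEADER = ">LLL"
HEADER_SIZE = struct.calcsize(HEADER)  # 0x0c, the _data_offset in the patch

def pack_header(max_size, num_leases=0):
    # Saturate the legacy length field so pre-1.3.0 servers can still
    # read the beginning of an oversized share.
    return struct.pack(HEADER, 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(data):
    version, legacy_length, num_leases = struct.unpack(HEADER, data[:HEADER_SIZE])
    return version, legacy_length, num_leases
```

This is why the non-create branch raises UnknownImmutableContainerVersionError when the first field is anything other than 1.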
2284[checkpoint5
2285wilcoxjg@gmail.com**20110705034626
2286 Ignore-this: 255780bd58299b0aa33c027e9d008262
2287] {
2288addfile ./src/allmydata/storage/backends/base.py
2289hunk ./src/allmydata/storage/backends/base.py 1
2290+from twisted.application import service
2291+
2292+class Backend(service.MultiService):
2293+    def __init__(self):
2294+        service.MultiService.__init__(self)
2295hunk ./src/allmydata/storage/backends/null/core.py 19
2296 
2297     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2298         
2299+        immutableshare = ImmutableShare()
2300         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2301 
2302     def set_storage_server(self, ss):
2303hunk ./src/allmydata/storage/backends/null/core.py 28
2304 class ImmutableShare:
2305     sharetype = "immutable"
2306 
2307-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2308+    def __init__(self):
2309         """ If max_size is not None then I won't allow more than
2310         max_size to be written to me. If create=True then max_size
2311         must not be None. """
2312hunk ./src/allmydata/storage/backends/null/core.py 32
2313-        precondition((max_size is not None) or (not create), max_size, create)
2314-        self.shnum = shnum
2315-        self.storage_index = storageindex
2316-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2317-        self._max_size = max_size
2318-        if create:
2319-            # touch the file, so later callers will see that we're working on
2320-            # it. Also construct the metadata.
2321-            assert not os.path.exists(self.fname)
2322-            fileutil.make_dirs(os.path.dirname(self.fname))
2323-            f = open(self.fname, 'wb')
2324-            # The second field -- the four-byte share data length -- is no
2325-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2326-            # there in case someone downgrades a storage server from >=
2327-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2328-            # server to another, etc. We do saturation -- a share data length
2329-            # larger than 2**32-1 (what can fit into the field) is marked as
2330-            # the largest length that can fit into the field. That way, even
2331-            # if this does happen, the old < v1.3.0 server will still allow
2332-            # clients to read the first part of the share.
2333-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2334-            f.close()
2335-            self._lease_offset = max_size + 0x0c
2336-            self._num_leases = 0
2337-        else:
2338-            f = open(self.fname, 'rb')
2339-            filesize = os.path.getsize(self.fname)
2340-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2341-            f.close()
2342-            if version != 1:
2343-                msg = "sharefile %s had version %d but we wanted 1" % \
2344-                      (self.fname, version)
2345-                raise UnknownImmutableContainerVersionError(msg)
2346-            self._num_leases = num_leases
2347-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2348-        self._data_offset = 0xc
2349+        pass
2350 
2351     def get_shnum(self):
2352         return self.shnum
2353hunk ./src/allmydata/storage/backends/null/core.py 54
2354         return f.read(actuallength)
2355 
2356     def write_share_data(self, offset, data):
2357-        length = len(data)
2358-        precondition(offset >= 0, offset)
2359-        if self._max_size is not None and offset+length > self._max_size:
2360-            raise DataTooLargeError(self._max_size, offset, length)
2361-        f = open(self.fname, 'rb+')
2362-        real_offset = self._data_offset+offset
2363-        f.seek(real_offset)
2364-        assert f.tell() == real_offset
2365-        f.write(data)
2366-        f.close()
2367+        pass
2368 
2369     def _write_lease_record(self, f, lease_number, lease_info):
2370         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2371hunk ./src/allmydata/storage/backends/null/core.py 84
2372             if data:
2373                 yield LeaseInfo().from_immutable_data(data)
2374 
2375-    def add_lease(self, lease_info):
2376-        f = open(self.fname, 'rb+')
2377-        num_leases = self._read_num_leases(f)
2378-        self._write_lease_record(f, num_leases, lease_info)
2379-        self._write_num_leases(f, num_leases+1)
2380-        f.close()
2381+    def add_lease(self, lease):
2382+        pass
2383 
2384     def renew_lease(self, renew_secret, new_expire_time):
2385         for i,lease in enumerate(self.get_leases()):
2386hunk ./src/allmydata/test/test_backends.py 32
2387                      'sharetypes' : None}
2388 
2389 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2390-    @mock.patch('time.time')
2391-    @mock.patch('os.mkdir')
2392-    @mock.patch('__builtin__.open')
2393-    @mock.patch('os.listdir')
2394-    @mock.patch('os.path.isdir')
2395-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2396-        """ This tests whether a server instance can be constructed
2397-        with a null backend. The server instance fails the test if it
2398-        tries to read or write to the file system. """
2399-
2400-        # Now begin the test.
2401-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2402-
2403-        self.failIf(mockisdir.called)
2404-        self.failIf(mocklistdir.called)
2405-        self.failIf(mockopen.called)
2406-        self.failIf(mockmkdir.called)
2407-
2408-        # You passed!
2409-
2410     @mock.patch('time.time')
2411     @mock.patch('os.mkdir')
2412     @mock.patch('__builtin__.open')
2413hunk ./src/allmydata/test/test_backends.py 53
2414                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2415         mockopen.side_effect = call_open
2416 
2417-        # Now begin the test.
2418-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2419-
2420-        self.failIf(mockisdir.called)
2421-        self.failIf(mocklistdir.called)
2422-        self.failIf(mockopen.called)
2423-        self.failIf(mockmkdir.called)
2424-        self.failIf(mocktime.called)
2425-
2426-        # You passed!
2427-
2428-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2429-    def setUp(self):
2430-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2431-
2432-    @mock.patch('os.mkdir')
2433-    @mock.patch('__builtin__.open')
2434-    @mock.patch('os.listdir')
2435-    @mock.patch('os.path.isdir')
2436-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2437-        """ Write a new share. """
2438-
2439-        # Now begin the test.
2440-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2441-        bs[0].remote_write(0, 'a')
2442-        self.failIf(mockisdir.called)
2443-        self.failIf(mocklistdir.called)
2444-        self.failIf(mockopen.called)
2445-        self.failIf(mockmkdir.called)
2446+        def call_isdir(fname):
2447+            if fname == os.path.join(tempdir,'shares'):
2448+                return True
2449+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2450+                return True
2451+            else:
2452+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2453+        mockisdir.side_effect = call_isdir
2454 
2455hunk ./src/allmydata/test/test_backends.py 62
2456-    @mock.patch('os.path.exists')
2457-    @mock.patch('os.path.getsize')
2458-    @mock.patch('__builtin__.open')
2459-    @mock.patch('os.listdir')
2460-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2461-        """ This tests whether the code correctly finds and reads
2462-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2463-        servers. There is a similar test in test_download, but that one
2464-        is from the perspective of the client and exercises a deeper
2465-        stack of code. This one is for exercising just the
2466-        StorageServer object. """
2467+        def call_mkdir(fname, mode):
2468+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2469+            self.failUnlessEqual(0777, mode)
2470+            if fname == tempdir:
2471+                return None
2472+            elif fname == os.path.join(tempdir,'shares'):
2473+                return None
2474+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2475+                return None
2476+            else:
2477+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2478+        mockmkdir.side_effect = call_mkdir
2479 
2480         # Now begin the test.
2481hunk ./src/allmydata/test/test_backends.py 76
2482-        bs = self.s.remote_get_buckets('teststorage_index')
2483+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2484 
2485hunk ./src/allmydata/test/test_backends.py 78
2486-        self.failUnlessEqual(len(bs), 0)
2487-        self.failIf(mocklistdir.called)
2488-        self.failIf(mockopen.called)
2489-        self.failIf(mockgetsize.called)
2490-        self.failIf(mockexists.called)
2491+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2492 
2493 
2494 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2495hunk ./src/allmydata/test/test_backends.py 193
2496         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2497 
2498 
2499+
2500+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2501+    @mock.patch('time.time')
2502+    @mock.patch('os.mkdir')
2503+    @mock.patch('__builtin__.open')
2504+    @mock.patch('os.listdir')
2505+    @mock.patch('os.path.isdir')
2506+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2507+        """ This tests whether a file system backend instance can be
2508+        constructed. To pass the test, it has to use the
2509+        filesystem in only the prescribed ways. """
2510+
2511+        def call_open(fname, mode):
2512+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2513+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2514+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2515+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2516+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2517+                return StringIO()
2518+            else:
2519+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2520+        mockopen.side_effect = call_open
2521+
2522+        def call_isdir(fname):
2523+            if fname == os.path.join(tempdir,'shares'):
2524+                return True
2525+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2526+                return True
2527+            else:
2528+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2529+        mockisdir.side_effect = call_isdir
2530+
2531+        def call_mkdir(fname, mode):
2532+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2533+            self.failUnlessEqual(0777, mode)
2534+            if fname == tempdir:
2535+                return None
2536+            elif fname == os.path.join(tempdir,'shares'):
2537+                return None
2538+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2539+                return None
2540+            else:
2541+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2542+        mockmkdir.side_effect = call_mkdir
2543+
2544+        # Now begin the test.
2545+        DASCore('teststoredir', expiration_policy)
2546+
2547+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2548}
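The tests above all follow one pattern: each mocked filesystem call gets a side_effect whitelist, so any path the backend touches outside the prescribed storage tree fails the test immediately. A minimal self-contained sketch of that pattern (using Python 3's unittest.mock; the patch itself targets Python 2 with the standalone `mock` package, and the directory names are the test fixtures' own):

```python
import os
from unittest import mock

# Paths the FS backend is allowed to probe, mirroring call_isdir above.
ALLOWED = {os.path.join('teststoredir', 'shares'),
           os.path.join('teststoredir', 'shares', 'incoming')}

def strict_isdir(fname):
    # Whitelist check: anything outside the storage tree is a test failure.
    if fname in ALLOWED:
        return True
    raise AssertionError("backend touched unexpected path %r" % (fname,))

with mock.patch('os.path.isdir', side_effect=strict_isdir) as mockisdir:
    assert os.path.isdir(os.path.join('teststoredir', 'shares'))

assert mockisdir.called  # the mock recorded the probe
```

The same shape is repeated for `open`, `os.mkdir`, and `os.listdir` in the hunks above, which is what lets `test_create_fs_backend` assert that construction uses the filesystem "in only the prescribed ways".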
2549[checkpoint 6
2550wilcoxjg@gmail.com**20110706190824
2551 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2552] {
2553hunk ./src/allmydata/interfaces.py 100
2554                          renew_secret=LeaseRenewSecret,
2555                          cancel_secret=LeaseCancelSecret,
2556                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2557-                         allocated_size=Offset, canary=Referenceable):
2558+                         allocated_size=Offset,
2559+                         canary=Referenceable):
2560         """
2561hunk ./src/allmydata/interfaces.py 103
2562-        @param storage_index: the index of the bucket to be created or
2563+        @param storage_index: the index of the shares to be created or
2564                               increfed.
2565hunk ./src/allmydata/interfaces.py 105
2566-        @param sharenums: these are the share numbers (probably between 0 and
2567-                          99) that the sender is proposing to store on this
2568-                          server.
2569-        @param renew_secret: This is the secret used to protect bucket refresh
2570+        @param renew_secret: This is the secret used to protect shares refresh
2571                              This secret is generated by the client and
2572                              stored for later comparison by the server. Each
2573                              server is given a different secret.
2574hunk ./src/allmydata/interfaces.py 109
2575-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2576-        @param canary: If the canary is lost before close(), the bucket is
2577+        @param cancel_secret: Like renew_secret, but protects shares decref.
2578+        @param sharenums: these are the share numbers (probably between 0 and
2579+                          99) that the sender is proposing to store on this
2580+                          server.
2581+        @param allocated_size: XXX The size of the shares the client wishes to store.
2582+        @param canary: If the canary is lost before close(), the shares are
2583                        deleted.
2584hunk ./src/allmydata/interfaces.py 116
2585+
2586         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2587                  already have and allocated is what we hereby agree to accept.
2588                  New leases are added for shares in both lists.
2589hunk ./src/allmydata/interfaces.py 128
2590                   renew_secret=LeaseRenewSecret,
2591                   cancel_secret=LeaseCancelSecret):
2592         """
2593-        Add a new lease on the given bucket. If the renew_secret matches an
2594+        Add a new lease on the given shares. If the renew_secret matches an
2595         existing lease, that lease will be renewed instead. If there is no
2596         bucket for the given storage_index, return silently. (note that in
2597         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2598hunk ./src/allmydata/storage/server.py 17
2599 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2600      create_mutable_sharefile
2601 
2602-from zope.interface import implements
2603-
2604 # storage/
2605 # storage/shares/incoming
2606 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2607hunk ./src/allmydata/test/test_backends.py 6
2608 from StringIO import StringIO
2609 
2610 from allmydata.test.common_util import ReallyEqualMixin
2611+from allmydata.util.assertutil import _assert
2612 
2613 import mock, os
2614 
2615hunk ./src/allmydata/test/test_backends.py 92
2616                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2617             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2618                 return StringIO()
2619+            else:
2620+                _assert(False, "The tester code doesn't recognize this case.") 
2621+
2622         mockopen.side_effect = call_open
2623         testbackend = DASCore(tempdir, expiration_policy)
2624         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2625hunk ./src/allmydata/test/test_backends.py 109
2626 
2627         def call_listdir(dirname):
2628             self.failUnlessReallyEqual(dirname, sharedirname)
2629-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2630+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2631 
2632         mocklistdir.side_effect = call_listdir
2633 
2634hunk ./src/allmydata/test/test_backends.py 113
2635+        def call_isdir(dirname):
2636+            self.failUnlessReallyEqual(dirname, sharedirname)
2637+            return True
2638+
2639+        mockisdir.side_effect = call_isdir
2640+
2641+        def call_mkdir(dirname, permissions):
2642+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2643+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2644+            else:
2645+                return True
2646+
2647+        mockmkdir.side_effect = call_mkdir
2648+
2649         class MockFile:
2650             def __init__(self):
2651                 self.buffer = ''
2652hunk ./src/allmydata/test/test_backends.py 156
2653             return sharefile
2654 
2655         mockopen.side_effect = call_open
2656+
2657         # Now begin the test.
2658         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2659         bs[0].remote_write(0, 'a')
2660hunk ./src/allmydata/test/test_backends.py 161
2661         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2662+       
2663+        # Now test the allocated_size method.
2664+        spaceint = self.s.allocated_size()
2665 
2666     @mock.patch('os.path.exists')
2667     @mock.patch('os.path.getsize')
2668}
2669[checkpoint 7
2670wilcoxjg@gmail.com**20110706200820
2671 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2672] hunk ./src/allmydata/test/test_backends.py 164
2673         
2674         # Now test the allocated_size method.
2675         spaceint = self.s.allocated_size()
2676+        self.failUnlessReallyEqual(spaceint, 1)
2677 
2678     @mock.patch('os.path.exists')
2679     @mock.patch('os.path.getsize')
2680[checkpoint8
2681wilcoxjg@gmail.com**20110706223126
2682 Ignore-this: 97336180883cb798b16f15411179f827
2683   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2684] hunk ./src/allmydata/test/test_backends.py 32
2685                      'cutoff_date' : None,
2686                      'sharetypes' : None}
2687 
2688+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2689+    def setUp(self):
2690+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2691+
2692+    @mock.patch('os.mkdir')
2693+    @mock.patch('__builtin__.open')
2694+    @mock.patch('os.listdir')
2695+    @mock.patch('os.path.isdir')
2696+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2697+        """ Write a new share. """
2698+
2699+        # Now begin the test.
2700+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2701+        bs[0].remote_write(0, 'a')
2702+        self.failIf(mockisdir.called)
2703+        self.failIf(mocklistdir.called)
2704+        self.failIf(mockopen.called)
2705+        self.failIf(mockmkdir.called)
2706+
2707 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2708     @mock.patch('time.time')
2709     @mock.patch('os.mkdir')
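Checkpoint8's test relies on the null backend's key property: writes succeed without touching the filesystem, so the server behaves as if it had unlimited space. A hypothetical sketch of the share object that property implies (names are illustrative, not the patch's final API):

```python
class NullShare:
    """Sketch of a null-backend share: every write is discarded and
    reads return nothing, so no mocked filesystem call ever fires."""
    sharetype = "immutable"

    def __init__(self, shnum=0):
        self.shnum = shnum

    def get_shnum(self):
        return self.shnum

    def write_share_data(self, offset, data):
        pass  # discard the bytes: nothing is ever stored

    def read_share_data(self, offset, length):
        return ""  # nothing was stored, so nothing can be read
```

This is why `test_write_share` can call `bs[0].remote_write(0, 'a')` and then assert that `isdir`, `listdir`, `open`, and `mkdir` were never called.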
2710[checkpoint 9
2711wilcoxjg@gmail.com**20110707042942
2712 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2713] {
2714hunk ./src/allmydata/storage/backends/das/core.py 88
2715                     filename = os.path.join(finalstoragedir, f)
2716                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2717         except OSError:
2718-            # Commonly caused by there being no buckets at all.
2719+            # Commonly caused by there being no shares at all.
2720             pass
2721         
2722     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2723hunk ./src/allmydata/storage/backends/das/core.py 141
2724         self.storage_index = storageindex
2725         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2726         self._max_size = max_size
2727+        self.incomingdir = os.path.join(sharedir, 'incoming')
2728+        si_dir = storage_index_to_dir(storageindex)
2729+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2730+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2731         if create:
2732             # touch the file, so later callers will see that we're working on
2733             # it. Also construct the metadata.
2734hunk ./src/allmydata/storage/backends/das/core.py 177
2735             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2736         self._data_offset = 0xc
2737 
2738+    def close(self):
2739+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2740+        fileutil.rename(self.incominghome, self.finalhome)
2741+        try:
2742+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2743+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2744+            # these directories lying around forever, but the delete might
2745+            # fail if we're working on another share for the same storage
2746+            # index (like ab/abcde/5). The alternative approach would be to
2747+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2748+            # ShareWriter), each of which is responsible for a single
2749+            # directory on disk, and have them use reference counting of
2750+            # their children to know when they should do the rmdir. This
2751+            # approach is simpler, but relies on os.rmdir refusing to delete
2752+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2753+            os.rmdir(os.path.dirname(self.incominghome))
2754+            # we also delete the grandparent (prefix) directory, .../ab ,
2755+            # again to avoid leaving directories lying around. This might
2756+            # fail if there is another bucket open that shares a prefix (like
2757+            # ab/abfff).
2758+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2759+            # we leave the great-grandparent (incoming/) directory in place.
2760+        except EnvironmentError:
2761+            # ignore the "can't rmdir because the directory is not empty"
2762+            # exceptions, those are normal consequences of the
2763+            # above-mentioned conditions.
2764+            pass
2765+        pass
2766+       
2767+    def stat(self):
2768+        return os.stat(self.finalhome)[stat.ST_SIZE]
2769+
2770     def get_shnum(self):
2771         return self.shnum
2772 
2773hunk ./src/allmydata/storage/immutable.py 7
2774 
2775 from zope.interface import implements
2776 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2777-from allmydata.util import base32, fileutil, log
2778+from allmydata.util import base32, log
2779 from allmydata.util.assertutil import precondition
2780 from allmydata.util.hashutil import constant_time_compare
2781 from allmydata.storage.lease import LeaseInfo
2782hunk ./src/allmydata/storage/immutable.py 44
2783     def remote_close(self):
2784         precondition(not self.closed)
2785         start = time.time()
2786-
2787-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2788-        fileutil.rename(self.incominghome, self.finalhome)
2789-        try:
2790-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2791-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2792-            # these directories lying around forever, but the delete might
2793-            # fail if we're working on another share for the same storage
2794-            # index (like ab/abcde/5). The alternative approach would be to
2795-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2796-            # ShareWriter), each of which is responsible for a single
2797-            # directory on disk, and have them use reference counting of
2798-            # their children to know when they should do the rmdir. This
2799-            # approach is simpler, but relies on os.rmdir refusing to delete
2800-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2801-            os.rmdir(os.path.dirname(self.incominghome))
2802-            # we also delete the grandparent (prefix) directory, .../ab ,
2803-            # again to avoid leaving directories lying around. This might
2804-            # fail if there is another bucket open that shares a prefix (like
2805-            # ab/abfff).
2806-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2807-            # we leave the great-grandparent (incoming/) directory in place.
2808-        except EnvironmentError:
2809-            # ignore the "can't rmdir because the directory is not empty"
2810-            # exceptions, those are normal consequences of the
2811-            # above-mentioned conditions.
2812-            pass
2813+        self._sharefile.close()
2814         self._sharefile = None
2815         self.closed = True
2816         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2817hunk ./src/allmydata/storage/immutable.py 49
2818 
2819-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2820+        filelen = self._sharefile.stat()
2821         self.ss.bucket_writer_closed(self, filelen)
2822         self.ss.add_latency("close", time.time() - start)
2823         self.ss.count("close")
2824hunk ./src/allmydata/storage/server.py 45
2825         self._active_writers = weakref.WeakKeyDictionary()
2826         self.backend = backend
2827         self.backend.setServiceParent(self)
2828+        self.backend.set_storage_server(self)
2829         log.msg("StorageServer created", facility="tahoe.storage")
2830 
2831         self.latencies = {"allocate": [], # immutable
2832hunk ./src/allmydata/storage/server.py 220
2833 
2834         for shnum in (sharenums - alreadygot):
2835             if (not limited) or (remaining_space >= max_space_per_bucket):
2836-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2837-                self.backend.set_storage_server(self)
2838                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2839                                                      max_space_per_bucket, lease_info, canary)
2840                 bucketwriters[shnum] = bw
2841hunk ./src/allmydata/test/test_backends.py 117
2842         mockopen.side_effect = call_open
2843         testbackend = DASCore(tempdir, expiration_policy)
2844         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2845-
2846+   
2847+    @mock.patch('allmydata.util.fileutil.get_available_space')
2848     @mock.patch('time.time')
2849     @mock.patch('os.mkdir')
2850     @mock.patch('__builtin__.open')
2851hunk ./src/allmydata/test/test_backends.py 124
2852     @mock.patch('os.listdir')
2853     @mock.patch('os.path.isdir')
2854-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2855+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2856+                             mockget_available_space):
2857         """ Write a new share. """
2858 
2859         def call_listdir(dirname):
2860hunk ./src/allmydata/test/test_backends.py 148
2861 
2862         mockmkdir.side_effect = call_mkdir
2863 
2864+        def call_get_available_space(storedir, reserved_space):
2865+            self.failUnlessReallyEqual(storedir, tempdir)
2866+            return 1
2867+
2868+        mockget_available_space.side_effect = call_get_available_space
2869+
2870         class MockFile:
2871             def __init__(self):
2872                 self.buffer = ''
2873hunk ./src/allmydata/test/test_backends.py 188
2874         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2875         bs[0].remote_write(0, 'a')
2876         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2877-       
2878+
2879+        # What happens when there's not enough space for the client's request?
2880+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2881+
2882         # Now test the allocated_size method.
2883         spaceint = self.s.allocated_size()
2884         self.failUnlessReallyEqual(spaceint, 1)
2885}
2886[checkpoint10
2887wilcoxjg@gmail.com**20110707172049
2888 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2889] {
2890hunk ./src/allmydata/test/test_backends.py 20
2891 # The following share file contents was generated with
2892 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2893 # with share data == 'a'.
2894-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2895+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2896+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2897+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2898 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2899 
2900hunk ./src/allmydata/test/test_backends.py 25
2901+testnodeid = 'testnodeidxxxxxxxxxx'
2902 tempdir = 'teststoredir'
2903 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2904 sharefname = os.path.join(sharedirname, '0')
2905hunk ./src/allmydata/test/test_backends.py 37
2906 
2907 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2908     def setUp(self):
2909-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2910+        self.s = StorageServer(testnodeid, backend=NullCore())
2911 
2912     @mock.patch('os.mkdir')
2913     @mock.patch('__builtin__.open')
2914hunk ./src/allmydata/test/test_backends.py 99
2915         mockmkdir.side_effect = call_mkdir
2916 
2917         # Now begin the test.
2918-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2919+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2920 
2921         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2922 
2923hunk ./src/allmydata/test/test_backends.py 119
2924 
2925         mockopen.side_effect = call_open
2926         testbackend = DASCore(tempdir, expiration_policy)
2927-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2928-   
2929+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2930+       
2931+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2932     @mock.patch('allmydata.util.fileutil.get_available_space')
2933     @mock.patch('time.time')
2934     @mock.patch('os.mkdir')
2935hunk ./src/allmydata/test/test_backends.py 129
2936     @mock.patch('os.listdir')
2937     @mock.patch('os.path.isdir')
2938     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2939-                             mockget_available_space):
2940+                             mockget_available_space, mockget_shares):
2941         """ Write a new share. """
2942 
2943         def call_listdir(dirname):
2944hunk ./src/allmydata/test/test_backends.py 139
2945         mocklistdir.side_effect = call_listdir
2946 
2947         def call_isdir(dirname):
2948+            #XXX Should there be any other tests here?
2949             self.failUnlessReallyEqual(dirname, sharedirname)
2950             return True
2951 
2952hunk ./src/allmydata/test/test_backends.py 159
2953 
2954         mockget_available_space.side_effect = call_get_available_space
2955 
2956+        mocktime.return_value = 0
2957+        class MockShare:
2958+            def __init__(self):
2959+                self.shnum = 1
2960+               
2961+            def add_or_renew_lease(elf, lease_info):
2962+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2963+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2964+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2965+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2966+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2967+               
2968+
2969+        share = MockShare()
2970+        def call_get_shares(storageindex):
2971+            return [share]
2972+
2973+        mockget_shares.side_effect = call_get_shares
2974+
2975         class MockFile:
2976             def __init__(self):
2977                 self.buffer = ''
2978hunk ./src/allmydata/test/test_backends.py 199
2979             def tell(self):
2980                 return self.pos
2981 
2982-        mocktime.return_value = 0
2983 
2984         sharefile = MockFile()
2985         def call_open(fname, mode):
2986}
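(The tests above fake out `__builtin__.open` with a `MockFile` whose `write`/`seek`/`tell` operate on an in-memory string buffer. A minimal self-contained version of such a stand-in, reconstructed by analogy since the full class body is not shown in these hunks:)

```python
class MockFile(object):
    """In-memory stand-in for a writable file, sufficient for code
    that only calls write/seek/tell/close."""
    def __init__(self):
        self.buffer = ''
        self.pos = 0

    def write(self, data):
        # overwrite in place at the current position, like a real file
        self.buffer = (self.buffer[:self.pos] + data
                       + self.buffer[self.pos + len(data):])
        self.pos += len(data)

    def seek(self, pos):
        self.pos = pos

    def tell(self):
        return self.pos

    def close(self):
        pass
```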
2987[jacp 11
2988wilcoxjg@gmail.com**20110708213919
2989 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
2990] {
2991hunk ./src/allmydata/storage/backends/das/core.py 144
2992         self.incomingdir = os.path.join(sharedir, 'incoming')
2993         si_dir = storage_index_to_dir(storageindex)
2994         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2995+        #XXX  self.fname and self.finalhome need to be resolve/merged.
2996         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2997         if create:
2998             # touch the file, so later callers will see that we're working on
2999hunk ./src/allmydata/storage/backends/das/core.py 208
3000         pass
3001         
3002     def stat(self):
3003-        return os.stat(self.finalhome)[stat.ST_SIZE]
3004+        return os.stat(self.finalhome).st_size
3005 
3006     def get_shnum(self):
3007         return self.shnum
3008hunk ./src/allmydata/storage/immutable.py 44
3009     def remote_close(self):
3010         precondition(not self.closed)
3011         start = time.time()
3012+
3013         self._sharefile.close()
3014hunk ./src/allmydata/storage/immutable.py 46
3015+        filelen = self._sharefile.stat()
3016         self._sharefile = None
3017hunk ./src/allmydata/storage/immutable.py 48
3018+
3019         self.closed = True
3020         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3021 
3022hunk ./src/allmydata/storage/immutable.py 52
3023-        filelen = self._sharefile.stat()
3024         self.ss.bucket_writer_closed(self, filelen)
3025         self.ss.add_latency("close", time.time() - start)
3026         self.ss.count("close")
3027hunk ./src/allmydata/storage/server.py 220
3028 
3029         for shnum in (sharenums - alreadygot):
3030             if (not limited) or (remaining_space >= max_space_per_bucket):
3031-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3032-                                                     max_space_per_bucket, lease_info, canary)
3033+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3034                 bucketwriters[shnum] = bw
3035                 self._active_writers[bw] = 1
3036                 if limited:
3037hunk ./src/allmydata/test/test_backends.py 20
3038 # The following share file contents was generated with
3039 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3040 # with share data == 'a'.
3041-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3042-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3043+renew_secret  = 'x'*32
3044+cancel_secret = 'y'*32
3045 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3046 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3047 
3048hunk ./src/allmydata/test/test_backends.py 27
3049 testnodeid = 'testnodeidxxxxxxxxxx'
3050 tempdir = 'teststoredir'
3051-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3052-sharefname = os.path.join(sharedirname, '0')
3053+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3054+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3055+shareincomingname = os.path.join(sharedirincomingname, '0')
3056+sharefname = os.path.join(sharedirfinalname, '0')
3057+
3058 expiration_policy = {'enabled' : False,
3059                      'mode' : 'age',
3060                      'override_lease_duration' : None,
3061hunk ./src/allmydata/test/test_backends.py 123
3062         mockopen.side_effect = call_open
3063         testbackend = DASCore(tempdir, expiration_policy)
3064         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3065-       
3066+
3067+    @mock.patch('allmydata.util.fileutil.rename')
3068+    @mock.patch('allmydata.util.fileutil.make_dirs')
3069+    @mock.patch('os.path.exists')
3070+    @mock.patch('os.stat')
3071     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3072     @mock.patch('allmydata.util.fileutil.get_available_space')
3073     @mock.patch('time.time')
3074hunk ./src/allmydata/test/test_backends.py 136
3075     @mock.patch('os.listdir')
3076     @mock.patch('os.path.isdir')
3077     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3078-                             mockget_available_space, mockget_shares):
3079+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3080+                             mockmake_dirs, mockrename):
3081         """ Write a new share. """
3082 
3083         def call_listdir(dirname):
3084hunk ./src/allmydata/test/test_backends.py 141
3085-            self.failUnlessReallyEqual(dirname, sharedirname)
3086+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3087             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3088 
3089         mocklistdir.side_effect = call_listdir
3090hunk ./src/allmydata/test/test_backends.py 148
3091 
3092         def call_isdir(dirname):
3093             #XXX Should there be any other tests here?
3094-            self.failUnlessReallyEqual(dirname, sharedirname)
3095+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3096             return True
3097 
3098         mockisdir.side_effect = call_isdir
3099hunk ./src/allmydata/test/test_backends.py 154
3100 
3101         def call_mkdir(dirname, permissions):
3102-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3103+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3104                 self.Fail
3105             else:
3106                 return True
3107hunk ./src/allmydata/test/test_backends.py 208
3108                 return self.pos
3109 
3110 
3111-        sharefile = MockFile()
3112+        fobj = MockFile()
3113         def call_open(fname, mode):
3114             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3115hunk ./src/allmydata/test/test_backends.py 211
3116-            return sharefile
3117+            return fobj
3118 
3119         mockopen.side_effect = call_open
3120 
3121hunk ./src/allmydata/test/test_backends.py 215
3122+        def call_make_dirs(dname):
3123+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3124+           
3125+        mockmake_dirs.side_effect = call_make_dirs
3126+
3127+        def call_rename(src, dst):
3128+           self.failUnlessReallyEqual(src, shareincomingname)
3129+           self.failUnlessReallyEqual(dst, sharefname)
3130+           
3131+        mockrename.side_effect = call_rename
3132+
3133+        def call_exists(fname):
3134+            self.failUnlessReallyEqual(fname, sharefname)
3135+
3136+        mockexists.side_effect = call_exists
3137+
3138         # Now begin the test.
3139         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3140         bs[0].remote_write(0, 'a')
3141hunk ./src/allmydata/test/test_backends.py 234
3142-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3143+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3144+        spaceint = self.s.allocated_size()
3145+        self.failUnlessReallyEqual(spaceint, 1)
3146+
3147+        bs[0].remote_close()
3148 
3149         # What happens when there's not enough space for the client's request?
3150hunk ./src/allmydata/test/test_backends.py 241
3151-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3152+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3153 
3154         # Now test the allocated_size method.
3155hunk ./src/allmydata/test/test_backends.py 244
3156-        spaceint = self.s.allocated_size()
3157-        self.failUnlessReallyEqual(spaceint, 1)
3158+        #self.failIf(mockexists.called, mockexists.call_args_list)
3159+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3160+        #self.failIf(mockrename.called, mockrename.call_args_list)
3161+        #self.failIf(mockstat.called, mockstat.call_args_list)
3162 
3163     @mock.patch('os.path.exists')
3164     @mock.patch('os.path.getsize')
3165}
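(jacp 11 stacks ten `@mock.patch` decorators on `test_write_share`; since decorators apply bottom-up, the injected mock arguments arrive in the *reverse* of the decorator order. A small illustration of that ordering rule, written against `unittest.mock`, which is the Python 3 home of the standalone `mock` package used here:)

```python
import os
import unittest
from unittest import mock  # 'import mock' under Python 2


class OrderExample(unittest.TestCase):
    # The bottom-most patch becomes the first mock argument after self.
    @mock.patch('os.path.exists')
    @mock.patch('os.listdir')
    def test_order(self, mocklistdir, mockexists):
        mocklistdir.return_value = []
        mockexists.return_value = False
        self.assertEqual(os.listdir('anywhere'), [])
        self.assertFalse(os.path.exists('anywhere'))


if __name__ == '__main__':
    unittest.main()
```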
3166[checkpoint12 testing correct behavior with regard to incoming and final
3167wilcoxjg@gmail.com**20110710191915
3168 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3169] {
3170hunk ./src/allmydata/storage/backends/das/core.py 74
3171         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3172         self.lease_checker.setServiceParent(self)
3173 
3174+    def get_incoming(self, storageindex):
3175+        return set((1,))
3176+
3177     def get_available_space(self):
3178         if self.readonly:
3179             return 0
3180hunk ./src/allmydata/storage/server.py 77
3181         """Return a dict, indexed by category, that contains a dict of
3182         latency numbers for each category. If there are sufficient samples
3183         for unambiguous interpretation, each dict will contain the
3184-        following keys: mean, 01_0_percentile, 10_0_percentile,
3185+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3186         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3187         99_0_percentile, 99_9_percentile.  If there are insufficient
3188         samples for a given percentile to be interpreted unambiguously
3189hunk ./src/allmydata/storage/server.py 120
3190 
3191     def get_stats(self):
3192         # remember: RIStatsProvider requires that our return dict
3193-        # contains numeric values.
3194+        # contains numeric or None values.
3195         stats = { 'storage_server.allocated': self.allocated_size(), }
3196         stats['storage_server.reserved_space'] = self.reserved_space
3197         for category,ld in self.get_latencies().items():
3198hunk ./src/allmydata/storage/server.py 185
3199         start = time.time()
3200         self.count("allocate")
3201         alreadygot = set()
3202+        incoming = set()
3203         bucketwriters = {} # k: shnum, v: BucketWriter
3204 
3205         si_s = si_b2a(storage_index)
3206hunk ./src/allmydata/storage/server.py 219
3207             alreadygot.add(share.shnum)
3208             share.add_or_renew_lease(lease_info)
3209 
3210-        for shnum in (sharenums - alreadygot):
3211+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3212+        incoming = self.backend.get_incoming(storageindex)
3213+
3214+        for shnum in ((sharenums - alreadygot) - incoming):
3215             if (not limited) or (remaining_space >= max_space_per_bucket):
3216                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3217                 bucketwriters[shnum] = bw
3218hunk ./src/allmydata/storage/server.py 229
3219                 self._active_writers[bw] = 1
3220                 if limited:
3221                     remaining_space -= max_space_per_bucket
3222-
3223-        #XXX We SHOULD DOCUMENT LATER.
3224+            else:
3225+                # Bummer: not enough space to accept this share.
3226+                pass
3227 
3228         self.add_latency("allocate", time.time() - start)
3229         return alreadygot, bucketwriters
3230hunk ./src/allmydata/storage/server.py 323
3231         self.add_latency("get", time.time() - start)
3232         return bucketreaders
3233 
3234-    def get_leases(self, storage_index):
3235+    def remote_get_incoming(self, storageindex):
3236+        incoming_share_set = self.backend.get_incoming(storageindex)
3237+        return incoming_share_set
3238+
3239+    def get_leases(self, storageindex):
3240         """Provide an iterator that yields all of the leases attached to this
3241         bucket. Each lease is returned as a LeaseInfo instance.
3242 
3243hunk ./src/allmydata/storage/server.py 337
3244         # since all shares get the same lease data, we just grab the leases
3245         # from the first share
3246         try:
3247-            shnum, filename = self._get_shares(storage_index).next()
3248+            shnum, filename = self._get_shares(storageindex).next()
3249             sf = ShareFile(filename)
3250             return sf.get_leases()
3251         except StopIteration:
3252hunk ./src/allmydata/test/test_backends.py 182
3253 
3254         share = MockShare()
3255         def call_get_shares(storageindex):
3256-            return [share]
3257+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3258+            return []#share]
3259 
3260         mockget_shares.side_effect = call_get_shares
3261 
3262hunk ./src/allmydata/test/test_backends.py 222
3263         mockmake_dirs.side_effect = call_make_dirs
3264 
3265         def call_rename(src, dst):
3266-           self.failUnlessReallyEqual(src, shareincomingname)
3267-           self.failUnlessReallyEqual(dst, sharefname)
3268+            self.failUnlessReallyEqual(src, shareincomingname)
3269+            self.failUnlessReallyEqual(dst, sharefname)
3270             
3271         mockrename.side_effect = call_rename
3272 
3273hunk ./src/allmydata/test/test_backends.py 233
3274         mockexists.side_effect = call_exists
3275 
3276         # Now begin the test.
3277+
3278+        # XXX (0) ???  Fail unless something is not properly set-up?
3279         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3280hunk ./src/allmydata/test/test_backends.py 236
3281+
3282+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3283+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3284+
3285+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3286+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3287+        # with the same si, until BucketWriter.remote_close() has been called.
3288+        # self.failIf(bsa)
3289+
3290+        # XXX (3) Inspect final and fail unless there's nothing there.
3291         bs[0].remote_write(0, 'a')
3292hunk ./src/allmydata/test/test_backends.py 247
3293+        # XXX (4a) Inspect final and fail unless share 0 is there.
3294+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3295         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3296         spaceint = self.s.allocated_size()
3297         self.failUnlessReallyEqual(spaceint, 1)
3298hunk ./src/allmydata/test/test_backends.py 253
3299 
3300+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3301         bs[0].remote_close()
3302 
3303         # What happens when there's not enough space for the client's request?
3304hunk ./src/allmydata/test/test_backends.py 260
3305         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3306 
3307         # Now test the allocated_size method.
3308-        #self.failIf(mockexists.called, mockexists.call_args_list)
3309+        # self.failIf(mockexists.called, mockexists.call_args_list)
3310         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3311         #self.failIf(mockrename.called, mockrename.call_args_list)
3312         #self.failIf(mockstat.called, mockstat.call_args_list)
3313}
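(checkpoint12 changes the allocation loop to also skip in-flight shares: `for shnum in ((sharenums - alreadygot) - incoming)`. The set arithmetic can be isolated as a tiny helper — `shares_to_allocate` is a hypothetical name, not part of the patch:)

```python
def shares_to_allocate(sharenums, alreadygot, incoming):
    """Requested share numbers that still need a BucketWriter:
    drop the ones already in final storage (alreadygot) and the
    ones currently being uploaded (incoming)."""
    return (set(sharenums) - set(alreadygot)) - set(incoming)
```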
3314[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3315wilcoxjg@gmail.com**20110710195139
3316 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3317] {
3318hunk ./src/allmydata/storage/server.py 220
3319             share.add_or_renew_lease(lease_info)
3320 
3321         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3322-        incoming = self.backend.get_incoming(storageindex)
3323+        incoming = self.backend.get_incoming(storage_index)
3324 
3325         for shnum in ((sharenums - alreadygot) - incoming):
3326             if (not limited) or (remaining_space >= max_space_per_bucket):
3327hunk ./src/allmydata/storage/server.py 323
3328         self.add_latency("get", time.time() - start)
3329         return bucketreaders
3330 
3331-    def remote_get_incoming(self, storageindex):
3332-        incoming_share_set = self.backend.get_incoming(storageindex)
3333+    def remote_get_incoming(self, storage_index):
3334+        incoming_share_set = self.backend.get_incoming(storage_index)
3335         return incoming_share_set
3336 
3337hunk ./src/allmydata/storage/server.py 327
3338-    def get_leases(self, storageindex):
3339+    def get_leases(self, storage_index):
3340         """Provide an iterator that yields all of the leases attached to this
3341         bucket. Each lease is returned as a LeaseInfo instance.
3342 
3343hunk ./src/allmydata/storage/server.py 337
3344         # since all shares get the same lease data, we just grab the leases
3345         # from the first share
3346         try:
3347-            shnum, filename = self._get_shares(storageindex).next()
3348+            shnum, filename = self._get_shares(storage_index).next()
3349             sf = ShareFile(filename)
3350             return sf.get_leases()
3351         except StopIteration:
3352replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3353}
3354[adding comments to clarify what I'm about to do.
3355wilcoxjg@gmail.com**20110710220623
3356 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3357] {
3358hunk ./src/allmydata/storage/backends/das/core.py 8
3359 
3360 import os, re, weakref, struct, time
3361 
3362-from foolscap.api import Referenceable
3363+#from foolscap.api import Referenceable
3364 from twisted.application import service
3365 
3366 from zope.interface import implements
3367hunk ./src/allmydata/storage/backends/das/core.py 12
3368-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3369+from allmydata.interfaces import IStatsProducer, IShareStore  # XXX, RIStorageServer
3370 from allmydata.util import fileutil, idlib, log, time_format
3371 import allmydata # for __full_version__
3372 
3373hunk ./src/allmydata/storage/server.py 219
3374             alreadygot.add(share.shnum)
3375             share.add_or_renew_lease(lease_info)
3376 
3377-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3378+        # fill incoming with all shares that are incoming; use a set operation
3379+        # since there's no need to operate on individual pieces
3380         incoming = self.backend.get_incoming(storageindex)
3381 
3382         for shnum in ((sharenums - alreadygot) - incoming):
3383hunk ./src/allmydata/test/test_backends.py 245
3384         # with the same si, until BucketWriter.remote_close() has been called.
3385         # self.failIf(bsa)
3386 
3387-        # XXX (3) Inspect final and fail unless there's nothing there.
3388         bs[0].remote_write(0, 'a')
3389hunk ./src/allmydata/test/test_backends.py 246
3390-        # XXX (4a) Inspect final and fail unless share 0 is there.
3391-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3392         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3393         spaceint = self.s.allocated_size()
3394         self.failUnlessReallyEqual(spaceint, 1)
3395hunk ./src/allmydata/test/test_backends.py 250
3396 
3397-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3398+        # XXX (3) Inspect final and fail unless there's nothing there.
3399         bs[0].remote_close()
3400hunk ./src/allmydata/test/test_backends.py 252
3401+        # XXX (4a) Inspect final and fail unless share 0 is there.
3402+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3403 
3404         # What happens when there's not enough space for the client's request?
3405         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3406}
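The allocation loop these hunks annotate (`(sharenums - alreadygot) - incoming`) is plain set arithmetic: the shares the client asked for, minus those already stored, minus those still being uploaded. A standalone sketch of that decision, including the remaining-space check from the surrounding hunk (the function name and the sizes are made up for illustration; the real logic lives in `StorageServer.remote_allocate_buckets`):

```python
# Sketch of the share-allocation set arithmetic used by the storage server.
def shares_to_allocate(sharenums, alreadygot, incoming,
                       remaining_space, max_space_per_bucket, limited=True):
    """Return the shnums for which a BucketWriter should be created."""
    candidates = (set(sharenums) - set(alreadygot)) - set(incoming)
    allocated = set()
    for shnum in sorted(candidates):
        # Mirror the patch's space check: only allocate while space remains.
        if (not limited) or (remaining_space >= max_space_per_bucket):
            allocated.add(shnum)
            if limited:
                remaining_space -= max_space_per_bucket
    return allocated

# Share 1 is already on disk, share 3 is incoming, so only 0 and 2 qualify.
print(shares_to_allocate({0, 1, 2, 3}, {1}, {3},
                         remaining_space=200, max_space_per_bucket=100))
```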
3407[branching back, no longer attempting to mock inside TestServerFSBackend
3408wilcoxjg@gmail.com**20110711190849
3409 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3410] {
3411hunk ./src/allmydata/storage/backends/das/core.py 75
3412         self.lease_checker.setServiceParent(self)
3413 
3414     def get_incoming(self, storageindex):
3415-        return set((1,))
3416-
3417-    def get_available_space(self):
3418-        if self.readonly:
3419-            return 0
3420-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3421+        """Return the set of incoming shnums."""
3422+        return set(os.listdir(self.incomingdir))
3423 
3424     def get_shares(self, storage_index):
3425         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3426hunk ./src/allmydata/storage/backends/das/core.py 90
3427             # Commonly caused by there being no shares at all.
3428             pass
3429         
3430+    def get_available_space(self):
3431+        if self.readonly:
3432+            return 0
3433+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3434+
3435     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3436         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3437         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3438hunk ./src/allmydata/test/test_backends.py 27
3439 
3440 testnodeid = 'testnodeidxxxxxxxxxx'
3441 tempdir = 'teststoredir'
3442-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3443-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3444+basedir = os.path.join(tempdir, 'shares')
3445+baseincdir = os.path.join(basedir, 'incoming')
3446+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3447+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3448 shareincomingname = os.path.join(sharedirincomingname, '0')
3449 sharefname = os.path.join(sharedirfinalname, '0')
3450 
3451hunk ./src/allmydata/test/test_backends.py 142
3452                              mockmake_dirs, mockrename):
3453         """ Write a new share. """
3454 
3455-        def call_listdir(dirname):
3456-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3457-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3458-
3459-        mocklistdir.side_effect = call_listdir
3460-
3461-        def call_isdir(dirname):
3462-            #XXX Should there be any other tests here?
3463-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3464-            return True
3465-
3466-        mockisdir.side_effect = call_isdir
3467-
3468-        def call_mkdir(dirname, permissions):
3469-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3470-                self.Fail
3471-            else:
3472-                return True
3473-
3474-        mockmkdir.side_effect = call_mkdir
3475-
3476-        def call_get_available_space(storedir, reserved_space):
3477-            self.failUnlessReallyEqual(storedir, tempdir)
3478-            return 1
3479-
3480-        mockget_available_space.side_effect = call_get_available_space
3481-
3482-        mocktime.return_value = 0
3483         class MockShare:
3484             def __init__(self):
3485                 self.shnum = 1
3486hunk ./src/allmydata/test/test_backends.py 152
3487                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3488                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3489                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3490-               
3491 
3492         share = MockShare()
3493hunk ./src/allmydata/test/test_backends.py 154
3494-        def call_get_shares(storageindex):
3495-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3496-            return []#share]
3497-
3498-        mockget_shares.side_effect = call_get_shares
3499 
3500         class MockFile:
3501             def __init__(self):
3502hunk ./src/allmydata/test/test_backends.py 176
3503             def tell(self):
3504                 return self.pos
3505 
3506-
3507         fobj = MockFile()
3508hunk ./src/allmydata/test/test_backends.py 177
3509+
3510+        directories = {}
3511+        def call_listdir(dirname):
3512+            if dirname not in directories:
3513+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3514+            else:
3515+                return directories[dirname].get_contents()
3516+
3517+        mocklistdir.side_effect = call_listdir
3518+
3519+        class MockDir:
3520+            def __init__(self, dirname):
3521+                self.name = dirname
3522+                self.contents = []
3523+   
3524+            def get_contents(self):
3525+                return self.contents
3526+
3527+        def call_isdir(dirname):
3528+            #XXX Should there be any other tests here?
3529+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3530+            return True
3531+
3532+        mockisdir.side_effect = call_isdir
3533+
3534+        def call_mkdir(dirname, permissions):
3535+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3536+                self.Fail
3537+            if dirname in directories:
3538+                raise OSError(17, "File exists: '%s'" % dirname)
3539+                self.Fail
3540+            elif dirname not in directories:
3541+                directories[dirname] = MockDir(dirname)
3542+                return True
3543+
3544+        mockmkdir.side_effect = call_mkdir
3545+
3546+        def call_get_available_space(storedir, reserved_space):
3547+            self.failUnlessReallyEqual(storedir, tempdir)
3548+            return 1
3549+
3550+        mockget_available_space.side_effect = call_get_available_space
3551+
3552+        mocktime.return_value = 0
3553+        def call_get_shares(storageindex):
3554+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3555+            return []#share]
3556+
3557+        mockget_shares.side_effect = call_get_shares
3558+
3559         def call_open(fname, mode):
3560             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3561             return fobj
3562}
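After this patch, `get_incoming` is essentially a directory listing of the incoming area. A rough standalone sketch of that idea against a throwaway directory (the conversion to integer shnums, the per-storage-index subdirectory, and the OSError fallback only arrive in later patches of this series, but they are folded in here):

```python
import os, tempfile

def get_incoming_shnums(incomingdir):
    """Return the set of share numbers that have files in incomingdir.

    A missing directory means no incoming shares at all, which mirrors
    the OSError handling the later patches settle on.
    """
    try:
        return set(int(name) for name in os.listdir(incomingdir))
    except OSError:
        return set()

# Demo against a temporary directory standing in for shares/incoming/.
d = tempfile.mkdtemp()
assert get_incoming_shnums(d) == set()
open(os.path.join(d, "0"), "wb").close()
open(os.path.join(d, "7"), "wb").close()
print(sorted(get_incoming_shnums(d)))  # [0, 7]
```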
3563[checkpoint12 TestServerFSBackend no longer mocks filesystem
3564wilcoxjg@gmail.com**20110711193357
3565 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3566] {
3567hunk ./src/allmydata/storage/backends/das/core.py 23
3568      create_mutable_sharefile
3569 from allmydata.storage.immutable import BucketWriter, BucketReader
3570 from allmydata.storage.crawler import FSBucketCountingCrawler
3571+from allmydata.util.hashutil import constant_time_compare
3572 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3573 
3574 from zope.interface import implements
3575hunk ./src/allmydata/storage/backends/das/core.py 28
3576 
3577+# storage/
3578+# storage/shares/incoming
3579+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3580+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3581+# storage/shares/$START/$STORAGEINDEX
3582+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3583+
3584+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3585+# base-32 chars).
3586 # $SHARENUM matches this regex:
3587 NUM_RE=re.compile("^[0-9]+$")
3588 
3589hunk ./src/allmydata/test/test_backends.py 126
3590         testbackend = DASCore(tempdir, expiration_policy)
3591         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3592 
3593-    @mock.patch('allmydata.util.fileutil.rename')
3594-    @mock.patch('allmydata.util.fileutil.make_dirs')
3595-    @mock.patch('os.path.exists')
3596-    @mock.patch('os.stat')
3597-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3598-    @mock.patch('allmydata.util.fileutil.get_available_space')
3599     @mock.patch('time.time')
3600hunk ./src/allmydata/test/test_backends.py 127
3601-    @mock.patch('os.mkdir')
3602-    @mock.patch('__builtin__.open')
3603-    @mock.patch('os.listdir')
3604-    @mock.patch('os.path.isdir')
3605-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3606-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3607-                             mockmake_dirs, mockrename):
3608+    def test_write_share(self, mocktime):
3609         """ Write a new share. """
3610 
3611         class MockShare:
3612hunk ./src/allmydata/test/test_backends.py 143
3613 
3614         share = MockShare()
3615 
3616-        class MockFile:
3617-            def __init__(self):
3618-                self.buffer = ''
3619-                self.pos = 0
3620-            def write(self, instring):
3621-                begin = self.pos
3622-                padlen = begin - len(self.buffer)
3623-                if padlen > 0:
3624-                    self.buffer += '\x00' * padlen
3625-                end = self.pos + len(instring)
3626-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3627-                self.pos = end
3628-            def close(self):
3629-                pass
3630-            def seek(self, pos):
3631-                self.pos = pos
3632-            def read(self, numberbytes):
3633-                return self.buffer[self.pos:self.pos+numberbytes]
3634-            def tell(self):
3635-                return self.pos
3636-
3637-        fobj = MockFile()
3638-
3639-        directories = {}
3640-        def call_listdir(dirname):
3641-            if dirname not in directories:
3642-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3643-            else:
3644-                return directories[dirname].get_contents()
3645-
3646-        mocklistdir.side_effect = call_listdir
3647-
3648-        class MockDir:
3649-            def __init__(self, dirname):
3650-                self.name = dirname
3651-                self.contents = []
3652-   
3653-            def get_contents(self):
3654-                return self.contents
3655-
3656-        def call_isdir(dirname):
3657-            #XXX Should there be any other tests here?
3658-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3659-            return True
3660-
3661-        mockisdir.side_effect = call_isdir
3662-
3663-        def call_mkdir(dirname, permissions):
3664-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3665-                self.Fail
3666-            if dirname in directories:
3667-                raise OSError(17, "File exists: '%s'" % dirname)
3668-                self.Fail
3669-            elif dirname not in directories:
3670-                directories[dirname] = MockDir(dirname)
3671-                return True
3672-
3673-        mockmkdir.side_effect = call_mkdir
3674-
3675-        def call_get_available_space(storedir, reserved_space):
3676-            self.failUnlessReallyEqual(storedir, tempdir)
3677-            return 1
3678-
3679-        mockget_available_space.side_effect = call_get_available_space
3680-
3681-        mocktime.return_value = 0
3682-        def call_get_shares(storageindex):
3683-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3684-            return []#share]
3685-
3686-        mockget_shares.side_effect = call_get_shares
3687-
3688-        def call_open(fname, mode):
3689-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3690-            return fobj
3691-
3692-        mockopen.side_effect = call_open
3693-
3694-        def call_make_dirs(dname):
3695-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3696-           
3697-        mockmake_dirs.side_effect = call_make_dirs
3698-
3699-        def call_rename(src, dst):
3700-            self.failUnlessReallyEqual(src, shareincomingname)
3701-            self.failUnlessReallyEqual(dst, sharefname)
3702-           
3703-        mockrename.side_effect = call_rename
3704-
3705-        def call_exists(fname):
3706-            self.failUnlessReallyEqual(fname, sharefname)
3707-
3708-        mockexists.side_effect = call_exists
3709-
3710         # Now begin the test.
3711 
3712         # XXX (0) ???  Fail unless something is not properly set-up?
3713}
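The layout comment this patch moves into core.py describes paths of the form storage/shares/$START/$STORAGEINDEX/$SHARENUM, where $START is the first two base-32 characters (10 bits) of the encoded storage index. A sketch of that path construction, using the standard library's base-32 codec in place of Tahoe's `si_b2a`/`storage_index_to_dir` helpers (treat this as an illustration of the shape, not the real implementation):

```python
import os
from base64 import b32encode

def storage_index_to_dir(storage_index):
    """Mimic Tahoe's layout: <first two base-32 chars>/<full base-32 si>.

    The encoding is lowercase RFC 4648 base-32 with padding stripped;
    the two-char prefix carries the first 10 bits of the storage index.
    """
    sia = b32encode(storage_index).decode("ascii").lower().rstrip("=")
    return os.path.join(sia[:2], sia)

def share_paths(storedir, storage_index, shnum):
    """Return the (incoming, final) homes for one share."""
    si_dir = storage_index_to_dir(storage_index)
    incoming = os.path.join(storedir, "shares", "incoming", si_dir, str(shnum))
    final = os.path.join(storedir, "shares", si_dir, str(shnum))
    return incoming, final

inc, fin = share_paths("teststoredir", b"teststorage_index", 0)
print(inc)  # e.g. teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0
print(fin)  # e.g. teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
```

These are exactly the `sharedirincomingname`/`sharedirfinalname` constants that test_backends.py builds by hand.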
3714[JACP
3715wilcoxjg@gmail.com**20110711194407
3716 Ignore-this: b54745de777c4bb58d68d708f010bbb
3717] {
3718hunk ./src/allmydata/storage/backends/das/core.py 86
3719 
3720     def get_incoming(self, storageindex):
3721         """Return the set of incoming shnums."""
3722-        return set(os.listdir(self.incomingdir))
3723+        try:
3724+            incominglist = os.listdir(self.incomingdir)
3725+            print "incominglist: ", incominglist
3726+            return set(incominglist)
3727+        except OSError:
3728+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3729+            pass
3730 
3731     def get_shares(self, storage_index):
3732         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3733hunk ./src/allmydata/storage/server.py 17
3734 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3735      create_mutable_sharefile
3736 
3737-# storage/
3738-# storage/shares/incoming
3739-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3740-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3741-# storage/shares/$START/$STORAGEINDEX
3742-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3743-
3744-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3745-# base-32 chars).
3746-
3747-
3748 class StorageServer(service.MultiService, Referenceable):
3749     implements(RIStorageServer, IStatsProducer)
3750     name = 'storage'
3751}
3752[testing get incoming
3753wilcoxjg@gmail.com**20110711210224
3754 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3755] {
3756hunk ./src/allmydata/storage/backends/das/core.py 87
3757     def get_incoming(self, storageindex):
3758         """Return the set of incoming shnums."""
3759         try:
3760-            incominglist = os.listdir(self.incomingdir)
3761+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3762+            incominglist = os.listdir(incomingsharesdir)
3763             print "incominglist: ", incominglist
3764             return set(incominglist)
3765         except OSError:
3766hunk ./src/allmydata/storage/backends/das/core.py 92
3767-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3768-            pass
3769-
3770+            # XXX I'd like to make this more specific. It is commonly caused by there being no shares at all.

3771+            return set()
3772+           
3773     def get_shares(self, storage_index):
3774         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3775         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3776hunk ./src/allmydata/test/test_backends.py 149
3777         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3778 
3779         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3780+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3781         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3782 
3783hunk ./src/allmydata/test/test_backends.py 152
3784-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3785         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3786         # with the same si, until BucketWriter.remote_close() has been called.
3787         # self.failIf(bsa)
3788}
3789[ImmutableShareFile does not know its StorageIndex
3790wilcoxjg@gmail.com**20110711211424
3791 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3792] {
3793hunk ./src/allmydata/storage/backends/das/core.py 112
3794             return 0
3795         return fileutil.get_available_space(self.storedir, self.reserved_space)
3796 
3797-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3798-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3799+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3800+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3801+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3802+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3803         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3804         return bw
3805 
3806hunk ./src/allmydata/storage/backends/das/core.py 155
3807     LEASE_SIZE = struct.calcsize(">L32s32sL")
3808     sharetype = "immutable"
3809 
3810-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3811+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3812         """ If max_size is not None then I won't allow more than
3813         max_size to be written to me. If create=True then max_size
3814         must not be None. """
3815}
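The refactoring above makes ImmutableShare take explicit `finalhome`/`incominghome` paths instead of deriving them from `(sharedir, storageindex, shnum)`. The underlying protocol those two paths serve — write into incoming/, then move to the final shares directory on close so readers never see a partial share — can be modelled like this (a toy class with made-up names, not Tahoe's API; the real hand-off happens in `BucketWriter.remote_close`):

```python
import os, tempfile

class ShareSketch:
    """Toy model of the incoming -> final hand-off for an immutable share."""
    def __init__(self, finalhome, incominghome):
        self.finalhome = finalhome
        self.incominghome = incominghome
        os.makedirs(os.path.dirname(incominghome))
        # Touch the file so later callers see we're working on this share.
        open(incominghome, "wb").close()

    def write(self, data):
        with open(self.incominghome, "r+b") as f:
            f.write(data)

    def close(self):
        # Promote the finished share with a rename; only now does it become
        # visible to get_shares(), and get_incoming() stops reporting it.
        os.makedirs(os.path.dirname(self.finalhome), exist_ok=True)
        os.rename(self.incominghome, self.finalhome)

root = tempfile.mkdtemp()
s = ShareSketch(os.path.join(root, "shares", "or", "si", "0"),
                os.path.join(root, "shares", "incoming", "or", "si", "0"))
s.write(b"share data")
s.close()
print(os.path.exists(s.finalhome))     # True
print(os.path.exists(s.incominghome))  # False
```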
3816[get_incoming correctly reports the 0 share after it has arrived
3817wilcoxjg@gmail.com**20110712025157
3818 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3819] {
3820hunk ./src/allmydata/storage/backends/das/core.py 1
3821+import os, re, weakref, struct, time, stat
3822+
3823 from allmydata.interfaces import IStorageBackend
3824 from allmydata.storage.backends.base import Backend
3825 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3826hunk ./src/allmydata/storage/backends/das/core.py 8
3827 from allmydata.util.assertutil import precondition
3828 
3829-import os, re, weakref, struct, time
3830-
3831 #from foolscap.api import Referenceable
3832 from twisted.application import service
3833 
3834hunk ./src/allmydata/storage/backends/das/core.py 89
3835         try:
3836             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3837             incominglist = os.listdir(incomingsharesdir)
3838-            print "incominglist: ", incominglist
3839-            return set(incominglist)
3840+            incomingshnums = [int(x) for x in incominglist]
3841+            return set(incomingshnums)
3842         except OSError:
3843             # XXX I'd like to make this more specific. If there are no shares at all.
3844             return set()
3845hunk ./src/allmydata/storage/backends/das/core.py 113
3846         return fileutil.get_available_space(self.storedir, self.reserved_space)
3847 
3848     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3849-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3850-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3851-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3852+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3853+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3854+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3855         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3856         return bw
3857 
3858hunk ./src/allmydata/storage/backends/das/core.py 160
3859         max_size to be written to me. If create=True then max_size
3860         must not be None. """
3861         precondition((max_size is not None) or (not create), max_size, create)
3862-        self.shnum = shnum
3863-        self.storage_index = storageindex
3864-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3865         self._max_size = max_size
3866hunk ./src/allmydata/storage/backends/das/core.py 161
3867-        self.incomingdir = os.path.join(sharedir, 'incoming')
3868-        si_dir = storage_index_to_dir(storageindex)
3869-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3870-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3871-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3872+        self.incominghome = incominghome
3873+        self.finalhome = finalhome
3874         if create:
3875             # touch the file, so later callers will see that we're working on
3876             # it. Also construct the metadata.
3877hunk ./src/allmydata/storage/backends/das/core.py 166
3878-            assert not os.path.exists(self.fname)
3879-            fileutil.make_dirs(os.path.dirname(self.fname))
3880-            f = open(self.fname, 'wb')
3881+            assert not os.path.exists(self.finalhome)
3882+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3883+            f = open(self.incominghome, 'wb')
3884             # The second field -- the four-byte share data length -- is no
3885             # longer used as of Tahoe v1.3.0, but we continue to write it in
3886             # there in case someone downgrades a storage server from >=
3887hunk ./src/allmydata/storage/backends/das/core.py 183
3888             self._lease_offset = max_size + 0x0c
3889             self._num_leases = 0
3890         else:
3891-            f = open(self.fname, 'rb')
3892-            filesize = os.path.getsize(self.fname)
3893+            f = open(self.finalhome, 'rb')
3894+            filesize = os.path.getsize(self.finalhome)
3895             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3896             f.close()
3897             if version != 1:
3898hunk ./src/allmydata/storage/backends/das/core.py 189
3899                 msg = "sharefile %s had version %d but we wanted 1" % \
3900-                      (self.fname, version)
3901+                      (self.finalhome, version)
3902                 raise UnknownImmutableContainerVersionError(msg)
3903             self._num_leases = num_leases
3904             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3905hunk ./src/allmydata/storage/backends/das/core.py 225
3906         pass
3907         
3908     def stat(self):
3909-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3910+        return os.stat(self.finalhome)[stat.ST_SIZE]
3911+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3912 
3913     def get_shnum(self):
3914         return self.shnum
3915hunk ./src/allmydata/storage/backends/das/core.py 232
3916 
3917     def unlink(self):
3918-        os.unlink(self.fname)
3919+        os.unlink(self.finalhome)
3920 
3921     def read_share_data(self, offset, length):
3922         precondition(offset >= 0)
3923hunk ./src/allmydata/storage/backends/das/core.py 239
3924         # Reads beyond the end of the data are truncated. Reads that start
3925         # beyond the end of the data return an empty string.
3926         seekpos = self._data_offset+offset
3927-        fsize = os.path.getsize(self.fname)
3928+        fsize = os.path.getsize(self.finalhome)
3929         actuallength = max(0, min(length, fsize-seekpos))
3930         if actuallength == 0:
3931             return ""
3932hunk ./src/allmydata/storage/backends/das/core.py 243
3933-        f = open(self.fname, 'rb')
3934+        f = open(self.finalhome, 'rb')
3935         f.seek(seekpos)
3936         return f.read(actuallength)
3937 
3938hunk ./src/allmydata/storage/backends/das/core.py 252
3939         precondition(offset >= 0, offset)
3940         if self._max_size is not None and offset+length > self._max_size:
3941             raise DataTooLargeError(self._max_size, offset, length)
3942-        f = open(self.fname, 'rb+')
3943+        f = open(self.incominghome, 'rb+')
3944         real_offset = self._data_offset+offset
3945         f.seek(real_offset)
3946         assert f.tell() == real_offset
3947hunk ./src/allmydata/storage/backends/das/core.py 279
3948 
3949     def get_leases(self):
3950         """Yields a LeaseInfo instance for all leases."""
3951-        f = open(self.fname, 'rb')
3952+        f = open(self.finalhome, 'rb')
3953         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3954         f.seek(self._lease_offset)
3955         for i in range(num_leases):
3956hunk ./src/allmydata/storage/backends/das/core.py 288
3957                 yield LeaseInfo().from_immutable_data(data)
3958 
3959     def add_lease(self, lease_info):
3960-        f = open(self.fname, 'rb+')
3961+        f = open(self.incominghome, 'rb+')
3962         num_leases = self._read_num_leases(f)
3963         self._write_lease_record(f, num_leases, lease_info)
3964         self._write_num_leases(f, num_leases+1)
3965hunk ./src/allmydata/storage/backends/das/core.py 301
3966                 if new_expire_time > lease.expiration_time:
3967                     # yes
3968                     lease.expiration_time = new_expire_time
3969-                    f = open(self.fname, 'rb+')
3970+                    f = open(self.finalhome, 'rb+')
3971                     self._write_lease_record(f, i, lease)
3972                     f.close()
3973                 return
3974hunk ./src/allmydata/storage/backends/das/core.py 336
3975             # the same order as they were added, so that if we crash while
3976             # doing this, we won't lose any non-cancelled leases.
3977             leases = [l for l in leases if l] # remove the cancelled leases
3978-            f = open(self.fname, 'rb+')
3979+            f = open(self.finalhome, 'rb+')
3980             for i,lease in enumerate(leases):
3981                 self._write_lease_record(f, i, lease)
3982             self._write_num_leases(f, len(leases))
3983hunk ./src/allmydata/storage/backends/das/core.py 344
3984             f.close()
3985         space_freed = self.LEASE_SIZE * num_leases_removed
3986         if not len(leases):
3987-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
3988+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
3989             self.unlink()
3990         return space_freed
3991hunk ./src/allmydata/test/test_backends.py 129
3992     @mock.patch('time.time')
3993     def test_write_share(self, mocktime):
3994         """ Write a new share. """
3995-
3996-        class MockShare:
3997-            def __init__(self):
3998-                self.shnum = 1
3999-               
4000-            def add_or_renew_lease(elf, lease_info):
4001-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4002-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4003-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4004-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4005-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4006-
4007-        share = MockShare()
4008-
4009         # Now begin the test.
4010 
4011         # XXX (0) ???  Fail unless something is not properly set-up?
4012hunk ./src/allmydata/test/test_backends.py 143
4013         # self.failIf(bsa)
4014 
4015         bs[0].remote_write(0, 'a')
4016-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4017+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4018         spaceint = self.s.allocated_size()
4019         self.failUnlessReallyEqual(spaceint, 1)
4020 
4021hunk ./src/allmydata/test/test_backends.py 161
4022         #self.failIf(mockrename.called, mockrename.call_args_list)
4023         #self.failIf(mockstat.called, mockstat.call_args_list)
4024 
4025+    def test_handle_incoming(self):
4026+        incomingset = self.s.backend.get_incoming('teststorage_index')
4027+        self.failUnlessReallyEqual(incomingset, set())
4028+
4029+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4030+       
4031+        incomingset = self.s.backend.get_incoming('teststorage_index')
4032+        self.failUnlessReallyEqual(incomingset, set((0,)))
4033+
4034+        bs[0].remote_close()
4035+        self.failUnlessReallyEqual(incomingset, set())
4036+
4037     @mock.patch('os.path.exists')
4038     @mock.patch('os.path.getsize')
4039     @mock.patch('__builtin__.open')
4040hunk ./src/allmydata/test/test_backends.py 223
4041         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4042 
4043 
4044-
4045 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4046     @mock.patch('time.time')
4047     @mock.patch('os.mkdir')
4048hunk ./src/allmydata/test/test_backends.py 271
4049         DASCore('teststoredir', expiration_policy)
4050 
4051         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4052+
4053}
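The test_handle_incoming test above exercises the backend's get_incoming() bookkeeping: allocating buckets adds share numbers to an "incoming" set for a storage index, and closing the writer removes them. A minimal standalone sketch of that bookkeeping (hypothetical class and method names, not the patch's DASCore):

```python
# Illustrative sketch of the incoming-share bookkeeping that
# test_handle_incoming exercises. Names here are hypothetical;
# the real patch tracks incoming shares via the backend's
# incoming directory on disk.
class IncomingTracker:
    def __init__(self):
        self._incoming = {}  # storageindex -> set of share numbers

    def get_incoming(self, storageindex):
        # Return the set of share numbers currently being uploaded.
        return self._incoming.get(storageindex, set())

    def allocate(self, storageindex, sharenums):
        # remote_allocate_buckets() populates incoming.
        self._incoming.setdefault(storageindex, set()).update(sharenums)

    def close_share(self, storageindex, shnum):
        # BucketWriter.remote_close() moves the share to final,
        # removing it from incoming.
        shares = self._incoming.get(storageindex, set())
        shares.discard(shnum)
        if not shares:
            self._incoming.pop(storageindex, None)

t = IncomingTracker()
assert t.get_incoming('teststorage_index') == set()
t.allocate('teststorage_index', set((0,)))
assert t.get_incoming('teststorage_index') == set((0,))
t.close_share('teststorage_index', 0)
assert t.get_incoming('teststorage_index') == set()
```

This mirrors the empty / {0} / empty sequence the test asserts before allocation, after allocation, and after remote_close().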
4054[jacp14
4055wilcoxjg@gmail.com**20110712061211
4056 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4057] {
4058hunk ./src/allmydata/storage/backends/das/core.py 95
4059             # XXX I'd like to make this more specific. If there are no shares at all.
4060             return set()
4061             
4062-    def get_shares(self, storage_index):
4063+    def get_shares(self, storageindex):
4064         """Yield the ImmutableShare objects that correspond to the passed storageindex."""
4065hunk ./src/allmydata/storage/backends/das/core.py 97
4066-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4067+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4068         try:
4069             for f in os.listdir(finalstoragedir):
4070                 if NUM_RE.match(f):
4071hunk ./src/allmydata/storage/backends/das/core.py 102
4072                     filename = os.path.join(finalstoragedir, f)
4073-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4074+                    yield ImmutableShare(filename, storageindex, f)
4075         except OSError:
4076             # Commonly caused by there being no shares at all.
4077             pass
4078hunk ./src/allmydata/storage/backends/das/core.py 115
4079     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4080         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4081         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4082-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4083+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4084         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4085         return bw
4086 
4087hunk ./src/allmydata/storage/backends/das/core.py 155
4088     LEASE_SIZE = struct.calcsize(">L32s32sL")
4089     sharetype = "immutable"
4090 
4091-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4092+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4093         """ If max_size is not None then I won't allow more than
4094         max_size to be written to me. If create=True then max_size
4095         must not be None. """
4096hunk ./src/allmydata/storage/backends/das/core.py 160
4097         precondition((max_size is not None) or (not create), max_size, create)
4098+        self.storageindex = storageindex
4099         self._max_size = max_size
4100         self.incominghome = incominghome
4101         self.finalhome = finalhome
4102hunk ./src/allmydata/storage/backends/das/core.py 164
4103+        self.shnum = shnum
4104         if create:
4105             # touch the file, so later callers will see that we're working on
4106             # it. Also construct the metadata.
4107hunk ./src/allmydata/storage/backends/das/core.py 212
4108             # their children to know when they should do the rmdir. This
4109             # approach is simpler, but relies on os.rmdir refusing to delete
4110             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4111+            #print "os.path.dirname(self.incominghome): "
4112+            #print os.path.dirname(self.incominghome)
4113             os.rmdir(os.path.dirname(self.incominghome))
4114             # we also delete the grandparent (prefix) directory, .../ab ,
4115             # again to avoid leaving directories lying around. This might
4116hunk ./src/allmydata/storage/immutable.py 93
4117     def __init__(self, ss, share):
4118         self.ss = ss
4119         self._share_file = share
4120-        self.storage_index = share.storage_index
4121+        self.storageindex = share.storageindex
4122         self.shnum = share.shnum
4123 
4124     def __repr__(self):
4125hunk ./src/allmydata/storage/immutable.py 98
4126         return "<%s %s %s>" % (self.__class__.__name__,
4127-                               base32.b2a_l(self.storage_index[:8], 60),
4128+                               base32.b2a_l(self.storageindex[:8], 60),
4129                                self.shnum)
4130 
4131     def remote_read(self, offset, length):
4132hunk ./src/allmydata/storage/immutable.py 110
4133 
4134     def remote_advise_corrupt_share(self, reason):
4135         return self.ss.remote_advise_corrupt_share("immutable",
4136-                                                   self.storage_index,
4137+                                                   self.storageindex,
4138                                                    self.shnum,
4139                                                    reason)
4140hunk ./src/allmydata/test/test_backends.py 20
4141 # The following share file contents was generated with
4142 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4143 # with share data == 'a'.
4144-renew_secret  = 'x'*32
4145-cancel_secret = 'y'*32
4146-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4147-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4148+shareversionnumber = '\x00\x00\x00\x01'
4149+sharedatalength = '\x00\x00\x00\x01'
4150+numberofleases = '\x00\x00\x00\x01'
4151+shareinputdata = 'a'
4152+ownernumber = '\x00\x00\x00\x00'
4153+renewsecret  = 'x'*32
4154+cancelsecret = 'y'*32
4155+expirationtime = '\x00(\xde\x80'
4156+nextlease = ''
4157+containerdata = shareversionnumber + sharedatalength + numberofleases
4158+client_data = shareinputdata + ownernumber + renewsecret + \
4159+    cancelsecret + expirationtime + nextlease
4160+share_data = containerdata + client_data
4161+
4162 
4163 testnodeid = 'testnodeidxxxxxxxxxx'
4164 tempdir = 'teststoredir'
4165hunk ./src/allmydata/test/test_backends.py 52
4166 
4167 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4168     def setUp(self):
4169-        self.s = StorageServer(testnodeid, backend=NullCore())
4170+        self.ss = StorageServer(testnodeid, backend=NullCore())
4171 
4172     @mock.patch('os.mkdir')
4173     @mock.patch('__builtin__.open')
4174hunk ./src/allmydata/test/test_backends.py 62
4175         """ Write a new share. """
4176 
4177         # Now begin the test.
4178-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4179+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4180         bs[0].remote_write(0, 'a')
4181         self.failIf(mockisdir.called)
4182         self.failIf(mocklistdir.called)
4183hunk ./src/allmydata/test/test_backends.py 133
4184                 _assert(False, "The tester code doesn't recognize this case.") 
4185 
4186         mockopen.side_effect = call_open
4187-        testbackend = DASCore(tempdir, expiration_policy)
4188-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4189+        self.backend = DASCore(tempdir, expiration_policy)
4190+        self.ss = StorageServer(testnodeid, self.backend)
4191+        self.ssinf = StorageServer(testnodeid, self.backend)
4192 
4193     @mock.patch('time.time')
4194     def test_write_share(self, mocktime):
4195hunk ./src/allmydata/test/test_backends.py 142
4196         """ Write a new share. """
4197         # Now begin the test.
4198 
4199-        # XXX (0) ???  Fail unless something is not properly set-up?
4200-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4201+        mocktime.return_value = 0
4202+        # Inspect incoming and fail unless it's empty.
4203+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4204+        self.failUnlessReallyEqual(incomingset, set())
4205+       
4206+        # Among other things, populate incoming with the sharenum: 0.
4207+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4208 
4209hunk ./src/allmydata/test/test_backends.py 150
4210-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4211-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4212-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4213+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4214+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4215+       
4216+        # Attempt to create a second share writer with the same share.
4217+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4218 
4219hunk ./src/allmydata/test/test_backends.py 156
4220-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4221+        # Show that no sharewriter results from a remote_allocate_buckets
4222         # with the same si, until BucketWriter.remote_close() has been called.
4223hunk ./src/allmydata/test/test_backends.py 158
4224-        # self.failIf(bsa)
4225+        self.failIf(bsa)
4226 
4227hunk ./src/allmydata/test/test_backends.py 160
4228+        # Write 'a' to shnum 0. Only tested together with close and read.
4229         bs[0].remote_write(0, 'a')
4230hunk ./src/allmydata/test/test_backends.py 162
4231-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4232-        spaceint = self.s.allocated_size()
4233+
4234+        # Test allocated size.
4235+        spaceint = self.ss.allocated_size()
4236         self.failUnlessReallyEqual(spaceint, 1)
4237 
4238         # XXX (3) Inspect final and fail unless there's nothing there.
4239hunk ./src/allmydata/test/test_backends.py 168
4240+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4241         bs[0].remote_close()
4242         # XXX (4a) Inspect final and fail unless share 0 is there.
4243hunk ./src/allmydata/test/test_backends.py 171
4244+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4245+        #contents = sharesinfinal[0].read_share_data(0,999)
4246+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4247         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4248 
4249         # What happens when there's not enough space for the client's request?
4250hunk ./src/allmydata/test/test_backends.py 177
4251-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4252+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4253 
4254         # Now test the allocated_size method.
4255         # self.failIf(mockexists.called, mockexists.call_args_list)
4256hunk ./src/allmydata/test/test_backends.py 185
4257         #self.failIf(mockrename.called, mockrename.call_args_list)
4258         #self.failIf(mockstat.called, mockstat.call_args_list)
4259 
4260-    def test_handle_incoming(self):
4261-        incomingset = self.s.backend.get_incoming('teststorage_index')
4262-        self.failUnlessReallyEqual(incomingset, set())
4263-
4264-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4265-       
4266-        incomingset = self.s.backend.get_incoming('teststorage_index')
4267-        self.failUnlessReallyEqual(incomingset, set((0,)))
4268-
4269-        bs[0].remote_close()
4270-        self.failUnlessReallyEqual(incomingset, set())
4271-
4272     @mock.patch('os.path.exists')
4273     @mock.patch('os.path.getsize')
4274     @mock.patch('__builtin__.open')
4275hunk ./src/allmydata/test/test_backends.py 208
4276             self.failUnless('r' in mode, mode)
4277             self.failUnless('b' in mode, mode)
4278 
4279-            return StringIO(share_file_data)
4280+            return StringIO(share_data)
4281         mockopen.side_effect = call_open
4282 
4283hunk ./src/allmydata/test/test_backends.py 211
4284-        datalen = len(share_file_data)
4285+        datalen = len(share_data)
4286         def call_getsize(fname):
4287             self.failUnlessReallyEqual(fname, sharefname)
4288             return datalen
4289hunk ./src/allmydata/test/test_backends.py 223
4290         mockexists.side_effect = call_exists
4291 
4292         # Now begin the test.
4293-        bs = self.s.remote_get_buckets('teststorage_index')
4294+        bs = self.ss.remote_get_buckets('teststorage_index')
4295 
4296         self.failUnlessEqual(len(bs), 1)
4297hunk ./src/allmydata/test/test_backends.py 226
4298-        b = bs[0]
4299+        b = bs['0']
4300+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
4301hunk ./src/allmydata/test/test_backends.py 228
4302-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4303+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4304         # If you try to read past the end you get as much data as is there.
4305hunk ./src/allmydata/test/test_backends.py 230
4306-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4307+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4308         # If you start reading past the end of the file you get the empty string.
4309         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4310 
4311}
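The jacp14 patch above rewrites the test's share-file constants as named fields (shareversionnumber, sharedatalength, numberofleases, then the share data and one lease record). A sketch of decoding that v1 container layout, as implied by those constants (bytes literals used for portability; field names are taken from the test, not from the storage code):

```python
import struct

# The test constants, reassembled: a 4-byte version, 4-byte share-data
# length, and 4-byte lease count, followed by the share data and one
# 72-byte lease record (owner number, renew secret, cancel secret,
# expiration time).
shareversionnumber = b'\x00\x00\x00\x01'
sharedatalength = b'\x00\x00\x00\x01'
numberofleases = b'\x00\x00\x00\x01'
shareinputdata = b'a'
ownernumber = b'\x00\x00\x00\x00'
renewsecret = b'x' * 32
cancelsecret = b'y' * 32
expirationtime = b'\x00(\xde\x80'
share_data = (shareversionnumber + sharedatalength + numberofleases +
              shareinputdata + ownernumber + renewsecret +
              cancelsecret + expirationtime)

# Unpack the 12-byte header, then slice out the share data.
version, datalen, numleases = struct.unpack(">LLL", share_data[:12])
data = share_data[12:12 + datalen]

# The expiration constant decodes to 31 days in seconds, which is why
# the mocked-time test asserts lease_info.expiration_time ==
# mocktime() + 31*24*60*60.
expiration = struct.unpack(">L", expirationtime)[0]
```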
4312[jacp14 or so
4313wilcoxjg@gmail.com**20110713060346
4314 Ignore-this: 7026810f60879d65b525d450e43ff87a
4315] {
4316hunk ./src/allmydata/storage/backends/das/core.py 102
4317             for f in os.listdir(finalstoragedir):
4318                 if NUM_RE.match(f):
4319                     filename = os.path.join(finalstoragedir, f)
4320-                    yield ImmutableShare(filename, storageindex, f)
4321+                    yield ImmutableShare(filename, storageindex, int(f))
4322         except OSError:
4323             # Commonly caused by there being no shares at all.
4324             pass
4325hunk ./src/allmydata/storage/backends/null/core.py 25
4326     def set_storage_server(self, ss):
4327         self.ss = ss
4328 
4329+    def get_incoming(self, storageindex):
4330+        return set()
4331+
4332 class ImmutableShare:
4333     sharetype = "immutable"
4334 
4335hunk ./src/allmydata/storage/immutable.py 19
4336 
4337     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4338         self.ss = ss
4339-        self._max_size = max_size # don't allow the client to write more than this
4340+        self._max_size = max_size # don't allow the client to write more than this
4341+
4342         self._canary = canary
4343         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4344         self.closed = False
4345hunk ./src/allmydata/test/test_backends.py 135
4346         mockopen.side_effect = call_open
4347         self.backend = DASCore(tempdir, expiration_policy)
4348         self.ss = StorageServer(testnodeid, self.backend)
4349-        self.ssinf = StorageServer(testnodeid, self.backend)
4350+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4351+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4352 
4353     @mock.patch('time.time')
4354     def test_write_share(self, mocktime):
4355hunk ./src/allmydata/test/test_backends.py 161
4356         # with the same si, until BucketWriter.remote_close() has been called.
4357         self.failIf(bsa)
4358 
4359-        # Write 'a' to shnum 0. Only tested together with close and read.
4360-        bs[0].remote_write(0, 'a')
4361-
4362         # Test allocated size.
4363         spaceint = self.ss.allocated_size()
4364         self.failUnlessReallyEqual(spaceint, 1)
4365hunk ./src/allmydata/test/test_backends.py 165
4366 
4367-        # XXX (3) Inspect final and fail unless there's nothing there.
4368+        # Write 'a' to shnum 0. Only tested together with close and read.
4369+        bs[0].remote_write(0, 'a')
4370+       
4371+        # Preclose: Inspect final, failUnless nothing there.
4372         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4373         bs[0].remote_close()
4374hunk ./src/allmydata/test/test_backends.py 171
4375-        # XXX (4a) Inspect final and fail unless share 0 is there.
4376-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4377-        #contents = sharesinfinal[0].read_share_data(0,999)
4378-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4379-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4380 
4381hunk ./src/allmydata/test/test_backends.py 172
4382-        # What happens when there's not enough space for the client's request?
4383-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4384+        # Postclose: (Omnibus) failUnless written data is in final.
4385+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4386+        contents = sharesinfinal[0].read_share_data(0,73)
4387+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4388 
4389hunk ./src/allmydata/test/test_backends.py 177
4390-        # Now test the allocated_size method.
4391-        # self.failIf(mockexists.called, mockexists.call_args_list)
4392-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4393-        #self.failIf(mockrename.called, mockrename.call_args_list)
4394-        #self.failIf(mockstat.called, mockstat.call_args_list)
4395+        # Cover interior of for share in get_shares loop.
4396+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4397+       
4398+    @mock.patch('time.time')
4399+    @mock.patch('allmydata.util.fileutil.get_available_space')
4400+    def test_out_of_space(self, mockget_available_space, mocktime):
4401+        mocktime.return_value = 0
4402+       
4403+        def call_get_available_space(dir, reserve):
4404+            return 0
4405+
4406+        mockget_available_space.side_effect = call_get_available_space
4407+       
4408+       
4409+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4410 
4411     @mock.patch('os.path.exists')
4412     @mock.patch('os.path.getsize')
4413hunk ./src/allmydata/test/test_backends.py 234
4414         bs = self.ss.remote_get_buckets('teststorage_index')
4415 
4416         self.failUnlessEqual(len(bs), 1)
4417-        b = bs['0']
4418+        b = bs[0]
4419         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
4420         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4421         # If you try to read past the end you get as much data as is there.
4422}
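The tests in these patches lean heavily on mock.patch to pin down time.time, fileutil.get_available_space, and filesystem calls. A minimal standalone sketch of the same pattern using only time.time (written for the stdlib unittest.mock; the patch itself uses the standalone mock library under Python 2):

```python
import time
from unittest import mock  # the patch uses the standalone 'mock' package

# Patch time.time so code under test sees a fixed clock, as
# test_write_and_read_share does with mocktime.return_value = 0.
with mock.patch('time.time') as mocktime:
    mocktime.return_value = 0
    # Any code that calls time.time() inside this block sees 0, so a
    # 31-day lease computed from "now" expires at exactly 31*24*60*60.
    expiration = time.time() + 31 * 24 * 60 * 60
```

The same shape works for the out-of-space test: patching allmydata.util.fileutil.get_available_space with a side_effect returning 0 makes remote_allocate_buckets behave as if the disk were full, without touching a real filesystem.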
4423[temporary work-in-progress patch to be unrecorded
4424zooko@zooko.com**20110714003008
4425 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4426 tidy up a few tests, work done in pair-programming with Zancas
4427] {
4428hunk ./src/allmydata/storage/backends/das/core.py 65
4429         self._clean_incomplete()
4430 
4431     def _clean_incomplete(self):
4432-        fileutil.rm_dir(self.incomingdir)
4433+        fileutil.rmtree(self.incomingdir)
4434         fileutil.make_dirs(self.incomingdir)
4435 
4436     def _setup_corruption_advisory(self):
4437hunk ./src/allmydata/storage/immutable.py 1
4438-import os, stat, struct, time
4439+import os, time
4440 
4441 from foolscap.api import Referenceable
4442 
4443hunk ./src/allmydata/storage/server.py 1
4444-import os, re, weakref, struct, time
4445+import os, weakref, struct, time
4446 
4447 from foolscap.api import Referenceable
4448 from twisted.application import service
4449hunk ./src/allmydata/storage/server.py 7
4450 
4451 from zope.interface import implements
4452-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4453+from allmydata.interfaces import RIStorageServer, IStatsProducer
4454 from allmydata.util import fileutil, idlib, log, time_format
4455 import allmydata # for __full_version__
4456 
4457hunk ./src/allmydata/storage/server.py 313
4458         self.add_latency("get", time.time() - start)
4459         return bucketreaders
4460 
4461-    def remote_get_incoming(self, storageindex):
4462-        incoming_share_set = self.backend.get_incoming(storageindex)
4463-        return incoming_share_set
4464-
4465     def get_leases(self, storageindex):
4466         """Provide an iterator that yields all of the leases attached to this
4467         bucket. Each lease is returned as a LeaseInfo instance.
4468hunk ./src/allmydata/test/test_backends.py 3
4469 from twisted.trial import unittest
4470 
4471+from twisted.python.filepath import FilePath
4472+
4473 from StringIO import StringIO
4474 
4475 from allmydata.test.common_util import ReallyEqualMixin
4476hunk ./src/allmydata/test/test_backends.py 38
4477 
4478 
4479 testnodeid = 'testnodeidxxxxxxxxxx'
4480-tempdir = 'teststoredir'
4481-basedir = os.path.join(tempdir, 'shares')
4482+storedir = 'teststoredir'
4483+storedirfp = FilePath(storedir)
4484+basedir = os.path.join(storedir, 'shares')
4485 baseincdir = os.path.join(basedir, 'incoming')
4486 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4487 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4488hunk ./src/allmydata/test/test_backends.py 53
4489                      'cutoff_date' : None,
4490                      'sharetypes' : None}
4491 
4492-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4493+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4494+    """ NullBackend is just for testing and executable documentation, so
4495+    this test is actually a test of StorageServer in which we're using
4496+    NullBackend as helper code for the test, rather than a test of
4497+    NullBackend. """
4498     def setUp(self):
4499         self.ss = StorageServer(testnodeid, backend=NullCore())
4500 
4501hunk ./src/allmydata/test/test_backends.py 62
4502     @mock.patch('os.mkdir')
4503+
4504     @mock.patch('__builtin__.open')
4505     @mock.patch('os.listdir')
4506     @mock.patch('os.path.isdir')
4507hunk ./src/allmydata/test/test_backends.py 69
4508     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4509         """ Write a new share. """
4510 
4511-        # Now begin the test.
4512         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4513         bs[0].remote_write(0, 'a')
4514         self.failIf(mockisdir.called)
4515hunk ./src/allmydata/test/test_backends.py 83
4516     @mock.patch('os.listdir')
4517     @mock.patch('os.path.isdir')
4518     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4519-        """ This tests whether a server instance can be constructed
4520-        with a filesystem backend. To pass the test, it has to use the
4521-        filesystem in only the prescribed ways. """
4522+        """ This tests whether a server instance can be constructed with a
4523+        filesystem backend. To pass the test, it mustn't use the filesystem
4524+        outside of its configured storedir. """
4525 
4526         def call_open(fname, mode):
4527hunk ./src/allmydata/test/test_backends.py 88
4528-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4529-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4530-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4531-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4532-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4533+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4534+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4535+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4536+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4537+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4538                 return StringIO()
4539             else:
4540hunk ./src/allmydata/test/test_backends.py 95
4541-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4542+                fnamefp = FilePath(fname)
4543+                self.failUnless(storedirfp in fnamefp.parents(),
4544+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4545         mockopen.side_effect = call_open
4546 
4547         def call_isdir(fname):
4548hunk ./src/allmydata/test/test_backends.py 101
4549-            if fname == os.path.join(tempdir,'shares'):
4550+            if fname == os.path.join(storedir, 'shares'):
4551                 return True
4552hunk ./src/allmydata/test/test_backends.py 103
4553-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4554+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4555                 return True
4556             else:
4557                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4558hunk ./src/allmydata/test/test_backends.py 109
4559         mockisdir.side_effect = call_isdir
4560 
4561+        mocklistdir.return_value = []
4562+
4563         def call_mkdir(fname, mode):
4564hunk ./src/allmydata/test/test_backends.py 112
4565-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4566             self.failUnlessEqual(0777, mode)
4567hunk ./src/allmydata/test/test_backends.py 113
4568-            if fname == tempdir:
4569-                return None
4570-            elif fname == os.path.join(tempdir,'shares'):
4571-                return None
4572-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4573-                return None
4574-            else:
4575-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4576+            self.failUnlessIn(fname,
4577+                              [storedir,
4578+                               os.path.join(storedir, 'shares'),
4579+                               os.path.join(storedir, 'shares', 'incoming')],
4580+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4581         mockmkdir.side_effect = call_mkdir
4582 
4583         # Now begin the test.
4584hunk ./src/allmydata/test/test_backends.py 121
4585-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4586+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4587 
4588         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4589 
4590hunk ./src/allmydata/test/test_backends.py 126
4591 
4592-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4593+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4594+    """ This tests both the StorageServer and the DAS filesystem backend. """
4595     @mock.patch('__builtin__.open')
4596     def setUp(self, mockopen):
4597         def call_open(fname, mode):
4598hunk ./src/allmydata/test/test_backends.py 131
4599-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4600-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4601-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4602-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4603-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4604+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4605+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4606+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4607+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4608+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4609                 return StringIO()
4610             else:
4611                 _assert(False, "The tester code doesn't recognize this case.") 
4612hunk ./src/allmydata/test/test_backends.py 141
4613 
4614         mockopen.side_effect = call_open
4615-        self.backend = DASCore(tempdir, expiration_policy)
4616+        self.backend = DASCore(storedir, expiration_policy)
4617         self.ss = StorageServer(testnodeid, self.backend)
4618hunk ./src/allmydata/test/test_backends.py 143
4619-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4620+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4621         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4622 
4623     @mock.patch('time.time')
4624hunk ./src/allmydata/test/test_backends.py 147
4625-    def test_write_share(self, mocktime):
4626-        """ Write a new share. """
4627-        # Now begin the test.
4628+    def test_write_and_read_share(self, mocktime):
4629+        """
4630+        Write a new share, read it, and test the server's (and FS backend's)
4631+        handling of simultaneous and successive attempts to write the same
4632+        share.
4633+        """
4634 
4635         mocktime.return_value = 0
4636         # Inspect incoming and fail unless it's empty.
4637hunk ./src/allmydata/test/test_backends.py 159
4638         incomingset = self.ss.backend.get_incoming('teststorage_index')
4639         self.failUnlessReallyEqual(incomingset, set())
4640         
4641-        # Among other things, populate incoming with the sharenum: 0.
4642+        # Populate incoming with the sharenum: 0.
4643         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4644 
4645         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4646hunk ./src/allmydata/test/test_backends.py 163
4647-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4648+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4649         
4650hunk ./src/allmydata/test/test_backends.py 165
4651-        # Attempt to create a second share writer with the same share.
4652+        # Attempt to create a second share writer with the same sharenum.
4653         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4654 
4655         # Show that no sharewriter results from a remote_allocate_buckets
4656hunk ./src/allmydata/test/test_backends.py 169
4657-        # with the same si, until BucketWriter.remote_close() has been called.
4658+        # with the same si and sharenum, until BucketWriter.remote_close()
4659+        # has been called.
4660         self.failIf(bsa)
4661 
4662         # Test allocated size.
4663hunk ./src/allmydata/test/test_backends.py 187
4664         # Postclose: (Omnibus) failUnless written data is in final.
4665         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4666         contents = sharesinfinal[0].read_share_data(0,73)
4667-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4668+        self.failUnlessReallyEqual(contents, client_data)
4669 
4670hunk ./src/allmydata/test/test_backends.py 189
4671-        # Cover interior of for share in get_shares loop.
4672-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4673+        # Exercise the case where the share we're asking to allocate
4674+        # is already (completely) uploaded.
4675+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4676         
4677     @mock.patch('time.time')
4678     @mock.patch('allmydata.util.fileutil.get_available_space')
4679hunk ./src/allmydata/test/test_backends.py 210
4680     @mock.patch('os.path.getsize')
4681     @mock.patch('__builtin__.open')
4682     @mock.patch('os.listdir')
4683-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4684+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4685         """ This tests whether the code correctly finds and reads
4686         shares written out by old (Tahoe-LAFS <= v1.8.2)
4687         servers. There is a similar test in test_download, but that one
4688hunk ./src/allmydata/test/test_backends.py 219
4689         StorageServer object. """
4690 
4691         def call_listdir(dirname):
4692-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4693+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4694             return ['0']
4695 
4696         mocklistdir.side_effect = call_listdir
4697hunk ./src/allmydata/test/test_backends.py 226
4698 
4699         def call_open(fname, mode):
4700             self.failUnlessReallyEqual(fname, sharefname)
4701-            self.failUnless('r' in mode, mode)
4702+            self.failUnlessEqual(mode[0], 'r', mode)
4703             self.failUnless('b' in mode, mode)
4704 
4705             return StringIO(share_data)
4706hunk ./src/allmydata/test/test_backends.py 268
4707         filesystem in only the prescribed ways. """
4708 
4709         def call_open(fname, mode):
4710-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4711-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4712-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4713-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4714-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4715+            if fname == os.path.join(storedir,'bucket_counter.state'):
4716+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4717+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4718+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4719+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4720                 return StringIO()
4721             else:
4722                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4723hunk ./src/allmydata/test/test_backends.py 279
4724         mockopen.side_effect = call_open
4725 
4726         def call_isdir(fname):
4727-            if fname == os.path.join(tempdir,'shares'):
4728+            if fname == os.path.join(storedir,'shares'):
4729                 return True
4730hunk ./src/allmydata/test/test_backends.py 281
4731-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4732+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4733                 return True
4734             else:
4735                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4736hunk ./src/allmydata/test/test_backends.py 290
4737         def call_mkdir(fname, mode):
4738             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4739             self.failUnlessEqual(0777, mode)
4740-            if fname == tempdir:
4741+            if fname == storedir:
4742                 return None
4743hunk ./src/allmydata/test/test_backends.py 292
4744-            elif fname == os.path.join(tempdir,'shares'):
4745+            elif fname == os.path.join(storedir,'shares'):
4746                 return None
4747hunk ./src/allmydata/test/test_backends.py 294
4748-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4749+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4750                 return None
4751             else:
4752                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4753hunk ./src/allmydata/util/fileutil.py 5
4754 Futz with files like a pro.
4755 """
4756 
4757-import sys, exceptions, os, stat, tempfile, time, binascii
4758+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4759 
4760 from twisted.python import log
4761 
4762hunk ./src/allmydata/util/fileutil.py 186
4763             raise tx
4764         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4765 
4766-def rm_dir(dirname):
4767+def rmtree(dirname):
4768     """
4769     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4770     already gone, do nothing and return without raising an exception.  If this
4771hunk ./src/allmydata/util/fileutil.py 205
4772             else:
4773                 remove(fullname)
4774         os.rmdir(dirname)
4775-    except Exception, le:
4776-        # Ignore "No such file or directory"
4777-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4778+    except EnvironmentError, le:
4779+        # Ignore "No such file or directory", collect any other exception.
4780+        if le.args[0] != errno.ENOENT:
4781             excs.append(le)
4782hunk ./src/allmydata/util/fileutil.py 209
4783+    except Exception, le:
4784+        excs.append(le)
4785 
4786     # Okay, now we've recursively removed everything, ignoring any "No
4787     # such file or directory" errors, and collecting any other errors.
4788hunk ./src/allmydata/util/fileutil.py 222
4789             raise OSError, "Failed to remove dir for unknown reason."
4790         raise OSError, excs
4791 
4792+def rm_dir(dirname):
4793+    # Backward-compatible alias: renamed to rmtree() to match shutil.rmtree and avoid confusion with os.rmdir.
4794+    return rmtree(dirname)
4795 
4796 def remove_if_possible(f):
4797     try:
4798}
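As a sanity check on the rmtree() semantics introduced above (idempotent removal that swallows only "No such file or directory"), here is a standalone sketch of the intended behavior. It uses stdlib shutil rather than Tahoe's fileutil, and the function name is illustrative, not from the patch:

```python
import errno
import os
import shutil
import tempfile

def rmtree_idempotent(dirname):
    # Illustrative sketch: remove a directory tree; if it is already
    # gone, return silently. Only ENOENT is swallowed; any other
    # OSError propagates to the caller.
    try:
        shutil.rmtree(dirname)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

d = tempfile.mkdtemp()
rmtree_idempotent(d)   # removes the tree
rmtree_idempotent(d)   # second call is a no-op, no exception raised
assert not os.path.exists(d)
```

The key property being tested in the patch is exactly this: calling the function twice in a row must not raise.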
4799[work in progress intended to be unrecorded and never committed to trunk
4800zooko@zooko.com**20110714212139
4801 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4802 switch from os.path.join to filepath
4803 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4804 
4805] {
4806hunk ./src/allmydata/test/test_backends.py 3
4807 from twisted.trial import unittest
4808 
4809-from twisted.path.filepath import FilePath
4810+from twisted.python.filepath import FilePath
4811 
4812 from StringIO import StringIO
4813 
4814hunk ./src/allmydata/test/test_backends.py 10
4815 from allmydata.test.common_util import ReallyEqualMixin
4816 from allmydata.util.assertutil import _assert
4817 
4818-import mock, os
4819+import mock
4820 
4821 # This is the code that we're going to be testing.
4822 from allmydata.storage.server import StorageServer
4823hunk ./src/allmydata/test/test_backends.py 25
4824 shareversionnumber = '\x00\x00\x00\x01'
4825 sharedatalength = '\x00\x00\x00\x01'
4826 numberofleases = '\x00\x00\x00\x01'
4827+
4828 shareinputdata = 'a'
4829 ownernumber = '\x00\x00\x00\x00'
4830 renewsecret  = 'x'*32
4831hunk ./src/allmydata/test/test_backends.py 39
4832 
4833 
4834 testnodeid = 'testnodeidxxxxxxxxxx'
4835-storedir = 'teststoredir'
4836-storedirfp = FilePath(storedir)
4837-basedir = os.path.join(storedir, 'shares')
4838-baseincdir = os.path.join(basedir, 'incoming')
4839-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4840-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4841-shareincomingname = os.path.join(sharedirincomingname, '0')
4842-sharefname = os.path.join(sharedirfinalname, '0')
4843+
4844+class TestFilesMixin(unittest.TestCase):
4845+    def setUp(self):
4846+        self.storedir = FilePath('teststoredir')
4847+        self.basedir = self.storedir.child('shares')
4848+        self.baseincdir = self.basedir.child('incoming')
4849+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4850+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4851+        self.shareincomingname = self.sharedirincomingname.child('0')
4852+        self.sharefname = self.sharedirfinalname.child('0')
4853+
4854+    def call_open(self, fname, mode):
4855+        fnamefp = FilePath(fname)
4856+        if fnamefp == self.storedir.child('bucket_counter.state'):
4857+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4858+        elif fnamefp == self.storedir.child('lease_checker.state'):
4859+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4860+        elif fnamefp == self.storedir.child('lease_checker.history'):
4861+            return StringIO()
4862+        else:
4863+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4864+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4865+
4866+    def call_isdir(self, fname):
4867+        fnamefp = FilePath(fname)
4868+        if fnamefp == self.storedir.child('shares'):
4869+            return True
4870+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4871+            return True
4872+        else:
4873+            self.failUnless(self.storedir in fnamefp.parents(),
4874+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4875+
4876+    def call_mkdir(self, fname, mode):
4877+        self.failUnlessEqual(0777, mode)
4878+        fnamefp = FilePath(fname)
4879+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4880+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4881+
4882+
4883+    @mock.patch('os.mkdir')
4884+    @mock.patch('__builtin__.open')
4885+    @mock.patch('os.listdir')
4886+    @mock.patch('os.path.isdir')
4887+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir):
4888+        mocklistdir.return_value = []
4889+        mockmkdir.side_effect = self.call_mkdir
4890+        mockisdir.side_effect = self.call_isdir
4891+        mockopen.side_effect = self.call_open
4893+
4894+        test_func()
4895+
4896+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4897 
4898 expiration_policy = {'enabled' : False,
4899                      'mode' : 'age',
4900hunk ./src/allmydata/test/test_backends.py 123
4901         self.failIf(mockopen.called)
4902         self.failIf(mockmkdir.called)
4903 
4904-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4905-    @mock.patch('time.time')
4906-    @mock.patch('os.mkdir')
4907-    @mock.patch('__builtin__.open')
4908-    @mock.patch('os.listdir')
4909-    @mock.patch('os.path.isdir')
4910-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4911+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4912+    def test_create_server_fs_backend(self):
4913         """ This tests whether a server instance can be constructed with a
4914         filesystem backend. To pass the test, it mustn't use the filesystem
4915         outside of its configured storedir. """
4916hunk ./src/allmydata/test/test_backends.py 129
4917 
4918-        def call_open(fname, mode):
4919-            if fname == os.path.join(storedir, 'bucket_counter.state'):
4920-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4921-            elif fname == os.path.join(storedir, 'lease_checker.state'):
4922-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4923-            elif fname == os.path.join(storedir, 'lease_checker.history'):
4924-                return StringIO()
4925-            else:
4926-                fnamefp = FilePath(fname)
4927-                self.failUnless(storedirfp in fnamefp.parents(),
4928-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4929-        mockopen.side_effect = call_open
4930+        def _f():
4931+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4932 
4933hunk ./src/allmydata/test/test_backends.py 132
4934-        def call_isdir(fname):
4935-            if fname == os.path.join(storedir, 'shares'):
4936-                return True
4937-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4938-                return True
4939-            else:
4940-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4941-        mockisdir.side_effect = call_isdir
4942-
4943-        mocklistdir.return_value = []
4944-
4945-        def call_mkdir(fname, mode):
4946-            self.failUnlessEqual(0777, mode)
4947-            self.failUnlessIn(fname,
4948-                              [storedir,
4949-                               os.path.join(storedir, 'shares'),
4950-                               os.path.join(storedir, 'shares', 'incoming')],
4951-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4952-        mockmkdir.side_effect = call_mkdir
4953-
4954-        # Now begin the test.
4955-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4956-
4957-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4958+        self._help_test_stay_in_your_subtree(_f)
4959 
4960 
4961 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4962}
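The "stay in your subtree" assertions in the mixin above all hinge on one containment test: storedir == fnamefp or storedir in fnamefp.parents(). A stdlib-only sketch of the same predicate, with a hypothetical helper name, using pathlib instead of Twisted's FilePath:

```python
from pathlib import PurePosixPath

def is_in_subtree(storedir, fname):
    # True iff fname names the store directory itself or a path
    # beneath it -- the check the mocked open/isdir/mkdir handlers
    # enforce against the server's filesystem accesses.
    store = PurePosixPath(storedir)
    target = PurePosixPath(fname)
    return target == store or store in target.parents

assert is_in_subtree('teststoredir', 'teststoredir/shares/incoming')
assert is_in_subtree('teststoredir', 'teststoredir')
assert not is_in_subtree('teststoredir', '/etc/passwd')
assert not is_in_subtree('teststoredir', 'teststoredir2/shares')
```

With FilePath the equality case must be tested explicitly (as the patched call_open and call_mkdir do), because a path is not among its own parents; the pathlib `parents` attribute behaves the same way.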
4963
4964Context:
4965
4966[docs: add missing link in NEWS.rst
4967zooko@zooko.com**20110712153307
4968 Ignore-this: be7b7eb81c03700b739daa1027d72b35
4969]
4970[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
4971zooko@zooko.com**20110712153229
4972 Ignore-this: 723c4f9e2211027c79d711715d972c5
4973 Also remove a couple of vestigial references to figleaf, which is long gone.
4974 fixes #1409 (remove contrib/fuse)
4975]
4976[add Protovis.js-based download-status timeline visualization
4977Brian Warner <warner@lothar.com>**20110629222606
4978 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
4979 
4980 provide status overlap info on the webapi t=json output, add decode/decrypt
4981 rate tooltips, add zoomin/zoomout buttons
4982]
4983[add more download-status data, fix tests
4984Brian Warner <warner@lothar.com>**20110629222555
4985 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
4986]
4987[prepare for viz: improve DownloadStatus events
4988Brian Warner <warner@lothar.com>**20110629222542
4989 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
4990 
4991 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
4992]
4993[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
4994zooko@zooko.com**20110629185711
4995 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
4996]
4997[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
4998david-sarah@jacaranda.org**20110130235809
4999 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
5000]
5001[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
5002david-sarah@jacaranda.org**20110626054124
5003 Ignore-this: abb864427a1b91bd10d5132b4589fd90
5004]
5005[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
5006david-sarah@jacaranda.org**20110623205528
5007 Ignore-this: c63e23146c39195de52fb17c7c49b2da
5008]
5009[Rename test_package_initialization.py to (much shorter) test_import.py .
5010Brian Warner <warner@lothar.com>**20110611190234
5011 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
5012 
5013 The former name was making my 'ls' listings hard to read, by forcing them
5014 down to just two columns.
5015]
5016[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
5017zooko@zooko.com**20110611163741
5018 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
5019 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
5020 fixes #1412
5021]
5022[wui: right-align the size column in the WUI
5023zooko@zooko.com**20110611153758
5024 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
5025 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
5026 fixes #1412
5027]
5028[docs: three minor fixes
5029zooko@zooko.com**20110610121656
5030 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
5031 CREDITS for arc for stats tweak
5032 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
5033 English usage tweak
5034]
5035[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
5036david-sarah@jacaranda.org**20110609223719
5037 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
5038]
5039[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
5040wilcoxjg@gmail.com**20110527120135
5041 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
5042 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
5043 NEWS.rst, stats.py: documentation of change to get_latencies
5044 stats.rst: now documents percentile modification in get_latencies
5045 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
5046 fixes #1392
5047]
5048[corrected "k must never be smaller than N" to "k must never be greater than N"
5049secorp@allmydata.org**20110425010308
5050 Ignore-this: 233129505d6c70860087f22541805eac
5051]
5052[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
5053david-sarah@jacaranda.org**20110517011214
5054 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
5055]
5056[docs: convert NEWS to NEWS.rst and change all references to it.
5057david-sarah@jacaranda.org**20110517010255
5058 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
5059]
5060[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
5061david-sarah@jacaranda.org**20110512140559
5062 Ignore-this: 784548fc5367fac5450df1c46890876d
5063]
5064[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
5065david-sarah@jacaranda.org**20110130164923
5066 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
5067]
5068[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
5069zooko@zooko.com**20110128142006
5070 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
5071 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
5072]
5073[M-x whitespace-cleanup
5074zooko@zooko.com**20110510193653
5075 Ignore-this: dea02f831298c0f65ad096960e7df5c7
5076]
5077[docs: fix typo in running.rst, thanks to arch_o_median
5078zooko@zooko.com**20110510193633
5079 Ignore-this: ca06de166a46abbc61140513918e79e8
5080]
5081[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
5082david-sarah@jacaranda.org**20110204204902
5083 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
5084]
5085[relnotes.txt: forseeable -> foreseeable. refs #1342
5086david-sarah@jacaranda.org**20110204204116
5087 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
5088]
5089[replace remaining .html docs with .rst docs
5090zooko@zooko.com**20110510191650
5091 Ignore-this: d557d960a986d4ac8216d1677d236399
5092 Remove install.html (long since deprecated).
5093 Also replace some obsolete references to install.html with references to quickstart.rst.
5094 Fix some broken internal references within docs/historical/historical_known_issues.txt.
5095 Thanks to Ravi Pinjala and Patrick McDonald.
5096 refs #1227
5097]
5098[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
5099zooko@zooko.com**20110428055232
5100 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
5101]
5102[munin tahoe_files plugin: fix incorrect file count
5103francois@ctrlaltdel.ch**20110428055312
5104 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
5105 fixes #1391
5106]
5107[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
5108david-sarah@jacaranda.org**20110411190738
5109 Ignore-this: 7847d26bc117c328c679f08a7baee519
5110]
5111[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
5112david-sarah@jacaranda.org**20110410155844
5113 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
5114]
5115[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
5116david-sarah@jacaranda.org**20110410155705
5117 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
5118]
5119[remove unused variable detected by pyflakes
5120zooko@zooko.com**20110407172231
5121 Ignore-this: 7344652d5e0720af822070d91f03daf9
5122]
5123[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
5124david-sarah@jacaranda.org**20110401202750
5125 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
5126]
5127[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
5128Brian Warner <warner@lothar.com>**20110325232511
5129 Ignore-this: d5307faa6900f143193bfbe14e0f01a
5130]
5131[control.py: remove all uses of s.get_serverid()
5132warner@lothar.com**20110227011203
5133 Ignore-this: f80a787953bd7fa3d40e828bde00e855
5134]
5135[web: remove some uses of s.get_serverid(), not all
5136warner@lothar.com**20110227011159
5137 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
5138]
5139[immutable/downloader/fetcher.py: remove all get_serverid() calls
5140warner@lothar.com**20110227011156
5141 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
5142]
5143[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
5144warner@lothar.com**20110227011153
5145 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
5146 
5147 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
5148 _shares_from_server dict was being popped incorrectly (using shnum as the
5149 index instead of serverid). I'm still thinking through the consequences of
5150 this bug. It was probably benign and really hard to detect. I think it would
5151 cause us to incorrectly believe that we're pulling too many shares from a
5152 server, and thus prefer a different server rather than asking for a second
5153 share from the first server. The diversity code is intended to spread out the
5154 number of shares simultaneously being requested from each server, but with
5155 this bug, it might be spreading out the total number of shares requested at
5156 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
5157 segment, so the effect doesn't last very long).
5158]
5159[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
5160warner@lothar.com**20110227011150
5161 Ignore-this: d8d56dd8e7b280792b40105e13664554
5162 
5163 test_download.py: create+check MyShare instances better, make sure they share
5164 Server objects, now that finder.py cares
5165]
5166[immutable/downloader/finder.py: reduce use of get_serverid(), one left
5167warner@lothar.com**20110227011146
5168 Ignore-this: 5785be173b491ae8a78faf5142892020
5169]
5170[immutable/offloaded.py: reduce use of get_serverid() a bit more
5171warner@lothar.com**20110227011142
5172 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
5173]
5174[immutable/upload.py: reduce use of get_serverid()
5175warner@lothar.com**20110227011138
5176 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
5177]
5178[immutable/checker.py: remove some uses of s.get_serverid(), not all
5179warner@lothar.com**20110227011134
5180 Ignore-this: e480a37efa9e94e8016d826c492f626e
5181]
5182[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
5183warner@lothar.com**20110227011132
5184 Ignore-this: 6078279ddf42b179996a4b53bee8c421
5185 MockIServer stubs
5186]
5187[upload.py: rearrange _make_trackers a bit, no behavior changes
5188warner@lothar.com**20110227011128
5189 Ignore-this: 296d4819e2af452b107177aef6ebb40f
5190]
5191[happinessutil.py: finally rename merge_peers to merge_servers
5192warner@lothar.com**20110227011124
5193 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
5194]
5195[test_upload.py: factor out FakeServerTracker
5196warner@lothar.com**20110227011120
5197 Ignore-this: 6c182cba90e908221099472cc159325b
5198]
5199[test_upload.py: server-vs-tracker cleanup
5200warner@lothar.com**20110227011115
5201 Ignore-this: 2915133be1a3ba456e8603885437e03
5202]
5203[happinessutil.py: server-vs-tracker cleanup
5204warner@lothar.com**20110227011111
5205 Ignore-this: b856c84033562d7d718cae7cb01085a9
5206]
5207[upload.py: more tracker-vs-server cleanup
5208warner@lothar.com**20110227011107
5209 Ignore-this: bb75ed2afef55e47c085b35def2de315
5210]
5211[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
5212warner@lothar.com**20110227011103
5213 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
5214]
5215[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
5216warner@lothar.com**20110227011100
5217 Ignore-this: 7ea858755cbe5896ac212a925840fe68
5218 
5219 No behavioral changes, just updating variable/method names and log messages.
5220 The effects outside these three files should be minimal: some exception
5221 messages changed (to say "server" instead of "peer"), and some internal class
5222 names were changed. A few things still use "peer" to minimize external
5223 changes, like UploadResults.timings["peer_selection"] and
5224 happinessutil.merge_peers, which can be changed later.
5225]
5226[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
5227warner@lothar.com**20110227011056
5228 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
5229]
5230[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
5231warner@lothar.com**20110227011051
5232 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
5233]
5234[test: increase timeout on a network test because Francois's ARM machine hit that timeout
5235zooko@zooko.com**20110317165909
5236 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
5237 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
5238]
5239[docs/configuration.rst: add a "Frontend Configuration" section
5240Brian Warner <warner@lothar.com>**20110222014323
5241 Ignore-this: 657018aa501fe4f0efef9851628444ca
5242 
5243 this points to docs/frontends/*.rst, which were previously underlinked
5244]
5245[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
5246"Brian Warner <warner@lothar.com>"**20110221061544
5247 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
5248]
5249[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
5250david-sarah@jacaranda.org**20110221015817
5251 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
5252]
5253[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
5254david-sarah@jacaranda.org**20110221020125
5255 Ignore-this: b0744ed58f161bf188e037bad077fc48
5256]
5257[Refactor StorageFarmBroker handling of servers
5258Brian Warner <warner@lothar.com>**20110221015804
5259 Ignore-this: 842144ed92f5717699b8f580eab32a51
5260 
5261 Pass around IServer instance instead of (peerid, rref) tuple. Replace
5262 "descriptor" with "server". Other replacements:
5263 
5264  get_all_servers -> get_connected_servers/get_known_servers
5265  get_servers_for_index -> get_servers_for_psi (now returns IServers)
5266 
5267 This change still needs to be pushed further down: lots of code is now
5268 getting the IServer and then distributing (peerid, rref) internally.
5269 Instead, it ought to distribute the IServer internally and delay
5270 extracting a serverid or rref until the last moment.
5271 
5272 no_network.py was updated to retain parallelism.
5273]
5274[TAG allmydata-tahoe-1.8.2
5275warner@lothar.com**20110131020101]
5276Patch bundle hash:
5277bf5e5cfa550b2376245c956e0d3ea08b0401ceb5