Ticket #999: checkpoint9.darcs.patch

File checkpoint9.darcs.patch, 137.5 KB (added by arch_o_median at 2011-07-07T04:29:24Z)

checkpoint 9

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

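The technique used throughout the patches below — replacing the builtin `open` so the storage code never touches a real filesystem — can be sketched in a few lines. This is a minimal illustration, not code from the patch: `read_share` is a hypothetical function standing in for the code under test, and the patch target is spelled `builtins.open` for Python 3 (the patches below target Python 2's `__builtin__.open`).

```python
from io import BytesIO
from unittest import mock

def read_share(path):
    # Hypothetical code under test: it believes it is reading a real file.
    f = open(path, 'rb')
    try:
        return f.read()
    finally:
        f.close()

# Patch the builtin open for the duration of the call -- the same technique
# as @mock.patch('__builtin__.open') in the tests below. The side_effect
# hands back an in-memory file object instead of touching the disk.
with mock.patch('builtins.open') as mockopen:
    mockopen.side_effect = lambda fname, mode: BytesIO(b'fake share data')
    data = read_share('testdir/shares/or/0')

print(data)  # no file named 'testdir/shares/or/0' was ever really opened
```

Because the mock intercepts every `open` call, the test can also assert on the exact paths and modes the code under test requested, which is how the tests below verify that the server uses the filesystem "in only the prescribed ways".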
New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ Write a new share. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
741+        bs[0].remote_write(0, 'a')
742+        self.failIf(mockisdir.called)
743+        self.failIf(mocklistdir.called)
744+        self.failIf(mockopen.called)
745+        self.failIf(mockmkdir.called)
746+
747+    @mock.patch('os.path.exists')
748+    @mock.patch('os.path.getsize')
749+    @mock.patch('__builtin__.open')
750+    @mock.patch('os.listdir')
751+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
752+        """ This tests whether the code correctly finds and reads
753+        shares written out by old (Tahoe-LAFS <= v1.8.2)
754+        servers. There is a similar test in test_download, but that one
755+        is from the perspective of the client and exercises a deeper
756+        stack of code. This one is for exercising just the
757+        StorageServer object. """
758+
759+        # Now begin the test.
760+        bs = self.s.remote_get_buckets('teststorage_index')
761+
762+        self.failUnlessEqual(len(bs), 0)
763+        self.failIf(mocklistdir.called)
764+        self.failIf(mockopen.called)
765+        self.failIf(mockgetsize.called)
766+        self.failIf(mockexists.called)
767+
768+
769+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
770     @mock.patch('__builtin__.open')
771     def setUp(self, mockopen):
772         def call_open(fname, mode):
773hunk ./src/allmydata/test/test_backends.py 126
774                 return StringIO()
775         mockopen.side_effect = call_open
776 
777-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
778-
779+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
780 
781     @mock.patch('time.time')
782     @mock.patch('os.mkdir')
783hunk ./src/allmydata/test/test_backends.py 134
784     @mock.patch('os.listdir')
785     @mock.patch('os.path.isdir')
786     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
787-        """Handle a report of corruption."""
788+        """ Write a new share. """
789 
790         def call_listdir(dirname):
791             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
792hunk ./src/allmydata/test/test_backends.py 173
793         mockopen.side_effect = call_open
794         # Now begin the test.
795         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
796-        print bs
797         bs[0].remote_write(0, 'a')
798         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
799 
800hunk ./src/allmydata/test/test_backends.py 176
801-
802     @mock.patch('os.path.exists')
803     @mock.patch('os.path.getsize')
804     @mock.patch('__builtin__.open')
805hunk ./src/allmydata/test/test_backends.py 218
806 
807         self.failUnlessEqual(len(bs), 1)
808         b = bs[0]
809+        # These should match by definition; the next two reads exercise boundary behavior that is less obvious.
810         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
811         # If you try to read past the end you get as much data as is there.
812         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
813hunk ./src/allmydata/test/test_backends.py 224
814         # If you start reading past the end of the file you get the empty string.
815         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
816+
817+
818}
819[a temp patch used as a snapshot
820wilcoxjg@gmail.com**20110626052732
821 Ignore-this: 95f05e314eaec870afa04c76d979aa44
822] {
823hunk ./docs/configuration.rst 637
824   [storage]
825   enabled = True
826   readonly = True
827-  sizelimit = 10000000000
828 
829 
830   [helper]
831hunk ./docs/garbage-collection.rst 16
832 
833 When a file or directory in the virtual filesystem is no longer referenced,
834 the space that its shares occupied on each storage server can be freed,
835-making room for other shares. Tahoe currently uses a garbage collection
836+making room for other shares. Tahoe uses a garbage collection
837 ("GC") mechanism to implement this space-reclamation process. Each share has
838 one or more "leases", which are managed by clients who want the
839 file/directory to be retained. The storage server accepts each share for a
840hunk ./docs/garbage-collection.rst 34
841 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
842 If lease renewal occurs quickly and with 100% reliability, then any renewal
843 time that is shorter than the lease duration will suffice, but a larger ratio
844-of duration-over-renewal-time will be more robust in the face of occasional
845+of lease duration to renewal time will be more robust in the face of occasional
846 delays or failures.
847 
848 The current recommended values for a small Tahoe grid are to renew the leases
849replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
850hunk ./src/allmydata/client.py 260
851             sharetypes.append("mutable")
852         expiration_sharetypes = tuple(sharetypes)
853 
854+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
855+            pass # TODO: select the filesystem backend here
856+        # TODO: hand the selected backend to the StorageServer constructed below
857         ss = StorageServer(storedir, self.nodeid,
858                            reserved_space=reserved,
859                            discard_storage=discard,
860hunk ./src/allmydata/storage/crawler.py 234
861         f = open(tmpfile, "wb")
862         pickle.dump(self.state, f)
863         f.close()
864-        fileutil.move_into_place(tmpfile, self.statefile)
865+        fileutil.move_into_place(tmpfile, self.statefname)
866 
867     def startService(self):
868         # arrange things to look like we were just sleeping, so
869}
870[snapshot of progress on backend implementation (not suitable for trunk)
871wilcoxjg@gmail.com**20110626053244
872 Ignore-this: 50c764af791c2b99ada8289546806a0a
873] {
874adddir ./src/allmydata/storage/backends
875adddir ./src/allmydata/storage/backends/das
876move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
877adddir ./src/allmydata/storage/backends/null
878hunk ./src/allmydata/interfaces.py 270
879         store that on disk.
880         """
881 
882+class IStorageBackend(Interface):
883+    """
884+    Objects of this kind live on the server side and are used by the
885+    storage server object.
886+    """
887+    def get_available_space(self, reserved_space):
888+        """ Returns available space for share storage in bytes, or
889+        None if this information is not available or if the available
890+        space is unlimited.
891+
892+        If the backend is configured for read-only mode then this will
893+        return 0.
894+
895+        reserved_space is how many bytes to subtract from the answer, so
896+        you can pass how many bytes you would like to leave unused on this
897+        filesystem as reserved_space. """
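The `get_available_space` contract above can be sketched as a small pure function, assuming a hypothetical free-space probe (the real backend would call something like statvfs(2) or GetDiskFreeSpaceEx): subtract `reserved_space`, clamp at zero, return 0 for read-only backends, and None when the platform offers no disk-statistics API.

```python
# Sketch of the get_available_space() contract; free_bytes is whatever
# the platform's disk-statistics API reports, or None if there is none.
def available_space(free_bytes, reserved_space, readonly=False):
    if readonly:
        return 0
    if free_bytes is None:
        return None  # no statvfs(2)/GetDiskFreeSpaceEx equivalent
    return max(0, free_bytes - reserved_space)

plenty = available_space(10**9, 10**8)   # normal case
starved = available_space(50, 100)       # reserve exceeds free space
unknown = available_space(None, 100)     # platform can't tell us
```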
898+
899+    def get_bucket_shares(self):
900+        """XXX"""
901+
902+    def get_share(self):
903+        """XXX"""
904+
905+    def make_bucket_writer(self):
906+        """XXX"""
907+
908+class IStorageBackendShare(Interface):
909+    """
910+    This object may hold up to all of the share data.  It is intended
911+    for lazy evaluation, such that in many use cases substantially less than
912+    all of the share data will be accessed.
913+    """
914+    def is_complete(self):
915+        """
916+        Returns the share state, or None if the share does not exist.
917+        """
918+
919 class IStorageBucketWriter(Interface):
920     """
921     Objects of this kind live on the client side.
922hunk ./src/allmydata/interfaces.py 2492
923 
924 class EmptyPathnameComponentError(Exception):
925     """The webapi disallows empty pathname components."""
926+
927+class IShareStore(Interface):
928+    pass
929+
930addfile ./src/allmydata/storage/backends/__init__.py
931addfile ./src/allmydata/storage/backends/das/__init__.py
932addfile ./src/allmydata/storage/backends/das/core.py
933hunk ./src/allmydata/storage/backends/das/core.py 1
934+from allmydata.interfaces import IStorageBackend
935+from allmydata.storage.backends.base import Backend
936+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
937+from allmydata.util.assertutil import precondition
938+
939+import os, re, weakref, struct, time, stat
940+
941+from foolscap.api import Referenceable
942+from twisted.application import service
943+
944+from zope.interface import implements
945+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
946+from allmydata.util import fileutil, idlib, log, time_format
947+import allmydata # for __full_version__
948+
949+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
950+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.util.hashutil import constant_time_compare
+
+NUM_RE = re.compile("^[0-9]+$") # bucket-directory entries are named by integer shnum
951+from allmydata.storage.lease import LeaseInfo
952+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
953+     create_mutable_sharefile
954+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
955+from allmydata.storage.crawler import FSBucketCountingCrawler
956+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
957+
959+
960+class DASCore(Backend):
961+    implements(IStorageBackend)
962+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
963+        Backend.__init__(self)
964+
965+        self._setup_storage(storedir, readonly, reserved_space)
966+        self._setup_corruption_advisory()
967+        self._setup_bucket_counter()
968+        self._setup_lease_checkerf(expiration_policy)
969+
970+    def _setup_storage(self, storedir, readonly, reserved_space):
971+        self.storedir = storedir
972+        self.readonly = readonly
973+        self.reserved_space = int(reserved_space)
974+        if self.reserved_space:
975+            if self.get_available_space() is None:
976+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
977+                        umid="0wZ27w", level=log.UNUSUAL)
978+
979+        self.sharedir = os.path.join(self.storedir, "shares")
980+        fileutil.make_dirs(self.sharedir)
981+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
982+        self._clean_incomplete()
983+
984+    def _clean_incomplete(self):
985+        fileutil.rm_dir(self.incomingdir)
986+        fileutil.make_dirs(self.incomingdir)
987+
988+    def _setup_corruption_advisory(self):
989+        # we don't actually create the corruption-advisory dir until necessary
990+        self.corruption_advisory_dir = os.path.join(self.storedir,
991+                                                    "corruption-advisories")
992+
993+    def _setup_bucket_counter(self):
994+        statefname = os.path.join(self.storedir, "bucket_counter.state")
995+        self.bucket_counter = FSBucketCountingCrawler(statefname)
996+        self.bucket_counter.setServiceParent(self)
997+
998+    def _setup_lease_checkerf(self, expiration_policy):
999+        statefile = os.path.join(self.storedir, "lease_checker.state")
1000+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1001+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1002+        self.lease_checker.setServiceParent(self)
1003+
1004+    def get_available_space(self):
1005+        if self.readonly:
1006+            return 0
1007+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1008+
1009+    def get_shares(self, storage_index):
1010+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1011+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1012+        try:
1013+            for f in os.listdir(finalstoragedir):
1014+                if NUM_RE.match(f):
1015+                    filename = os.path.join(finalstoragedir, f)
1016+                    yield FSBShare(filename, int(f))
1017+        except OSError:
1018+            # Commonly caused by there being no buckets at all.
1019+            pass
1020+       
1021+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1022+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1023+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1024+        return bw
1025+       
1026+
1027+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1028+# and share data. The share data is accessed by RIBucketWriter.write and
1029+# RIBucketReader.read . The lease information is not accessible through these
1030+# interfaces.
1031+
1032+# The share file has the following layout:
1033+#  0x00: share file version number, four bytes, current version is 1
1034+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1035+#  0x08: number of leases, four bytes big-endian
1036+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1037+#  A+0x0c = B: first lease. Lease format is:
1038+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1039+#   B+0x04: renew secret, 32 bytes (SHA256)
1040+#   B+0x24: cancel secret, 32 bytes (SHA256)
1041+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1042+#   B+0x48: next lease, or end of record
1043+
1044+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1045+# but it is still filled in by storage servers in case the storage server
1046+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1047+# share file is moved from one storage server to another. The value stored in
1048+# this field is truncated, so if the actual share data length is >= 2**32,
1049+# then the value stored in this field will be the actual share data length
1050+# modulo 2**32.
1051+
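The header and lease layout described in the comment above can be checked with a short `struct` round trip; the format strings here mirror the ones used by `ImmutableShare` below (the 1000-byte share is a made-up example):

```python
# Immutable share-file layout: three big-endian 4-byte header fields
# (version, saturated data length, lease count), then share data at
# 0x0c, then the lease records.
import struct

HEADER = ">LLL"          # version, data length (see Footnote 1), num leases
LEASE = ">L32s32sL"      # owner number, renew secret, cancel secret, expiration
LEASE_SIZE = struct.calcsize(LEASE)

share_data = b"x" * 1000  # hypothetical share contents
header = struct.pack(HEADER, 1, min(2**32 - 1, len(share_data)), 0)

version, length, num_leases = struct.unpack(HEADER, header)
data_offset = struct.calcsize(HEADER)    # 0x0c: beginning of share data
lease_offset = 0x0c + length             # B: first lease record
```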
1052+class ImmutableShare:
1053+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1054+    sharetype = "immutable"
1055+
1056+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1057+        """ If max_size is not None then I won't allow more than
1058+        max_size to be written to me. If create=True then max_size
1059+        must not be None. """
1060+        precondition((max_size is not None) or (not create), max_size, create)
1061+        self.shnum = shnum
1062+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1063+        self._max_size = max_size
1064+        if create:
1065+            # touch the file, so later callers will see that we're working on
1066+            # it. Also construct the metadata.
1067+            assert not os.path.exists(self.fname)
1068+            fileutil.make_dirs(os.path.dirname(self.fname))
1069+            f = open(self.fname, 'wb')
1070+            # The second field -- the four-byte share data length -- is no
1071+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1072+            # there in case someone downgrades a storage server from >=
1073+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1074+            # server to another, etc. We do saturation -- a share data length
1075+            # larger than 2**32-1 (what can fit into the field) is marked as
1076+            # the largest length that can fit into the field. That way, even
1077+            # if this does happen, the old < v1.3.0 server will still allow
1078+            # clients to read the first part of the share.
1079+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1080+            f.close()
1081+            self._lease_offset = max_size + 0x0c
1082+            self._num_leases = 0
1083+        else:
1084+            f = open(self.fname, 'rb')
1085+            filesize = os.path.getsize(self.fname)
1086+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1087+            f.close()
1088+            if version != 1:
1089+                msg = "sharefile %s had version %d but we wanted 1" % \
1090+                      (self.fname, version)
1091+                raise UnknownImmutableContainerVersionError(msg)
1092+            self._num_leases = num_leases
1093+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1094+        self._data_offset = 0xc
1095+
1096+    def unlink(self):
1097+        os.unlink(self.fname)
1098+
1099+    def read_share_data(self, offset, length):
1100+        precondition(offset >= 0)
1101+        # Reads beyond the end of the data are truncated. Reads that start
1102+        # beyond the end of the data return an empty string.
1103+        seekpos = self._data_offset+offset
1104+        fsize = os.path.getsize(self.fname)
1105+        actuallength = max(0, min(length, fsize-seekpos))
1106+        if actuallength == 0:
1107+            return ""
1108+        f = open(self.fname, 'rb')
1109+        f.seek(seekpos)
1110+        sharedata = f.read(actuallength)
+        f.close()
+        return sharedata
1111+
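The truncation rule in `read_share_data` (reads beyond the end of the data are clipped; reads that start past the end return the empty string) reduces to one line of arithmetic, sketched here with the 0x0c data offset from the layout above:

```python
# How many bytes a read of `length` at share-relative `offset` actually
# returns, given the on-disk file size `fsize`.
def clipped_length(offset, length, fsize, data_offset=0x0c):
    seekpos = data_offset + offset
    return max(0, min(length, fsize - seekpos))

# With 24 bytes of share data the file is 0x0c + 24 = 36 bytes long:
full = clipped_length(0, 24, 36)        # whole share
past_end = clipped_length(0, 44, 36)    # over-long read is clipped
beyond = clipped_length(25, 3, 36)      # starts past the end
```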
1112+    def write_share_data(self, offset, data):
1113+        length = len(data)
1114+        precondition(offset >= 0, offset)
1115+        if self._max_size is not None and offset+length > self._max_size:
1116+            raise DataTooLargeError(self._max_size, offset, length)
1117+        f = open(self.fname, 'rb+')
1118+        real_offset = self._data_offset+offset
1119+        f.seek(real_offset)
1120+        assert f.tell() == real_offset
1121+        f.write(data)
1122+        f.close()
1123+
1124+    def _write_lease_record(self, f, lease_number, lease_info):
1125+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1126+        f.seek(offset)
1127+        assert f.tell() == offset
1128+        f.write(lease_info.to_immutable_data())
1129+
1130+    def _read_num_leases(self, f):
1131+        f.seek(0x08)
1132+        (num_leases,) = struct.unpack(">L", f.read(4))
1133+        return num_leases
1134+
1135+    def _write_num_leases(self, f, num_leases):
1136+        f.seek(0x08)
1137+        f.write(struct.pack(">L", num_leases))
1138+
1139+    def _truncate_leases(self, f, num_leases):
1140+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1141+
1142+    def get_leases(self):
1143+        """Yields a LeaseInfo instance for all leases."""
1144+        f = open(self.fname, 'rb')
1145+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1146+        f.seek(self._lease_offset)
1147+        for i in range(num_leases):
1148+            data = f.read(self.LEASE_SIZE)
1149+            if data:
1150+                yield LeaseInfo().from_immutable_data(data)
1151+
1152+    def add_lease(self, lease_info):
1153+        f = open(self.fname, 'rb+')
1154+        num_leases = self._read_num_leases(f)
1155+        self._write_lease_record(f, num_leases, lease_info)
1156+        self._write_num_leases(f, num_leases+1)
1157+        f.close()
1158+
1159+    def renew_lease(self, renew_secret, new_expire_time):
1160+        for i,lease in enumerate(self.get_leases()):
1161+            if constant_time_compare(lease.renew_secret, renew_secret):
1162+                # yup. See if we need to update the owner time.
1163+                if new_expire_time > lease.expiration_time:
1164+                    # yes
1165+                    lease.expiration_time = new_expire_time
1166+                    f = open(self.fname, 'rb+')
1167+                    self._write_lease_record(f, i, lease)
1168+                    f.close()
1169+                return
1170+        raise IndexError("unable to renew non-existent lease")
1171+
1172+    def add_or_renew_lease(self, lease_info):
1173+        try:
1174+            self.renew_lease(lease_info.renew_secret,
1175+                             lease_info.expiration_time)
1176+        except IndexError:
1177+            self.add_lease(lease_info)
1178+
1179+
1180+    def cancel_lease(self, cancel_secret):
1181+        """Remove a lease with the given cancel_secret. If the last lease is
1182+        cancelled, the file will be removed. Return the number of bytes that
1183+        were freed (by truncating the list of leases, and possibly by
1184+        deleting the file). Raise IndexError if there was no lease with the
1185+        given cancel_secret.
1186+        """
1187+
1188+        leases = list(self.get_leases())
1189+        num_leases_removed = 0
1190+        for i,lease in enumerate(leases):
1191+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1192+                leases[i] = None
1193+                num_leases_removed += 1
1194+        if not num_leases_removed:
1195+            raise IndexError("unable to find matching lease to cancel")
1196+        if num_leases_removed:
1197+            # pack and write out the remaining leases. We write these out in
1198+            # the same order as they were added, so that if we crash while
1199+            # doing this, we won't lose any non-cancelled leases.
1200+            leases = [l for l in leases if l] # remove the cancelled leases
1201+            f = open(self.fname, 'rb+')
1202+            for i,lease in enumerate(leases):
1203+                self._write_lease_record(f, i, lease)
1204+            self._write_num_leases(f, len(leases))
1205+            self._truncate_leases(f, len(leases))
1206+            f.close()
1207+        space_freed = self.LEASE_SIZE * num_leases_removed
1208+        if not len(leases):
1209+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1210+            self.unlink()
1211+        return space_freed
1212hunk ./src/allmydata/storage/backends/das/expirer.py 2
1213 import time, os, pickle, struct
1214-from allmydata.storage.crawler import ShareCrawler
1215-from allmydata.storage.shares import get_share_file
1216+from allmydata.storage.crawler import FSShareCrawler
+from allmydata.storage.mutable import MutableShareFile
1217 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1218      UnknownImmutableContainerVersionError
1219 from twisted.python import log as twlog
1220hunk ./src/allmydata/storage/backends/das/expirer.py 7
1221 
1222-class LeaseCheckingCrawler(ShareCrawler):
1223+class FSLeaseCheckingCrawler(FSShareCrawler):
1224     """I examine the leases on all shares, determining which are still valid
1225     and which have expired. I can remove the expired leases (if so
1226     configured), and the share will be deleted when the last lease is
1227hunk ./src/allmydata/storage/backends/das/expirer.py 50
1228     slow_start = 360 # wait 6 minutes after startup
1229     minimum_cycle_time = 12*60*60 # not more than twice per day
1230 
1231-    def __init__(self, statefile, historyfile,
1232-                 expiration_enabled, mode,
1233-                 override_lease_duration, # used if expiration_mode=="age"
1234-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1235-                 sharetypes):
1236+    def __init__(self, statefile, historyfile, expiration_policy):
1237         self.historyfile = historyfile
1238hunk ./src/allmydata/storage/backends/das/expirer.py 52
1239-        self.expiration_enabled = expiration_enabled
1240-        self.mode = mode
1241+        self.expiration_enabled = expiration_policy['enabled']
1242+        self.mode = expiration_policy['mode']
1243         self.override_lease_duration = None
1244         self.cutoff_date = None
1245         if self.mode == "age":
1246hunk ./src/allmydata/storage/backends/das/expirer.py 57
1247-            assert isinstance(override_lease_duration, (int, type(None)))
1248-            self.override_lease_duration = override_lease_duration # seconds
1249+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1250+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1251         elif self.mode == "cutoff-date":
1252hunk ./src/allmydata/storage/backends/das/expirer.py 60
1253-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1254+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1255-            assert cutoff_date is not None
1255+            assert expiration_policy['cutoff_date'] is not None
1256hunk ./src/allmydata/storage/backends/das/expirer.py 62
1257-            self.cutoff_date = cutoff_date
1258+            self.cutoff_date = expiration_policy['cutoff_date']
1259         else:
1260hunk ./src/allmydata/storage/backends/das/expirer.py 64
1261-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1262-        self.sharetypes_to_expire = sharetypes
1263-        ShareCrawler.__init__(self, statefile)
1264+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1265+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1266+        FSShareCrawler.__init__(self, statefile)
1267 
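The constructor above replaces five positional arguments with a single `expiration_policy` dict. The exact key set is inferred from the lookups in the constructor; a hypothetical policy in that shape:

```python
# Hypothetical expiration_policy dict matching the keys the
# FSLeaseCheckingCrawler constructor reads.
expiration_policy = {
    'enabled': True,
    'mode': 'age',                                  # or 'cutoff-date'
    'override_lease_duration': 31 * 24 * 60 * 60,   # seconds; used when mode == 'age'
    'cutoff_date': None,            # seconds-since-epoch; used when mode == 'cutoff-date'
    'sharetypes': ('immutable', 'mutable'),
}

mode_ok = expiration_policy['mode'] in ('age', 'cutoff-date')
```

Any other `mode` value makes the constructor raise ValueError, so a caller should validate the dict the same way before handing it over.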
1268     def add_initial_state(self):
1269         # we fill ["cycle-to-date"] here (even though they will be reset in
1270hunk ./src/allmydata/storage/backends/das/expirer.py 156
1271 
1272     def process_share(self, sharefilename):
1273         # first, find out what kind of a share it is
1274-        sf = get_share_file(sharefilename)
1275+        f = open(sharefilename, "rb")
1276+        prefix = f.read(32)
1277+        f.close()
1278+        if prefix == MutableShareFile.MAGIC:
1279+            sf = MutableShareFile(sharefilename)
1280+        else:
1281+            # otherwise assume it's immutable
1282+            sf = FSBShare(sharefilename)
1283         sharetype = sf.sharetype
1284         now = time.time()
1285         s = self.stat(sharefilename)
1286addfile ./src/allmydata/storage/backends/null/__init__.py
1287addfile ./src/allmydata/storage/backends/null/core.py
1288hunk ./src/allmydata/storage/backends/null/core.py 1
1289+from allmydata.storage.backends.base import Backend
1290+
1291+class NullCore(Backend):
1292+    def __init__(self):
1293+        Backend.__init__(self)
1294+
1295+    def get_available_space(self):
1296+        return None
1297+
1298+    def get_shares(self, storage_index):
1299+        return set()
1300+
1301+    def get_share(self, storage_index, sharenum):
1302+        return None
1303+
1304+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1305+        return NullBucketWriter()
1306hunk ./src/allmydata/storage/crawler.py 12
1307 class TimeSliceExceeded(Exception):
1308     pass
1309 
1310-class ShareCrawler(service.MultiService):
1311+class FSShareCrawler(service.MultiService):
1312     """A subcless of ShareCrawler is attached to a StorageServer, and
1313     periodically walks all of its shares, processing each one in some
1314     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1315hunk ./src/allmydata/storage/crawler.py 68
1316     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1317     minimum_cycle_time = 300 # don't run a cycle faster than this
1318 
1319-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1320+    def __init__(self, statefname, allowed_cpu_percentage=None):
1321         service.MultiService.__init__(self)
1322         if allowed_cpu_percentage is not None:
1323             self.allowed_cpu_percentage = allowed_cpu_percentage
1324hunk ./src/allmydata/storage/crawler.py 72
1325-        self.backend = backend
1326+        self.statefname = statefname
1327         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1328                          for i in range(2**10)]
1329         self.prefixes.sort()
1330hunk ./src/allmydata/storage/crawler.py 192
1331         #                            of the last bucket to be processed, or
1332         #                            None if we are sleeping between cycles
1333         try:
1334-            f = open(self.statefile, "rb")
1335+            f = open(self.statefname, "rb")
1336             state = pickle.load(f)
1337             f.close()
1338         except EnvironmentError:
1339hunk ./src/allmydata/storage/crawler.py 230
1340         else:
1341             last_complete_prefix = self.prefixes[lcpi]
1342         self.state["last-complete-prefix"] = last_complete_prefix
1343-        tmpfile = self.statefile + ".tmp"
1344+        tmpfile = self.statefname + ".tmp"
1345         f = open(tmpfile, "wb")
1346         pickle.dump(self.state, f)
1347         f.close()
1348hunk ./src/allmydata/storage/crawler.py 433
1349         pass
1350 
1351 
1352-class BucketCountingCrawler(ShareCrawler):
1353+class FSBucketCountingCrawler(FSShareCrawler):
1354     """I keep track of how many buckets are being managed by this server.
1355     This is equivalent to the number of distributed files and directories for
1356     which I am providing storage. The actual number of files+directories in
1357hunk ./src/allmydata/storage/crawler.py 446
1358 
1359     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1360 
1361-    def __init__(self, statefile, num_sample_prefixes=1):
1362-        ShareCrawler.__init__(self, statefile)
1363+    def __init__(self, statefname, num_sample_prefixes=1):
1364+        FSShareCrawler.__init__(self, statefname)
1365         self.num_sample_prefixes = num_sample_prefixes
1366 
1367     def add_initial_state(self):
1368hunk ./src/allmydata/storage/immutable.py 14
1369 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1370      DataTooLargeError
1371 
1372-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1373-# and share data. The share data is accessed by RIBucketWriter.write and
1374-# RIBucketReader.read . The lease information is not accessible through these
1375-# interfaces.
1376-
1377-# The share file has the following layout:
1378-#  0x00: share file version number, four bytes, current version is 1
1379-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1380-#  0x08: number of leases, four bytes big-endian
1381-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1382-#  A+0x0c = B: first lease. Lease format is:
1383-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1384-#   B+0x04: renew secret, 32 bytes (SHA256)
1385-#   B+0x24: cancel secret, 32 bytes (SHA256)
1386-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1387-#   B+0x48: next lease, or end of record
1388-
1389-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1390-# but it is still filled in by storage servers in case the storage server
1391-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1392-# share file is moved from one storage server to another. The value stored in
1393-# this field is truncated, so if the actual share data length is >= 2**32,
1394-# then the value stored in this field will be the actual share data length
1395-# modulo 2**32.
1396-
1397-class ShareFile:
1398-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1399-    sharetype = "immutable"
1400-
1401-    def __init__(self, filename, max_size=None, create=False):
1402-        """ If max_size is not None then I won't allow more than
1403-        max_size to be written to me. If create=True then max_size
1404-        must not be None. """
1405-        precondition((max_size is not None) or (not create), max_size, create)
1406-        self.home = filename
1407-        self._max_size = max_size
1408-        if create:
1409-            # touch the file, so later callers will see that we're working on
1410-            # it. Also construct the metadata.
1411-            assert not os.path.exists(self.home)
1412-            fileutil.make_dirs(os.path.dirname(self.home))
1413-            f = open(self.home, 'wb')
1414-            # The second field -- the four-byte share data length -- is no
1415-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1416-            # there in case someone downgrades a storage server from >=
1417-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1418-            # server to another, etc. We do saturation -- a share data length
1419-            # larger than 2**32-1 (what can fit into the field) is marked as
1420-            # the largest length that can fit into the field. That way, even
1421-            # if this does happen, the old < v1.3.0 server will still allow
1422-            # clients to read the first part of the share.
1423-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1424-            f.close()
1425-            self._lease_offset = max_size + 0x0c
1426-            self._num_leases = 0
1427-        else:
1428-            f = open(self.home, 'rb')
1429-            filesize = os.path.getsize(self.home)
1430-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1431-            f.close()
1432-            if version != 1:
1433-                msg = "sharefile %s had version %d but we wanted 1" % \
1434-                      (filename, version)
1435-                raise UnknownImmutableContainerVersionError(msg)
1436-            self._num_leases = num_leases
1437-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1438-        self._data_offset = 0xc
1439-
1440-    def unlink(self):
1441-        os.unlink(self.home)
1442-
1443-    def read_share_data(self, offset, length):
1444-        precondition(offset >= 0)
1445-        # Reads beyond the end of the data are truncated. Reads that start
1446-        # beyond the end of the data return an empty string.
1447-        seekpos = self._data_offset+offset
1448-        fsize = os.path.getsize(self.home)
1449-        actuallength = max(0, min(length, fsize-seekpos))
1450-        if actuallength == 0:
1451-            return ""
1452-        f = open(self.home, 'rb')
1453-        f.seek(seekpos)
1454-        return f.read(actuallength)
1455-
1456-    def write_share_data(self, offset, data):
1457-        length = len(data)
1458-        precondition(offset >= 0, offset)
1459-        if self._max_size is not None and offset+length > self._max_size:
1460-            raise DataTooLargeError(self._max_size, offset, length)
1461-        f = open(self.home, 'rb+')
1462-        real_offset = self._data_offset+offset
1463-        f.seek(real_offset)
1464-        assert f.tell() == real_offset
1465-        f.write(data)
1466-        f.close()
1467-
1468-    def _write_lease_record(self, f, lease_number, lease_info):
1469-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1470-        f.seek(offset)
1471-        assert f.tell() == offset
1472-        f.write(lease_info.to_immutable_data())
1473-
1474-    def _read_num_leases(self, f):
1475-        f.seek(0x08)
1476-        (num_leases,) = struct.unpack(">L", f.read(4))
1477-        return num_leases
1478-
1479-    def _write_num_leases(self, f, num_leases):
1480-        f.seek(0x08)
1481-        f.write(struct.pack(">L", num_leases))
1482-
1483-    def _truncate_leases(self, f, num_leases):
1484-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1485-
1486-    def get_leases(self):
1487-        """Yields a LeaseInfo instance for all leases."""
1488-        f = open(self.home, 'rb')
1489-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1490-        f.seek(self._lease_offset)
1491-        for i in range(num_leases):
1492-            data = f.read(self.LEASE_SIZE)
1493-            if data:
1494-                yield LeaseInfo().from_immutable_data(data)
1495-
1496-    def add_lease(self, lease_info):
1497-        f = open(self.home, 'rb+')
1498-        num_leases = self._read_num_leases(f)
1499-        self._write_lease_record(f, num_leases, lease_info)
1500-        self._write_num_leases(f, num_leases+1)
1501-        f.close()
1502-
1503-    def renew_lease(self, renew_secret, new_expire_time):
1504-        for i,lease in enumerate(self.get_leases()):
1505-            if constant_time_compare(lease.renew_secret, renew_secret):
1506-                # yup. See if we need to update the owner time.
1507-                if new_expire_time > lease.expiration_time:
1508-                    # yes
1509-                    lease.expiration_time = new_expire_time
1510-                    f = open(self.home, 'rb+')
1511-                    self._write_lease_record(f, i, lease)
1512-                    f.close()
1513-                return
1514-        raise IndexError("unable to renew non-existent lease")
1515-
1516-    def add_or_renew_lease(self, lease_info):
1517-        try:
1518-            self.renew_lease(lease_info.renew_secret,
1519-                             lease_info.expiration_time)
1520-        except IndexError:
1521-            self.add_lease(lease_info)
1522-
1523-
1524-    def cancel_lease(self, cancel_secret):
1525-        """Remove a lease with the given cancel_secret. If the last lease is
1526-        cancelled, the file will be removed. Return the number of bytes that
1527-        were freed (by truncating the list of leases, and possibly by
1528-        deleting the file. Raise IndexError if there was no lease with the
1529-        given cancel_secret.
1530-        """
1531-
1532-        leases = list(self.get_leases())
1533-        num_leases_removed = 0
1534-        for i,lease in enumerate(leases):
1535-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1536-                leases[i] = None
1537-                num_leases_removed += 1
1538-        if not num_leases_removed:
1539-            raise IndexError("unable to find matching lease to cancel")
1540-        if num_leases_removed:
1541-            # pack and write out the remaining leases. We write these out in
1542-            # the same order as they were added, so that if we crash while
1543-            # doing this, we won't lose any non-cancelled leases.
1544-            leases = [l for l in leases if l] # remove the cancelled leases
1545-            f = open(self.home, 'rb+')
1546-            for i,lease in enumerate(leases):
1547-                self._write_lease_record(f, i, lease)
1548-            self._write_num_leases(f, len(leases))
1549-            self._truncate_leases(f, len(leases))
1550-            f.close()
1551-        space_freed = self.LEASE_SIZE * num_leases_removed
1552-        if not len(leases):
1553-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1554-            self.unlink()
1555-        return space_freed
1556-class NullBucketWriter(Referenceable):
1557-    implements(RIBucketWriter)
1558-
1559-    def remote_write(self, offset, data):
1560-        return
1561-
1562 class BucketWriter(Referenceable):
1563     implements(RIBucketWriter)
1564 
1565hunk ./src/allmydata/storage/immutable.py 17
1566-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1567+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1568         self.ss = ss
1569hunk ./src/allmydata/storage/immutable.py 19
1570-        self.incominghome = incominghome
1571-        self.finalhome = finalhome
1572         self._max_size = max_size # don't allow the client to write more than this
1573         self._canary = canary
1574         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1575hunk ./src/allmydata/storage/immutable.py 24
1576         self.closed = False
1577         self.throw_out_all_data = False
1578-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1579+        self._sharefile = immutableshare
1580         # also, add our lease to the file now, so that other ones can be
1581         # added by simultaneous uploaders
1582         self._sharefile.add_lease(lease_info)
1583hunk ./src/allmydata/storage/server.py 16
1584 from allmydata.storage.lease import LeaseInfo
1585 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1586      create_mutable_sharefile
1587-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1588-from allmydata.storage.crawler import BucketCountingCrawler
1589-from allmydata.storage.expirer import LeaseCheckingCrawler
1590 
1591 from zope.interface import implements
1592 
1593hunk ./src/allmydata/storage/server.py 19
1594-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1595-# be started and stopped.
1596-class Backend(service.MultiService):
1597-    implements(IStatsProducer)
1598-    def __init__(self):
1599-        service.MultiService.__init__(self)
1600-
1601-    def get_bucket_shares(self):
1602-        """XXX"""
1603-        raise NotImplementedError
1604-
1605-    def get_share(self):
1606-        """XXX"""
1607-        raise NotImplementedError
1608-
1609-    def make_bucket_writer(self):
1610-        """XXX"""
1611-        raise NotImplementedError
1612-
1613-class NullBackend(Backend):
1614-    def __init__(self):
1615-        Backend.__init__(self)
1616-
1617-    def get_available_space(self):
1618-        return None
1619-
1620-    def get_bucket_shares(self, storage_index):
1621-        return set()
1622-
1623-    def get_share(self, storage_index, sharenum):
1624-        return None
1625-
1626-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1627-        return NullBucketWriter()
1628-
1629-class FSBackend(Backend):
1630-    def __init__(self, storedir, readonly=False, reserved_space=0):
1631-        Backend.__init__(self)
1632-
1633-        self._setup_storage(storedir, readonly, reserved_space)
1634-        self._setup_corruption_advisory()
1635-        self._setup_bucket_counter()
1636-        self._setup_lease_checkerf()
1637-
1638-    def _setup_storage(self, storedir, readonly, reserved_space):
1639-        self.storedir = storedir
1640-        self.readonly = readonly
1641-        self.reserved_space = int(reserved_space)
1642-        if self.reserved_space:
1643-            if self.get_available_space() is None:
1644-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1645-                        umid="0wZ27w", level=log.UNUSUAL)
1646-
1647-        self.sharedir = os.path.join(self.storedir, "shares")
1648-        fileutil.make_dirs(self.sharedir)
1649-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1650-        self._clean_incomplete()
1651-
1652-    def _clean_incomplete(self):
1653-        fileutil.rm_dir(self.incomingdir)
1654-        fileutil.make_dirs(self.incomingdir)
1655-
1656-    def _setup_corruption_advisory(self):
1657-        # we don't actually create the corruption-advisory dir until necessary
1658-        self.corruption_advisory_dir = os.path.join(self.storedir,
1659-                                                    "corruption-advisories")
1660-
1661-    def _setup_bucket_counter(self):
1662-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1663-        self.bucket_counter = BucketCountingCrawler(statefile)
1664-        self.bucket_counter.setServiceParent(self)
1665-
1666-    def _setup_lease_checkerf(self):
1667-        statefile = os.path.join(self.storedir, "lease_checker.state")
1668-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1669-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1670-                                   expiration_enabled, expiration_mode,
1671-                                   expiration_override_lease_duration,
1672-                                   expiration_cutoff_date,
1673-                                   expiration_sharetypes)
1674-        self.lease_checker.setServiceParent(self)
1675-
1676-    def get_available_space(self):
1677-        if self.readonly:
1678-            return 0
1679-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1680-
1681-    def get_bucket_shares(self, storage_index):
1682-        """Return a list of (shnum, pathname) tuples for files that hold
1683-        shares for this storage_index. In each tuple, 'shnum' will always be
1684-        the integer form of the last component of 'pathname'."""
1685-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1686-        try:
1687-            for f in os.listdir(storagedir):
1688-                if NUM_RE.match(f):
1689-                    filename = os.path.join(storagedir, f)
1690-                    yield (int(f), filename)
1691-        except OSError:
1692-            # Commonly caused by there being no buckets at all.
1693-            pass
1694-
1695 # storage/
1696 # storage/shares/incoming
1697 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1698hunk ./src/allmydata/storage/server.py 32
1699 # $SHARENUM matches this regex:
1700 NUM_RE=re.compile("^[0-9]+$")
1701 
1702-
1703-
1704 class StorageServer(service.MultiService, Referenceable):
1705     implements(RIStorageServer, IStatsProducer)
1706     name = 'storage'
1707hunk ./src/allmydata/storage/server.py 35
1708-    LeaseCheckerClass = LeaseCheckingCrawler
1709 
1710     def __init__(self, nodeid, backend, reserved_space=0,
1711                  readonly_storage=False,
1712hunk ./src/allmydata/storage/server.py 38
1713-                 stats_provider=None,
1714-                 expiration_enabled=False,
1715-                 expiration_mode="age",
1716-                 expiration_override_lease_duration=None,
1717-                 expiration_cutoff_date=None,
1718-                 expiration_sharetypes=("mutable", "immutable")):
1719+                 stats_provider=None ):
1720         service.MultiService.__init__(self)
1721         assert isinstance(nodeid, str)
1722         assert len(nodeid) == 20
1723hunk ./src/allmydata/storage/server.py 217
1724         # they asked about: this will save them a lot of work. Add or update
1725         # leases for all of them: if they want us to hold shares for this
1726         # file, they'll want us to hold leases for this file.
1727-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1728-            alreadygot.add(shnum)
1729-            sf = ShareFile(fn)
1730-            sf.add_or_renew_lease(lease_info)
1731-
1732-        for shnum in sharenums:
1733-            share = self.backend.get_share(storage_index, shnum)
1734+        for share in self.backend.get_shares(storage_index):
1735+            alreadygot.add(share.shnum)
1736+            share.add_or_renew_lease(lease_info)
1737 
1738hunk ./src/allmydata/storage/server.py 221
1739-            if not share:
1740-                if (not limited) or (remaining_space >= max_space_per_bucket):
1741-                    # ok! we need to create the new share file.
1742-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1743-                                      max_space_per_bucket, lease_info, canary)
1744-                    bucketwriters[shnum] = bw
1745-                    self._active_writers[bw] = 1
1746-                    if limited:
1747-                        remaining_space -= max_space_per_bucket
1748-                else:
1749-                    # bummer! not enough space to accept this bucket
1750-                    pass
1751+        for shnum in (sharenums - alreadygot):
1752+            if (not limited) or (remaining_space >= max_space_per_bucket):
1753+                #XXX Should the following line occur in the storage server constructor instead? We need to create the new share file here.
1754+                self.backend.set_storage_server(self)
1755+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1756+                                                     max_space_per_bucket, lease_info, canary)
1757+                bucketwriters[shnum] = bw
1758+                self._active_writers[bw] = 1
1759+                if limited:
1760+                    remaining_space -= max_space_per_bucket
1761 
1762hunk ./src/allmydata/storage/server.py 232
1763-            elif share.is_complete():
1764-                # great! we already have it. easy.
1765-                pass
1766-            elif not share.is_complete():
1767-                # Note that we don't create BucketWriters for shnums that
1768-                # have a partial share (in incoming/), so if a second upload
1769-                # occurs while the first is still in progress, the second
1770-                # uploader will use different storage servers.
1771-                pass
1772+        #XXX We should document this behavior later.
1773 
1774         self.add_latency("allocate", time.time() - start)
1775         return alreadygot, bucketwriters
1776hunk ./src/allmydata/storage/server.py 238
1777 
1778     def _iter_share_files(self, storage_index):
1779-        for shnum, filename in self._get_bucket_shares(storage_index):
1780+        for shnum, filename in self._get_shares(storage_index):
1781             f = open(filename, 'rb')
1782             header = f.read(32)
1783             f.close()
1784hunk ./src/allmydata/storage/server.py 318
1785         si_s = si_b2a(storage_index)
1786         log.msg("storage: get_buckets %s" % si_s)
1787         bucketreaders = {} # k: sharenum, v: BucketReader
1788-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1789+        for shnum, filename in self.backend.get_shares(storage_index):
1790             bucketreaders[shnum] = BucketReader(self, filename,
1791                                                 storage_index, shnum)
1792         self.add_latency("get", time.time() - start)
1793hunk ./src/allmydata/storage/server.py 334
1794         # since all shares get the same lease data, we just grab the leases
1795         # from the first share
1796         try:
1797-            shnum, filename = self._get_bucket_shares(storage_index).next()
1798+            shnum, filename = self._get_shares(storage_index).next()
1799             sf = ShareFile(filename)
1800             return sf.get_leases()
1801         except StopIteration:
1802hunk ./src/allmydata/storage/shares.py 1
1803-#! /usr/bin/python
1804-
1805-from allmydata.storage.mutable import MutableShareFile
1806-from allmydata.storage.immutable import ShareFile
1807-
1808-def get_share_file(filename):
1809-    f = open(filename, "rb")
1810-    prefix = f.read(32)
1811-    f.close()
1812-    if prefix == MutableShareFile.MAGIC:
1813-        return MutableShareFile(filename)
1814-    # otherwise assume it's immutable
1815-    return ShareFile(filename)
1816-
1817rmfile ./src/allmydata/storage/shares.py
1818hunk ./src/allmydata/test/common_util.py 20
1819 
1820 def flip_one_bit(s, offset=0, size=None):
1821     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1822-    than offset+size. """
1823+    than offset+size. Return the new string. """
1824     if size is None:
1825         size=len(s)-offset
1826     i = randrange(offset, offset+size)
1827hunk ./src/allmydata/test/test_backends.py 7
1828 
1829 from allmydata.test.common_util import ReallyEqualMixin
1830 
1831-import mock
1832+import mock, os
1833 
1834 # This is the code that we're going to be testing.
1835hunk ./src/allmydata/test/test_backends.py 10
1836-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1837+from allmydata.storage.server import StorageServer
1838+
1839+from allmydata.storage.backends.das.core import DASCore
1840+from allmydata.storage.backends.null.core import NullCore
1841+
1842 
1843 # The following share file contents was generated with
1844 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1845hunk ./src/allmydata/test/test_backends.py 22
1846 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1847 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1848 
1849-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1850+tempdir = 'teststoredir'
1851+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1852+sharefname = os.path.join(sharedirname, '0')
1853 
1854 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1855     @mock.patch('time.time')
1856hunk ./src/allmydata/test/test_backends.py 58
1857         filesystem in only the prescribed ways. """
1858 
1859         def call_open(fname, mode):
1860-            if fname == 'testdir/bucket_counter.state':
1861-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1862-            elif fname == 'testdir/lease_checker.state':
1863-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1864-            elif fname == 'testdir/lease_checker.history':
1865+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1866+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1867+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1868+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1869+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1870                 return StringIO()
1871             else:
1872                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1873hunk ./src/allmydata/test/test_backends.py 124
1874     @mock.patch('__builtin__.open')
1875     def setUp(self, mockopen):
1876         def call_open(fname, mode):
1877-            if fname == 'testdir/bucket_counter.state':
1878-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1879-            elif fname == 'testdir/lease_checker.state':
1880-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1881-            elif fname == 'testdir/lease_checker.history':
1882+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1883+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1884+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1885+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1886+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1887                 return StringIO()
1888         mockopen.side_effect = call_open
1889hunk ./src/allmydata/test/test_backends.py 131
1890-
1891-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1892+        expiration_policy = {'enabled' : False,
1893+                             'mode' : 'age',
1894+                             'override_lease_duration' : None,
1895+                             'cutoff_date' : None,
1896+                             'sharetypes' : None}
1897+        testbackend = DASCore(tempdir, expiration_policy)
1898+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1899 
1900     @mock.patch('time.time')
1901     @mock.patch('os.mkdir')
1902hunk ./src/allmydata/test/test_backends.py 148
1903         """ Write a new share. """
1904 
1905         def call_listdir(dirname):
1906-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1907-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1908+            self.failUnlessReallyEqual(dirname, sharedirname)
1909+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1910 
1911         mocklistdir.side_effect = call_listdir
1912 
1913hunk ./src/allmydata/test/test_backends.py 178
1914 
1915         sharefile = MockFile()
1916         def call_open(fname, mode):
1917-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1918+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1919             return sharefile
1920 
1921         mockopen.side_effect = call_open
1922hunk ./src/allmydata/test/test_backends.py 200
1923         StorageServer object. """
1924 
1925         def call_listdir(dirname):
1926-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1927+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1928             return ['0']
1929 
1930         mocklistdir.side_effect = call_listdir
1931}
1932[checkpoint patch
1933wilcoxjg@gmail.com**20110626165715
1934 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1935] {
1936hunk ./src/allmydata/storage/backends/das/core.py 21
1937 from allmydata.storage.lease import LeaseInfo
1938 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1939      create_mutable_sharefile
1940-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1941+from allmydata.storage.immutable import BucketWriter, BucketReader
1942 from allmydata.storage.crawler import FSBucketCountingCrawler
1943 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1944 
1945hunk ./src/allmydata/storage/backends/das/core.py 27
1946 from zope.interface import implements
1947 
1948+# $SHARENUM matches this regex:
1949+NUM_RE=re.compile("^[0-9]+$")
1950+
1951 class DASCore(Backend):
1952     implements(IStorageBackend)
1953     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1954hunk ./src/allmydata/storage/backends/das/core.py 80
1955         return fileutil.get_available_space(self.storedir, self.reserved_space)
1956 
1957     def get_shares(self, storage_index):
1958-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1959+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1960         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1961         try:
1962             for f in os.listdir(finalstoragedir):
1963hunk ./src/allmydata/storage/backends/das/core.py 86
1964                 if NUM_RE.match(f):
1965                     filename = os.path.join(finalstoragedir, f)
1966-                    yield FSBShare(filename, int(f))
1967+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1968         except OSError:
1969             # Commonly caused by there being no buckets at all.
1970             pass
1971hunk ./src/allmydata/storage/backends/das/core.py 95
1972         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1973         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1974         return bw
1975+
1976+    def set_storage_server(self, ss):
1977+        self.ss = ss
1978         
1979 
1980 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1981hunk ./src/allmydata/storage/server.py 29
1982 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1983 # base-32 chars).
1984 
1985-# $SHARENUM matches this regex:
1986-NUM_RE=re.compile("^[0-9]+$")
1987 
1988 class StorageServer(service.MultiService, Referenceable):
1989     implements(RIStorageServer, IStatsProducer)
1990}
1991[checkpoint4
1992wilcoxjg@gmail.com**20110628202202
1993 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1994] {
1995hunk ./src/allmydata/storage/backends/das/core.py 96
1996         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1997         return bw
1998 
1999+    def make_bucket_reader(self, share):
2000+        return BucketReader(self.ss, share)
2001+
2002     def set_storage_server(self, ss):
2003         self.ss = ss
2004         
2005hunk ./src/allmydata/storage/backends/das/core.py 138
2006         must not be None. """
2007         precondition((max_size is not None) or (not create), max_size, create)
2008         self.shnum = shnum
2009+        self.storage_index = storageindex
2010         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2011         self._max_size = max_size
2012         if create:
2013hunk ./src/allmydata/storage/backends/das/core.py 173
2014             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2015         self._data_offset = 0xc
2016 
2017+    def get_shnum(self):
2018+        return self.shnum
2019+
2020     def unlink(self):
2021         os.unlink(self.fname)
2022 
2023hunk ./src/allmydata/storage/backends/null/core.py 2
2024 from allmydata.storage.backends.base import Backend
2025+from allmydata.storage.immutable import BucketWriter, BucketReader
2026 
2027 class NullCore(Backend):
2028     def __init__(self):
2029hunk ./src/allmydata/storage/backends/null/core.py 17
2030     def get_share(self, storage_index, sharenum):
2031         return None
2032 
2033-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2034-        return NullBucketWriter()
2035+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2036+       
2037+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2038+
2039+    def set_storage_server(self, ss):
2040+        self.ss = ss
2041+
2042+class ImmutableShare:
2043+    sharetype = "immutable"
2044+
2045+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2046+        """ If max_size is not None then I won't allow more than
2047+        max_size to be written to me. If create=True then max_size
2048+        must not be None. """
2049+        precondition((max_size is not None) or (not create), max_size, create)
2050+        self.shnum = shnum
2051+        self.storage_index = storageindex
2052+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2053+        self._max_size = max_size
2054+        if create:
2055+            # touch the file, so later callers will see that we're working on
2056+            # it. Also construct the metadata.
2057+            assert not os.path.exists(self.fname)
2058+            fileutil.make_dirs(os.path.dirname(self.fname))
2059+            f = open(self.fname, 'wb')
2060+            # The second field -- the four-byte share data length -- is no
2061+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2062+            # there in case someone downgrades a storage server from >=
2063+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2064+            # server to another, etc. We do saturation -- a share data length
2065+            # larger than 2**32-1 (what can fit into the field) is marked as
2066+            # the largest length that can fit into the field. That way, even
2067+            # if this does happen, the old < v1.3.0 server will still allow
2068+            # clients to read the first part of the share.
2069+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2070+            f.close()
2071+            self._lease_offset = max_size + 0x0c
2072+            self._num_leases = 0
2073+        else:
2074+            f = open(self.fname, 'rb')
2075+            filesize = os.path.getsize(self.fname)
2076+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2077+            f.close()
2078+            if version != 1:
2079+                msg = "sharefile %s had version %d but we wanted 1" % \
2080+                      (self.fname, version)
2081+                raise UnknownImmutableContainerVersionError(msg)
2082+            self._num_leases = num_leases
2083+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2084+        self._data_offset = 0xc
2085+
2086+    def get_shnum(self):
2087+        return self.shnum
2088+
2089+    def unlink(self):
2090+        os.unlink(self.fname)
2091+
2092+    def read_share_data(self, offset, length):
2093+        precondition(offset >= 0)
2094+        # Reads beyond the end of the data are truncated. Reads that start
2095+        # beyond the end of the data return an empty string.
2096+        seekpos = self._data_offset+offset
2097+        fsize = os.path.getsize(self.fname)
2098+        actuallength = max(0, min(length, fsize-seekpos))
2099+        if actuallength == 0:
2100+            return ""
2101+        f = open(self.fname, 'rb')
2102+        f.seek(seekpos)
2103+        return f.read(actuallength)
2104+
2105+    def write_share_data(self, offset, data):
2106+        length = len(data)
2107+        precondition(offset >= 0, offset)
2108+        if self._max_size is not None and offset+length > self._max_size:
2109+            raise DataTooLargeError(self._max_size, offset, length)
2110+        f = open(self.fname, 'rb+')
2111+        real_offset = self._data_offset+offset
2112+        f.seek(real_offset)
2113+        assert f.tell() == real_offset
2114+        f.write(data)
2115+        f.close()
2116+
2117+    def _write_lease_record(self, f, lease_number, lease_info):
2118+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2119+        f.seek(offset)
2120+        assert f.tell() == offset
2121+        f.write(lease_info.to_immutable_data())
2122+
2123+    def _read_num_leases(self, f):
2124+        f.seek(0x08)
2125+        (num_leases,) = struct.unpack(">L", f.read(4))
2126+        return num_leases
2127+
2128+    def _write_num_leases(self, f, num_leases):
2129+        f.seek(0x08)
2130+        f.write(struct.pack(">L", num_leases))
2131+
2132+    def _truncate_leases(self, f, num_leases):
2133+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2134+
2135+    def get_leases(self):
2136+        """Yields a LeaseInfo instance for all leases."""
2137+        f = open(self.fname, 'rb')
2138+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2139+        f.seek(self._lease_offset)
2140+        for i in range(num_leases):
2141+            data = f.read(self.LEASE_SIZE)
2142+            if data:
2143+                yield LeaseInfo().from_immutable_data(data)
2144+
2145+    def add_lease(self, lease_info):
2146+        f = open(self.fname, 'rb+')
2147+        num_leases = self._read_num_leases(f)
2148+        self._write_lease_record(f, num_leases, lease_info)
2149+        self._write_num_leases(f, num_leases+1)
2150+        f.close()
2151+
2152+    def renew_lease(self, renew_secret, new_expire_time):
2153+        for i,lease in enumerate(self.get_leases()):
2154+            if constant_time_compare(lease.renew_secret, renew_secret):
2155+                # yup. See if we need to update the owner time.
2156+                if new_expire_time > lease.expiration_time:
2157+                    # yes
2158+                    lease.expiration_time = new_expire_time
2159+                    f = open(self.fname, 'rb+')
2160+                    self._write_lease_record(f, i, lease)
2161+                    f.close()
2162+                return
2163+        raise IndexError("unable to renew non-existent lease")
2164+
2165+    def add_or_renew_lease(self, lease_info):
2166+        try:
2167+            self.renew_lease(lease_info.renew_secret,
2168+                             lease_info.expiration_time)
2169+        except IndexError:
2170+            self.add_lease(lease_info)
2171+
2172+
2173+    def cancel_lease(self, cancel_secret):
2174+        """Remove a lease with the given cancel_secret. If the last lease is
2175+        cancelled, the file will be removed. Return the number of bytes that
2176+        were freed (by truncating the list of leases, and possibly by
2177+        deleting the file). Raise IndexError if there was no lease with the
2178+        given cancel_secret.
2179+        """
2180+
2181+        leases = list(self.get_leases())
2182+        num_leases_removed = 0
2183+        for i,lease in enumerate(leases):
2184+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2185+                leases[i] = None
2186+                num_leases_removed += 1
2187+        if not num_leases_removed:
2188+            raise IndexError("unable to find matching lease to cancel")
2189+        if num_leases_removed:
2190+            # pack and write out the remaining leases. We write these out in
2191+            # the same order as they were added, so that if we crash while
2192+            # doing this, we won't lose any non-cancelled leases.
2193+            leases = [l for l in leases if l] # remove the cancelled leases
2194+            f = open(self.fname, 'rb+')
2195+            for i,lease in enumerate(leases):
2196+                self._write_lease_record(f, i, lease)
2197+            self._write_num_leases(f, len(leases))
2198+            self._truncate_leases(f, len(leases))
2199+            f.close()
2200+        space_freed = self.LEASE_SIZE * num_leases_removed
2201+        if not len(leases):
2202+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2203+            self.unlink()
2204+        return space_freed
2205hunk ./src/allmydata/storage/immutable.py 114
2206 class BucketReader(Referenceable):
2207     implements(RIBucketReader)
2208 
2209-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2210+    def __init__(self, ss, share):
2211         self.ss = ss
2212hunk ./src/allmydata/storage/immutable.py 116
2213-        self._share_file = ShareFile(sharefname)
2214-        self.storage_index = storage_index
2215-        self.shnum = shnum
2216+        self._share_file = share
2217+        self.storage_index = share.storage_index
2218+        self.shnum = share.shnum
2219 
2220     def __repr__(self):
2221         return "<%s %s %s>" % (self.__class__.__name__,
2222hunk ./src/allmydata/storage/server.py 316
2223         si_s = si_b2a(storage_index)
2224         log.msg("storage: get_buckets %s" % si_s)
2225         bucketreaders = {} # k: sharenum, v: BucketReader
2226-        for shnum, filename in self.backend.get_shares(storage_index):
2227-            bucketreaders[shnum] = BucketReader(self, filename,
2228-                                                storage_index, shnum)
2229+        self.backend.set_storage_server(self)
2230+        for share in self.backend.get_shares(storage_index):
2231+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2232         self.add_latency("get", time.time() - start)
2233         return bucketreaders
2234 
2235hunk ./src/allmydata/test/test_backends.py 25
2236 tempdir = 'teststoredir'
2237 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2238 sharefname = os.path.join(sharedirname, '0')
2239+expiration_policy = {'enabled' : False,
2240+                     'mode' : 'age',
2241+                     'override_lease_duration' : None,
2242+                     'cutoff_date' : None,
2243+                     'sharetypes' : None}
2244 
2245 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2246     @mock.patch('time.time')
2247hunk ./src/allmydata/test/test_backends.py 43
2248         tries to read or write to the file system. """
2249 
2250         # Now begin the test.
2251-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2252+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2253 
2254         self.failIf(mockisdir.called)
2255         self.failIf(mocklistdir.called)
2256hunk ./src/allmydata/test/test_backends.py 74
2257         mockopen.side_effect = call_open
2258 
2259         # Now begin the test.
2260-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2261+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2262 
2263         self.failIf(mockisdir.called)
2264         self.failIf(mocklistdir.called)
2265hunk ./src/allmydata/test/test_backends.py 86
2266 
2267 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2268     def setUp(self):
2269-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2270+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2271 
2272     @mock.patch('os.mkdir')
2273     @mock.patch('__builtin__.open')
2274hunk ./src/allmydata/test/test_backends.py 136
2275             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2276                 return StringIO()
2277         mockopen.side_effect = call_open
2278-        expiration_policy = {'enabled' : False,
2279-                             'mode' : 'age',
2280-                             'override_lease_duration' : None,
2281-                             'cutoff_date' : None,
2282-                             'sharetypes' : None}
2283         testbackend = DASCore(tempdir, expiration_policy)
2284         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2285 
2286}
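For readers following the ImmutableShare code added in the checkpoint4 patch above, the on-disk share layout it writes can be sketched as a standalone snippet: a 12-byte big-endian header (version, saturated share-data length, lease count), share data starting at offset 0xc, and fixed-size lease records after the data. The `LEASE_SIZE` value and helper names below are assumptions for illustration, not part of the patch.

```python
import os
import struct
import tempfile

LEASE_SIZE = 72          # assumed for illustration; the real constant lives on the share class
HEADER = struct.Struct(">LLL")
DATA_OFFSET = 0x0c       # share data starts right after the 12-byte header

def write_header(fname, max_size):
    # Saturate the (obsolete) four-byte length field at 2**32-1,
    # as the comment in the patch explains.
    with open(fname, 'wb') as f:
        f.write(HEADER.pack(1, min(2**32 - 1, max_size), 0))

def read_header(fname):
    with open(fname, 'rb') as f:
        return HEADER.unpack(f.read(HEADER.size))

fname = os.path.join(tempfile.mkdtemp(), '0')
write_header(fname, max_size=1000)
version, datalen, num_leases = read_header(fname)
# As in the patch's read path: leases sit at the end of the file.
lease_offset = os.path.getsize(fname) - num_leases * LEASE_SIZE
```

On a freshly created header-only file this yields version 1, the given data length, and zero leases, mirroring what `ImmutableShare.__init__` with `create=True` writes.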
2287[checkpoint5
2288wilcoxjg@gmail.com**20110705034626
2289 Ignore-this: 255780bd58299b0aa33c027e9d008262
2290] {
2291addfile ./src/allmydata/storage/backends/base.py
2292hunk ./src/allmydata/storage/backends/base.py 1
2293+from twisted.application import service
2294+
2295+class Backend(service.MultiService):
2296+    def __init__(self):
2297+        service.MultiService.__init__(self)
2298hunk ./src/allmydata/storage/backends/null/core.py 19
2299 
2300     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2301         
2302+        immutableshare = ImmutableShare()
2303         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2304 
2305     def set_storage_server(self, ss):
2306hunk ./src/allmydata/storage/backends/null/core.py 28
2307 class ImmutableShare:
2308     sharetype = "immutable"
2309 
2310-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2311+    def __init__(self):
2312         """ If max_size is not None then I won't allow more than
2313         max_size to be written to me. If create=True then max_size
2314         must not be None. """
2315hunk ./src/allmydata/storage/backends/null/core.py 32
2316-        precondition((max_size is not None) or (not create), max_size, create)
2317-        self.shnum = shnum
2318-        self.storage_index = storageindex
2319-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2320-        self._max_size = max_size
2321-        if create:
2322-            # touch the file, so later callers will see that we're working on
2323-            # it. Also construct the metadata.
2324-            assert not os.path.exists(self.fname)
2325-            fileutil.make_dirs(os.path.dirname(self.fname))
2326-            f = open(self.fname, 'wb')
2327-            # The second field -- the four-byte share data length -- is no
2328-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2329-            # there in case someone downgrades a storage server from >=
2330-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2331-            # server to another, etc. We do saturation -- a share data length
2332-            # larger than 2**32-1 (what can fit into the field) is marked as
2333-            # the largest length that can fit into the field. That way, even
2334-            # if this does happen, the old < v1.3.0 server will still allow
2335-            # clients to read the first part of the share.
2336-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2337-            f.close()
2338-            self._lease_offset = max_size + 0x0c
2339-            self._num_leases = 0
2340-        else:
2341-            f = open(self.fname, 'rb')
2342-            filesize = os.path.getsize(self.fname)
2343-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2344-            f.close()
2345-            if version != 1:
2346-                msg = "sharefile %s had version %d but we wanted 1" % \
2347-                      (self.fname, version)
2348-                raise UnknownImmutableContainerVersionError(msg)
2349-            self._num_leases = num_leases
2350-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2351-        self._data_offset = 0xc
2352+        pass
2353 
2354     def get_shnum(self):
2355         return self.shnum
2356hunk ./src/allmydata/storage/backends/null/core.py 54
2357         return f.read(actuallength)
2358 
2359     def write_share_data(self, offset, data):
2360-        length = len(data)
2361-        precondition(offset >= 0, offset)
2362-        if self._max_size is not None and offset+length > self._max_size:
2363-            raise DataTooLargeError(self._max_size, offset, length)
2364-        f = open(self.fname, 'rb+')
2365-        real_offset = self._data_offset+offset
2366-        f.seek(real_offset)
2367-        assert f.tell() == real_offset
2368-        f.write(data)
2369-        f.close()
2370+        pass
2371 
2372     def _write_lease_record(self, f, lease_number, lease_info):
2373         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2374hunk ./src/allmydata/storage/backends/null/core.py 84
2375             if data:
2376                 yield LeaseInfo().from_immutable_data(data)
2377 
2378-    def add_lease(self, lease_info):
2379-        f = open(self.fname, 'rb+')
2380-        num_leases = self._read_num_leases(f)
2381-        self._write_lease_record(f, num_leases, lease_info)
2382-        self._write_num_leases(f, num_leases+1)
2383-        f.close()
2384+    def add_lease(self, lease):
2385+        pass
2386 
2387     def renew_lease(self, renew_secret, new_expire_time):
2388         for i,lease in enumerate(self.get_leases()):
2389hunk ./src/allmydata/test/test_backends.py 32
2390                      'sharetypes' : None}
2391 
2392 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2393-    @mock.patch('time.time')
2394-    @mock.patch('os.mkdir')
2395-    @mock.patch('__builtin__.open')
2396-    @mock.patch('os.listdir')
2397-    @mock.patch('os.path.isdir')
2398-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2399-        """ This tests whether a server instance can be constructed
2400-        with a null backend. The server instance fails the test if it
2401-        tries to read or write to the file system. """
2402-
2403-        # Now begin the test.
2404-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2405-
2406-        self.failIf(mockisdir.called)
2407-        self.failIf(mocklistdir.called)
2408-        self.failIf(mockopen.called)
2409-        self.failIf(mockmkdir.called)
2410-
2411-        # You passed!
2412-
2413     @mock.patch('time.time')
2414     @mock.patch('os.mkdir')
2415     @mock.patch('__builtin__.open')
2416hunk ./src/allmydata/test/test_backends.py 53
2417                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2418         mockopen.side_effect = call_open
2419 
2420-        # Now begin the test.
2421-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2422-
2423-        self.failIf(mockisdir.called)
2424-        self.failIf(mocklistdir.called)
2425-        self.failIf(mockopen.called)
2426-        self.failIf(mockmkdir.called)
2427-        self.failIf(mocktime.called)
2428-
2429-        # You passed!
2430-
2431-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2432-    def setUp(self):
2433-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2434-
2435-    @mock.patch('os.mkdir')
2436-    @mock.patch('__builtin__.open')
2437-    @mock.patch('os.listdir')
2438-    @mock.patch('os.path.isdir')
2439-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2440-        """ Write a new share. """
2441-
2442-        # Now begin the test.
2443-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2444-        bs[0].remote_write(0, 'a')
2445-        self.failIf(mockisdir.called)
2446-        self.failIf(mocklistdir.called)
2447-        self.failIf(mockopen.called)
2448-        self.failIf(mockmkdir.called)
2449+        def call_isdir(fname):
2450+            if fname == os.path.join(tempdir,'shares'):
2451+                return True
2452+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2453+                return True
2454+            else:
2455+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2456+        mockisdir.side_effect = call_isdir
2457 
2458hunk ./src/allmydata/test/test_backends.py 62
2459-    @mock.patch('os.path.exists')
2460-    @mock.patch('os.path.getsize')
2461-    @mock.patch('__builtin__.open')
2462-    @mock.patch('os.listdir')
2463-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2464-        """ This tests whether the code correctly finds and reads
2465-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2466-        servers. There is a similar test in test_download, but that one
2467-        is from the perspective of the client and exercises a deeper
2468-        stack of code. This one is for exercising just the
2469-        StorageServer object. """
2470+        def call_mkdir(fname, mode):
2471+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2472+            self.failUnlessEqual(0777, mode)
2473+            if fname == tempdir:
2474+                return None
2475+            elif fname == os.path.join(tempdir,'shares'):
2476+                return None
2477+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2478+                return None
2479+            else:
2480+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2481+        mockmkdir.side_effect = call_mkdir
2482 
2483         # Now begin the test.
2484hunk ./src/allmydata/test/test_backends.py 76
2485-        bs = self.s.remote_get_buckets('teststorage_index')
2486+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2487 
2488hunk ./src/allmydata/test/test_backends.py 78
2489-        self.failUnlessEqual(len(bs), 0)
2490-        self.failIf(mocklistdir.called)
2491-        self.failIf(mockopen.called)
2492-        self.failIf(mockgetsize.called)
2493-        self.failIf(mockexists.called)
2494+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2495 
2496 
2497 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2498hunk ./src/allmydata/test/test_backends.py 193
2499         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2500 
2501 
2502+
2503+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2504+    @mock.patch('time.time')
2505+    @mock.patch('os.mkdir')
2506+    @mock.patch('__builtin__.open')
2507+    @mock.patch('os.listdir')
2508+    @mock.patch('os.path.isdir')
2509+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2510+        """ This tests whether a file system backend instance can be
2511+        constructed. To pass the test, it has to use the
2512+        filesystem in only the prescribed ways. """
2513+
2514+        def call_open(fname, mode):
2515+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2516+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2517+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2518+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2519+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2520+                return StringIO()
2521+            else:
2522+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2523+        mockopen.side_effect = call_open
2524+
2525+        def call_isdir(fname):
2526+            if fname == os.path.join(tempdir,'shares'):
2527+                return True
2528+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2529+                return True
2530+            else:
2531+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2532+        mockisdir.side_effect = call_isdir
2533+
2534+        def call_mkdir(fname, mode):
2535+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2536+            self.failUnlessEqual(0777, mode)
2537+            if fname == tempdir:
2538+                return None
2539+            elif fname == os.path.join(tempdir,'shares'):
2540+                return None
2541+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2542+                return None
2543+            else:
2544+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2545+        mockmkdir.side_effect = call_mkdir
2546+
2547+        # Now begin the test.
2548+        DASCore('teststoredir', expiration_policy)
2549+
2550+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2551}
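The checkpoint5 tests above rely on one pattern throughout: patch a filesystem call with a `side_effect` that whitelists the paths the code under test may touch and fails on anything else. A minimal self-contained sketch of that pattern (the allowed paths and function names here are illustrative, not taken from the Tahoe code):

```python
import os
try:
    from unittest import mock   # Python 3
except ImportError:
    import mock                 # standalone mock, as used in the patch

allowed = set(['teststoredir', os.path.join('teststoredir', 'shares')])

def guarded_mkdir(path, mode):
    # Fail loudly on any mkdir outside the prescribed set of directories.
    assert mode == 0o777, "unexpected mode %r" % (mode,)
    if path not in allowed:
        raise AssertionError("tried to mkdir %r" % (path,))

with mock.patch('os.mkdir') as mockmkdir:
    mockmkdir.side_effect = guarded_mkdir
    os.mkdir('teststoredir', 0o777)   # whitelisted: no exception raised
    mockmkdir.assert_called_once_with('teststoredir', 0o777)
```

Any exception raised inside a `side_effect` propagates out of the mocked call, which is what lets the tests turn an unexpected filesystem access into a test failure.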
2552[checkpoint 6
2553wilcoxjg@gmail.com**20110706190824
2554 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2555] {
2556hunk ./src/allmydata/interfaces.py 100
2557                          renew_secret=LeaseRenewSecret,
2558                          cancel_secret=LeaseCancelSecret,
2559                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2560-                         allocated_size=Offset, canary=Referenceable):
2561+                         allocated_size=Offset,
2562+                         canary=Referenceable):
2563         """
2564hunk ./src/allmydata/interfaces.py 103
2565-        @param storage_index: the index of the bucket to be created or
2566+        @param storage_index: the index of the shares to be created or
2567                               increfed.
2568hunk ./src/allmydata/interfaces.py 105
2569-        @param sharenums: these are the share numbers (probably between 0 and
2570-                          99) that the sender is proposing to store on this
2571-                          server.
2572-        @param renew_secret: This is the secret used to protect bucket refresh
2573+        @param renew_secret: This is the secret used to protect shares refresh
2574                              This secret is generated by the client and
2575                              stored for later comparison by the server. Each
2576                              server is given a different secret.
2577hunk ./src/allmydata/interfaces.py 109
2578-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2579-        @param canary: If the canary is lost before close(), the bucket is
2580+        @param cancel_secret: Like renew_secret, but protects shares decref.
2581+        @param sharenums: these are the share numbers (probably between 0 and
2582+                          99) that the sender is proposing to store on this
2583+                          server.
2584+        @param allocated_size: XXX The size of the shares the client wishes to store.
2585+        @param canary: If the canary is lost before close(), the shares are
2586                        deleted.
2587hunk ./src/allmydata/interfaces.py 116
2588+
2589         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2590                  already have and allocated is what we hereby agree to accept.
2591                  New leases are added for shares in both lists.
2592hunk ./src/allmydata/interfaces.py 128
2593                   renew_secret=LeaseRenewSecret,
2594                   cancel_secret=LeaseCancelSecret):
2595         """
2596-        Add a new lease on the given bucket. If the renew_secret matches an
2597+        Add a new lease on the given shares. If the renew_secret matches an
2598         existing lease, that lease will be renewed instead. If there is no
2599         bucket for the given storage_index, return silently. (note that in
2600         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2601hunk ./src/allmydata/storage/server.py 17
2602 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2603      create_mutable_sharefile
2604 
2605-from zope.interface import implements
2606-
2607 # storage/
2608 # storage/shares/incoming
2609 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2610hunk ./src/allmydata/test/test_backends.py 6
2611 from StringIO import StringIO
2612 
2613 from allmydata.test.common_util import ReallyEqualMixin
2614+from allmydata.util.assertutil import _assert
2615 
2616 import mock, os
2617 
2618hunk ./src/allmydata/test/test_backends.py 92
2619                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2620             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2621                 return StringIO()
2622+            else:
2623+                _assert(False, "The tester code doesn't recognize this case.") 
2624+
2625         mockopen.side_effect = call_open
2626         testbackend = DASCore(tempdir, expiration_policy)
2627         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2628hunk ./src/allmydata/test/test_backends.py 109
2629 
2630         def call_listdir(dirname):
2631             self.failUnlessReallyEqual(dirname, sharedirname)
2632-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2633+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2634 
2635         mocklistdir.side_effect = call_listdir
2636 
2637hunk ./src/allmydata/test/test_backends.py 113
2638+        def call_isdir(dirname):
2639+            self.failUnlessReallyEqual(dirname, sharedirname)
2640+            return True
2641+
2642+        mockisdir.side_effect = call_isdir
2643+
2644+        def call_mkdir(dirname, permissions):
2645+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2646+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2647+            else:
2648+                return True
2649+
2650+        mockmkdir.side_effect = call_mkdir
2651+
2652         class MockFile:
2653             def __init__(self):
2654                 self.buffer = ''
2655hunk ./src/allmydata/test/test_backends.py 156
2656             return sharefile
2657 
2658         mockopen.side_effect = call_open
2659+
2660         # Now begin the test.
2661         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2662         bs[0].remote_write(0, 'a')
2663hunk ./src/allmydata/test/test_backends.py 161
2664         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2665+       
2666+        # Now test the allocated_size method.
2667+        spaceint = self.s.allocated_size()
2668 
2669     @mock.patch('os.path.exists')
2670     @mock.patch('os.path.getsize')
2671}
2672[checkpoint 7
2673wilcoxjg@gmail.com**20110706200820
2674 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2675] hunk ./src/allmydata/test/test_backends.py 164
2676         
2677         # Now test the allocated_size method.
2678         spaceint = self.s.allocated_size()
2679+        self.failUnlessReallyEqual(spaceint, 1)
2680 
2681     @mock.patch('os.path.exists')
2682     @mock.patch('os.path.getsize')
2683[checkpoint8
2684wilcoxjg@gmail.com**20110706223126
2685 Ignore-this: 97336180883cb798b16f15411179f827
2686   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2687] hunk ./src/allmydata/test/test_backends.py 32
2688                      'cutoff_date' : None,
2689                      'sharetypes' : None}
2690 
2691+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2692+    def setUp(self):
2693+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2694+
2695+    @mock.patch('os.mkdir')
2696+    @mock.patch('__builtin__.open')
2697+    @mock.patch('os.listdir')
2698+    @mock.patch('os.path.isdir')
2699+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2700+        """ Write a new share. """
2701+
2702+        # Now begin the test.
2703+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2704+        bs[0].remote_write(0, 'a')
2705+        self.failIf(mockisdir.called)
2706+        self.failIf(mocklistdir.called)
2707+        self.failIf(mockopen.called)
2708+        self.failIf(mockmkdir.called)
2709+
2710 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2711     @mock.patch('time.time')
2712     @mock.patch('os.mkdir')
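The checkpoint8 patch above adds TestServerNullBackend, which asserts that writing a share through a NullCore backend touches no filesystem calls at all. A minimal sketch of such a null backend follows; the class and method names here are illustrative assumptions (only NullCore and remote_write appear in the patch itself), not the tahoe-lafs API:

```python
class NullBucketWriter:
    """Accepts remote_write calls and discards the data: no file I/O at all."""
    def remote_write(self, offset, data):
        pass  # deliberately a no-op

class NullBackend:
    """Mock-like backend reporting 'unlimited' space, in the spirit of the
    NullCore used by TestServerNullBackend (names illustrative)."""
    def get_available_space(self):
        return None  # None here stands for 'no limit'

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()
```

Because every method is a no-op, a test can assert that the mocked os.mkdir, open, os.listdir, and os.path.isdir were never called, which is exactly what test_write_share does above.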
2713[checkpoint 9
2714wilcoxjg@gmail.com**20110707042942
2715 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2716] {
2717hunk ./src/allmydata/storage/backends/das/core.py 88
2718                     filename = os.path.join(finalstoragedir, f)
2719                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2720         except OSError:
2721-            # Commonly caused by there being no buckets at all.
2722+            # Commonly caused by there being no shares at all.
2723             pass
2724         
2725     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2726hunk ./src/allmydata/storage/backends/das/core.py 141
2727         self.storage_index = storageindex
2728         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2729         self._max_size = max_size
2730+        self.incomingdir = os.path.join(sharedir, 'incoming')
2731+        si_dir = storage_index_to_dir(storageindex)
2732+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2733+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2734         if create:
2735             # touch the file, so later callers will see that we're working on
2736             # it. Also construct the metadata.
2737hunk ./src/allmydata/storage/backends/das/core.py 177
2738             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2739         self._data_offset = 0xc
2740 
2741+    def close(self):
2742+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2743+        fileutil.rename(self.incominghome, self.finalhome)
2744+        try:
2745+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2746+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2747+            # these directories lying around forever, but the delete might
2748+            # fail if we're working on another share for the same storage
2749+            # index (like ab/abcde/5). The alternative approach would be to
2750+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2751+            # ShareWriter), each of which is responsible for a single
2752+            # directory on disk, and have them use reference counting of
2753+            # their children to know when they should do the rmdir. This
2754+            # approach is simpler, but relies on os.rmdir refusing to delete
2755+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2756+            os.rmdir(os.path.dirname(self.incominghome))
2757+            # we also delete the grandparent (prefix) directory, .../ab ,
2758+            # again to avoid leaving directories lying around. This might
2759+            # fail if there is another bucket open that shares a prefix (like
2760+            # ab/abfff).
2761+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2762+            # we leave the great-grandparent (incoming/) directory in place.
2763+        except EnvironmentError:
2764+            # ignore the "can't rmdir because the directory is not empty"
2765+            # exceptions, those are normal consequences of the
2766+            # above-mentioned conditions.
2767+            pass
2768+
2770+    def stat(self):
2771+        return os.stat(self.finalhome)[stat.ST_SIZE]
2772+
2773     def get_shnum(self):
2774         return self.shnum
2775 
2776hunk ./src/allmydata/storage/immutable.py 7
2777 
2778 from zope.interface import implements
2779 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2780-from allmydata.util import base32, fileutil, log
2781+from allmydata.util import base32, log
2782 from allmydata.util.assertutil import precondition
2783 from allmydata.util.hashutil import constant_time_compare
2784 from allmydata.storage.lease import LeaseInfo
2785hunk ./src/allmydata/storage/immutable.py 44
2786     def remote_close(self):
2787         precondition(not self.closed)
2788         start = time.time()
2789-
2790-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2791-        fileutil.rename(self.incominghome, self.finalhome)
2792-        try:
2793-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2794-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2795-            # these directories lying around forever, but the delete might
2796-            # fail if we're working on another share for the same storage
2797-            # index (like ab/abcde/5). The alternative approach would be to
2798-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2799-            # ShareWriter), each of which is responsible for a single
2800-            # directory on disk, and have them use reference counting of
2801-            # their children to know when they should do the rmdir. This
2802-            # approach is simpler, but relies on os.rmdir refusing to delete
2803-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2804-            os.rmdir(os.path.dirname(self.incominghome))
2805-            # we also delete the grandparent (prefix) directory, .../ab ,
2806-            # again to avoid leaving directories lying around. This might
2807-            # fail if there is another bucket open that shares a prefix (like
2808-            # ab/abfff).
2809-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2810-            # we leave the great-grandparent (incoming/) directory in place.
2811-        except EnvironmentError:
2812-            # ignore the "can't rmdir because the directory is not empty"
2813-            # exceptions, those are normal consequences of the
2814-            # above-mentioned conditions.
2815-            pass
2816+        self._sharefile.close()
2817         self._sharefile = None
2818         self.closed = True
2819         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2820hunk ./src/allmydata/storage/immutable.py 49
2821 
2822-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2823+        filelen = self._sharefile.stat()
2824         self.ss.bucket_writer_closed(self, filelen)
2825         self.ss.add_latency("close", time.time() - start)
2826         self.ss.count("close")
2827hunk ./src/allmydata/storage/server.py 45
2828         self._active_writers = weakref.WeakKeyDictionary()
2829         self.backend = backend
2830         self.backend.setServiceParent(self)
2831+        self.backend.set_storage_server(self)
2832         log.msg("StorageServer created", facility="tahoe.storage")
2833 
2834         self.latencies = {"allocate": [], # immutable
2835hunk ./src/allmydata/storage/server.py 220
2836 
2837         for shnum in (sharenums - alreadygot):
2838             if (not limited) or (remaining_space >= max_space_per_bucket):
2839-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2840-                self.backend.set_storage_server(self)
2841                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2842                                                      max_space_per_bucket, lease_info, canary)
2843                 bucketwriters[shnum] = bw
2844hunk ./src/allmydata/test/test_backends.py 117
2845         mockopen.side_effect = call_open
2846         testbackend = DASCore(tempdir, expiration_policy)
2847         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2848-
2849+
2850+    @mock.patch('allmydata.util.fileutil.get_available_space')
2851     @mock.patch('time.time')
2852     @mock.patch('os.mkdir')
2853     @mock.patch('__builtin__.open')
2854hunk ./src/allmydata/test/test_backends.py 124
2855     @mock.patch('os.listdir')
2856     @mock.patch('os.path.isdir')
2857-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2858+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,
2859+                             mockget_available_space):
2860         """ Write a new share. """
2861 
2862         def call_listdir(dirname):
2863hunk ./src/allmydata/test/test_backends.py 148
2864 
2865         mockmkdir.side_effect = call_mkdir
2866 
2867+        def call_get_available_space(storedir, reserved_space):
2868+            self.failUnlessReallyEqual(storedir, tempdir)
2869+            return 1
2870+
2871+        mockget_available_space.side_effect = call_get_available_space
2872+
2873         class MockFile:
2874             def __init__(self):
2875                 self.buffer = ''
2876hunk ./src/allmydata/test/test_backends.py 188
2877         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2878         bs[0].remote_write(0, 'a')
2879         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2880-       
2881+
2882+        # What happens when there's not enough space for the client's request?
2883+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2884+
2885         # Now test the allocated_size method.
2886         spaceint = self.s.allocated_size()
2887         self.failUnlessReallyEqual(spaceint, 1)
2888}
2889
2890Context:
2891
2892[add Protovis.js-based download-status timeline visualization
2893Brian Warner <warner@lothar.com>**20110629222606
2894 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
2895 
2896 provide status overlap info on the webapi t=json output, add decode/decrypt
2897 rate tooltips, add zoomin/zoomout buttons
2898]
2899[add more download-status data, fix tests
2900Brian Warner <warner@lothar.com>**20110629222555
2901 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
2902]
2903[prepare for viz: improve DownloadStatus events
2904Brian Warner <warner@lothar.com>**20110629222542
2905 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
2906 
2907 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
2908]
2909[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
2910zooko@zooko.com**20110629185711
2911 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
2912]
2913[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
2914david-sarah@jacaranda.org**20110130235809
2915 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
2916]
2917[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
2918david-sarah@jacaranda.org**20110626054124
2919 Ignore-this: abb864427a1b91bd10d5132b4589fd90
2920]
2921[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
2922david-sarah@jacaranda.org**20110623205528
2923 Ignore-this: c63e23146c39195de52fb17c7c49b2da
2924]
2925[Rename test_package_initialization.py to (much shorter) test_import.py .
2926Brian Warner <warner@lothar.com>**20110611190234
2927 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
2928 
2929 The former name was making my 'ls' listings hard to read, by forcing them
2930 down to just two columns.
2931]
2932[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
2933zooko@zooko.com**20110611163741
2934 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
2935 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
2936 fixes #1412
2937]
2938[wui: right-align the size column in the WUI
2939zooko@zooko.com**20110611153758
2940 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
2941 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
2942 fixes #1412
2943]
2944[docs: three minor fixes
2945zooko@zooko.com**20110610121656
2946 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
2947 CREDITS for arc for stats tweak
2948 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
2949 English usage tweak
2950]
2951[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
2952david-sarah@jacaranda.org**20110609223719
2953 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
2954]
2955[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
2956wilcoxjg@gmail.com**20110527120135
2957 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
2958 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
2959 NEWS.rst, stats.py: documentation of change to get_latencies
2960 stats.rst: now documents percentile modification in get_latencies
2961 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
2962 fixes #1392
2963]
2964[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
2965david-sarah@jacaranda.org**20110517011214
2966 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
2967]
2968[docs: convert NEWS to NEWS.rst and change all references to it.
2969david-sarah@jacaranda.org**20110517010255
2970 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
2971]
2972[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
2973david-sarah@jacaranda.org**20110512140559
2974 Ignore-this: 784548fc5367fac5450df1c46890876d
2975]
2976[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
2977david-sarah@jacaranda.org**20110130164923
2978 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
2979]
2980[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
2981zooko@zooko.com**20110128142006
2982 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
2983 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
2984]
2985[M-x whitespace-cleanup
2986zooko@zooko.com**20110510193653
2987 Ignore-this: dea02f831298c0f65ad096960e7df5c7
2988]
2989[docs: fix typo in running.rst, thanks to arch_o_median
2990zooko@zooko.com**20110510193633
2991 Ignore-this: ca06de166a46abbc61140513918e79e8
2992]
2993[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
2994david-sarah@jacaranda.org**20110204204902
2995 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
2996]
2997[relnotes.txt: forseeable -> foreseeable. refs #1342
2998david-sarah@jacaranda.org**20110204204116
2999 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
3000]
3001[replace remaining .html docs with .rst docs
3002zooko@zooko.com**20110510191650
3003 Ignore-this: d557d960a986d4ac8216d1677d236399
3004 Remove install.html (long since deprecated).
3005 Also replace some obsolete references to install.html with references to quickstart.rst.
3006 Fix some broken internal references within docs/historical/historical_known_issues.txt.
3007 Thanks to Ravi Pinjala and Patrick McDonald.
3008 refs #1227
3009]
3010[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
3011zooko@zooko.com**20110428055232
3012 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
3013]
3014[munin tahoe_files plugin: fix incorrect file count
3015francois@ctrlaltdel.ch**20110428055312
3016 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
3017 fixes #1391
3018]
3019[corrected "k must never be smaller than N" to "k must never be greater than N"
3020secorp@allmydata.org**20110425010308
3021 Ignore-this: 233129505d6c70860087f22541805eac
3022]
3023[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
3024david-sarah@jacaranda.org**20110411190738
3025 Ignore-this: 7847d26bc117c328c679f08a7baee519
3026]
3027[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
3028david-sarah@jacaranda.org**20110410155844
3029 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
3030]
3031[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
3032david-sarah@jacaranda.org**20110410155705
3033 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
3034]
3035[remove unused variable detected by pyflakes
3036zooko@zooko.com**20110407172231
3037 Ignore-this: 7344652d5e0720af822070d91f03daf9
3038]
3039[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
3040david-sarah@jacaranda.org**20110401202750
3041 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
3042]
3043[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
3044Brian Warner <warner@lothar.com>**20110325232511
3045 Ignore-this: d5307faa6900f143193bfbe14e0f01a
3046]
3047[control.py: remove all uses of s.get_serverid()
3048warner@lothar.com**20110227011203
3049 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3050]
3051[web: remove some uses of s.get_serverid(), not all
3052warner@lothar.com**20110227011159
3053 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3054]
3055[immutable/downloader/fetcher.py: remove all get_serverid() calls
3056warner@lothar.com**20110227011156
3057 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3058]
3059[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3060warner@lothar.com**20110227011153
3061 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3062 
3063 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3064 _shares_from_server dict was being popped incorrectly (using shnum as the
3065 index instead of serverid). I'm still thinking through the consequences of
3066 this bug. It was probably benign and really hard to detect. I think it would
3067 cause us to incorrectly believe that we're pulling too many shares from a
3068 server, and thus prefer a different server rather than asking for a second
3069 share from the first server. The diversity code is intended to spread out the
3070 number of shares simultaneously being requested from each server, but with
3071 this bug, it might be spreading out the total number of shares requested at
3072 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
3073 segment, so the effect doesn't last very long).
3074]
3075[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3076warner@lothar.com**20110227011150
3077 Ignore-this: d8d56dd8e7b280792b40105e13664554
3078 
3079 test_download.py: create+check MyShare instances better, make sure they share
3080 Server objects, now that finder.py cares
3081]
3082[immutable/downloader/finder.py: reduce use of get_serverid(), one left
3083warner@lothar.com**20110227011146
3084 Ignore-this: 5785be173b491ae8a78faf5142892020
3085]
3086[immutable/offloaded.py: reduce use of get_serverid() a bit more
3087warner@lothar.com**20110227011142
3088 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
3089]
3090[immutable/upload.py: reduce use of get_serverid()
3091warner@lothar.com**20110227011138
3092 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
3093]
3094[immutable/checker.py: remove some uses of s.get_serverid(), not all
3095warner@lothar.com**20110227011134
3096 Ignore-this: e480a37efa9e94e8016d826c492f626e
3097]
3098[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
3099warner@lothar.com**20110227011132
3100 Ignore-this: 6078279ddf42b179996a4b53bee8c421
3101 MockIServer stubs
3102]
3103[upload.py: rearrange _make_trackers a bit, no behavior changes
3104warner@lothar.com**20110227011128
3105 Ignore-this: 296d4819e2af452b107177aef6ebb40f
3106]
3107[happinessutil.py: finally rename merge_peers to merge_servers
3108warner@lothar.com**20110227011124
3109 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
3110]
3111[test_upload.py: factor out FakeServerTracker
3112warner@lothar.com**20110227011120
3113 Ignore-this: 6c182cba90e908221099472cc159325b
3114]
3115[test_upload.py: server-vs-tracker cleanup
3116warner@lothar.com**20110227011115
3117 Ignore-this: 2915133be1a3ba456e8603885437e03
3118]
3119[happinessutil.py: server-vs-tracker cleanup
3120warner@lothar.com**20110227011111
3121 Ignore-this: b856c84033562d7d718cae7cb01085a9
3122]
3123[upload.py: more tracker-vs-server cleanup
3124warner@lothar.com**20110227011107
3125 Ignore-this: bb75ed2afef55e47c085b35def2de315
3126]
3127[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
3128warner@lothar.com**20110227011103
3129 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
3130]
3131[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
3132warner@lothar.com**20110227011100
3133 Ignore-this: 7ea858755cbe5896ac212a925840fe68
3134 
3135 No behavioral changes, just updating variable/method names and log messages.
3136 The effects outside these three files should be minimal: some exception
3137 messages changed (to say "server" instead of "peer"), and some internal class
3138 names were changed. A few things still use "peer" to minimize external
3139 changes, like UploadResults.timings["peer_selection"] and
3140 happinessutil.merge_peers, which can be changed later.
3141]
3142[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
3143warner@lothar.com**20110227011056
3144 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
3145]
3146[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
3147warner@lothar.com**20110227011051
3148 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
3149]
3150[test: increase timeout on a network test because Francois's ARM machine hit that timeout
3151zooko@zooko.com**20110317165909
3152 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
3153 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
3154]
3155[docs/configuration.rst: add a "Frontend Configuration" section
3156Brian Warner <warner@lothar.com>**20110222014323
3157 Ignore-this: 657018aa501fe4f0efef9851628444ca
3158 
3159 this points to docs/frontends/*.rst, which were previously underlinked
3160]
3161[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3162"Brian Warner <warner@lothar.com>"**20110221061544
3163 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3164]
3165[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3166david-sarah@jacaranda.org**20110221015817
3167 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3168]
3169[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3170david-sarah@jacaranda.org**20110221020125
3171 Ignore-this: b0744ed58f161bf188e037bad077fc48
3172]
3173[Refactor StorageFarmBroker handling of servers
3174Brian Warner <warner@lothar.com>**20110221015804
3175 Ignore-this: 842144ed92f5717699b8f580eab32a51
3176 
3177 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3178 "descriptor" with "server". Other replacements:
3179 
3180  get_all_servers -> get_connected_servers/get_known_servers
3181  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3182 
3183 This change still needs to be pushed further down: lots of code is now
3184 getting the IServer and then distributing (peerid, rref) internally.
3185 Instead, it ought to distribute the IServer internally and delay
3186 extracting a serverid or rref until the last moment.
3187 
3188 no_network.py was updated to retain parallelism.
3189]
3190[TAG allmydata-tahoe-1.8.2
3191warner@lothar.com**20110131020101]
3192Patch bundle hash:
319310d4c3bb24707942370b71fc11ad2f6a18ac6835