Ticket #999: checkpoint10.darcs.patch

File checkpoint10.darcs.patch, 141.6 KB (added by arch_o_median, at 2011-07-07T17:45:22Z)

Completed coverage of remote_allocate_buckets
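For context, the tests in this patch stub out all filesystem access by patching the built-in `open` with a `side_effect` callable (the patch targets Python 2, so it patches `__builtin__.open`). A minimal Python 3 sketch of the same technique, using hypothetical state-file names and a hypothetical `read_state` helper, might look like:

```python
from io import StringIO
from unittest import mock

def read_state(path):
    # Code under test: reads a state file from disk.
    with open(path) as f:
        return f.read()

def fake_open(fname, mode='r'):
    # Route specific paths to in-memory objects instead of the real disk.
    if fname == 'testdir/lease_checker.history':
        return StringIO('history-contents')
    # Anything else is treated as missing, as the tests below do.
    raise IOError(2, "No such file or directory: %r" % fname)

# While the patch is active, every open() call goes through fake_open.
with mock.patch('builtins.open', side_effect=fake_open):
    assert read_state('testdir/lease_checker.history') == 'history-contents'
```

The `side_effect` callable receives the same arguments the code under test passed to `open`, which is what lets the tests assert on the exact filenames and modes used.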

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10
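The checkpoint8 note above refers to the null backend that appears later in this patch: a backend that reports unlimited space and discards all writes, so tests can exercise StorageServer's allocation logic without touching a disk. A simplified standalone sketch of that idea (class and method names follow the patch, but this is not the patch code itself):

```python
class NullBucketWriter:
    """Accepts remote_write calls and discards the data."""
    def remote_write(self, offset, data):
        return

class NullBackend:
    """Backend that stores nothing and reports no space limit."""
    def get_available_space(self):
        # None is the convention for "no limit / no disk-stats API",
        # which the server treats as effectively unlimited space.
        return None

    def get_bucket_shares(self, storage_index):
        # Never holds any shares, so allocation always starts fresh.
        return set()

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()

backend = NullBackend()
assert backend.get_available_space() is None
backend.make_bucket_writer('si', 0, 100, None, None).remote_write(0, 'a')
```

Because every method is a no-op, any filesystem call made while this backend is in use must have come from StorageServer itself, which is exactly what the new tests assert.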

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests that a write through remote_allocate_buckets
+        produces the expected share file contents. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass

 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this

-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446

     minimum_cycle_time = 60*60 # we don't need this more than once an hour

-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes

     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.

     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:

     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day

-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)

     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"

     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87

     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
        seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)

hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return

 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service

 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__

hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler

hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler

-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")

hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)

     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178

-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)

-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats

-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space

     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)

hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)

         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316

         max_space_per_bucket = allocated_size

-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335

         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))

         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]

-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass

     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock

 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend

 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'

 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
 @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """

         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open

         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)

         # You passed!

hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
740+        """ Write a new share. """
741+
742+        # Now begin the test.
743+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
744+        bs[0].remote_write(0, 'a')
745+        self.failIf(mockisdir.called)
746+        self.failIf(mocklistdir.called)
747+        self.failIf(mockopen.called)
748+        self.failIf(mockmkdir.called)
749+
750+    @mock.patch('os.path.exists')
751+    @mock.patch('os.path.getsize')
752+    @mock.patch('__builtin__.open')
753+    @mock.patch('os.listdir')
754+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
755+        """ This tests whether the code correctly finds and reads
756+        shares written out by old (Tahoe-LAFS <= v1.8.2)
757+        servers. There is a similar test in test_download, but that one
758+        is from the perspective of the client and exercises a deeper
759+        stack of code. This one is for exercising just the
760+        StorageServer object. """
761+
762+        # Now begin the test.
763+        bs = self.s.remote_get_buckets('teststorage_index')
764+
765+        self.failUnlessEqual(len(bs), 0)
766+        self.failIf(mocklistdir.called)
767+        self.failIf(mockopen.called)
768+        self.failIf(mockgetsize.called)
769+        self.failIf(mockexists.called)
770+
771+
772+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
773     @mock.patch('__builtin__.open')
774     def setUp(self, mockopen):
775         def call_open(fname, mode):
776hunk ./src/allmydata/test/test_backends.py 126
777                 return StringIO()
778         mockopen.side_effect = call_open
779 
780-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
781-
782+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
783 
784     @mock.patch('time.time')
785     @mock.patch('os.mkdir')
786hunk ./src/allmydata/test/test_backends.py 134
787     @mock.patch('os.listdir')
788     @mock.patch('os.path.isdir')
789     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
790-        """Handle a report of corruption."""
791+        """ Write a new share. """
792 
793         def call_listdir(dirname):
794             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
795hunk ./src/allmydata/test/test_backends.py 173
796         mockopen.side_effect = call_open
797         # Now begin the test.
798         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
799-        print bs
800         bs[0].remote_write(0, 'a')
801         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
802 
803hunk ./src/allmydata/test/test_backends.py 176
804-
805     @mock.patch('os.path.exists')
806     @mock.patch('os.path.getsize')
807     @mock.patch('__builtin__.open')
808hunk ./src/allmydata/test/test_backends.py 218
809 
810         self.failUnlessEqual(len(bs), 1)
811         b = bs[0]
812+        # These should match by definition; the next two cases cover edge behaviors that are not completely unambiguous.
813         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
814         # If you try to read past the end you get as much data as is there.
815         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
816hunk ./src/allmydata/test/test_backends.py 224
817         # If you start reading past the end of the file you get the empty string.
818         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
819+
820+
821}
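The tests above all rely on the same isolation pattern: patch the os-level entry points, run the code under test, and then assert that none of the mocks fired. A standalone sketch of that pattern (hypothetical helper name; it uses Python 3's unittest.mock rather than the mock package the patch imports):

```python
import os
from unittest import mock

def run_without_filesystem(fn):
    """Run fn with os.mkdir, os.listdir, and os.path.isdir patched out,
    mirroring the isolation pattern of the tests above. Returns
    (result, touched), where touched is True if any patched call fired."""
    with mock.patch('os.mkdir') as mockmkdir, \
         mock.patch('os.listdir') as mocklistdir, \
         mock.patch('os.path.isdir') as mockisdir:
        result = fn()
        touched = mockmkdir.called or mocklistdir.called or mockisdir.called
    return result, touched
```

A null-backend server should come back with touched == False; anything that lists or creates directories trips the flag.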
822[a temp patch used as a snapshot
823wilcoxjg@gmail.com**20110626052732
824 Ignore-this: 95f05e314eaec870afa04c76d979aa44
825] {
826hunk ./docs/configuration.rst 637
827   [storage]
828   enabled = True
829   readonly = True
830-  sizelimit = 10000000000
831 
832 
833   [helper]
834hunk ./docs/garbage-collection.rst 16
835 
836 When a file or directory in the virtual filesystem is no longer referenced,
837 the space that its shares occupied on each storage server can be freed,
838-making room for other shares. Tahoe currently uses a garbage collection
839+making room for other shares. Tahoe uses a garbage collection
840 ("GC") mechanism to implement this space-reclamation process. Each share has
841 one or more "leases", which are managed by clients who want the
842 file/directory to be retained. The storage server accepts each share for a
843hunk ./docs/garbage-collection.rst 34
844 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
845 If lease renewal occurs quickly and with 100% reliability, then any renewal
846 time that is shorter than the lease duration will suffice, but a larger ratio
847-of duration-over-renewal-time will be more robust in the face of occasional
848+of lease duration to renewal time will be more robust in the face of occasional
849 delays or failures.
850 
851 The current recommended values for a small Tahoe grid are to renew the leases
852replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
853hunk ./src/allmydata/client.py 260
854             sharetypes.append("mutable")
855         expiration_sharetypes = tuple(sharetypes)
856 
857+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
858+            pass # TODO (placeholder): construct the filesystem backend here
859+        # TODO (placeholder): pass the chosen backend to StorageServer below
860         ss = StorageServer(storedir, self.nodeid,
861                            reserved_space=reserved,
862                            discard_storage=discard,
863hunk ./src/allmydata/storage/crawler.py 234
864         f = open(tmpfile, "wb")
865         pickle.dump(self.state, f)
866         f.close()
867-        fileutil.move_into_place(tmpfile, self.statefile)
868+        fileutil.move_into_place(tmpfile, self.statefname)
869 
870     def startService(self):
871         # arrange things to look like we were just sleeping, so
872}
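The statefile/statefname fix above touches the crawler's save path, which writes the pickled state to a temp file and then moves it into place, so a crash mid-write never leaves a truncated state file. A standalone sketch of that save/load pattern (illustrative names; os.replace stands in for Tahoe's fileutil.move_into_place):

```python
import os
import pickle
import tempfile

def save_state_atomically(state, statefname):
    """Serialize state to a temp file in the same directory, then
    atomically move it into place (atomic on POSIX; os.replace also
    overwrites on Windows)."""
    dirname = os.path.dirname(os.path.abspath(statefname))
    fd, tmpfile = tempfile.mkstemp(prefix="state.", dir=dirname)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmpfile, statefname)

def load_state(statefname, default=None):
    """Load previously saved state, falling back to default when the
    state file does not exist yet (the crawler's EnvironmentError case)."""
    try:
        with open(statefname, "rb") as f:
            return pickle.load(f)
    except EnvironmentError:
        return default
```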
873[snapshot of progress on backend implementation (not suitable for trunk)
874wilcoxjg@gmail.com**20110626053244
875 Ignore-this: 50c764af791c2b99ada8289546806a0a
876] {
877adddir ./src/allmydata/storage/backends
878adddir ./src/allmydata/storage/backends/das
879move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
880adddir ./src/allmydata/storage/backends/null
881hunk ./src/allmydata/interfaces.py 270
882         store that on disk.
883         """
884 
885+class IStorageBackend(Interface):
886+    """
887+    Objects of this kind live on the server side and are used by the
888+    storage server object.
889+    """
890+    def get_available_space(self, reserved_space):
891+        """ Returns available space for share storage in bytes, or
892+        None if this information is not available or if the available
893+        space is unlimited.
894+
895+        If the backend is configured for read-only mode then this will
896+        return 0.
897+
898+        reserved_space is how many bytes to subtract from the answer, so
899+        you can pass how many bytes you would like to leave unused on this
900+        filesystem as reserved_space. """
901+
902+    def get_bucket_shares(self):
903+        """XXX"""
904+
905+    def get_share(self):
906+        """XXX"""
907+
908+    def make_bucket_writer(self):
909+        """XXX"""
910+
911+class IStorageBackendShare(Interface):
912+    """
913+    This object represents up to all of the data of a single share. It
914+    is intended for lazy evaluation, such that in many use cases
915+    substantially less than all of the share data will be accessed.
916+    """
917+    def is_complete(self):
918+        """
919+        Returns the share state, or None if the share does not exist.
920+        """
921+
922 class IStorageBucketWriter(Interface):
923     """
924     Objects of this kind live on the client side.
925hunk ./src/allmydata/interfaces.py 2492
926 
927 class EmptyPathnameComponentError(Exception):
928     """The webapi disallows empty pathname components."""
929+
930+class IShareStore(Interface):
931+    pass
932+
933addfile ./src/allmydata/storage/backends/__init__.py
934addfile ./src/allmydata/storage/backends/das/__init__.py
935addfile ./src/allmydata/storage/backends/das/core.py
936hunk ./src/allmydata/storage/backends/das/core.py 1
937+from allmydata.interfaces import IStorageBackend
938+from allmydata.storage.backends.base import Backend
939+from allmydata.util.hashutil import constant_time_compare
940+from allmydata.util.assertutil import precondition
941+
942+import os, re, stat, weakref, struct, time
943+
944+from foolscap.api import Referenceable
945+from twisted.application import service
946+
947+from zope.interface import implements
948+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
949+from allmydata.util import fileutil, idlib, log, time_format
950+import allmydata # for __full_version__
951+
952+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir, UnknownImmutableContainerVersionError, DataTooLargeError
953+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
954+from allmydata.storage.lease import LeaseInfo
955+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
956+     create_mutable_sharefile
957+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
958+from allmydata.storage.crawler import FSBucketCountingCrawler
959+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
960+
961+from zope.interface import implements
962+
963+class DASCore(Backend):
964+    implements(IStorageBackend)
965+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
966+        Backend.__init__(self)
967+
968+        self._setup_storage(storedir, readonly, reserved_space)
969+        self._setup_corruption_advisory()
970+        self._setup_bucket_counter()
971+        self._setup_lease_checkerf(expiration_policy)
972+
973+    def _setup_storage(self, storedir, readonly, reserved_space):
974+        self.storedir = storedir
975+        self.readonly = readonly
976+        self.reserved_space = int(reserved_space)
977+        if self.reserved_space:
978+            if self.get_available_space() is None:
979+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
980+                        umid="0wZ27w", level=log.UNUSUAL)
981+
982+        self.sharedir = os.path.join(self.storedir, "shares")
983+        fileutil.make_dirs(self.sharedir)
984+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
985+        self._clean_incomplete()
986+
987+    def _clean_incomplete(self):
988+        fileutil.rm_dir(self.incomingdir)
989+        fileutil.make_dirs(self.incomingdir)
990+
991+    def _setup_corruption_advisory(self):
992+        # we don't actually create the corruption-advisory dir until necessary
993+        self.corruption_advisory_dir = os.path.join(self.storedir,
994+                                                    "corruption-advisories")
995+
996+    def _setup_bucket_counter(self):
997+        statefname = os.path.join(self.storedir, "bucket_counter.state")
998+        self.bucket_counter = FSBucketCountingCrawler(statefname)
999+        self.bucket_counter.setServiceParent(self)
1000+
1001+    def _setup_lease_checkerf(self, expiration_policy):
1002+        statefile = os.path.join(self.storedir, "lease_checker.state")
1003+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1004+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1005+        self.lease_checker.setServiceParent(self)
1006+
1007+    def get_available_space(self):
1008+        if self.readonly:
1009+            return 0
1010+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1011+
1012+    def get_shares(self, storage_index):
1013+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1014+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1015+        try:
1016+            for f in os.listdir(finalstoragedir):
1017+                if NUM_RE.match(f):
1018+                    filename = os.path.join(finalstoragedir, f)
1019+                    yield FSBShare(filename, int(f))
1020+        except OSError:
1021+            # Commonly caused by there being no buckets at all.
1022+            pass
1023+
1024+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1025+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1026+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1027+        return bw
1028+
1029+
1030+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1031+# and share data. The share data is accessed by RIBucketWriter.write and
1032+# RIBucketReader.read . The lease information is not accessible through these
1033+# interfaces.
1034+
1035+# The share file has the following layout:
1036+#  0x00: share file version number, four bytes, current version is 1
1037+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1038+#  0x08: number of leases, four bytes big-endian
1039+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1040+#  A+0x0c = B: first lease. Lease format is:
1041+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1042+#   B+0x04: renew secret, 32 bytes (SHA256)
1043+#   B+0x24: cancel secret, 32 bytes (SHA256)
1044+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1045+#   B+0x48: next lease, or end of record
1046+
1047+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1048+# but it is still filled in by storage servers in case the storage server
1049+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1050+# share file is moved from one storage server to another. The value stored in
1051+# this field is truncated, so if the actual share data length is >= 2**32,
1052+# then the value stored in this field will be the actual share data length
1053+# modulo 2**32.
1054+
1055+class ImmutableShare:
1056+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1057+    sharetype = "immutable"
1058+
1059+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1060+        """ If max_size is not None then I won't allow more than
1061+        max_size to be written to me. If create=True then max_size
1062+        must not be None. """
1063+        precondition((max_size is not None) or (not create), max_size, create)
1064+        self.shnum = shnum
1065+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1066+        self._max_size = max_size
1067+        if create:
1068+            # touch the file, so later callers will see that we're working on
1069+            # it. Also construct the metadata.
1070+            assert not os.path.exists(self.fname)
1071+            fileutil.make_dirs(os.path.dirname(self.fname))
1072+            f = open(self.fname, 'wb')
1073+            # The second field -- the four-byte share data length -- is no
1074+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1075+            # there in case someone downgrades a storage server from >=
1076+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1077+            # server to another, etc. We do saturation -- a share data length
1078+            # larger than 2**32-1 (what can fit into the field) is marked as
1079+            # the largest length that can fit into the field. That way, even
1080+            # if this does happen, the old < v1.3.0 server will still allow
1081+            # clients to read the first part of the share.
1082+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1083+            f.close()
1084+            self._lease_offset = max_size + 0x0c
1085+            self._num_leases = 0
1086+        else:
1087+            f = open(self.fname, 'rb')
1088+            filesize = os.path.getsize(self.fname)
1089+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1090+            f.close()
1091+            if version != 1:
1092+                msg = "sharefile %s had version %d but we wanted 1" % \
1093+                      (self.fname, version)
1094+                raise UnknownImmutableContainerVersionError(msg)
1095+            self._num_leases = num_leases
1096+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1097+        self._data_offset = 0xc
1098+
1099+    def unlink(self):
1100+        os.unlink(self.fname)
1101+
1102+    def read_share_data(self, offset, length):
1103+        precondition(offset >= 0)
1104+        # Reads beyond the end of the data are truncated. Reads that start
1105+        # beyond the end of the data return an empty string.
1106+        seekpos = self._data_offset+offset
1107+        fsize = os.path.getsize(self.fname)
1108+        actuallength = max(0, min(length, fsize-seekpos))
1109+        if actuallength == 0:
1110+            return ""
1111+        f = open(self.fname, 'rb')
1112+        f.seek(seekpos)
1113+        return f.read(actuallength)
1114+
1115+    def write_share_data(self, offset, data):
1116+        length = len(data)
1117+        precondition(offset >= 0, offset)
1118+        if self._max_size is not None and offset+length > self._max_size:
1119+            raise DataTooLargeError(self._max_size, offset, length)
1120+        f = open(self.fname, 'rb+')
1121+        real_offset = self._data_offset+offset
1122+        f.seek(real_offset)
1123+        assert f.tell() == real_offset
1124+        f.write(data)
1125+        f.close()
1126+
1127+    def _write_lease_record(self, f, lease_number, lease_info):
1128+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1129+        f.seek(offset)
1130+        assert f.tell() == offset
1131+        f.write(lease_info.to_immutable_data())
1132+
1133+    def _read_num_leases(self, f):
1134+        f.seek(0x08)
1135+        (num_leases,) = struct.unpack(">L", f.read(4))
1136+        return num_leases
1137+
1138+    def _write_num_leases(self, f, num_leases):
1139+        f.seek(0x08)
1140+        f.write(struct.pack(">L", num_leases))
1141+
1142+    def _truncate_leases(self, f, num_leases):
1143+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1144+
1145+    def get_leases(self):
1146+        """Yields a LeaseInfo instance for all leases."""
1147+        f = open(self.fname, 'rb')
1148+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1149+        f.seek(self._lease_offset)
1150+        for i in range(num_leases):
1151+            data = f.read(self.LEASE_SIZE)
1152+            if data:
1153+                yield LeaseInfo().from_immutable_data(data)
1154+
1155+    def add_lease(self, lease_info):
1156+        f = open(self.fname, 'rb+')
1157+        num_leases = self._read_num_leases(f)
1158+        self._write_lease_record(f, num_leases, lease_info)
1159+        self._write_num_leases(f, num_leases+1)
1160+        f.close()
1161+
1162+    def renew_lease(self, renew_secret, new_expire_time):
1163+        for i,lease in enumerate(self.get_leases()):
1164+            if constant_time_compare(lease.renew_secret, renew_secret):
1165+                # yup. See if we need to update the owner time.
1166+                if new_expire_time > lease.expiration_time:
1167+                    # yes
1168+                    lease.expiration_time = new_expire_time
1169+                    f = open(self.fname, 'rb+')
1170+                    self._write_lease_record(f, i, lease)
1171+                    f.close()
1172+                return
1173+        raise IndexError("unable to renew non-existent lease")
1174+
1175+    def add_or_renew_lease(self, lease_info):
1176+        try:
1177+            self.renew_lease(lease_info.renew_secret,
1178+                             lease_info.expiration_time)
1179+        except IndexError:
1180+            self.add_lease(lease_info)
1181+
1182+
1183+    def cancel_lease(self, cancel_secret):
1184+        """Remove a lease with the given cancel_secret. If the last lease is
1185+        cancelled, the file will be removed. Return the number of bytes that
1186+        were freed (by truncating the list of leases, and possibly by
1187+        deleting the file). Raise IndexError if there was no lease with the
1188+        given cancel_secret.
1189+        """
1190+
1191+        leases = list(self.get_leases())
1192+        num_leases_removed = 0
1193+        for i,lease in enumerate(leases):
1194+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1195+                leases[i] = None
1196+                num_leases_removed += 1
1197+        if not num_leases_removed:
1198+            raise IndexError("unable to find matching lease to cancel")
1199+        if num_leases_removed:
1200+            # pack and write out the remaining leases. We write these out in
1201+            # the same order as they were added, so that if we crash while
1202+            # doing this, we won't lose any non-cancelled leases.
1203+            leases = [l for l in leases if l] # remove the cancelled leases
1204+            f = open(self.fname, 'rb+')
1205+            for i,lease in enumerate(leases):
1206+                self._write_lease_record(f, i, lease)
1207+            self._write_num_leases(f, len(leases))
1208+            self._truncate_leases(f, len(leases))
1209+            f.close()
1210+        space_freed = self.LEASE_SIZE * num_leases_removed
1211+        if not len(leases):
1212+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1213+            self.unlink()
1214+        return space_freed
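The header and lease layouts documented in the comment block of core.py above can be sanity-checked with a small struct round-trip, including the saturation rule for share data lengths >= 2**32. This is an illustrative sketch, not part of the patch; pack_header and unpack_header are hypothetical names:

```python
import struct

# Header layout from the comments above:
#   0x00: version (4 bytes big-endian, currently 1)
#   0x04: share data length, saturated at 2**32-1
#   0x08: number of leases
HEADER = ">LLL"
LEASE = ">L32s32sL"  # owner number, renew secret, cancel secret, expiration
LEASE_SIZE = struct.calcsize(LEASE)

def pack_header(data_length, num_leases=0):
    """Pack a v1 share file header, saturating the length field as
    ImmutableShare.__init__ does."""
    return struct.pack(HEADER, 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(header_bytes):
    """Unpack the 12-byte header into its named fields."""
    version, length, num_leases = struct.unpack(HEADER, header_bytes[:12])
    return {"version": version, "length": length, "num_leases": num_leases}
```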
1215hunk ./src/allmydata/storage/backends/das/expirer.py 2
1216 import time, os, pickle, struct
1217-from allmydata.storage.crawler import ShareCrawler
1218-from allmydata.storage.shares import get_share_file
1219+from allmydata.storage.crawler import FSShareCrawler
1220 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1221      UnknownImmutableContainerVersionError
1222 from twisted.python import log as twlog
1223hunk ./src/allmydata/storage/backends/das/expirer.py 7
1224 
1225-class LeaseCheckingCrawler(ShareCrawler):
1226+class FSLeaseCheckingCrawler(FSShareCrawler):
1227     """I examine the leases on all shares, determining which are still valid
1228     and which have expired. I can remove the expired leases (if so
1229     configured), and the share will be deleted when the last lease is
1230hunk ./src/allmydata/storage/backends/das/expirer.py 50
1231     slow_start = 360 # wait 6 minutes after startup
1232     minimum_cycle_time = 12*60*60 # not more than twice per day
1233 
1234-    def __init__(self, statefile, historyfile,
1235-                 expiration_enabled, mode,
1236-                 override_lease_duration, # used if expiration_mode=="age"
1237-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1238-                 sharetypes):
1239+    def __init__(self, statefile, historyfile, expiration_policy):
1240         self.historyfile = historyfile
1241hunk ./src/allmydata/storage/backends/das/expirer.py 52
1242-        self.expiration_enabled = expiration_enabled
1243-        self.mode = mode
1244+        self.expiration_enabled = expiration_policy['enabled']
1245+        self.mode = expiration_policy['mode']
1246         self.override_lease_duration = None
1247         self.cutoff_date = None
1248         if self.mode == "age":
1249hunk ./src/allmydata/storage/backends/das/expirer.py 57
1250-            assert isinstance(override_lease_duration, (int, type(None)))
1251-            self.override_lease_duration = override_lease_duration # seconds
1252+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1253+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1254         elif self.mode == "cutoff-date":
1255hunk ./src/allmydata/storage/backends/das/expirer.py 60
1256-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1257+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1258-            assert cutoff_date is not None
1259hunk ./src/allmydata/storage/backends/das/expirer.py 62
1260-            self.cutoff_date = cutoff_date
1261+            self.cutoff_date = expiration_policy['cutoff_date']
1262         else:
1263hunk ./src/allmydata/storage/backends/das/expirer.py 64
1264-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1265-        self.sharetypes_to_expire = sharetypes
1266-        ShareCrawler.__init__(self, statefile)
1267+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1268+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1269+        FSShareCrawler.__init__(self, statefile)
1270 
1271     def add_initial_state(self):
1272         # we fill ["cycle-to-date"] here (even though they will be reset in
1273hunk ./src/allmydata/storage/backends/das/expirer.py 156
1274 
1275     def process_share(self, sharefilename):
1276         # first, find out what kind of a share it is
1277-        sf = get_share_file(sharefilename)
1278+        f = open(sharefilename, "rb")
1279+        prefix = f.read(32)
1280+        f.close()
1281+        if prefix == MutableShareFile.MAGIC:
1282+            sf = MutableShareFile(sharefilename)
1283+        else:
1284+            # otherwise assume it's immutable
1285+            sf = FSBShare(sharefilename)
1286         sharetype = sf.sharetype
1287         now = time.time()
1288         s = self.stat(sharefilename)
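The refactor above replaces FSLeaseCheckingCrawler's positional arguments with an expiration_policy dict keyed by 'enabled', 'mode', 'override_lease_duration', 'cutoff_date', and 'sharetypes'. A minimal sketch of the validation the constructor performs (hypothetical helper name, but the same assertions as the patch):

```python
def validate_expiration_policy(policy):
    """Check an expiration_policy dict the way the crawler's __init__
    does, returning the normalized (override_lease_duration, cutoff_date)
    pair for the selected mode."""
    mode = policy['mode']
    if mode == "age":
        duration = policy['override_lease_duration']
        assert isinstance(duration, (int, type(None)))  # seconds
        return duration, None
    elif mode == "cutoff-date":
        cutoff = policy['cutoff_date']
        assert isinstance(cutoff, int)  # seconds-since-epoch
        return None, cutoff
    raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
```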
1289addfile ./src/allmydata/storage/backends/null/__init__.py
1290addfile ./src/allmydata/storage/backends/null/core.py
1291hunk ./src/allmydata/storage/backends/null/core.py 1
1292+from allmydata.storage.backends.base import Backend
1293+
1294+class NullCore(Backend):
1295+    def __init__(self):
1296+        Backend.__init__(self)
1297+
1298+    def get_available_space(self):
1299+        return None
1300+
1301+    def get_shares(self, storage_index):
1302+        return set()
1303+
1304+    def get_share(self, storage_index, sharenum):
1305+        return None
1306+
1307+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1308+        return NullBucketWriter()
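Per the IStorageBackend docstring earlier in this patch, get_available_space() returns None when the available space is unknown or unlimited, which is exactly what NullCore returns. A sketch of how a caller might interpret that convention (hypothetical helper name; the reservation subtraction follows the interface's description of reserved_space):

```python
def effective_available_space(backend_space, reserved_space=0):
    """Interpret a backend's get_available_space() result: None means
    unknown/unlimited (the null backend's answer) and passes through;
    otherwise subtract the reservation and clamp at zero."""
    if backend_space is None:
        return None
    return max(0, backend_space - reserved_space)
```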
1309hunk ./src/allmydata/storage/crawler.py 12
1310 class TimeSliceExceeded(Exception):
1311     pass
1312 
1313-class ShareCrawler(service.MultiService):
1314+class FSShareCrawler(service.MultiService):
1315     """A subcless of ShareCrawler is attached to a StorageServer, and
1316     periodically walks all of its shares, processing each one in some
1317     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1318hunk ./src/allmydata/storage/crawler.py 68
1319     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1320     minimum_cycle_time = 300 # don't run a cycle faster than this
1321 
1322-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1323+    def __init__(self, statefname, allowed_cpu_percentage=None):
1324         service.MultiService.__init__(self)
1325         if allowed_cpu_percentage is not None:
1326             self.allowed_cpu_percentage = allowed_cpu_percentage
1327hunk ./src/allmydata/storage/crawler.py 72
1328-        self.backend = backend
1329+        self.statefname = statefname
1330         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1331                          for i in range(2**10)]
1332         self.prefixes.sort()
1333hunk ./src/allmydata/storage/crawler.py 192
1334         #                            of the last bucket to be processed, or
1335         #                            None if we are sleeping between cycles
1336         try:
1337-            f = open(self.statefile, "rb")
1338+            f = open(self.statefname, "rb")
1339             state = pickle.load(f)
1340             f.close()
1341         except EnvironmentError:
1342hunk ./src/allmydata/storage/crawler.py 230
1343         else:
1344             last_complete_prefix = self.prefixes[lcpi]
1345         self.state["last-complete-prefix"] = last_complete_prefix
1346-        tmpfile = self.statefile + ".tmp"
1347+        tmpfile = self.statefname + ".tmp"
1348         f = open(tmpfile, "wb")
1349         pickle.dump(self.state, f)
1350         f.close()
1351hunk ./src/allmydata/storage/crawler.py 433
1352         pass
1353 
1354 
1355-class BucketCountingCrawler(ShareCrawler):
1356+class FSBucketCountingCrawler(FSShareCrawler):
1357     """I keep track of how many buckets are being managed by this server.
1358     This is equivalent to the number of distributed files and directories for
1359     which I am providing storage. The actual number of files+directories in
1360hunk ./src/allmydata/storage/crawler.py 446
1361 
1362     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1363 
1364-    def __init__(self, statefile, num_sample_prefixes=1):
1365-        ShareCrawler.__init__(self, statefile)
1366+    def __init__(self, statefname, num_sample_prefixes=1):
1367+        FSShareCrawler.__init__(self, statefname)
1368         self.num_sample_prefixes = num_sample_prefixes
1369 
1370     def add_initial_state(self):
1371hunk ./src/allmydata/storage/immutable.py 14
1372 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1373      DataTooLargeError
1374 
1375-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1376-# and share data. The share data is accessed by RIBucketWriter.write and
1377-# RIBucketReader.read . The lease information is not accessible through these
1378-# interfaces.
1379-
1380-# The share file has the following layout:
1381-#  0x00: share file version number, four bytes, current version is 1
1382-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1383-#  0x08: number of leases, four bytes big-endian
1384-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1385-#  A+0x0c = B: first lease. Lease format is:
1386-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1387-#   B+0x04: renew secret, 32 bytes (SHA256)
1388-#   B+0x24: cancel secret, 32 bytes (SHA256)
1389-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1390-#   B+0x48: next lease, or end of record
1391-
1392-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1393-# but it is still filled in by storage servers in case the storage server
1394-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1395-# share file is moved from one storage server to another. The value stored in
1396-# this field is truncated, so if the actual share data length is >= 2**32,
1397-# then the value stored in this field will be the actual share data length
1398-# modulo 2**32.
1399-
1400-class ShareFile:
1401-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1402-    sharetype = "immutable"
1403-
1404-    def __init__(self, filename, max_size=None, create=False):
1405-        """ If max_size is not None then I won't allow more than
1406-        max_size to be written to me. If create=True then max_size
1407-        must not be None. """
1408-        precondition((max_size is not None) or (not create), max_size, create)
1409-        self.home = filename
1410-        self._max_size = max_size
1411-        if create:
1412-            # touch the file, so later callers will see that we're working on
1413-            # it. Also construct the metadata.
1414-            assert not os.path.exists(self.home)
1415-            fileutil.make_dirs(os.path.dirname(self.home))
1416-            f = open(self.home, 'wb')
1417-            # The second field -- the four-byte share data length -- is no
1418-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1419-            # there in case someone downgrades a storage server from >=
1420-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1421-            # server to another, etc. We do saturation -- a share data length
1422-            # larger than 2**32-1 (what can fit into the field) is marked as
1423-            # the largest length that can fit into the field. That way, even
1424-            # if this does happen, the old < v1.3.0 server will still allow
1425-            # clients to read the first part of the share.
1426-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1427-            f.close()
1428-            self._lease_offset = max_size + 0x0c
1429-            self._num_leases = 0
1430-        else:
1431-            f = open(self.home, 'rb')
1432-            filesize = os.path.getsize(self.home)
1433-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1434-            f.close()
1435-            if version != 1:
1436-                msg = "sharefile %s had version %d but we wanted 1" % \
1437-                      (filename, version)
1438-                raise UnknownImmutableContainerVersionError(msg)
1439-            self._num_leases = num_leases
1440-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1441-        self._data_offset = 0xc
1442-
1443-    def unlink(self):
1444-        os.unlink(self.home)
1445-
1446-    def read_share_data(self, offset, length):
1447-        precondition(offset >= 0)
1448-        # Reads beyond the end of the data are truncated. Reads that start
1449-        # beyond the end of the data return an empty string.
1450-        seekpos = self._data_offset+offset
1451-        fsize = os.path.getsize(self.home)
1452-        actuallength = max(0, min(length, fsize-seekpos))
1453-        if actuallength == 0:
1454-            return ""
1455-        f = open(self.home, 'rb')
1456-        f.seek(seekpos)
1457-        return f.read(actuallength)
1458-
1459-    def write_share_data(self, offset, data):
1460-        length = len(data)
1461-        precondition(offset >= 0, offset)
1462-        if self._max_size is not None and offset+length > self._max_size:
1463-            raise DataTooLargeError(self._max_size, offset, length)
1464-        f = open(self.home, 'rb+')
1465-        real_offset = self._data_offset+offset
1466-        f.seek(real_offset)
1467-        assert f.tell() == real_offset
1468-        f.write(data)
1469-        f.close()
1470-
1471-    def _write_lease_record(self, f, lease_number, lease_info):
1472-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1473-        f.seek(offset)
1474-        assert f.tell() == offset
1475-        f.write(lease_info.to_immutable_data())
1476-
1477-    def _read_num_leases(self, f):
1478-        f.seek(0x08)
1479-        (num_leases,) = struct.unpack(">L", f.read(4))
1480-        return num_leases
1481-
1482-    def _write_num_leases(self, f, num_leases):
1483-        f.seek(0x08)
1484-        f.write(struct.pack(">L", num_leases))
1485-
1486-    def _truncate_leases(self, f, num_leases):
1487-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1488-
1489-    def get_leases(self):
1490-        """Yields a LeaseInfo instance for all leases."""
1491-        f = open(self.home, 'rb')
1492-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1493-        f.seek(self._lease_offset)
1494-        for i in range(num_leases):
1495-            data = f.read(self.LEASE_SIZE)
1496-            if data:
1497-                yield LeaseInfo().from_immutable_data(data)
1498-
1499-    def add_lease(self, lease_info):
1500-        f = open(self.home, 'rb+')
1501-        num_leases = self._read_num_leases(f)
1502-        self._write_lease_record(f, num_leases, lease_info)
1503-        self._write_num_leases(f, num_leases+1)
1504-        f.close()
1505-
1506-    def renew_lease(self, renew_secret, new_expire_time):
1507-        for i,lease in enumerate(self.get_leases()):
1508-            if constant_time_compare(lease.renew_secret, renew_secret):
1509-                # yup. See if we need to update the owner time.
1510-                if new_expire_time > lease.expiration_time:
1511-                    # yes
1512-                    lease.expiration_time = new_expire_time
1513-                    f = open(self.home, 'rb+')
1514-                    self._write_lease_record(f, i, lease)
1515-                    f.close()
1516-                return
1517-        raise IndexError("unable to renew non-existent lease")
1518-
1519-    def add_or_renew_lease(self, lease_info):
1520-        try:
1521-            self.renew_lease(lease_info.renew_secret,
1522-                             lease_info.expiration_time)
1523-        except IndexError:
1524-            self.add_lease(lease_info)
1525-
1526-
1527-    def cancel_lease(self, cancel_secret):
1528-        """Remove a lease with the given cancel_secret. If the last lease is
1529-        cancelled, the file will be removed. Return the number of bytes that
1530-        were freed (by truncating the list of leases, and possibly by
1531-        deleting the file. Raise IndexError if there was no lease with the
1532-        given cancel_secret.
1533-        """
1534-
1535-        leases = list(self.get_leases())
1536-        num_leases_removed = 0
1537-        for i,lease in enumerate(leases):
1538-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1539-                leases[i] = None
1540-                num_leases_removed += 1
1541-        if not num_leases_removed:
1542-            raise IndexError("unable to find matching lease to cancel")
1543-        if num_leases_removed:
1544-            # pack and write out the remaining leases. We write these out in
1545-            # the same order as they were added, so that if we crash while
1546-            # doing this, we won't lose any non-cancelled leases.
1547-            leases = [l for l in leases if l] # remove the cancelled leases
1548-            f = open(self.home, 'rb+')
1549-            for i,lease in enumerate(leases):
1550-                self._write_lease_record(f, i, lease)
1551-            self._write_num_leases(f, len(leases))
1552-            self._truncate_leases(f, len(leases))
1553-            f.close()
1554-        space_freed = self.LEASE_SIZE * num_leases_removed
1555-        if not len(leases):
1556-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1557-            self.unlink()
1558-        return space_freed
1559-class NullBucketWriter(Referenceable):
1560-    implements(RIBucketWriter)
1561-
1562-    def remote_write(self, offset, data):
1563-        return
1564-
1565 class BucketWriter(Referenceable):
1566     implements(RIBucketWriter)
1567 
1568hunk ./src/allmydata/storage/immutable.py 17
1569-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1570+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1571         self.ss = ss
1572hunk ./src/allmydata/storage/immutable.py 19
1573-        self.incominghome = incominghome
1574-        self.finalhome = finalhome
1575         self._max_size = max_size # don't allow the client to write more than this
1576         self._canary = canary
1577         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1578hunk ./src/allmydata/storage/immutable.py 24
1579         self.closed = False
1580         self.throw_out_all_data = False
1581-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1582+        self._sharefile = immutableshare
1583         # also, add our lease to the file now, so that other ones can be
1584         # added by simultaneous uploaders
1585         self._sharefile.add_lease(lease_info)
1586hunk ./src/allmydata/storage/server.py 16
1587 from allmydata.storage.lease import LeaseInfo
1588 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1589      create_mutable_sharefile
1590-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1591-from allmydata.storage.crawler import BucketCountingCrawler
1592-from allmydata.storage.expirer import LeaseCheckingCrawler
1593 
1594 from zope.interface import implements
1595 
1596hunk ./src/allmydata/storage/server.py 19
1597-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1598-# be started and stopped.
1599-class Backend(service.MultiService):
1600-    implements(IStatsProducer)
1601-    def __init__(self):
1602-        service.MultiService.__init__(self)
1603-
1604-    def get_bucket_shares(self):
1605-        """XXX"""
1606-        raise NotImplementedError
1607-
1608-    def get_share(self):
1609-        """XXX"""
1610-        raise NotImplementedError
1611-
1612-    def make_bucket_writer(self):
1613-        """XXX"""
1614-        raise NotImplementedError
1615-
1616-class NullBackend(Backend):
1617-    def __init__(self):
1618-        Backend.__init__(self)
1619-
1620-    def get_available_space(self):
1621-        return None
1622-
1623-    def get_bucket_shares(self, storage_index):
1624-        return set()
1625-
1626-    def get_share(self, storage_index, sharenum):
1627-        return None
1628-
1629-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1630-        return NullBucketWriter()
1631-
1632-class FSBackend(Backend):
1633-    def __init__(self, storedir, readonly=False, reserved_space=0):
1634-        Backend.__init__(self)
1635-
1636-        self._setup_storage(storedir, readonly, reserved_space)
1637-        self._setup_corruption_advisory()
1638-        self._setup_bucket_counter()
1639-        self._setup_lease_checkerf()
1640-
1641-    def _setup_storage(self, storedir, readonly, reserved_space):
1642-        self.storedir = storedir
1643-        self.readonly = readonly
1644-        self.reserved_space = int(reserved_space)
1645-        if self.reserved_space:
1646-            if self.get_available_space() is None:
1647-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1648-                        umid="0wZ27w", level=log.UNUSUAL)
1649-
1650-        self.sharedir = os.path.join(self.storedir, "shares")
1651-        fileutil.make_dirs(self.sharedir)
1652-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1653-        self._clean_incomplete()
1654-
1655-    def _clean_incomplete(self):
1656-        fileutil.rm_dir(self.incomingdir)
1657-        fileutil.make_dirs(self.incomingdir)
1658-
1659-    def _setup_corruption_advisory(self):
1660-        # we don't actually create the corruption-advisory dir until necessary
1661-        self.corruption_advisory_dir = os.path.join(self.storedir,
1662-                                                    "corruption-advisories")
1663-
1664-    def _setup_bucket_counter(self):
1665-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1666-        self.bucket_counter = BucketCountingCrawler(statefile)
1667-        self.bucket_counter.setServiceParent(self)
1668-
1669-    def _setup_lease_checkerf(self):
1670-        statefile = os.path.join(self.storedir, "lease_checker.state")
1671-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1672-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1673-                                   expiration_enabled, expiration_mode,
1674-                                   expiration_override_lease_duration,
1675-                                   expiration_cutoff_date,
1676-                                   expiration_sharetypes)
1677-        self.lease_checker.setServiceParent(self)
1678-
1679-    def get_available_space(self):
1680-        if self.readonly:
1681-            return 0
1682-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1683-
1684-    def get_bucket_shares(self, storage_index):
1685-        """Return a list of (shnum, pathname) tuples for files that hold
1686-        shares for this storage_index. In each tuple, 'shnum' will always be
1687-        the integer form of the last component of 'pathname'."""
1688-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1689-        try:
1690-            for f in os.listdir(storagedir):
1691-                if NUM_RE.match(f):
1692-                    filename = os.path.join(storagedir, f)
1693-                    yield (int(f), filename)
1694-        except OSError:
1695-            # Commonly caused by there being no buckets at all.
1696-            pass
1697-
1698 # storage/
1699 # storage/shares/incoming
1700 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1701hunk ./src/allmydata/storage/server.py 32
1702 # $SHARENUM matches this regex:
1703 NUM_RE=re.compile("^[0-9]+$")
1704 
1705-
1706-
1707 class StorageServer(service.MultiService, Referenceable):
1708     implements(RIStorageServer, IStatsProducer)
1709     name = 'storage'
1710hunk ./src/allmydata/storage/server.py 35
1711-    LeaseCheckerClass = LeaseCheckingCrawler
1712 
1713     def __init__(self, nodeid, backend, reserved_space=0,
1714                  readonly_storage=False,
1715hunk ./src/allmydata/storage/server.py 38
1716-                 stats_provider=None,
1717-                 expiration_enabled=False,
1718-                 expiration_mode="age",
1719-                 expiration_override_lease_duration=None,
1720-                 expiration_cutoff_date=None,
1721-                 expiration_sharetypes=("mutable", "immutable")):
1722+                 stats_provider=None ):
1723         service.MultiService.__init__(self)
1724         assert isinstance(nodeid, str)
1725         assert len(nodeid) == 20
1726hunk ./src/allmydata/storage/server.py 217
1727         # they asked about: this will save them a lot of work. Add or update
1728         # leases for all of them: if they want us to hold shares for this
1729         # file, they'll want us to hold leases for this file.
1730-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1731-            alreadygot.add(shnum)
1732-            sf = ShareFile(fn)
1733-            sf.add_or_renew_lease(lease_info)
1734-
1735-        for shnum in sharenums:
1736-            share = self.backend.get_share(storage_index, shnum)
1737+        for share in self.backend.get_shares(storage_index):
1738+            alreadygot.add(share.shnum)
1739+            share.add_or_renew_lease(lease_info)
1740 
1741hunk ./src/allmydata/storage/server.py 221
1742-            if not share:
1743-                if (not limited) or (remaining_space >= max_space_per_bucket):
1744-                    # ok! we need to create the new share file.
1745-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1746-                                      max_space_per_bucket, lease_info, canary)
1747-                    bucketwriters[shnum] = bw
1748-                    self._active_writers[bw] = 1
1749-                    if limited:
1750-                        remaining_space -= max_space_per_bucket
1751-                else:
1752-                    # bummer! not enough space to accept this bucket
1753-                    pass
1754+        for shnum in (sharenums - alreadygot):
1755+            if (not limited) or (remaining_space >= max_space_per_bucket):
1756+                #XXX Should the following line occur in the storage server constructor instead? OK! We need to create the new share file.
1757+                self.backend.set_storage_server(self)
1758+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1759+                                                     max_space_per_bucket, lease_info, canary)
1760+                bucketwriters[shnum] = bw
1761+                self._active_writers[bw] = 1
1762+                if limited:
1763+                    remaining_space -= max_space_per_bucket
1764 
1765hunk ./src/allmydata/storage/server.py 232
1766-            elif share.is_complete():
1767-                # great! we already have it. easy.
1768-                pass
1769-            elif not share.is_complete():
1770-                # Note that we don't create BucketWriters for shnums that
1771-                # have a partial share (in incoming/), so if a second upload
1772-                # occurs while the first is still in progress, the second
1773-                # uploader will use different storage servers.
1774-                pass
1775+        #XXX We should document this later.
1776 
1777         self.add_latency("allocate", time.time() - start)
1778         return alreadygot, bucketwriters
1779hunk ./src/allmydata/storage/server.py 238
1780 
1781     def _iter_share_files(self, storage_index):
1782-        for shnum, filename in self._get_bucket_shares(storage_index):
1783+        for shnum, filename in self._get_shares(storage_index):
1784             f = open(filename, 'rb')
1785             header = f.read(32)
1786             f.close()
1787hunk ./src/allmydata/storage/server.py 318
1788         si_s = si_b2a(storage_index)
1789         log.msg("storage: get_buckets %s" % si_s)
1790         bucketreaders = {} # k: sharenum, v: BucketReader
1791-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1792+        for shnum, filename in self.backend.get_shares(storage_index):
1793             bucketreaders[shnum] = BucketReader(self, filename,
1794                                                 storage_index, shnum)
1795         self.add_latency("get", time.time() - start)
1796hunk ./src/allmydata/storage/server.py 334
1797         # since all shares get the same lease data, we just grab the leases
1798         # from the first share
1799         try:
1800-            shnum, filename = self._get_bucket_shares(storage_index).next()
1801+            shnum, filename = self._get_shares(storage_index).next()
1802             sf = ShareFile(filename)
1803             return sf.get_leases()
1804         except StopIteration:
1805hunk ./src/allmydata/storage/shares.py 1
1806-#! /usr/bin/python
1807-
1808-from allmydata.storage.mutable import MutableShareFile
1809-from allmydata.storage.immutable import ShareFile
1810-
1811-def get_share_file(filename):
1812-    f = open(filename, "rb")
1813-    prefix = f.read(32)
1814-    f.close()
1815-    if prefix == MutableShareFile.MAGIC:
1816-        return MutableShareFile(filename)
1817-    # otherwise assume it's immutable
1818-    return ShareFile(filename)
1819-
1820rmfile ./src/allmydata/storage/shares.py
1821hunk ./src/allmydata/test/common_util.py 20
1822 
1823 def flip_one_bit(s, offset=0, size=None):
1824     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1825-    than offset+size. """
1826+    than offset+size. Return the new string. """
1827     if size is None:
1828         size=len(s)-offset
1829     i = randrange(offset, offset+size)
1830hunk ./src/allmydata/test/test_backends.py 7
1831 
1832 from allmydata.test.common_util import ReallyEqualMixin
1833 
1834-import mock
1835+import mock, os
1836 
1837 # This is the code that we're going to be testing.
1838hunk ./src/allmydata/test/test_backends.py 10
1839-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1840+from allmydata.storage.server import StorageServer
1841+
1842+from allmydata.storage.backends.das.core import DASCore
1843+from allmydata.storage.backends.null.core import NullCore
1844+
1845 
1846 # The following share file contents was generated with
1847 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1848hunk ./src/allmydata/test/test_backends.py 22
1849 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1850 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1851 
1852-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1853+tempdir = 'teststoredir'
1854+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1855+sharefname = os.path.join(sharedirname, '0')
1856 
1857 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1858     @mock.patch('time.time')
1859hunk ./src/allmydata/test/test_backends.py 58
1860         filesystem in only the prescribed ways. """
1861 
1862         def call_open(fname, mode):
1863-            if fname == 'testdir/bucket_counter.state':
1864-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1865-            elif fname == 'testdir/lease_checker.state':
1866-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1867-            elif fname == 'testdir/lease_checker.history':
1868+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1869+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1870+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1871+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1872+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1873                 return StringIO()
1874             else:
1875                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1876hunk ./src/allmydata/test/test_backends.py 124
1877     @mock.patch('__builtin__.open')
1878     def setUp(self, mockopen):
1879         def call_open(fname, mode):
1880-            if fname == 'testdir/bucket_counter.state':
1881-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1882-            elif fname == 'testdir/lease_checker.state':
1883-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1884-            elif fname == 'testdir/lease_checker.history':
1885+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1886+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1887+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1888+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1889+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1890                 return StringIO()
1891         mockopen.side_effect = call_open
1892hunk ./src/allmydata/test/test_backends.py 131
1893-
1894-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1895+        expiration_policy = {'enabled' : False,
1896+                             'mode' : 'age',
1897+                             'override_lease_duration' : None,
1898+                             'cutoff_date' : None,
1899+                             'sharetypes' : None}
1900+        testbackend = DASCore(tempdir, expiration_policy)
1901+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1902 
1903     @mock.patch('time.time')
1904     @mock.patch('os.mkdir')
1905hunk ./src/allmydata/test/test_backends.py 148
1906         """ Write a new share. """
1907 
1908         def call_listdir(dirname):
1909-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1910-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1911+            self.failUnlessReallyEqual(dirname, sharedirname)
1912+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1913 
1914         mocklistdir.side_effect = call_listdir
1915 
1916hunk ./src/allmydata/test/test_backends.py 178
1917 
1918         sharefile = MockFile()
1919         def call_open(fname, mode):
1920-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1921+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1922             return sharefile
1923 
1924         mockopen.side_effect = call_open
1925hunk ./src/allmydata/test/test_backends.py 200
1926         StorageServer object. """
1927 
1928         def call_listdir(dirname):
1929-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1930+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1931             return ['0']
1932 
1933         mocklistdir.side_effect = call_listdir
1934}
1935[checkpoint patch
1936wilcoxjg@gmail.com**20110626165715
1937 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1938] {
1939hunk ./src/allmydata/storage/backends/das/core.py 21
1940 from allmydata.storage.lease import LeaseInfo
1941 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1942      create_mutable_sharefile
1943-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1944+from allmydata.storage.immutable import BucketWriter, BucketReader
1945 from allmydata.storage.crawler import FSBucketCountingCrawler
1946 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1947 
1948hunk ./src/allmydata/storage/backends/das/core.py 27
1949 from zope.interface import implements
1950 
1951+# $SHARENUM matches this regex:
1952+NUM_RE=re.compile("^[0-9]+$")
1953+
1954 class DASCore(Backend):
1955     implements(IStorageBackend)
1956     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1957hunk ./src/allmydata/storage/backends/das/core.py 80
1958         return fileutil.get_available_space(self.storedir, self.reserved_space)
1959 
1960     def get_shares(self, storage_index):
1961-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1962+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1963         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1964         try:
1965             for f in os.listdir(finalstoragedir):
1966hunk ./src/allmydata/storage/backends/das/core.py 86
1967                 if NUM_RE.match(f):
1968                     filename = os.path.join(finalstoragedir, f)
1969-                    yield FSBShare(filename, int(f))
1970+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1971         except OSError:
1972             # Commonly caused by there being no buckets at all.
1973             pass
1974hunk ./src/allmydata/storage/backends/das/core.py 95
1975         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1976         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1977         return bw
1978+
1979+    def set_storage_server(self, ss):
1980+        self.ss = ss
1981         
1982 
1983 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1984hunk ./src/allmydata/storage/server.py 29
1985 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1986 # base-32 chars).
1987 
1988-# $SHARENUM matches this regex:
1989-NUM_RE=re.compile("^[0-9]+$")
1990 
1991 class StorageServer(service.MultiService, Referenceable):
1992     implements(RIStorageServer, IStatsProducer)
1993}
1994[checkpoint4
1995wilcoxjg@gmail.com**20110628202202
1996 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1997] {
1998hunk ./src/allmydata/storage/backends/das/core.py 96
1999         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2000         return bw
2001 
2002+    def make_bucket_reader(self, share):
2003+        return BucketReader(self.ss, share)
2004+
2005     def set_storage_server(self, ss):
2006         self.ss = ss
2007         
2008hunk ./src/allmydata/storage/backends/das/core.py 138
2009         must not be None. """
2010         precondition((max_size is not None) or (not create), max_size, create)
2011         self.shnum = shnum
2012+        self.storage_index = storageindex
2013         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2014         self._max_size = max_size
2015         if create:
2016hunk ./src/allmydata/storage/backends/das/core.py 173
2017             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2018         self._data_offset = 0xc
2019 
2020+    def get_shnum(self):
2021+        return self.shnum
2022+
2023     def unlink(self):
2024         os.unlink(self.fname)
2025 
2026hunk ./src/allmydata/storage/backends/null/core.py 2
2027 from allmydata.storage.backends.base import Backend
2028+from allmydata.storage.immutable import BucketWriter, BucketReader
2029 
2030 class NullCore(Backend):
2031     def __init__(self):
2032hunk ./src/allmydata/storage/backends/null/core.py 17
2033     def get_share(self, storage_index, sharenum):
2034         return None
2035 
2036-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2037-        return NullBucketWriter()
2038+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2039+       
2040+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2041+
2042+    def set_storage_server(self, ss):
2043+        self.ss = ss
2044+
2045+class ImmutableShare:
2046+    sharetype = "immutable"
2047+
2048+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2049+        """ If max_size is not None then I won't allow more than
2050+        max_size to be written to me. If create=True then max_size
2051+        must not be None. """
2052+        precondition((max_size is not None) or (not create), max_size, create)
2053+        self.shnum = shnum
2054+        self.storage_index = storageindex
2055+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2056+        self._max_size = max_size
2057+        if create:
2058+            # touch the file, so later callers will see that we're working on
2059+            # it. Also construct the metadata.
2060+            assert not os.path.exists(self.fname)
2061+            fileutil.make_dirs(os.path.dirname(self.fname))
2062+            f = open(self.fname, 'wb')
2063+            # The second field -- the four-byte share data length -- is no
2064+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2065+            # there in case someone downgrades a storage server from >=
2066+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2067+            # server to another, etc. We do saturation -- a share data length
2068+            # larger than 2**32-1 (what can fit into the field) is marked as
2069+            # the largest length that can fit into the field. That way, even
2070+            # if this does happen, the old < v1.3.0 server will still allow
2071+            # clients to read the first part of the share.
2072+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2073+            f.close()
2074+            self._lease_offset = max_size + 0x0c
2075+            self._num_leases = 0
2076+        else:
2077+            f = open(self.fname, 'rb')
2078+            filesize = os.path.getsize(self.fname)
2079+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2080+            f.close()
2081+            if version != 1:
2082+                msg = "sharefile %s had version %d but we wanted 1" % \
2083+                      (self.fname, version)
2084+                raise UnknownImmutableContainerVersionError(msg)
2085+            self._num_leases = num_leases
2086+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2087+        self._data_offset = 0xc
2088+
2089+    def get_shnum(self):
2090+        return self.shnum
2091+
2092+    def unlink(self):
2093+        os.unlink(self.fname)
2094+
2095+    def read_share_data(self, offset, length):
2096+        precondition(offset >= 0)
2097+        # Reads beyond the end of the data are truncated. Reads that start
2098+        # beyond the end of the data return an empty string.
2099+        seekpos = self._data_offset+offset
2100+        fsize = os.path.getsize(self.fname)
2101+        actuallength = max(0, min(length, fsize-seekpos))
2102+        if actuallength == 0:
2103+            return ""
2104+        f = open(self.fname, 'rb')
2105+        f.seek(seekpos)
2106+        return f.read(actuallength)
2107+
2108+    def write_share_data(self, offset, data):
2109+        length = len(data)
2110+        precondition(offset >= 0, offset)
2111+        if self._max_size is not None and offset+length > self._max_size:
2112+            raise DataTooLargeError(self._max_size, offset, length)
2113+        f = open(self.fname, 'rb+')
2114+        real_offset = self._data_offset+offset
2115+        f.seek(real_offset)
2116+        assert f.tell() == real_offset
2117+        f.write(data)
2118+        f.close()
2119+
2120+    def _write_lease_record(self, f, lease_number, lease_info):
2121+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2122+        f.seek(offset)
2123+        assert f.tell() == offset
2124+        f.write(lease_info.to_immutable_data())
2125+
2126+    def _read_num_leases(self, f):
2127+        f.seek(0x08)
2128+        (num_leases,) = struct.unpack(">L", f.read(4))
2129+        return num_leases
2130+
2131+    def _write_num_leases(self, f, num_leases):
2132+        f.seek(0x08)
2133+        f.write(struct.pack(">L", num_leases))
2134+
2135+    def _truncate_leases(self, f, num_leases):
2136+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2137+
2138+    def get_leases(self):
2139+        """Yields a LeaseInfo instance for all leases."""
2140+        f = open(self.fname, 'rb')
2141+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2142+        f.seek(self._lease_offset)
2143+        for i in range(num_leases):
2144+            data = f.read(self.LEASE_SIZE)
2145+            if data:
2146+                yield LeaseInfo().from_immutable_data(data)
2147+
2148+    def add_lease(self, lease_info):
2149+        f = open(self.fname, 'rb+')
2150+        num_leases = self._read_num_leases(f)
2151+        self._write_lease_record(f, num_leases, lease_info)
2152+        self._write_num_leases(f, num_leases+1)
2153+        f.close()
2154+
2155+    def renew_lease(self, renew_secret, new_expire_time):
2156+        for i,lease in enumerate(self.get_leases()):
2157+            if constant_time_compare(lease.renew_secret, renew_secret):
2158+                # yup. See if we need to update the owner time.
2159+                if new_expire_time > lease.expiration_time:
2160+                    # yes
2161+                    lease.expiration_time = new_expire_time
2162+                    f = open(self.fname, 'rb+')
2163+                    self._write_lease_record(f, i, lease)
2164+                    f.close()
2165+                return
2166+        raise IndexError("unable to renew non-existent lease")
2167+
2168+    def add_or_renew_lease(self, lease_info):
2169+        try:
2170+            self.renew_lease(lease_info.renew_secret,
2171+                             lease_info.expiration_time)
2172+        except IndexError:
2173+            self.add_lease(lease_info)
2174+
2175+
2176+    def cancel_lease(self, cancel_secret):
2177+        """Remove a lease with the given cancel_secret. If the last lease is
2178+        cancelled, the file will be removed. Return the number of bytes that
2179+        were freed (by truncating the list of leases, and possibly by
2180+        deleting the file). Raise IndexError if there was no lease with the
2181+        given cancel_secret.
2182+        """
2183+
2184+        leases = list(self.get_leases())
2185+        num_leases_removed = 0
2186+        for i,lease in enumerate(leases):
2187+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2188+                leases[i] = None
2189+                num_leases_removed += 1
2190+        if not num_leases_removed:
2191+            raise IndexError("unable to find matching lease to cancel")
2192+        if num_leases_removed:
2193+            # pack and write out the remaining leases. We write these out in
2194+            # the same order as they were added, so that if we crash while
2195+            # doing this, we won't lose any non-cancelled leases.
2196+            leases = [l for l in leases if l] # remove the cancelled leases
2197+            f = open(self.fname, 'rb+')
2198+            for i,lease in enumerate(leases):
2199+                self._write_lease_record(f, i, lease)
2200+            self._write_num_leases(f, len(leases))
2201+            self._truncate_leases(f, len(leases))
2202+            f.close()
2203+        space_freed = self.LEASE_SIZE * num_leases_removed
2204+        if not len(leases):
2205+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2206+            self.unlink()
2207+        return space_freed
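The hunk above fixes a concrete on-disk layout: a 12-byte `>LLL` header (version, saturated four-byte data length, lease count), share data starting at offset 0xc, and lease records appended after the data. A minimal sketch of just the header handling (hypothetical helper names, not the project's actual module), matching the saturation comment in the patch:

```python
import struct

HEADER_FMT = ">LLL"   # version, saturated data length, number of leases
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0xc == 12 bytes

def pack_header(max_size, num_leases=0):
    # The length field saturates at 2**32-1, as the comment above notes:
    # an oversized share still presents a readable prefix to old servers.
    return struct.pack(HEADER_FMT, 1, min(2**32 - 1, max_size), num_leases)

def unpack_header(header_bytes):
    version, length, num_leases = struct.unpack(HEADER_FMT, header_bytes)
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return (version, length, num_leases)
```

The data offset is then simply `HEADER_SIZE`, and the lease region starts at `filesize - num_leases * LEASE_SIZE`, as in the `__init__` above.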
2208hunk ./src/allmydata/storage/immutable.py 114
2209 class BucketReader(Referenceable):
2210     implements(RIBucketReader)
2211 
2212-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2213+    def __init__(self, ss, share):
2214         self.ss = ss
2215hunk ./src/allmydata/storage/immutable.py 116
2216-        self._share_file = ShareFile(sharefname)
2217-        self.storage_index = storage_index
2218-        self.shnum = shnum
2219+        self._share_file = share
2220+        self.storage_index = share.storage_index
2221+        self.shnum = share.shnum
2222 
2223     def __repr__(self):
2224         return "<%s %s %s>" % (self.__class__.__name__,
2225hunk ./src/allmydata/storage/server.py 316
2226         si_s = si_b2a(storage_index)
2227         log.msg("storage: get_buckets %s" % si_s)
2228         bucketreaders = {} # k: sharenum, v: BucketReader
2229-        for shnum, filename in self.backend.get_shares(storage_index):
2230-            bucketreaders[shnum] = BucketReader(self, filename,
2231-                                                storage_index, shnum)
2232+        self.backend.set_storage_server(self)
2233+        for share in self.backend.get_shares(storage_index):
2234+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2235         self.add_latency("get", time.time() - start)
2236         return bucketreaders
2237 
2238hunk ./src/allmydata/test/test_backends.py 25
2239 tempdir = 'teststoredir'
2240 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2241 sharefname = os.path.join(sharedirname, '0')
2242+expiration_policy = {'enabled' : False,
2243+                     'mode' : 'age',
2244+                     'override_lease_duration' : None,
2245+                     'cutoff_date' : None,
2246+                     'sharetypes' : None}
2247 
2248 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2249     @mock.patch('time.time')
2250hunk ./src/allmydata/test/test_backends.py 43
2251         tries to read or write to the file system. """
2252 
2253         # Now begin the test.
2254-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2255+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2256 
2257         self.failIf(mockisdir.called)
2258         self.failIf(mocklistdir.called)
2259hunk ./src/allmydata/test/test_backends.py 74
2260         mockopen.side_effect = call_open
2261 
2262         # Now begin the test.
2263-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2264+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2265 
2266         self.failIf(mockisdir.called)
2267         self.failIf(mocklistdir.called)
2268hunk ./src/allmydata/test/test_backends.py 86
2269 
2270 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2271     def setUp(self):
2272-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2273+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2274 
2275     @mock.patch('os.mkdir')
2276     @mock.patch('__builtin__.open')
2277hunk ./src/allmydata/test/test_backends.py 136
2278             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2279                 return StringIO()
2280         mockopen.side_effect = call_open
2281-        expiration_policy = {'enabled' : False,
2282-                             'mode' : 'age',
2283-                             'override_lease_duration' : None,
2284-                             'cutoff_date' : None,
2285-                             'sharetypes' : None}
2286         testbackend = DASCore(tempdir, expiration_policy)
2287         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2288 
2289}
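The structural change in checkpoint4 is that `BucketReader` now receives a share object rather than a filename, and the server asks the backend to build readers via `make_bucket_reader`. A reduced sketch of that dispatch, using hypothetical `Fake*` stand-ins in place of the real backend and reader classes:

```python
class FakeShare(object):
    """Stand-in for ImmutableShare, holding only what the reader needs."""
    def __init__(self, storage_index, shnum):
        self.storage_index = storage_index
        self.shnum = shnum

    def get_shnum(self):
        return self.shnum

class FakeBackend(object):
    def __init__(self, shares):
        self._shares = shares

    def set_storage_server(self, ss):
        self.ss = ss

    def get_shares(self, storage_index):
        return [s for s in self._shares if s.storage_index == storage_index]

    def make_bucket_reader(self, share):
        # The real backend wraps the share in a BucketReader; the bare
        # share object stands in for it here.
        return share

def get_buckets(backend, storage_index):
    # Mirrors the rewritten remote_get_buckets loop: readers keyed by shnum.
    bucketreaders = {}
    for share in backend.get_shares(storage_index):
        bucketreaders[share.get_shnum()] = backend.make_bucket_reader(share)
    return bucketreaders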
2290[checkpoint5
2291wilcoxjg@gmail.com**20110705034626
2292 Ignore-this: 255780bd58299b0aa33c027e9d008262
2293] {
2294addfile ./src/allmydata/storage/backends/base.py
2295hunk ./src/allmydata/storage/backends/base.py 1
2296+from twisted.application import service
2297+
2298+class Backend(service.MultiService):
2299+    def __init__(self):
2300+        service.MultiService.__init__(self)
2301hunk ./src/allmydata/storage/backends/null/core.py 19
2302 
2303     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2304         
2305+        immutableshare = ImmutableShare()
2306         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2307 
2308     def set_storage_server(self, ss):
2309hunk ./src/allmydata/storage/backends/null/core.py 28
2310 class ImmutableShare:
2311     sharetype = "immutable"
2312 
2313-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2314+    def __init__(self):
2315         """ If max_size is not None then I won't allow more than
2316         max_size to be written to me. If create=True then max_size
2317         must not be None. """
2318hunk ./src/allmydata/storage/backends/null/core.py 32
2319-        precondition((max_size is not None) or (not create), max_size, create)
2320-        self.shnum = shnum
2321-        self.storage_index = storageindex
2322-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2323-        self._max_size = max_size
2324-        if create:
2325-            # touch the file, so later callers will see that we're working on
2326-            # it. Also construct the metadata.
2327-            assert not os.path.exists(self.fname)
2328-            fileutil.make_dirs(os.path.dirname(self.fname))
2329-            f = open(self.fname, 'wb')
2330-            # The second field -- the four-byte share data length -- is no
2331-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2332-            # there in case someone downgrades a storage server from >=
2333-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2334-            # server to another, etc. We do saturation -- a share data length
2335-            # larger than 2**32-1 (what can fit into the field) is marked as
2336-            # the largest length that can fit into the field. That way, even
2337-            # if this does happen, the old < v1.3.0 server will still allow
2338-            # clients to read the first part of the share.
2339-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2340-            f.close()
2341-            self._lease_offset = max_size + 0x0c
2342-            self._num_leases = 0
2343-        else:
2344-            f = open(self.fname, 'rb')
2345-            filesize = os.path.getsize(self.fname)
2346-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2347-            f.close()
2348-            if version != 1:
2349-                msg = "sharefile %s had version %d but we wanted 1" % \
2350-                      (self.fname, version)
2351-                raise UnknownImmutableContainerVersionError(msg)
2352-            self._num_leases = num_leases
2353-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2354-        self._data_offset = 0xc
2355+        pass
2356 
2357     def get_shnum(self):
2358         return self.shnum
2359hunk ./src/allmydata/storage/backends/null/core.py 54
2360         return f.read(actuallength)
2361 
2362     def write_share_data(self, offset, data):
2363-        length = len(data)
2364-        precondition(offset >= 0, offset)
2365-        if self._max_size is not None and offset+length > self._max_size:
2366-            raise DataTooLargeError(self._max_size, offset, length)
2367-        f = open(self.fname, 'rb+')
2368-        real_offset = self._data_offset+offset
2369-        f.seek(real_offset)
2370-        assert f.tell() == real_offset
2371-        f.write(data)
2372-        f.close()
2373+        pass
2374 
2375     def _write_lease_record(self, f, lease_number, lease_info):
2376         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2377hunk ./src/allmydata/storage/backends/null/core.py 84
2378             if data:
2379                 yield LeaseInfo().from_immutable_data(data)
2380 
2381-    def add_lease(self, lease_info):
2382-        f = open(self.fname, 'rb+')
2383-        num_leases = self._read_num_leases(f)
2384-        self._write_lease_record(f, num_leases, lease_info)
2385-        self._write_num_leases(f, num_leases+1)
2386-        f.close()
2387+    def add_lease(self, lease):
2388+        pass
2389 
2390     def renew_lease(self, renew_secret, new_expire_time):
2391         for i,lease in enumerate(self.get_leases()):
2392hunk ./src/allmydata/test/test_backends.py 32
2393                      'sharetypes' : None}
2394 
2395 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2396-    @mock.patch('time.time')
2397-    @mock.patch('os.mkdir')
2398-    @mock.patch('__builtin__.open')
2399-    @mock.patch('os.listdir')
2400-    @mock.patch('os.path.isdir')
2401-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2402-        """ This tests whether a server instance can be constructed
2403-        with a null backend. The server instance fails the test if it
2404-        tries to read or write to the file system. """
2405-
2406-        # Now begin the test.
2407-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2408-
2409-        self.failIf(mockisdir.called)
2410-        self.failIf(mocklistdir.called)
2411-        self.failIf(mockopen.called)
2412-        self.failIf(mockmkdir.called)
2413-
2414-        # You passed!
2415-
2416     @mock.patch('time.time')
2417     @mock.patch('os.mkdir')
2418     @mock.patch('__builtin__.open')
2419hunk ./src/allmydata/test/test_backends.py 53
2420                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2421         mockopen.side_effect = call_open
2422 
2423-        # Now begin the test.
2424-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2425-
2426-        self.failIf(mockisdir.called)
2427-        self.failIf(mocklistdir.called)
2428-        self.failIf(mockopen.called)
2429-        self.failIf(mockmkdir.called)
2430-        self.failIf(mocktime.called)
2431-
2432-        # You passed!
2433-
2434-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2435-    def setUp(self):
2436-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2437-
2438-    @mock.patch('os.mkdir')
2439-    @mock.patch('__builtin__.open')
2440-    @mock.patch('os.listdir')
2441-    @mock.patch('os.path.isdir')
2442-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2443-        """ Write a new share. """
2444-
2445-        # Now begin the test.
2446-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2447-        bs[0].remote_write(0, 'a')
2448-        self.failIf(mockisdir.called)
2449-        self.failIf(mocklistdir.called)
2450-        self.failIf(mockopen.called)
2451-        self.failIf(mockmkdir.called)
2452+        def call_isdir(fname):
2453+            if fname == os.path.join(tempdir,'shares'):
2454+                return True
2455+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2456+                return True
2457+            else:
2458+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2459+        mockisdir.side_effect = call_isdir
2460 
2461hunk ./src/allmydata/test/test_backends.py 62
2462-    @mock.patch('os.path.exists')
2463-    @mock.patch('os.path.getsize')
2464-    @mock.patch('__builtin__.open')
2465-    @mock.patch('os.listdir')
2466-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2467-        """ This tests whether the code correctly finds and reads
2468-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2469-        servers. There is a similar test in test_download, but that one
2470-        is from the perspective of the client and exercises a deeper
2471-        stack of code. This one is for exercising just the
2472-        StorageServer object. """
2473+        def call_mkdir(fname, mode):
2474+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2475+            self.failUnlessEqual(0777, mode)
2476+            if fname == tempdir:
2477+                return None
2478+            elif fname == os.path.join(tempdir,'shares'):
2479+                return None
2480+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2481+                return None
2482+            else:
2483+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2484+        mockmkdir.side_effect = call_mkdir
2485 
2486         # Now begin the test.
2487hunk ./src/allmydata/test/test_backends.py 76
2488-        bs = self.s.remote_get_buckets('teststorage_index')
2489+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2490 
2491hunk ./src/allmydata/test/test_backends.py 78
2492-        self.failUnlessEqual(len(bs), 0)
2493-        self.failIf(mocklistdir.called)
2494-        self.failIf(mockopen.called)
2495-        self.failIf(mockgetsize.called)
2496-        self.failIf(mockexists.called)
2497+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2498 
2499 
2500 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2501hunk ./src/allmydata/test/test_backends.py 193
2502         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2503 
2504 
2505+
2506+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2507+    @mock.patch('time.time')
2508+    @mock.patch('os.mkdir')
2509+    @mock.patch('__builtin__.open')
2510+    @mock.patch('os.listdir')
2511+    @mock.patch('os.path.isdir')
2512+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2513+        """ This tests whether a file system backend instance can be
2514+        constructed. To pass the test, it has to use the
2515+        filesystem in only the prescribed ways. """
2516+
2517+        def call_open(fname, mode):
2518+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2519+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2520+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2521+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2522+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2523+                return StringIO()
2524+            else:
2525+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2526+        mockopen.side_effect = call_open
2527+
2528+        def call_isdir(fname):
2529+            if fname == os.path.join(tempdir,'shares'):
2530+                return True
2531+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2532+                return True
2533+            else:
2534+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2535+        mockisdir.side_effect = call_isdir
2536+
2537+        def call_mkdir(fname, mode):
2538+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2539+            self.failUnlessEqual(0777, mode)
2540+            if fname == tempdir:
2541+                return None
2542+            elif fname == os.path.join(tempdir,'shares'):
2543+                return None
2544+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2545+                return None
2546+            else:
2547+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2548+        mockmkdir.side_effect = call_mkdir
2549+
2550+        # Now begin the test.
2551+        DASCore('teststoredir', expiration_policy)
2552+
2553+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2554}
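The tests in checkpoint5 all follow one pattern: every mocked filesystem call gets a `side_effect` callable that whitelists the expected paths and fails the test on anything else. A standalone sketch of that pattern (names here are illustrative, not the project's harness):

```python
import os
from unittest import mock

ALLOWED_DIRS = [os.path.join('teststoredir', 'shares'),
                os.path.join('teststoredir', 'shares', 'incoming')]

def call_isdir(fname):
    # Whitelist: known directories report True; any other path is an
    # unexpected filesystem touch and raises.
    if fname in ALLOWED_DIRS:
        return True
    raise AssertionError("unexpected isdir(%r)" % (fname,))

def probe():
    with mock.patch('os.path.isdir', side_effect=call_isdir):
        return os.path.isdir(os.path.join('teststoredir', 'shares'))
```

Inside the `patch` context, the code under test can only touch the paths the test explicitly permits, which is what lets these tests run against no real filesystem at all.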
2555[checkpoint 6
2556wilcoxjg@gmail.com**20110706190824
2557 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2558] {
2559hunk ./src/allmydata/interfaces.py 100
2560                          renew_secret=LeaseRenewSecret,
2561                          cancel_secret=LeaseCancelSecret,
2562                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2563-                         allocated_size=Offset, canary=Referenceable):
2564+                         allocated_size=Offset,
2565+                         canary=Referenceable):
2566         """
2567hunk ./src/allmydata/interfaces.py 103
2568-        @param storage_index: the index of the bucket to be created or
2569+        @param storage_index: the index of the shares to be created or
2570                               increfed.
2571hunk ./src/allmydata/interfaces.py 105
2572-        @param sharenums: these are the share numbers (probably between 0 and
2573-                          99) that the sender is proposing to store on this
2574-                          server.
2575-        @param renew_secret: This is the secret used to protect bucket refresh
2576+        @param renew_secret: This is the secret used to protect shares refresh
2577                              This secret is generated by the client and
2578                              stored for later comparison by the server. Each
2579                              server is given a different secret.
2580hunk ./src/allmydata/interfaces.py 109
2581-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2582-        @param canary: If the canary is lost before close(), the bucket is
2583+        @param cancel_secret: Like renew_secret, but protects shares decref.
2584+        @param sharenums: these are the share numbers (probably between 0 and
2585+                          99) that the sender is proposing to store on this
2586+                          server.
2587+        @param allocated_size: XXX The size of the shares the client wishes to store.
2588+        @param canary: If the canary is lost before close(), the shares are
2589                        deleted.
2590hunk ./src/allmydata/interfaces.py 116
2591+
2592         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2593                  already have and allocated is what we hereby agree to accept.
2594                  New leases are added for shares in both lists.
2595hunk ./src/allmydata/interfaces.py 128
2596                   renew_secret=LeaseRenewSecret,
2597                   cancel_secret=LeaseCancelSecret):
2598         """
2599-        Add a new lease on the given bucket. If the renew_secret matches an
2600+        Add a new lease on the given shares. If the renew_secret matches an
2601         existing lease, that lease will be renewed instead. If there is no
2602         bucket for the given storage_index, return silently. (note that in
2603         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2604hunk ./src/allmydata/storage/server.py 17
2605 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2606      create_mutable_sharefile
2607 
2608-from zope.interface import implements
2609-
2610 # storage/
2611 # storage/shares/incoming
2612 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2613hunk ./src/allmydata/test/test_backends.py 6
2614 from StringIO import StringIO
2615 
2616 from allmydata.test.common_util import ReallyEqualMixin
2617+from allmydata.util.assertutil import _assert
2618 
2619 import mock, os
2620 
2621hunk ./src/allmydata/test/test_backends.py 92
2622                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2623             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2624                 return StringIO()
2625+            else:
2626+                _assert(False, "The tester code doesn't recognize this case.") 
2627+
2628         mockopen.side_effect = call_open
2629         testbackend = DASCore(tempdir, expiration_policy)
2630         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2631hunk ./src/allmydata/test/test_backends.py 109
2632 
2633         def call_listdir(dirname):
2634             self.failUnlessReallyEqual(dirname, sharedirname)
2635-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2636+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2637 
2638         mocklistdir.side_effect = call_listdir
2639 
2640hunk ./src/allmydata/test/test_backends.py 113
2641+        def call_isdir(dirname):
2642+            self.failUnlessReallyEqual(dirname, sharedirname)
2643+            return True
2644+
2645+        mockisdir.side_effect = call_isdir
2646+
2647+        def call_mkdir(dirname, permissions):
2648+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2649+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2650+            else:
2651+                return True
2652+
2653+        mockmkdir.side_effect = call_mkdir
2654+
2655         class MockFile:
2656             def __init__(self):
2657                 self.buffer = ''
2658hunk ./src/allmydata/test/test_backends.py 156
2659             return sharefile
2660 
2661         mockopen.side_effect = call_open
2662+
2663         # Now begin the test.
2664         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2665         bs[0].remote_write(0, 'a')
2666hunk ./src/allmydata/test/test_backends.py 161
2667         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2668+       
2669+        # Now test the allocated_size method.
2670+        spaceint = self.s.allocated_size()
2671 
2672     @mock.patch('os.path.exists')
2673     @mock.patch('os.path.getsize')
2674}
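The interface docstring rewritten in checkpoint 6 pins down the `(alreadygot, allocated)` return contract of `allocate_buckets`. A toy sketch of just that decision (hypothetical function; the real server also handles leases, canaries, and per-bucket state):

```python
def allocate(existing_shares, requested_sharenums, remaining_space,
             max_space_per_bucket):
    """Report which requested share numbers are already held, and accept
    new ones only while the server has space for another full bucket."""
    alreadygot = set(requested_sharenums) & set(existing_shares)
    allocated = set()
    for shnum in sorted(set(requested_sharenums) - alreadygot):
        if remaining_space >= max_space_per_bucket:
            allocated.add(shnum)
            remaining_space -= max_space_per_bucket
    return alreadygot, allocated
```

New leases are added for shares in both sets, which is why a client re-uploading an existing share still refreshes it.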
2675[checkpoint 7
2676wilcoxjg@gmail.com**20110706200820
2677 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2678] hunk ./src/allmydata/test/test_backends.py 164
2679         
2680         # Now test the allocated_size method.
2681         spaceint = self.s.allocated_size()
2682+        self.failUnlessReallyEqual(spaceint, 1)
2683 
2684     @mock.patch('os.path.exists')
2685     @mock.patch('os.path.getsize')
2686[checkpoint8
2687wilcoxjg@gmail.com**20110706223126
2688 Ignore-this: 97336180883cb798b16f15411179f827
2689   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2690] hunk ./src/allmydata/test/test_backends.py 32
2691                      'cutoff_date' : None,
2692                      'sharetypes' : None}
2693 
2694+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2695+    def setUp(self):
2696+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2697+
2698+    @mock.patch('os.mkdir')
2699+    @mock.patch('__builtin__.open')
2700+    @mock.patch('os.listdir')
2701+    @mock.patch('os.path.isdir')
2702+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2703+        """ Write a new share. """
2704+
2705+        # Now begin the test.
2706+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2707+        bs[0].remote_write(0, 'a')
2708+        self.failIf(mockisdir.called)
2709+        self.failIf(mocklistdir.called)
2710+        self.failIf(mockopen.called)
2711+        self.failIf(mockmkdir.called)
2712+
2713 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2714     @mock.patch('time.time')
2715     @mock.patch('os.mkdir')
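Editorial note: checkpoint8's message says the null backend exists to test "unlimited space" — it is a null-object stand-in that accepts every allocation and discards every write, so `TestServerNullBackend.test_write_share` can check that no filesystem call is ever made. A minimal sketch of that idea (names here are illustrative, not the actual `NullCore` API):

```python
class NullBucketWriter(object):
    """Discards all writes, like /dev/null."""
    def remote_write(self, offset, data):
        pass  # accept any amount of data without touching the filesystem

class NullBackend(object):
    """A no-op backend: every allocation succeeds, nothing is stored."""
    def get_available_space(self):
        return None  # None conventionally means "no limit"

    def make_bucket_writer(self, storage_index, shnum, max_space, lease_info, canary):
        return NullBucketWriter()

backend = NullBackend()
bw = backend.make_bucket_writer('teststorage_index', 0, 2 ** 40, None, None)
bw.remote_write(0, 'a' * 10 ** 6)  # no disk I/O occurs
```

Because no code path reaches `os.mkdir`, `open`, `os.listdir`, or `os.path.isdir`, the four `failIf(mock....called)` assertions in the test above all hold.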
2716[checkpoint 9
2717wilcoxjg@gmail.com**20110707042942
2718 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2719] {
2720hunk ./src/allmydata/storage/backends/das/core.py 88
2721                     filename = os.path.join(finalstoragedir, f)
2722                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2723         except OSError:
2724-            # Commonly caused by there being no buckets at all.
2725+            # Commonly caused by there being no shares at all.
2726             pass
2727         
2728     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2729hunk ./src/allmydata/storage/backends/das/core.py 141
2730         self.storage_index = storageindex
2731         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2732         self._max_size = max_size
2733+        self.incomingdir = os.path.join(sharedir, 'incoming')
2734+        si_dir = storage_index_to_dir(storageindex)
2735+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2736+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2737         if create:
2738             # touch the file, so later callers will see that we're working on
2739             # it. Also construct the metadata.
2740hunk ./src/allmydata/storage/backends/das/core.py 177
2741             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2742         self._data_offset = 0xc
2743 
2744+    def close(self):
2745+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2746+        fileutil.rename(self.incominghome, self.finalhome)
2747+        try:
2748+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2749+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2750+            # these directories lying around forever, but the delete might
2751+            # fail if we're working on another share for the same storage
2752+            # index (like ab/abcde/5). The alternative approach would be to
2753+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2754+            # ShareWriter), each of which is responsible for a single
2755+            # directory on disk, and have them use reference counting of
2756+            # their children to know when they should do the rmdir. This
2757+            # approach is simpler, but relies on os.rmdir refusing to delete
2758+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2759+            os.rmdir(os.path.dirname(self.incominghome))
2760+            # we also delete the grandparent (prefix) directory, .../ab ,
2761+            # again to avoid leaving directories lying around. This might
2762+            # fail if there is another bucket open that shares a prefix (like
2763+            # ab/abfff).
2764+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2765+            # we leave the great-grandparent (incoming/) directory in place.
2766+        except EnvironmentError:
2767+            # ignore the "can't rmdir because the directory is not empty"
2768+            # exceptions, those are normal consequences of the
2769+            # above-mentioned conditions.
2770+            pass
2772+       
2773+    def stat(self):
2774+        return os.stat(self.finalhome)[stat.ST_SIZE]
2775+
2776     def get_shnum(self):
2777         return self.shnum
2778 
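Editorial note: the `close()` method moved into `core.py` above relies on `os.rmdir` refusing to remove a non-empty directory — that refusal is the guard against deleting a sibling share's directory, which is why the comment warns against `fileutil.rm_dir()`. A standalone sketch of that cleanup idiom (the paths are made up for the demo):

```python
import os
import tempfile

def finalize(incominghome, finalhome):
    """Move a finished share into place, then prune now-empty parents."""
    if not os.path.isdir(os.path.dirname(finalhome)):
        os.makedirs(os.path.dirname(finalhome))
    os.rename(incominghome, finalhome)
    try:
        # os.rmdir raises OSError if the directory still holds other
        # shares; that is exactly the behaviour we depend on, so a
        # recursive delete must never be used here.
        os.rmdir(os.path.dirname(incominghome))                    # .../ab/abcde
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))  # .../ab
    except OSError:
        pass  # non-empty parents are simply left alone

root = tempfile.mkdtemp()
inc = os.path.join(root, 'incoming', 'ab', 'abcde', '4')
fin = os.path.join(root, 'shares', 'ab', 'abcde', '4')
os.makedirs(os.path.dirname(inc))
open(inc, 'w').close()
finalize(inc, fin)
# The share is final, and the empty incoming/ab/abcde and incoming/ab
# directories are gone, while incoming/ itself remains.
```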
2779hunk ./src/allmydata/storage/immutable.py 7
2780 
2781 from zope.interface import implements
2782 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2783-from allmydata.util import base32, fileutil, log
2784+from allmydata.util import base32, log
2785 from allmydata.util.assertutil import precondition
2786 from allmydata.util.hashutil import constant_time_compare
2787 from allmydata.storage.lease import LeaseInfo
2788hunk ./src/allmydata/storage/immutable.py 44
2789     def remote_close(self):
2790         precondition(not self.closed)
2791         start = time.time()
2792-
2793-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2794-        fileutil.rename(self.incominghome, self.finalhome)
2795-        try:
2796-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2797-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2798-            # these directories lying around forever, but the delete might
2799-            # fail if we're working on another share for the same storage
2800-            # index (like ab/abcde/5). The alternative approach would be to
2801-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2802-            # ShareWriter), each of which is responsible for a single
2803-            # directory on disk, and have them use reference counting of
2804-            # their children to know when they should do the rmdir. This
2805-            # approach is simpler, but relies on os.rmdir refusing to delete
2806-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2807-            os.rmdir(os.path.dirname(self.incominghome))
2808-            # we also delete the grandparent (prefix) directory, .../ab ,
2809-            # again to avoid leaving directories lying around. This might
2810-            # fail if there is another bucket open that shares a prefix (like
2811-            # ab/abfff).
2812-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2813-            # we leave the great-grandparent (incoming/) directory in place.
2814-        except EnvironmentError:
2815-            # ignore the "can't rmdir because the directory is not empty"
2816-            # exceptions, those are normal consequences of the
2817-            # above-mentioned conditions.
2818-            pass
2819+        self._sharefile.close()
2820         self._sharefile = None
2821         self.closed = True
2822         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2823hunk ./src/allmydata/storage/immutable.py 49
2824 
2825-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2826+        filelen = self._sharefile.stat()
2827         self.ss.bucket_writer_closed(self, filelen)
2828         self.ss.add_latency("close", time.time() - start)
2829         self.ss.count("close")
2830hunk ./src/allmydata/storage/server.py 45
2831         self._active_writers = weakref.WeakKeyDictionary()
2832         self.backend = backend
2833         self.backend.setServiceParent(self)
2834+        self.backend.set_storage_server(self)
2835         log.msg("StorageServer created", facility="tahoe.storage")
2836 
2837         self.latencies = {"allocate": [], # immutable
2838hunk ./src/allmydata/storage/server.py 220
2839 
2840         for shnum in (sharenums - alreadygot):
2841             if (not limited) or (remaining_space >= max_space_per_bucket):
2842-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2843-                self.backend.set_storage_server(self)
2844                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2845                                                      max_space_per_bucket, lease_info, canary)
2846                 bucketwriters[shnum] = bw
2847hunk ./src/allmydata/test/test_backends.py 117
2848         mockopen.side_effect = call_open
2849         testbackend = DASCore(tempdir, expiration_policy)
2850         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2851-
2852+   
2853+    @mock.patch('allmydata.util.fileutil.get_available_space')
2854     @mock.patch('time.time')
2855     @mock.patch('os.mkdir')
2856     @mock.patch('__builtin__.open')
2857hunk ./src/allmydata/test/test_backends.py 124
2858     @mock.patch('os.listdir')
2859     @mock.patch('os.path.isdir')
2860-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2861+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2862+                             mockget_available_space):
2863         """ Write a new share. """
2864 
2865         def call_listdir(dirname):
2866hunk ./src/allmydata/test/test_backends.py 148
2867 
2868         mockmkdir.side_effect = call_mkdir
2869 
2870+        def call_get_available_space(storedir, reserved_space):
2871+            self.failUnlessReallyEqual(storedir, tempdir)
2872+            return 1
2873+
2874+        mockget_available_space.side_effect = call_get_available_space
2875+
2876         class MockFile:
2877             def __init__(self):
2878                 self.buffer = ''
2879hunk ./src/allmydata/test/test_backends.py 188
2880         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2881         bs[0].remote_write(0, 'a')
2882         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2883-       
2884+
2885+        # What happens when there's not enough space for the client's request?
2886+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2887+
2888         # Now test the allocated_size method.
2889         spaceint = self.s.allocated_size()
2890         self.failUnlessReallyEqual(spaceint, 1)
2891}
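Editorial note: each checkpoint grows the `@mock.patch` stack on `test_write_share`, and the mock arguments arrive in the *reverse* order of the decorator list — decorators apply bottom-up, so the last-listed patch binds to the first parameter. A minimal demonstration of that ordering rule (using stdlib `unittest.mock`; the tests in this bundle use the standalone `mock` package, whose API is the same):

```python
from unittest import mock
import os.path

@mock.patch('os.path.getsize')
@mock.patch('os.path.exists')
def check(mockexists, mockgetsize):
    # The bottom-most decorator is applied first, so os.path.exists
    # binds to the FIRST mock argument -- mirroring the reversed
    # parameter order in test_write_share's signature above.
    mockexists.return_value = True
    mockgetsize.return_value = 12
    assert os.path.exists('/no/such/file') is True   # intercepted
    assert os.path.getsize('/no/such/file') == 12    # intercepted
    return True
```

Calling `check()` supplies both mocks automatically; mixing up the parameter order is a classic source of confusing failures when the stack grows, as it does across checkpoints 7 through 10.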
2892[checkpoint10
2893wilcoxjg@gmail.com**20110707172049
2894 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2895] {
2896hunk ./src/allmydata/test/test_backends.py 20
2897 # The following share file contents was generated with
2898 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2899 # with share data == 'a'.
2900-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2901+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2902+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2903+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2904 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2905 
2906hunk ./src/allmydata/test/test_backends.py 25
2907+testnodeid = 'testnodeidxxxxxxxxxx'
2908 tempdir = 'teststoredir'
2909 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2910 sharefname = os.path.join(sharedirname, '0')
2911hunk ./src/allmydata/test/test_backends.py 37
2912 
2913 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2914     def setUp(self):
2915-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2916+        self.s = StorageServer(testnodeid, backend=NullCore())
2917 
2918     @mock.patch('os.mkdir')
2919     @mock.patch('__builtin__.open')
2920hunk ./src/allmydata/test/test_backends.py 99
2921         mockmkdir.side_effect = call_mkdir
2922 
2923         # Now begin the test.
2924-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2925+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2926 
2927         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2928 
2929hunk ./src/allmydata/test/test_backends.py 119
2930 
2931         mockopen.side_effect = call_open
2932         testbackend = DASCore(tempdir, expiration_policy)
2933-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2934-   
2935+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2936+       
2937+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2938     @mock.patch('allmydata.util.fileutil.get_available_space')
2939     @mock.patch('time.time')
2940     @mock.patch('os.mkdir')
2941hunk ./src/allmydata/test/test_backends.py 129
2942     @mock.patch('os.listdir')
2943     @mock.patch('os.path.isdir')
2944     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2945-                             mockget_available_space):
2946+                             mockget_available_space, mockget_shares):
2947         """ Write a new share. """
2948 
2949         def call_listdir(dirname):
2950hunk ./src/allmydata/test/test_backends.py 139
2951         mocklistdir.side_effect = call_listdir
2952 
2953         def call_isdir(dirname):
2954+            #XXX Should there be any other tests here?
2955             self.failUnlessReallyEqual(dirname, sharedirname)
2956             return True
2957 
2958hunk ./src/allmydata/test/test_backends.py 159
2959 
2960         mockget_available_space.side_effect = call_get_available_space
2961 
2962+        mocktime.return_value = 0
2963+        class MockShare:
2964+            def __init__(self):
2965+                self.shnum = 1
2966+               
2967+            def add_or_renew_lease(elf, lease_info):
2968+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2969+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2970+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2971+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2972+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2973+               
2974+
2975+        share = MockShare()
2976+        def call_get_shares(storageindex):
2977+            return [share]
2978+
2979+        mockget_shares.side_effect = call_get_shares
2980+
2981         class MockFile:
2982             def __init__(self):
2983                 self.buffer = ''
2984hunk ./src/allmydata/test/test_backends.py 199
2985             def tell(self):
2986                 return self.pos
2987 
2988-        mocktime.return_value = 0
2989 
2990         sharefile = MockFile()
2991         def call_open(fname, mode):
2992}
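Editorial note: in the `MockShare` added by checkpoint10, the instance parameter of `add_or_renew_lease` is deliberately named `elf` rather than `self`. Because `MockShare` is defined inside `test_write_share`, the bare name `self` in the method body is captured by closure from the enclosing test method and therefore refers to the `TestCase`, whose `failUnlessReallyEqual` assertions the mock invokes. A distilled example of that closure trick, with generic names:

```python
class FakeTestCase(object):
    """Stands in for the enclosing unittest.TestCase."""
    def check_equal(self, a, b):
        assert a == b
        return True

def build_mock(self):  # 'self' here plays the role of the test case
    class MockShare(object):
        def add_or_renew_lease(elf, value):
            # 'elf' is the MockShare instance; 'self' is captured from
            # the enclosing function's scope and is the test case.
            return self.check_equal(value, 42)
    return MockShare()

tc = FakeTestCase()
share = build_mock(tc)
assert share.add_or_renew_lease(42) is True
```

Class bodies do not create a closure scope, but methods defined inside them do close over the enclosing function's locals, which is what makes the `elf` convention work in both Python 2 and 3.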
2993
2994Context:
2995
2996[add Protovis.js-based download-status timeline visualization
2997Brian Warner <warner@lothar.com>**20110629222606
2998 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
2999 
3000 provide status overlap info on the webapi t=json output, add decode/decrypt
3001 rate tooltips, add zoomin/zoomout buttons
3002]
3003[add more download-status data, fix tests
3004Brian Warner <warner@lothar.com>**20110629222555
3005 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
3006]
3007[prepare for viz: improve DownloadStatus events
3008Brian Warner <warner@lothar.com>**20110629222542
3009 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
3010 
3011 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
3012]
3013[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
3014zooko@zooko.com**20110629185711
3015 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
3016]
3017[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
3018david-sarah@jacaranda.org**20110130235809
3019 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
3020]
3021[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
3022david-sarah@jacaranda.org**20110626054124
3023 Ignore-this: abb864427a1b91bd10d5132b4589fd90
3024]
3025[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
3026david-sarah@jacaranda.org**20110623205528
3027 Ignore-this: c63e23146c39195de52fb17c7c49b2da
3028]
3029[Rename test_package_initialization.py to (much shorter) test_import.py .
3030Brian Warner <warner@lothar.com>**20110611190234
3031 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
3032 
3033 The former name was making my 'ls' listings hard to read, by forcing them
3034 down to just two columns.
3035]
3036[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
3037zooko@zooko.com**20110611163741
3038 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
3039 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
3040 fixes #1412
3041]
3042[wui: right-align the size column in the WUI
3043zooko@zooko.com**20110611153758
3044 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
3045 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
3046 fixes #1412
3047]
3048[docs: three minor fixes
3049zooko@zooko.com**20110610121656
3050 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
3051 CREDITS for arc for stats tweak
3052 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
3053 English usage tweak
3054]
3055[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
3056david-sarah@jacaranda.org**20110609223719
3057 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
3058]
3059[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
3060wilcoxjg@gmail.com**20110527120135
3061 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
3062 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
3063 NEWS.rst, stats.py: documentation of change to get_latencies
3064 stats.rst: now documents percentile modification in get_latencies
3065 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
3066 fixes #1392
3067]
3068[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
3069david-sarah@jacaranda.org**20110517011214
3070 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
3071]
3072[docs: convert NEWS to NEWS.rst and change all references to it.
3073david-sarah@jacaranda.org**20110517010255
3074 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
3075]
3076[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
3077david-sarah@jacaranda.org**20110512140559
3078 Ignore-this: 784548fc5367fac5450df1c46890876d
3079]
3080[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
3081david-sarah@jacaranda.org**20110130164923
3082 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
3083]
3084[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
3085zooko@zooko.com**20110128142006
3086 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
3087 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
3088]
3089[M-x whitespace-cleanup
3090zooko@zooko.com**20110510193653
3091 Ignore-this: dea02f831298c0f65ad096960e7df5c7
3092]
3093[docs: fix typo in running.rst, thanks to arch_o_median
3094zooko@zooko.com**20110510193633
3095 Ignore-this: ca06de166a46abbc61140513918e79e8
3096]
3097[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
3098david-sarah@jacaranda.org**20110204204902
3099 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
3100]
3101[relnotes.txt: forseeable -> foreseeable. refs #1342
3102david-sarah@jacaranda.org**20110204204116
3103 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
3104]
3105[replace remaining .html docs with .rst docs
3106zooko@zooko.com**20110510191650
3107 Ignore-this: d557d960a986d4ac8216d1677d236399
3108 Remove install.html (long since deprecated).
3109 Also replace some obsolete references to install.html with references to quickstart.rst.
3110 Fix some broken internal references within docs/historical/historical_known_issues.txt.
3111 Thanks to Ravi Pinjala and Patrick McDonald.
3112 refs #1227
3113]
3114[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
3115zooko@zooko.com**20110428055232
3116 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
3117]
3118[munin tahoe_files plugin: fix incorrect file count
3119francois@ctrlaltdel.ch**20110428055312
3120 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
3121 fixes #1391
3122]
3123[corrected "k must never be smaller than N" to "k must never be greater than N"
3124secorp@allmydata.org**20110425010308
3125 Ignore-this: 233129505d6c70860087f22541805eac
3126]
3127[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
3128david-sarah@jacaranda.org**20110411190738
3129 Ignore-this: 7847d26bc117c328c679f08a7baee519
3130]
3131[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
3132david-sarah@jacaranda.org**20110410155844
3133 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
3134]
3135[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
3136david-sarah@jacaranda.org**20110410155705
3137 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
3138]
3139[remove unused variable detected by pyflakes
3140zooko@zooko.com**20110407172231
3141 Ignore-this: 7344652d5e0720af822070d91f03daf9
3142]
3143[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
3144david-sarah@jacaranda.org**20110401202750
3145 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
3146]
3147[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
3148Brian Warner <warner@lothar.com>**20110325232511
3149 Ignore-this: d5307faa6900f143193bfbe14e0f01a
3150]
3151[control.py: remove all uses of s.get_serverid()
3152warner@lothar.com**20110227011203
3153 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3154]
3155[web: remove some uses of s.get_serverid(), not all
3156warner@lothar.com**20110227011159
3157 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3158]
3159[immutable/downloader/fetcher.py: remove all get_serverid() calls
3160warner@lothar.com**20110227011156
3161 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3162]
3163[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3164warner@lothar.com**20110227011153
3165 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3166 
3167 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3168 _shares_from_server dict was being popped incorrectly (using shnum as the
3169 index instead of serverid). I'm still thinking through the consequences of
3170 this bug. It was probably benign and really hard to detect. I think it would
3171 cause us to incorrectly believe that we're pulling too many shares from a
3172 server, and thus prefer a different server rather than asking for a second
3173 share from the first server. The diversity code is intended to spread out the
3174 number of shares simultaneously being requested from each server, but with
3175 this bug, it might be spreading out the total number of shares requested at
3176 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
3177 segment, so the effect doesn't last very long).
3178]
3179[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3180warner@lothar.com**20110227011150
3181 Ignore-this: d8d56dd8e7b280792b40105e13664554
3182 
3183 test_download.py: create+check MyShare instances better, make sure they share
3184 Server objects, now that finder.py cares
3185]
3186[immutable/downloader/finder.py: reduce use of get_serverid(), one left
3187warner@lothar.com**20110227011146
3188 Ignore-this: 5785be173b491ae8a78faf5142892020
3189]
3190[immutable/offloaded.py: reduce use of get_serverid() a bit more
3191warner@lothar.com**20110227011142
3192 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
3193]
3194[immutable/upload.py: reduce use of get_serverid()
3195warner@lothar.com**20110227011138
3196 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
3197]
3198[immutable/checker.py: remove some uses of s.get_serverid(), not all
3199warner@lothar.com**20110227011134
3200 Ignore-this: e480a37efa9e94e8016d826c492f626e
3201]
3202[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
3203warner@lothar.com**20110227011132
3204 Ignore-this: 6078279ddf42b179996a4b53bee8c421
3205 MockIServer stubs
3206]
3207[upload.py: rearrange _make_trackers a bit, no behavior changes
3208warner@lothar.com**20110227011128
3209 Ignore-this: 296d4819e2af452b107177aef6ebb40f
3210]
3211[happinessutil.py: finally rename merge_peers to merge_servers
3212warner@lothar.com**20110227011124
3213 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
3214]
3215[test_upload.py: factor out FakeServerTracker
3216warner@lothar.com**20110227011120
3217 Ignore-this: 6c182cba90e908221099472cc159325b
3218]
3219[test_upload.py: server-vs-tracker cleanup
3220warner@lothar.com**20110227011115
3221 Ignore-this: 2915133be1a3ba456e8603885437e03
3222]
3223[happinessutil.py: server-vs-tracker cleanup
3224warner@lothar.com**20110227011111
3225 Ignore-this: b856c84033562d7d718cae7cb01085a9
3226]
3227[upload.py: more tracker-vs-server cleanup
3228warner@lothar.com**20110227011107
3229 Ignore-this: bb75ed2afef55e47c085b35def2de315
3230]
3231[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
3232warner@lothar.com**20110227011103
3233 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
3234]
3235[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
3236warner@lothar.com**20110227011100
3237 Ignore-this: 7ea858755cbe5896ac212a925840fe68
3238 
3239 No behavioral changes, just updating variable/method names and log messages.
3240 The effects outside these three files should be minimal: some exception
3241 messages changed (to say "server" instead of "peer"), and some internal class
3242 names were changed. A few things still use "peer" to minimize external
3243 changes, like UploadResults.timings["peer_selection"] and
3244 happinessutil.merge_peers, which can be changed later.
3245]
3246[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
3247warner@lothar.com**20110227011056
3248 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
3249]
3250[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
3251warner@lothar.com**20110227011051
3252 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
3253]
3254[test: increase timeout on a network test because Francois's ARM machine hit that timeout
3255zooko@zooko.com**20110317165909
3256 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
3257 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
3258]
3259[docs/configuration.rst: add a "Frontend Configuration" section
3260Brian Warner <warner@lothar.com>**20110222014323
3261 Ignore-this: 657018aa501fe4f0efef9851628444ca
3262 
3263 this points to docs/frontends/*.rst, which were previously underlinked
3264]
3265[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3266"Brian Warner <warner@lothar.com>"**20110221061544
3267 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3268]
3269[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3270david-sarah@jacaranda.org**20110221015817
3271 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3272]
3273[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3274david-sarah@jacaranda.org**20110221020125
3275 Ignore-this: b0744ed58f161bf188e037bad077fc48
3276]
3277[Refactor StorageFarmBroker handling of servers
3278Brian Warner <warner@lothar.com>**20110221015804
3279 Ignore-this: 842144ed92f5717699b8f580eab32a51
3280 
3281 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3282 "descriptor" with "server". Other replacements:
3283 
3284  get_all_servers -> get_connected_servers/get_known_servers
3285  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3286 
3287 This change still needs to be pushed further down: lots of code is now
3288 getting the IServer and then distributing (peerid, rref) internally.
3289 Instead, it ought to distribute the IServer internally and delay
3290 extracting a serverid or rref until the last moment.
3291 
3292 no_network.py was updated to retain parallelism.
3293]
3294[TAG allmydata-tahoe-1.8.2
3295warner@lothar.com**20110131020101]
3296Patch bundle hash:
3297c7e36cea13160ea5ffa89dd80b80b44bc1a7d349