Ticket #999: checkpoint8.darcs.patch

File checkpoint8.darcs.patch, 128.9 KB (added by arch_o_median, at 2011-07-06T22:31:09Z)

The null backend test is useful for testing what happens when there's no effective limit on the backend.
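The idea can be sketched in a few lines. This is a simplified illustration of what a "null" backend looks like, not the patch's exact API (the real classes in the patch take more parameters such as `max_space_per_bucket`, `lease_info`, and `canary`): it reports no space limit and discards all writes, so a server wired to it never touches the real filesystem.

```python
class NullBucketWriter:
    """Accepts share writes and throws the data away."""
    def remote_write(self, offset, data):
        return  # discard everything

class NullBackend:
    """Mock-like backend with no effective storage limit."""
    def get_available_space(self):
        # None conventionally means "no limit known", so the server
        # treats capacity as effectively unbounded.
        return None

    def get_bucket_shares(self, storage_index):
        # Nothing is ever stored, so no shares can be found.
        return set()

    def make_bucket_writer(self, storage_index, shnum):
        return NullBucketWriter()
```

Because `get_available_space()` returns None, the server always believes it has room, which is exactly the "no effective limit" condition the test exercises.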

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ Write a new share. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
738+        bs[0].remote_write(0, 'a')
739+        self.failIf(mockisdir.called)
740+        self.failIf(mocklistdir.called)
741+        self.failIf(mockopen.called)
742+        self.failIf(mockmkdir.called)
743+
744+    @mock.patch('os.path.exists')
745+    @mock.patch('os.path.getsize')
746+    @mock.patch('__builtin__.open')
747+    @mock.patch('os.listdir')
748+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
749+        """ This tests whether the code correctly finds and reads
750+        shares written out by old (Tahoe-LAFS <= v1.8.2)
751+        servers. There is a similar test in test_download, but that one
752+        is from the perspective of the client and exercises a deeper
753+        stack of code. This one is for exercising just the
754+        StorageServer object. """
755+
756+        # Now begin the test.
757+        bs = self.s.remote_get_buckets('teststorage_index')
758+
759+        self.failUnlessEqual(len(bs), 0)
760+        self.failIf(mocklistdir.called)
761+        self.failIf(mockopen.called)
762+        self.failIf(mockgetsize.called)
763+        self.failIf(mockexists.called)
764+
765+
766+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
767     @mock.patch('__builtin__.open')
768     def setUp(self, mockopen):
769         def call_open(fname, mode):
770hunk ./src/allmydata/test/test_backends.py 126
771                 return StringIO()
772         mockopen.side_effect = call_open
773 
774-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
775-
776+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
777 
778     @mock.patch('time.time')
779     @mock.patch('os.mkdir')
780hunk ./src/allmydata/test/test_backends.py 134
781     @mock.patch('os.listdir')
782     @mock.patch('os.path.isdir')
783     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
784-        """Handle a report of corruption."""
785+        """ Write a new share. """
786 
787         def call_listdir(dirname):
788             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
789hunk ./src/allmydata/test/test_backends.py 173
790         mockopen.side_effect = call_open
791         # Now begin the test.
792         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
793-        print bs
794         bs[0].remote_write(0, 'a')
795         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
796 
797hunk ./src/allmydata/test/test_backends.py 176
798-
799     @mock.patch('os.path.exists')
800     @mock.patch('os.path.getsize')
801     @mock.patch('__builtin__.open')
802hunk ./src/allmydata/test/test_backends.py 218
803 
804         self.failUnlessEqual(len(bs), 1)
805         b = bs[0]
806+        # These should match by definition; the next two cases cover behaviors that are less obvious.
807         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
808         # If you try to read past the end you get as much data as is there.
809         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
810hunk ./src/allmydata/test/test_backends.py 224
811         # If you start reading past the end of the file you get the empty string.
812         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
813+
814+
815}
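The tests in the patch above all follow one mocking pattern: patch out every filesystem entry point, run the code under test, then assert that none of the mocks were called. A minimal, self-contained sketch of that pattern (this uses Python 3's unittest.mock rather than the Python 2 `mock` package the patch imports, and `FakeNullBackend` is an illustrative stand-in, not the patch's NullBackend):

```python
import unittest
from unittest import mock

class FakeNullBackend:
    """Stand-in for the patch's NullBackend: stores nothing, touches no files."""
    def get_available_space(self):
        return None  # None means "unlimited / unknown"

class TestNoFilesystemAccess(unittest.TestCase):
    # patch decorators apply bottom-up, so the mock arguments arrive
    # innermost-decorator first -- the same ordering the patch relies on.
    @mock.patch('os.mkdir')
    @mock.patch('os.listdir')
    def test_null_backend_touches_nothing(self, mocklistdir, mockmkdir):
        backend = FakeNullBackend()
        self.assertIsNone(backend.get_available_space())
        # The heart of the technique: a mock records every call, so
        # .called stays False only if the patched function was never used.
        self.assertFalse(mocklistdir.called)
        self.assertFalse(mockmkdir.called)
```

The `.called` checks are the same assertions the patch spells as `self.failIf(mockisdir.called)` and friends.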
816[a temp patch used as a snapshot
817wilcoxjg@gmail.com**20110626052732
818 Ignore-this: 95f05e314eaec870afa04c76d979aa44
819] {
820hunk ./docs/configuration.rst 637
821   [storage]
822   enabled = True
823   readonly = True
824-  sizelimit = 10000000000
825 
826 
827   [helper]
828hunk ./docs/garbage-collection.rst 16
829 
830 When a file or directory in the virtual filesystem is no longer referenced,
831 the space that its shares occupied on each storage server can be freed,
832-making room for other shares. Tahoe currently uses a garbage collection
833+making room for other shares. Tahoe uses a garbage collection
834 ("GC") mechanism to implement this space-reclamation process. Each share has
835 one or more "leases", which are managed by clients who want the
836 file/directory to be retained. The storage server accepts each share for a
837hunk ./docs/garbage-collection.rst 34
838 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
839 If lease renewal occurs quickly and with 100% reliability, then any renewal
840 time that is shorter than the lease duration will suffice, but a larger ratio
841-of duration-over-renewal-time will be more robust in the face of occasional
842+of lease duration to renewal time will be more robust in the face of occasional
843 delays or failures.
844 
845 The current recommended values for a small Tahoe grid are to renew the leases
846replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
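The duration-to-renewal-time tradeoff the doc change above touches on can be made concrete with a little arithmetic (the numbers here are illustrative, not the docs' recommendations): with a lease duration D and a renewal interval R, a share survives roughly D // R - 1 consecutive missed renewals before its lease expires.

```python
def missed_renewals_survivable(lease_duration_days, renewal_interval_days):
    """How many consecutive renewal failures a lease can tolerate.

    After a successful renewal the lease lasts lease_duration_days, so
    lease_duration_days // renewal_interval_days renewal opportunities
    fit inside that window; all but the last one may fail.
    """
    return max(0, lease_duration_days // renewal_interval_days - 1)

# A 31-day lease renewed weekly tolerates about 3 missed renewals;
# renewing only once per lease duration (R == D) tolerates none.
robust = missed_renewals_survivable(31, 7)
fragile = missed_renewals_survivable(31, 31)
```

This is why a larger ratio of lease duration to renewal time is more robust in the face of occasional delays or failures.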
847hunk ./src/allmydata/client.py 260
848             sharetypes.append("mutable")
849         expiration_sharetypes = tuple(sharetypes)
850 
851+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
852+            xyz
853+        xyz
854         ss = StorageServer(storedir, self.nodeid,
855                            reserved_space=reserved,
856                            discard_storage=discard,
857hunk ./src/allmydata/storage/crawler.py 234
858         f = open(tmpfile, "wb")
859         pickle.dump(self.state, f)
860         f.close()
861-        fileutil.move_into_place(tmpfile, self.statefile)
862+        fileutil.move_into_place(tmpfile, self.statefname)
863 
864     def startService(self):
865         # arrange things to look like we were just sleeping, so
866}
867[snapshot of progress on backend implementation (not suitable for trunk)
868wilcoxjg@gmail.com**20110626053244
869 Ignore-this: 50c764af791c2b99ada8289546806a0a
870] {
871adddir ./src/allmydata/storage/backends
872adddir ./src/allmydata/storage/backends/das
873move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
874adddir ./src/allmydata/storage/backends/null
875hunk ./src/allmydata/interfaces.py 270
876         store that on disk.
877         """
878 
879+class IStorageBackend(Interface):
880+    """
881+    Objects of this kind live on the server side and are used by the
882+    storage server object.
883+    """
884+    def get_available_space(self, reserved_space):
885+        """ Returns available space for share storage in bytes, or
886+        None if this information is not available or if the available
887+        space is unlimited.
888+
889+        If the backend is configured for read-only mode then this will
890+        return 0.
891+
892+        reserved_space is how many bytes to subtract from the answer:
893+        pass the number of bytes you would like to leave unused on this
894+        filesystem. """
895+
896+    def get_bucket_shares(self):
897+        """XXX"""
898+
899+    def get_share(self):
900+        """XXX"""
901+
902+    def make_bucket_writer(self):
903+        """XXX"""
904+
905+class IStorageBackendShare(Interface):
906+    """
907+    This object can hold up to all of the share data.  It is intended
908+    for lazy evaluation, such that in many use cases substantially less
909+    than all of the share data will be accessed.
910+    """
911+    def is_complete(self):
912+        """
913+        Returns the share state, or None if the share does not exist.
914+        """
915+
916 class IStorageBucketWriter(Interface):
917     """
918     Objects of this kind live on the client side.
919hunk ./src/allmydata/interfaces.py 2492
920 
921 class EmptyPathnameComponentError(Exception):
922     """The webapi disallows empty pathname components."""
923+
924+class IShareStore(Interface):
925+    pass
926+
927addfile ./src/allmydata/storage/backends/__init__.py
928addfile ./src/allmydata/storage/backends/das/__init__.py
929addfile ./src/allmydata/storage/backends/das/core.py
930hunk ./src/allmydata/storage/backends/das/core.py 1
931+from allmydata.interfaces import IStorageBackend
932+from allmydata.storage.backends.base import Backend
933+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
934+from allmydata.util.assertutil import precondition
935+
936+import os, re, weakref, struct, time
937+
938+from foolscap.api import Referenceable
939+from twisted.application import service
940+
941+from zope.interface import implements
942+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
943+from allmydata.util import fileutil, idlib, log, time_format
944+import allmydata # for __full_version__
945+
946+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
947+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
948+from allmydata.storage.lease import LeaseInfo
949+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
950+     create_mutable_sharefile
951+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
952+from allmydata.storage.crawler import FSBucketCountingCrawler
953+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
954+
955+from zope.interface import implements
956+
957+class DASCore(Backend):
958+    implements(IStorageBackend)
959+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
960+        Backend.__init__(self)
961+
962+        self._setup_storage(storedir, readonly, reserved_space)
963+        self._setup_corruption_advisory()
964+        self._setup_bucket_counter()
965+        self._setup_lease_checkerf(expiration_policy)
966+
967+    def _setup_storage(self, storedir, readonly, reserved_space):
968+        self.storedir = storedir
969+        self.readonly = readonly
970+        self.reserved_space = int(reserved_space)
971+        if self.reserved_space:
972+            if self.get_available_space() is None:
973+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
974+                        umid="0wZ27w", level=log.UNUSUAL)
975+
976+        self.sharedir = os.path.join(self.storedir, "shares")
977+        fileutil.make_dirs(self.sharedir)
978+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
979+        self._clean_incomplete()
980+
981+    def _clean_incomplete(self):
982+        fileutil.rm_dir(self.incomingdir)
983+        fileutil.make_dirs(self.incomingdir)
984+
985+    def _setup_corruption_advisory(self):
986+        # we don't actually create the corruption-advisory dir until necessary
987+        self.corruption_advisory_dir = os.path.join(self.storedir,
988+                                                    "corruption-advisories")
989+
990+    def _setup_bucket_counter(self):
991+        statefname = os.path.join(self.storedir, "bucket_counter.state")
992+        self.bucket_counter = FSBucketCountingCrawler(statefname)
993+        self.bucket_counter.setServiceParent(self)
994+
995+    def _setup_lease_checkerf(self, expiration_policy):
996+        statefile = os.path.join(self.storedir, "lease_checker.state")
997+        historyfile = os.path.join(self.storedir, "lease_checker.history")
998+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
999+        self.lease_checker.setServiceParent(self)
1000+
1001+    def get_available_space(self):
1002+        if self.readonly:
1003+            return 0
1004+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1005+
1006+    def get_shares(self, storage_index):
1007+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1008+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1009+        try:
1010+            for f in os.listdir(finalstoragedir):
1011+                if NUM_RE.match(f):
1012+                    filename = os.path.join(finalstoragedir, f)
1013+                    yield FSBShare(filename, int(f))
1014+        except OSError:
1015+            # Commonly caused by there being no buckets at all.
1016+            pass
1017+       
1018+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1019+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1020+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1021+        return bw
1022+       
1023+
1024+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1025+# and share data. The share data is accessed by RIBucketWriter.write and
1026+# RIBucketReader.read . The lease information is not accessible through these
1027+# interfaces.
1028+
1029+# The share file has the following layout:
1030+#  0x00: share file version number, four bytes, current version is 1
1031+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1032+#  0x08: number of leases, four bytes big-endian
1033+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1034+#  A+0x0c = B: first lease. Lease format is:
1035+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1036+#   B+0x04: renew secret, 32 bytes (SHA256)
1037+#   B+0x24: cancel secret, 32 bytes (SHA256)
1038+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1039+#   B+0x48: next lease, or end of record
1040+
1041+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1042+# but it is still filled in by storage servers in case the storage server
1043+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1044+# share file is moved from one storage server to another. The value stored in
1045+# this field is truncated, so if the actual share data length is >= 2**32,
1046+# then the value stored in this field will be the actual share data length
1047+# modulo 2**32.
1048+
1049+class ImmutableShare:
1050+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1051+    sharetype = "immutable"
1052+
1053+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1054+        """ If max_size is not None then I won't allow more than
1055+        max_size to be written to me. If create=True then max_size
1056+        must not be None. """
1057+        precondition((max_size is not None) or (not create), max_size, create)
1058+        self.shnum = shnum
1059+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1060+        self._max_size = max_size
1061+        if create:
1062+            # touch the file, so later callers will see that we're working on
1063+            # it. Also construct the metadata.
1064+            assert not os.path.exists(self.fname)
1065+            fileutil.make_dirs(os.path.dirname(self.fname))
1066+            f = open(self.fname, 'wb')
1067+            # The second field -- the four-byte share data length -- is no
1068+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1069+            # there in case someone downgrades a storage server from >=
1070+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1071+            # server to another, etc. We do saturation -- a share data length
1072+            # larger than 2**32-1 (what can fit into the field) is marked as
1073+            # the largest length that can fit into the field. That way, even
1074+            # if this does happen, the old < v1.3.0 server will still allow
1075+            # clients to read the first part of the share.
1076+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1077+            f.close()
1078+            self._lease_offset = max_size + 0x0c
1079+            self._num_leases = 0
1080+        else:
1081+            f = open(self.fname, 'rb')
1082+            filesize = os.path.getsize(self.fname)
1083+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1084+            f.close()
1085+            if version != 1:
1086+                msg = "sharefile %s had version %d but we wanted 1" % \
1087+                      (self.fname, version)
1088+                raise UnknownImmutableContainerVersionError(msg)
1089+            self._num_leases = num_leases
1090+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1091+        self._data_offset = 0xc
1092+
1093+    def unlink(self):
1094+        os.unlink(self.fname)
1095+
1096+    def read_share_data(self, offset, length):
1097+        precondition(offset >= 0)
1098+        # Reads beyond the end of the data are truncated. Reads that start
1099+        # beyond the end of the data return an empty string.
1100+        seekpos = self._data_offset+offset
1101+        fsize = os.path.getsize(self.fname)
1102+        actuallength = max(0, min(length, fsize-seekpos))
1103+        if actuallength == 0:
1104+            return ""
1105+        f = open(self.fname, 'rb')
1106+        f.seek(seekpos)
1107+        return f.read(actuallength)
1108+
1109+    def write_share_data(self, offset, data):
1110+        length = len(data)
1111+        precondition(offset >= 0, offset)
1112+        if self._max_size is not None and offset+length > self._max_size:
1113+            raise DataTooLargeError(self._max_size, offset, length)
1114+        f = open(self.fname, 'rb+')
1115+        real_offset = self._data_offset+offset
1116+        f.seek(real_offset)
1117+        assert f.tell() == real_offset
1118+        f.write(data)
1119+        f.close()
1120+
1121+    def _write_lease_record(self, f, lease_number, lease_info):
1122+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1123+        f.seek(offset)
1124+        assert f.tell() == offset
1125+        f.write(lease_info.to_immutable_data())
1126+
1127+    def _read_num_leases(self, f):
1128+        f.seek(0x08)
1129+        (num_leases,) = struct.unpack(">L", f.read(4))
1130+        return num_leases
1131+
1132+    def _write_num_leases(self, f, num_leases):
1133+        f.seek(0x08)
1134+        f.write(struct.pack(">L", num_leases))
1135+
1136+    def _truncate_leases(self, f, num_leases):
1137+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1138+
1139+    def get_leases(self):
1140+        """Yields a LeaseInfo instance for all leases."""
1141+        f = open(self.fname, 'rb')
1142+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1143+        f.seek(self._lease_offset)
1144+        for i in range(num_leases):
1145+            data = f.read(self.LEASE_SIZE)
1146+            if data:
1147+                yield LeaseInfo().from_immutable_data(data)
1148+
1149+    def add_lease(self, lease_info):
1150+        f = open(self.fname, 'rb+')
1151+        num_leases = self._read_num_leases(f)
1152+        self._write_lease_record(f, num_leases, lease_info)
1153+        self._write_num_leases(f, num_leases+1)
1154+        f.close()
1155+
1156+    def renew_lease(self, renew_secret, new_expire_time):
1157+        for i,lease in enumerate(self.get_leases()):
1158+            if constant_time_compare(lease.renew_secret, renew_secret):
1159+                # yup. See if we need to update the owner time.
1160+                if new_expire_time > lease.expiration_time:
1161+                    # yes
1162+                    lease.expiration_time = new_expire_time
1163+                    f = open(self.fname, 'rb+')
1164+                    self._write_lease_record(f, i, lease)
1165+                    f.close()
1166+                return
1167+        raise IndexError("unable to renew non-existent lease")
1168+
1169+    def add_or_renew_lease(self, lease_info):
1170+        try:
1171+            self.renew_lease(lease_info.renew_secret,
1172+                             lease_info.expiration_time)
1173+        except IndexError:
1174+            self.add_lease(lease_info)
1175+
1176+
1177+    def cancel_lease(self, cancel_secret):
1178+        """Remove a lease with the given cancel_secret. If the last lease is
1179+        cancelled, the file will be removed. Return the number of bytes that
1180+        were freed (by truncating the list of leases, and possibly by
1181+        deleting the file). Raise IndexError if there was no lease with the
1182+        given cancel_secret.
1183+        """
1184+
1185+        leases = list(self.get_leases())
1186+        num_leases_removed = 0
1187+        for i,lease in enumerate(leases):
1188+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1189+                leases[i] = None
1190+                num_leases_removed += 1
1191+        if not num_leases_removed:
1192+            raise IndexError("unable to find matching lease to cancel")
1193+        if num_leases_removed:
1194+            # pack and write out the remaining leases. We write these out in
1195+            # the same order as they were added, so that if we crash while
1196+            # doing this, we won't lose any non-cancelled leases.
1197+            leases = [l for l in leases if l] # remove the cancelled leases
1198+            f = open(self.fname, 'rb+')
1199+            for i,lease in enumerate(leases):
1200+                self._write_lease_record(f, i, lease)
1201+            self._write_num_leases(f, len(leases))
1202+            self._truncate_leases(f, len(leases))
1203+            f.close()
1204+        space_freed = self.LEASE_SIZE * num_leases_removed
1205+        if not len(leases):
1206+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1207+            self.unlink()
1208+        return space_freed
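The header and lease layout documented in the comments above can be sanity-checked with a few lines of struct arithmetic (a standalone sketch, not part of the patch):

```python
import struct

HEADER = struct.Struct(">LLL")      # version, saturated data length, lease count
LEASE = struct.Struct(">L32s32sL")  # owner num, renew secret, cancel secret, expiry

def make_header(data_length, num_leases=0, version=1):
    # The length field saturates at 2**32 - 1, as the comments explain,
    # so very large shares still produce a header an old server can parse.
    return HEADER.pack(version, min(2**32 - 1, data_length), num_leases)

def parse_header(blob):
    version, length, num_leases = HEADER.unpack(blob[:HEADER.size])
    return {"version": version, "data_length": length, "num_leases": num_leases}

# Share data begins at offset 0x0c, i.e. right after the 12-byte header,
# which matches self._data_offset = 0xc in ImmutableShare above.
assert HEADER.size == 0x0c
```

LEASE.size here corresponds to ImmutableShare.LEASE_SIZE, since both come from the same ">L32s32sL" format string.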
1209hunk ./src/allmydata/storage/backends/das/expirer.py 2
1210 import time, os, pickle, struct
1211-from allmydata.storage.crawler import ShareCrawler
1212-from allmydata.storage.shares import get_share_file
1213+from allmydata.storage.crawler import FSShareCrawler
1214 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1215      UnknownImmutableContainerVersionError
1216 from twisted.python import log as twlog
1217hunk ./src/allmydata/storage/backends/das/expirer.py 7
1218 
1219-class LeaseCheckingCrawler(ShareCrawler):
1220+class FSLeaseCheckingCrawler(FSShareCrawler):
1221     """I examine the leases on all shares, determining which are still valid
1222     and which have expired. I can remove the expired leases (if so
1223     configured), and the share will be deleted when the last lease is
1224hunk ./src/allmydata/storage/backends/das/expirer.py 50
1225     slow_start = 360 # wait 6 minutes after startup
1226     minimum_cycle_time = 12*60*60 # not more than twice per day
1227 
1228-    def __init__(self, statefile, historyfile,
1229-                 expiration_enabled, mode,
1230-                 override_lease_duration, # used if expiration_mode=="age"
1231-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1232-                 sharetypes):
1233+    def __init__(self, statefile, historyfile, expiration_policy):
1234         self.historyfile = historyfile
1235hunk ./src/allmydata/storage/backends/das/expirer.py 52
1236-        self.expiration_enabled = expiration_enabled
1237-        self.mode = mode
1238+        self.expiration_enabled = expiration_policy['enabled']
1239+        self.mode = expiration_policy['mode']
1240         self.override_lease_duration = None
1241         self.cutoff_date = None
1242         if self.mode == "age":
1243hunk ./src/allmydata/storage/backends/das/expirer.py 57
1244-            assert isinstance(override_lease_duration, (int, type(None)))
1245-            self.override_lease_duration = override_lease_duration # seconds
1246+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1247+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1248         elif self.mode == "cutoff-date":
1249hunk ./src/allmydata/storage/backends/das/expirer.py 60
1250-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1251+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1252-            assert cutoff_date is not None
1252+            assert expiration_policy['cutoff_date'] is not None
1253hunk ./src/allmydata/storage/backends/das/expirer.py 62
1254-            self.cutoff_date = cutoff_date
1255+            self.cutoff_date = expiration_policy['cutoff_date']
1256         else:
1257hunk ./src/allmydata/storage/backends/das/expirer.py 64
1258-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1259-        self.sharetypes_to_expire = sharetypes
1260-        ShareCrawler.__init__(self, statefile)
1261+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1262+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1263+        FSShareCrawler.__init__(self, statefile)
1264 
1265     def add_initial_state(self):
1266         # we fill ["cycle-to-date"] here (even though they will be reset in
1267hunk ./src/allmydata/storage/backends/das/expirer.py 156
1268 
1269     def process_share(self, sharefilename):
1270         # first, find out what kind of a share it is
1271-        sf = get_share_file(sharefilename)
1272+        f = open(sharefilename, "rb")
1273+        prefix = f.read(32)
1274+        f.close()
1275+        if prefix == MutableShareFile.MAGIC:
1276+            sf = MutableShareFile(sharefilename)
1277+        else:
1278+            # otherwise assume it's immutable
1279+            sf = FSBShare(sharefilename)
1280         sharetype = sf.sharetype
1281         now = time.time()
1282         s = self.stat(sharefilename)
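The expirer changes above replace the crawler's positional expiration arguments with a single expiration_policy dict. The checks it performs can be sketched standalone like this (the key names are taken from the patch; the helper function itself is hypothetical):

```python
def check_expiration_policy(policy):
    """Validate an expiration_policy dict as the lease checker would."""
    mode = policy['mode']
    if mode == 'age':
        # used if expiration_mode == "age"; None means "use each lease's own duration"
        assert isinstance(policy.get('override_lease_duration'),
                          (int, type(None)))
    elif mode == 'cutoff-date':
        # seconds-since-epoch, used if expiration_mode == "cutoff-date"
        assert isinstance(policy.get('cutoff_date'), int)
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
    return True
```

Bundling the settings into one dict keeps the crawler's signature stable as policies grow, at the cost of moving argument errors from call time to validation time.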
1283addfile ./src/allmydata/storage/backends/null/__init__.py
1284addfile ./src/allmydata/storage/backends/null/core.py
1285hunk ./src/allmydata/storage/backends/null/core.py 1
1286+from allmydata.storage.backends.base import Backend
1287+
1288+class NullCore(Backend):
1289+    def __init__(self):
1290+        Backend.__init__(self)
1291+
1292+    def get_available_space(self):
1293+        return None
1294+
1295+    def get_shares(self, storage_index):
1296+        return set()
1297+
1298+    def get_share(self, storage_index, sharenum):
1299+        return None
1300+
1301+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1302+        return NullBucketWriter()
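NullCore.get_available_space() returning None is what makes this backend useful for the ticket's stated purpose: callers conventionally treat None as "no effective limit on the backend". A sketch of that convention (FakeNullBackend, FakeBoundedBackend, and has_room_for are illustrative names, not from the patch):

```python
class FakeNullBackend:
    """Illustrative stand-in: reports no space information, i.e. no limit."""
    def get_available_space(self):
        return None

class FakeBoundedBackend:
    """Illustrative stand-in: reports a hard 100-byte limit."""
    def get_available_space(self):
        return 100

def has_room_for(backend, nbytes):
    # None means unlimited/unknown, so any request fits; otherwise compare.
    avail = backend.get_available_space()
    return avail is None or avail >= nbytes
```

This is why the null backend lets tests exercise the unlimited-space code path without allocating any real disk.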
1303hunk ./src/allmydata/storage/crawler.py 12
1304 class TimeSliceExceeded(Exception):
1305     pass
1306 
1307-class ShareCrawler(service.MultiService):
1308+class FSShareCrawler(service.MultiService):
1309     """A subcless of ShareCrawler is attached to a StorageServer, and
1310     periodically walks all of its shares, processing each one in some
1311     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1312hunk ./src/allmydata/storage/crawler.py 68
1313     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1314     minimum_cycle_time = 300 # don't run a cycle faster than this
1315 
1316-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1317+    def __init__(self, statefname, allowed_cpu_percentage=None):
1318         service.MultiService.__init__(self)
1319         if allowed_cpu_percentage is not None:
1320             self.allowed_cpu_percentage = allowed_cpu_percentage
1321hunk ./src/allmydata/storage/crawler.py 72
1322-        self.backend = backend
1323+        self.statefname = statefname
1324         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1325                          for i in range(2**10)]
1326         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
         #                            of the last bucket to be processed, or
         #                            None if we are sleeping between cycles
         try:
-            f = open(self.statefile, "rb")
+            f = open(self.statefname, "rb")
             state = pickle.load(f)
             f.close()
         except EnvironmentError:
hunk ./src/allmydata/storage/crawler.py 230
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefile + ".tmp"
+        tmpfile = self.statefname + ".tmp"
         f = open(tmpfile, "wb")
         pickle.dump(self.state, f)
         f.close()
hunk ./src/allmydata/storage/crawler.py 433
         pass
 
 
-class BucketCountingCrawler(ShareCrawler):
+class FSBucketCountingCrawler(FSShareCrawler):
     """I keep track of how many buckets are being managed by this server.
     This is equivalent to the number of distributed files and directories for
     which I am providing storage. The actual number of files+directories in
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, statefile)
+    def __init__(self, statefname, num_sample_prefixes=1):
+        FSShareCrawler.__init__(self, statefname)
        self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/immutable.py 14
 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
      DataTooLargeError
 
-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
-# and share data. The share data is accessed by RIBucketWriter.write and
-# RIBucketReader.read . The lease information is not accessible through these
-# interfaces.
-
-# The share file has the following layout:
-#  0x00: share file version number, four bytes, current version is 1
-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
-#  0x08: number of leases, four bytes big-endian
-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
-#  A+0x0c = B: first lease. Lease format is:
-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
-#   B+0x04: renew secret, 32 bytes (SHA256)
-#   B+0x24: cancel secret, 32 bytes (SHA256)
-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
-#   B+0x48: next lease, or end of record
-
-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
-# but it is still filled in by storage servers in case the storage server
-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
-# share file is moved from one storage server to another. The value stored in
-# this field is truncated, so if the actual share data length is >= 2**32,
-# then the value stored in this field will be the actual share data length
-# modulo 2**32.
-
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
-    sharetype = "immutable"
-
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than
-        max_size to be written to me. If create=True then max_size
-        must not be None. """
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.home = filename
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (filename, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
-
-    def unlink(self):
-        os.unlink(self.home)
-
-    def read_share_data(self, offset, length):
-        precondition(offset >= 0)
-        # Reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string.
-        seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.home)
-        actuallength = max(0, min(length, fsize-seekpos))
-        if actuallength == 0:
-            return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
-
-    def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
-
-    def _write_lease_record(self, f, lease_number, lease_info):
-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
-        f.seek(offset)
-        assert f.tell() == offset
-        f.write(lease_info.to_immutable_data())
-
-    def _read_num_leases(self, f):
-        f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
-        return num_leases
-
-    def _write_num_leases(self, f, num_leases):
-        f.seek(0x08)
-        f.write(struct.pack(">L", num_leases))
-
-    def _truncate_leases(self, f, num_leases):
-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
-
-    def get_leases(self):
-        """Yields a LeaseInfo instance for all leases."""
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
-
-    def add_lease(self, lease_info):
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
-
-    def renew_lease(self, renew_secret, new_expire_time):
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
-        raise IndexError("unable to renew non-existent lease")
-
-    def add_or_renew_lease(self, lease_info):
-        try:
-            self.renew_lease(lease_info.renew_secret,
-                             lease_info.expiration_time)
-        except IndexError:
-            self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-class NullBucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def remote_write(self, offset, data):
-        return
-
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 17
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 19
-        self.incominghome = incominghome
-        self.finalhome = finalhome
         self._max_size = max_size # don't allow the client to write more than this
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
hunk ./src/allmydata/storage/immutable.py 24
         self.closed = False
         self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
+        self._sharefile = immutableshare
         # also, add our lease to the file now, so that other ones can be
         # added by simultaneous uploaders
         self._sharefile.add_lease(lease_info)
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
-from allmydata.storage.expirer import LeaseCheckingCrawler
 
 from zope.interface import implements
 
hunk ./src/allmydata/storage/server.py 19
-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
-# be started and stopped.
-class Backend(service.MultiService):
-    implements(IStatsProducer)
-    def __init__(self):
-        service.MultiService.__init__(self)
-
-    def get_bucket_shares(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def get_share(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def make_bucket_writer(self):
-        """XXX"""
-        raise NotImplementedError
-
-class NullBackend(Backend):
-    def __init__(self):
-        Backend.__init__(self)
-
-    def get_available_space(self):
-        return None
-
-    def get_bucket_shares(self, storage_index):
-        return set()
-
-    def get_share(self, storage_index, sharenum):
-        return None
-
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
-
-class FSBackend(Backend):
-    def __init__(self, storedir, readonly=False, reserved_space=0):
-        Backend.__init__(self)
-
-        self._setup_storage(storedir, readonly, reserved_space)
-        self._setup_corruption_advisory()
-        self._setup_bucket_counter()
-        self._setup_lease_checkerf()
-
-    def _setup_storage(self, storedir, readonly, reserved_space):
-        self.storedir = storedir
-        self.readonly = readonly
-        self.reserved_space = int(reserved_space)
-        if self.reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umid="0wZ27w", level=log.UNUSUAL)
-
-        self.sharedir = os.path.join(self.storedir, "shares")
-        fileutil.make_dirs(self.sharedir)
-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
-        self._clean_incomplete()
-
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-        fileutil.make_dirs(self.incomingdir)
-
-    def _setup_corruption_advisory(self):
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(self.storedir,
-                                                    "corruption-advisories")
-
-    def _setup_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(statefile)
-        self.bucket_counter.setServiceParent(self)
-
-    def _setup_lease_checkerf(self):
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
-
-    def get_available_space(self):
-        if self.readonly:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
-    def get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 32
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
hunk ./src/allmydata/storage/server.py 35
-    LeaseCheckerClass = LeaseCheckingCrawler
 
     def __init__(self, nodeid, backend, reserved_space=0,
                  readonly_storage=False,
hunk ./src/allmydata/storage/server.py 38
-                 stats_provider=None,
-                 expiration_enabled=False,
-                 expiration_mode="age",
-                 expiration_override_lease_duration=None,
-                 expiration_cutoff_date=None,
-                 expiration_sharetypes=("mutable", "immutable")):
+                 stats_provider=None ):
         service.MultiService.__init__(self)
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
hunk ./src/allmydata/storage/server.py 217
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
-            alreadygot.add(shnum)
-            sf = ShareFile(fn)
-            sf.add_or_renew_lease(lease_info)
-
-        for shnum in sharenums:
-            share = self.backend.get_share(storage_index, shnum)
+        for share in self.backend.get_shares(storage_index):
+            alreadygot.add(share.shnum)
+            share.add_or_renew_lease(lease_info)
 
hunk ./src/allmydata/storage/server.py 221
-            if not share:
-                if (not limited) or (remaining_space >= max_space_per_bucket):
-                    # ok! we need to create the new share file.
-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                      max_space_per_bucket, lease_info, canary)
-                    bucketwriters[shnum] = bw
-                    self._active_writers[bw] = 1
-                    if limited:
-                        remaining_space -= max_space_per_bucket
-                else:
-                    # bummer! not enough space to accept this bucket
-                    pass
+        for shnum in (sharenums - alreadygot):
+            if (not limited) or (remaining_space >= max_space_per_bucket):
+                #XXX Should the following line occur in the storage server constructor instead? We need to create the new share file here.
+                self.backend.set_storage_server(self)
+                bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                                     max_space_per_bucket, lease_info, canary)
+                bucketwriters[shnum] = bw
+                self._active_writers[bw] = 1
+                if limited:
+                    remaining_space -= max_space_per_bucket
 
hunk ./src/allmydata/storage/server.py 232
-            elif share.is_complete():
-                # great! we already have it. easy.
-                pass
-            elif not share.is_complete():
-                # Note that we don't create BucketWriters for shnums that
-                # have a partial share (in incoming/), so if a second upload
-                # occurs while the first is still in progress, the second
-                # uploader will use different storage servers.
-                pass
+        #XXX TODO: document the handling of already-complete and partial (incoming/) shares here.
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 238
 
     def _iter_share_files(self, storage_index):
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self._get_shares(storage_index):
             f = open(filename, 'rb')
             header = f.read(32)
             f.close()
hunk ./src/allmydata/storage/server.py 318
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/storage/server.py 334
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_bucket_shares(storage_index).next()
+            shnum, filename = self._get_shares(storage_index).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
-    f = open(filename, "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        return MutableShareFile(filename)
-    # otherwise assume it's immutable
-    return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/common_util.py 20
 
 def flip_one_bit(s, offset=0, size=None):
     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
-    than offset+size. """
+    than offset+size. Return the new string. """
     if size is None:
         size=len(s)-offset
     i = randrange(offset, offset+size)
hunk ./src/allmydata/test/test_backends.py 7
 
 from allmydata.test.common_util import ReallyEqualMixin
 
-import mock
+import mock, os
 
 # This is the code that we're going to be testing.
hunk ./src/allmydata/test/test_backends.py 10
-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
+from allmydata.storage.server import StorageServer
+
+from allmydata.storage.backends.das.core import DASCore
+from allmydata.storage.backends.null.core import NullCore
+
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 22
 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+tempdir = 'teststoredir'
+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharefname = os.path.join(sharedirname, '0')
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 58
         filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
             else:
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
hunk ./src/allmydata/test/test_backends.py 124
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 131
-
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        expiration_policy = {'enabled' : False,
+                             'mode' : 'age',
+                             'override_lease_duration' : None,
+                             'cutoff_date' : None,
+                             'sharetypes' : None}
+        testbackend = DASCore(tempdir, expiration_policy)
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
 
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 148
         """ Write a new share. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 178
 
         sharefile = MockFile()
         def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
             return sharefile
 
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 200
         StorageServer object. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
         mocklistdir.side_effect = call_listdir
}
[checkpoint patch
wilcoxjg@gmail.com**20110626165715
 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
] {
hunk ./src/allmydata/storage/backends/das/core.py 21
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
hunk ./src/allmydata/storage/backends/das/core.py 27
 from zope.interface import implements
 
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
 class DASCore(Backend):
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
hunk ./src/allmydata/storage/backends/das/core.py 80
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def get_shares(self, storage_index):
-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
         try:
             for f in os.listdir(finalstoragedir):
hunk ./src/allmydata/storage/backends/das/core.py 86
                 if NUM_RE.match(f):
                     filename = os.path.join(finalstoragedir, f)
-                    yield FSBShare(filename, int(f))
+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
         except OSError:
             # Commonly caused by there being no buckets at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 95
         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
+
+    def set_storage_server(self, ss):
+        self.ss = ss
        

# each share file (in storage/shares/$SI/$SHNUM) contains lease information
hunk ./src/allmydata/storage/server.py 29
 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
 # base-32 chars).
 
-# $SHARENUM matches this regex:
-NUM_RE=re.compile("^[0-9]+$")
 
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
}
[checkpoint4
wilcoxjg@gmail.com**20110628202202
 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
] {
hunk ./src/allmydata/storage/backends/das/core.py 96
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
+    def make_bucket_reader(self, share):
+        return BucketReader(self.ss, share)
+
     def set_storage_server(self, ss):
         self.ss = ss
        
hunk ./src/allmydata/storage/backends/das/core.py 138
         must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.shnum = shnum
+        self.storage_index = storageindex
         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
         if create:
hunk ./src/allmydata/storage/backends/das/core.py 173
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
         self._data_offset = 0xc
 
+    def get_shnum(self):
+        return self.shnum
+
     def unlink(self):
         os.unlink(self.fname)
 
hunk ./src/allmydata/storage/backends/null/core.py 2
 from allmydata.storage.backends.base import Backend
+from allmydata.storage.immutable import BucketWriter, BucketReader
 
 class NullCore(Backend):
     def __init__(self):
hunk ./src/allmydata/storage/backends/null/core.py 17
     def get_share(self, storage_index, sharenum):
         return None
 
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+       
+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
+
+    def set_storage_server(self, ss):
+        self.ss = ss
+
+class ImmutableShare:
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum
+        self.storage_index = storageindex
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
2069+            self._num_leases = 0
2070+        else:
2071+            f = open(self.fname, 'rb')
2072+            filesize = os.path.getsize(self.fname)
2073+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2074+            f.close()
2075+            if version != 1:
2076+                msg = "sharefile %s had version %d but we wanted 1" % \
2077+                      (self.fname, version)
2078+                raise UnknownImmutableContainerVersionError(msg)
2079+            self._num_leases = num_leases
2080+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2081+        self._data_offset = 0xc
2082+
2083+    def get_shnum(self):
2084+        return self.shnum
2085+
2086+    def unlink(self):
2087+        os.unlink(self.fname)
2088+
2089+    def read_share_data(self, offset, length):
2090+        precondition(offset >= 0)
2091+        # Reads beyond the end of the data are truncated. Reads that start
2092+        # beyond the end of the data return an empty string.
2093+        seekpos = self._data_offset+offset
2094+        fsize = os.path.getsize(self.fname)
2095+        actuallength = max(0, min(length, fsize-seekpos))
2096+        if actuallength == 0:
2097+            return ""
2098+        f = open(self.fname, 'rb')
2099+        f.seek(seekpos)
2100+        return f.read(actuallength)
2101+
2102+    def write_share_data(self, offset, data):
2103+        length = len(data)
2104+        precondition(offset >= 0, offset)
2105+        if self._max_size is not None and offset+length > self._max_size:
2106+            raise DataTooLargeError(self._max_size, offset, length)
2107+        f = open(self.fname, 'rb+')
2108+        real_offset = self._data_offset+offset
2109+        f.seek(real_offset)
2110+        assert f.tell() == real_offset
2111+        f.write(data)
2112+        f.close()
2113+
2114+    def _write_lease_record(self, f, lease_number, lease_info):
2115+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2116+        f.seek(offset)
2117+        assert f.tell() == offset
2118+        f.write(lease_info.to_immutable_data())
2119+
2120+    def _read_num_leases(self, f):
2121+        f.seek(0x08)
2122+        (num_leases,) = struct.unpack(">L", f.read(4))
2123+        return num_leases
2124+
2125+    def _write_num_leases(self, f, num_leases):
2126+        f.seek(0x08)
2127+        f.write(struct.pack(">L", num_leases))
2128+
2129+    def _truncate_leases(self, f, num_leases):
2130+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2131+
2132+    def get_leases(self):
2133+        """Yields a LeaseInfo instance for all leases."""
2134+        f = open(self.fname, 'rb')
2135+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2136+        f.seek(self._lease_offset)
2137+        for i in range(num_leases):
2138+            data = f.read(self.LEASE_SIZE)
2139+            if data:
2140+                yield LeaseInfo().from_immutable_data(data)
2141+
2142+    def add_lease(self, lease_info):
2143+        f = open(self.fname, 'rb+')
2144+        num_leases = self._read_num_leases(f)
2145+        self._write_lease_record(f, num_leases, lease_info)
2146+        self._write_num_leases(f, num_leases+1)
2147+        f.close()
2148+
2149+    def renew_lease(self, renew_secret, new_expire_time):
2150+        for i,lease in enumerate(self.get_leases()):
2151+            if constant_time_compare(lease.renew_secret, renew_secret):
2152+                # yup. See if we need to update the owner time.
2153+                if new_expire_time > lease.expiration_time:
2154+                    # yes
2155+                    lease.expiration_time = new_expire_time
2156+                    f = open(self.fname, 'rb+')
2157+                    self._write_lease_record(f, i, lease)
2158+                    f.close()
2159+                return
2160+        raise IndexError("unable to renew non-existent lease")
2161+
2162+    def add_or_renew_lease(self, lease_info):
2163+        try:
2164+            self.renew_lease(lease_info.renew_secret,
2165+                             lease_info.expiration_time)
2166+        except IndexError:
2167+            self.add_lease(lease_info)
2168+
2169+
2170+    def cancel_lease(self, cancel_secret):
2171+        """Remove a lease with the given cancel_secret. If the last lease is
2172+        cancelled, the file will be removed. Return the number of bytes that
2173+        were freed (by truncating the list of leases, and possibly by
2174+        deleting the file). Raise IndexError if there was no lease with the
2175+        given cancel_secret.
2176+        """
2177+
2178+        leases = list(self.get_leases())
2179+        num_leases_removed = 0
2180+        for i,lease in enumerate(leases):
2181+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2182+                leases[i] = None
2183+                num_leases_removed += 1
2184+        if not num_leases_removed:
2185+            raise IndexError("unable to find matching lease to cancel")
2186+        if num_leases_removed:
2187+            # pack and write out the remaining leases. We write these out in
2188+            # the same order as they were added, so that if we crash while
2189+            # doing this, we won't lose any non-cancelled leases.
2190+            leases = [l for l in leases if l] # remove the cancelled leases
2191+            f = open(self.fname, 'rb+')
2192+            for i,lease in enumerate(leases):
2193+                self._write_lease_record(f, i, lease)
2194+            self._write_num_leases(f, len(leases))
2195+            self._truncate_leases(f, len(leases))
2196+            f.close()
2197+        space_freed = self.LEASE_SIZE * num_leases_removed
2198+        if not len(leases):
2199+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2200+            self.unlink()
2201+        return space_freed
2202hunk ./src/allmydata/storage/immutable.py 114
2203 class BucketReader(Referenceable):
2204     implements(RIBucketReader)
2205 
2206-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2207+    def __init__(self, ss, share):
2208         self.ss = ss
2209hunk ./src/allmydata/storage/immutable.py 116
2210-        self._share_file = ShareFile(sharefname)
2211-        self.storage_index = storage_index
2212-        self.shnum = shnum
2213+        self._share_file = share
2214+        self.storage_index = share.storage_index
2215+        self.shnum = share.shnum
2216 
2217     def __repr__(self):
2218         return "<%s %s %s>" % (self.__class__.__name__,
2219hunk ./src/allmydata/storage/server.py 316
2220         si_s = si_b2a(storage_index)
2221         log.msg("storage: get_buckets %s" % si_s)
2222         bucketreaders = {} # k: sharenum, v: BucketReader
2223-        for shnum, filename in self.backend.get_shares(storage_index):
2224-            bucketreaders[shnum] = BucketReader(self, filename,
2225-                                                storage_index, shnum)
2226+        self.backend.set_storage_server(self)
2227+        for share in self.backend.get_shares(storage_index):
2228+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2229         self.add_latency("get", time.time() - start)
2230         return bucketreaders
2231 
2232hunk ./src/allmydata/test/test_backends.py 25
2233 tempdir = 'teststoredir'
2234 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2235 sharefname = os.path.join(sharedirname, '0')
2236+expiration_policy = {'enabled' : False,
2237+                     'mode' : 'age',
2238+                     'override_lease_duration' : None,
2239+                     'cutoff_date' : None,
2240+                     'sharetypes' : None}
2241 
2242 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2243     @mock.patch('time.time')
2244hunk ./src/allmydata/test/test_backends.py 43
2245         tries to read or write to the file system. """
2246 
2247         # Now begin the test.
2248-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2249+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2250 
2251         self.failIf(mockisdir.called)
2252         self.failIf(mocklistdir.called)
2253hunk ./src/allmydata/test/test_backends.py 74
2254         mockopen.side_effect = call_open
2255 
2256         # Now begin the test.
2257-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2258+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2259 
2260         self.failIf(mockisdir.called)
2261         self.failIf(mocklistdir.called)
2262hunk ./src/allmydata/test/test_backends.py 86
2263 
2264 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2265     def setUp(self):
2266-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2267+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2268 
2269     @mock.patch('os.mkdir')
2270     @mock.patch('__builtin__.open')
2271hunk ./src/allmydata/test/test_backends.py 136
2272             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2273                 return StringIO()
2274         mockopen.side_effect = call_open
2275-        expiration_policy = {'enabled' : False,
2276-                             'mode' : 'age',
2277-                             'override_lease_duration' : None,
2278-                             'cutoff_date' : None,
2279-                             'sharetypes' : None}
2280         testbackend = DASCore(tempdir, expiration_policy)
2281         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2282 
2283}
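The core of checkpoint4 is that BucketReader now takes a share object instead of a filename, pulling storage_index and shnum from it, and that the server asks the backend for readers via make_bucket_reader. A minimal standalone sketch of that shape (FakeShare and SketchBucketReader are illustrative stand-ins, not the real Tahoe-LAFS classes):

```python
class FakeShare:
    """Illustrative stand-in for the backend's ImmutableShare."""
    def __init__(self, storage_index, shnum, data):
        self.storage_index = storage_index
        self.shnum = shnum
        self._data = data

    def get_shnum(self):
        return self.shnum

    def read_share_data(self, offset, length):
        # Reads beyond the end of the data are truncated, as in the patch.
        return self._data[offset:offset + length]


class SketchBucketReader:
    """Mirrors the new BucketReader.__init__(self, ss, share) signature:
    the reader derives its index/shnum from the share it is handed."""
    def __init__(self, ss, share):
        self.ss = ss
        self._share_file = share
        self.storage_index = share.storage_index
        self.shnum = share.shnum

    def remote_read(self, offset, length):
        return self._share_file.read_share_data(offset, length)


share = FakeShare('teststorage_index', 0, 'abcdef')
reader = SketchBucketReader(None, share)
assert reader.remote_read(2, 3) == 'cde'
```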
2284[checkpoint5
2285wilcoxjg@gmail.com**20110705034626
2286 Ignore-this: 255780bd58299b0aa33c027e9d008262
2287] {
2288addfile ./src/allmydata/storage/backends/base.py
2289hunk ./src/allmydata/storage/backends/base.py 1
2290+from twisted.application import service
2291+
2292+class Backend(service.MultiService):
2293+    def __init__(self):
2294+        service.MultiService.__init__(self)
2295hunk ./src/allmydata/storage/backends/null/core.py 19
2296 
2297     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2298         
2299+        immutableshare = ImmutableShare()
2300         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2301 
2302     def set_storage_server(self, ss):
2303hunk ./src/allmydata/storage/backends/null/core.py 28
2304 class ImmutableShare:
2305     sharetype = "immutable"
2306 
2307-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2308+    def __init__(self):
2309         """ If max_size is not None then I won't allow more than
2310         max_size to be written to me. If create=True then max_size
2311         must not be None. """
2312hunk ./src/allmydata/storage/backends/null/core.py 32
2313-        precondition((max_size is not None) or (not create), max_size, create)
2314-        self.shnum = shnum
2315-        self.storage_index = storageindex
2316-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2317-        self._max_size = max_size
2318-        if create:
2319-            # touch the file, so later callers will see that we're working on
2320-            # it. Also construct the metadata.
2321-            assert not os.path.exists(self.fname)
2322-            fileutil.make_dirs(os.path.dirname(self.fname))
2323-            f = open(self.fname, 'wb')
2324-            # The second field -- the four-byte share data length -- is no
2325-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2326-            # there in case someone downgrades a storage server from >=
2327-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2328-            # server to another, etc. We do saturation -- a share data length
2329-            # larger than 2**32-1 (what can fit into the field) is marked as
2330-            # the largest length that can fit into the field. That way, even
2331-            # if this does happen, the old < v1.3.0 server will still allow
2332-            # clients to read the first part of the share.
2333-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2334-            f.close()
2335-            self._lease_offset = max_size + 0x0c
2336-            self._num_leases = 0
2337-        else:
2338-            f = open(self.fname, 'rb')
2339-            filesize = os.path.getsize(self.fname)
2340-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2341-            f.close()
2342-            if version != 1:
2343-                msg = "sharefile %s had version %d but we wanted 1" % \
2344-                      (self.fname, version)
2345-                raise UnknownImmutableContainerVersionError(msg)
2346-            self._num_leases = num_leases
2347-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2348-        self._data_offset = 0xc
2349+        pass
2350 
2351     def get_shnum(self):
2352         return self.shnum
2353hunk ./src/allmydata/storage/backends/null/core.py 54
2354         return f.read(actuallength)
2355 
2356     def write_share_data(self, offset, data):
2357-        length = len(data)
2358-        precondition(offset >= 0, offset)
2359-        if self._max_size is not None and offset+length > self._max_size:
2360-            raise DataTooLargeError(self._max_size, offset, length)
2361-        f = open(self.fname, 'rb+')
2362-        real_offset = self._data_offset+offset
2363-        f.seek(real_offset)
2364-        assert f.tell() == real_offset
2365-        f.write(data)
2366-        f.close()
2367+        pass
2368 
2369     def _write_lease_record(self, f, lease_number, lease_info):
2370         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2371hunk ./src/allmydata/storage/backends/null/core.py 84
2372             if data:
2373                 yield LeaseInfo().from_immutable_data(data)
2374 
2375-    def add_lease(self, lease_info):
2376-        f = open(self.fname, 'rb+')
2377-        num_leases = self._read_num_leases(f)
2378-        self._write_lease_record(f, num_leases, lease_info)
2379-        self._write_num_leases(f, num_leases+1)
2380-        f.close()
2381+    def add_lease(self, lease):
2382+        pass
2383 
2384     def renew_lease(self, renew_secret, new_expire_time):
2385         for i,lease in enumerate(self.get_leases()):
2386hunk ./src/allmydata/test/test_backends.py 32
2387                      'sharetypes' : None}
2388 
2389 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2390-    @mock.patch('time.time')
2391-    @mock.patch('os.mkdir')
2392-    @mock.patch('__builtin__.open')
2393-    @mock.patch('os.listdir')
2394-    @mock.patch('os.path.isdir')
2395-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2396-        """ This tests whether a server instance can be constructed
2397-        with a null backend. The server instance fails the test if it
2398-        tries to read or write to the file system. """
2399-
2400-        # Now begin the test.
2401-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2402-
2403-        self.failIf(mockisdir.called)
2404-        self.failIf(mocklistdir.called)
2405-        self.failIf(mockopen.called)
2406-        self.failIf(mockmkdir.called)
2407-
2408-        # You passed!
2409-
2410     @mock.patch('time.time')
2411     @mock.patch('os.mkdir')
2412     @mock.patch('__builtin__.open')
2413hunk ./src/allmydata/test/test_backends.py 53
2414                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2415         mockopen.side_effect = call_open
2416 
2417-        # Now begin the test.
2418-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2419-
2420-        self.failIf(mockisdir.called)
2421-        self.failIf(mocklistdir.called)
2422-        self.failIf(mockopen.called)
2423-        self.failIf(mockmkdir.called)
2424-        self.failIf(mocktime.called)
2425-
2426-        # You passed!
2427-
2428-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2429-    def setUp(self):
2430-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2431-
2432-    @mock.patch('os.mkdir')
2433-    @mock.patch('__builtin__.open')
2434-    @mock.patch('os.listdir')
2435-    @mock.patch('os.path.isdir')
2436-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2437-        """ Write a new share. """
2438-
2439-        # Now begin the test.
2440-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2441-        bs[0].remote_write(0, 'a')
2442-        self.failIf(mockisdir.called)
2443-        self.failIf(mocklistdir.called)
2444-        self.failIf(mockopen.called)
2445-        self.failIf(mockmkdir.called)
2446+        def call_isdir(fname):
2447+            if fname == os.path.join(tempdir,'shares'):
2448+                return True
2449+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2450+                return True
2451+            else:
2452+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2453+        mockisdir.side_effect = call_isdir
2454 
2455hunk ./src/allmydata/test/test_backends.py 62
2456-    @mock.patch('os.path.exists')
2457-    @mock.patch('os.path.getsize')
2458-    @mock.patch('__builtin__.open')
2459-    @mock.patch('os.listdir')
2460-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2461-        """ This tests whether the code correctly finds and reads
2462-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2463-        servers. There is a similar test in test_download, but that one
2464-        is from the perspective of the client and exercises a deeper
2465-        stack of code. This one is for exercising just the
2466-        StorageServer object. """
2467+        def call_mkdir(fname, mode):
2468+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2469+            self.failUnlessEqual(0777, mode)
2470+            if fname == tempdir:
2471+                return None
2472+            elif fname == os.path.join(tempdir,'shares'):
2473+                return None
2474+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2475+                return None
2476+            else:
2477+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2478+        mockmkdir.side_effect = call_mkdir
2479 
2480         # Now begin the test.
2481hunk ./src/allmydata/test/test_backends.py 76
2482-        bs = self.s.remote_get_buckets('teststorage_index')
2483+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2484 
2485hunk ./src/allmydata/test/test_backends.py 78
2486-        self.failUnlessEqual(len(bs), 0)
2487-        self.failIf(mocklistdir.called)
2488-        self.failIf(mockopen.called)
2489-        self.failIf(mockgetsize.called)
2490-        self.failIf(mockexists.called)
2491+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2492 
2493 
2494 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2495hunk ./src/allmydata/test/test_backends.py 193
2496         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2497 
2498 
2499+
2500+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2501+    @mock.patch('time.time')
2502+    @mock.patch('os.mkdir')
2503+    @mock.patch('__builtin__.open')
2504+    @mock.patch('os.listdir')
2505+    @mock.patch('os.path.isdir')
2506+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2507+        """ This tests whether a file system backend instance can be
2508+        constructed. To pass the test, it has to use the
2509+        filesystem in only the prescribed ways. """
2510+
2511+        def call_open(fname, mode):
2512+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2513+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2514+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2515+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2516+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2517+                return StringIO()
2518+            else:
2519+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2520+        mockopen.side_effect = call_open
2521+
2522+        def call_isdir(fname):
2523+            if fname == os.path.join(tempdir,'shares'):
2524+                return True
2525+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2526+                return True
2527+            else:
2528+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2529+        mockisdir.side_effect = call_isdir
2530+
2531+        def call_mkdir(fname, mode):
2532+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2533+            self.failUnlessEqual(0777, mode)
2534+            if fname == tempdir:
2535+                return None
2536+            elif fname == os.path.join(tempdir,'shares'):
2537+                return None
2538+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2539+                return None
2540+            else:
2541+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2542+        mockmkdir.side_effect = call_mkdir
2543+
2544+        # Now begin the test.
2545+        DASCore('teststoredir', expiration_policy)
2546+
2547+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2548}
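Checkpoint5 strips the null backend's ImmutableShare down to no-ops: writes are silently discarded, which is exactly what makes it suitable for exercising "unlimited space" without touching the filesystem. A condensed sketch of the idea (simplified; the patch's class retains more of the lease API):

```python
class NullShareSketch:
    """Simplified null-backend share: all writes are discarded, so the
    backend never runs out of space and never touches the filesystem."""
    sharetype = "immutable"

    def __init__(self, shnum=0):
        self.shnum = shnum

    def get_shnum(self):
        return self.shnum

    def write_share_data(self, offset, data):
        pass  # discard: a null backend stores nothing

    def read_share_data(self, offset, length):
        return ""  # nothing was stored, so reads come back empty


s = NullShareSketch()
s.write_share_data(0, "x" * (10 ** 6))  # "stores" a megabyte with no I/O
assert s.read_share_data(0, 10) == ""
```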
2549[checkpoint 6
2550wilcoxjg@gmail.com**20110706190824
2551 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2552] {
2553hunk ./src/allmydata/interfaces.py 100
2554                          renew_secret=LeaseRenewSecret,
2555                          cancel_secret=LeaseCancelSecret,
2556                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2557-                         allocated_size=Offset, canary=Referenceable):
2558+                         allocated_size=Offset,
2559+                         canary=Referenceable):
2560         """
2561hunk ./src/allmydata/interfaces.py 103
2562-        @param storage_index: the index of the bucket to be created or
2563+        @param storage_index: the index of the shares to be created or
2564                               increfed.
2565hunk ./src/allmydata/interfaces.py 105
2566-        @param sharenums: these are the share numbers (probably between 0 and
2567-                          99) that the sender is proposing to store on this
2568-                          server.
2569-        @param renew_secret: This is the secret used to protect bucket refresh
2570+        @param renew_secret: This is the secret used to protect shares refresh
2571                              This secret is generated by the client and
2572                              stored for later comparison by the server. Each
2573                              server is given a different secret.
2574hunk ./src/allmydata/interfaces.py 109
2575-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2576-        @param canary: If the canary is lost before close(), the bucket is
2577+        @param cancel_secret: Like renew_secret, but protects shares decref.
2578+        @param sharenums: these are the share numbers (probably between 0 and
2579+                          99) that the sender is proposing to store on this
2580+                          server.
2581+        @param allocated_size: XXX The size of the shares the client wishes to store.
2582+        @param canary: If the canary is lost before close(), the shares are
2583                        deleted.
2584hunk ./src/allmydata/interfaces.py 116
2585+
2586         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2587                  already have and allocated is what we hereby agree to accept.
2588                  New leases are added for shares in both lists.
2589hunk ./src/allmydata/interfaces.py 128
2590                   renew_secret=LeaseRenewSecret,
2591                   cancel_secret=LeaseCancelSecret):
2592         """
2593-        Add a new lease on the given bucket. If the renew_secret matches an
2594+        Add a new lease on the given shares. If the renew_secret matches an
2595         existing lease, that lease will be renewed instead. If there is no
2596         bucket for the given storage_index, return silently. (note that in
2597         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2598hunk ./src/allmydata/storage/server.py 17
2599 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2600      create_mutable_sharefile
2601 
2602-from zope.interface import implements
2603-
2604 # storage/
2605 # storage/shares/incoming
2606 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2607hunk ./src/allmydata/test/test_backends.py 6
2608 from StringIO import StringIO
2609 
2610 from allmydata.test.common_util import ReallyEqualMixin
2611+from allmydata.util.assertutil import _assert
2612 
2613 import mock, os
2614 
2615hunk ./src/allmydata/test/test_backends.py 92
2616                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2617             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2618                 return StringIO()
2619+            else:
2620+                _assert(False, "The tester code doesn't recognize this case.") 
2621+
2622         mockopen.side_effect = call_open
2623         testbackend = DASCore(tempdir, expiration_policy)
2624         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2625hunk ./src/allmydata/test/test_backends.py 109
2626 
2627         def call_listdir(dirname):
2628             self.failUnlessReallyEqual(dirname, sharedirname)
2629-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2630+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2631 
2632         mocklistdir.side_effect = call_listdir
2633 
2634hunk ./src/allmydata/test/test_backends.py 113
2635+        def call_isdir(dirname):
2636+            self.failUnlessReallyEqual(dirname, sharedirname)
2637+            return True
2638+
2639+        mockisdir.side_effect = call_isdir
2640+
2641+        def call_mkdir(dirname, permissions):
2642+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2643+                self.Fail
2644+                self.fail("Server with FS backend tried to mkdir '%s' with permissions %r" % (dirname, permissions))
2645+                return True
2646+
2647+        mockmkdir.side_effect = call_mkdir
2648+
2649         class MockFile:
2650             def __init__(self):
2651                 self.buffer = ''
2652hunk ./src/allmydata/test/test_backends.py 156
2653             return sharefile
2654 
2655         mockopen.side_effect = call_open
2656+
2657         # Now begin the test.
2658         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2659         bs[0].remote_write(0, 'a')
2660hunk ./src/allmydata/test/test_backends.py 161
2661         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2662+       
2663+        # Now test the allocated_size method.
2664+        spaceint = self.s.allocated_size()
2665 
2666     @mock.patch('os.path.exists')
2667     @mock.patch('os.path.getsize')
2668}
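The tests in checkpoint 6 route every filesystem call through a side_effect callback that whitelists the expected paths and fails on anything else. The pattern in miniature (written against unittest.mock so the sketch is runnable; the patch itself uses the standalone Python 2 mock package with the same side_effect semantics):

```python
import os
import unittest.mock as mock

tempdir = 'teststoredir'

def call_isdir(fname):
    # Whitelist the two directories the server is allowed to probe;
    # any other path counts as an unexpected filesystem access.
    allowed = (os.path.join(tempdir, 'shares'),
               os.path.join(tempdir, 'shares', 'incoming'))
    if fname in allowed:
        return True
    raise AssertionError("unexpected isdir(%r)" % (fname,))

with mock.patch('os.path.isdir', side_effect=call_isdir) as mockisdir:
    # Allowed probes pass through the whitelist and return True.
    assert os.path.isdir(os.path.join(tempdir, 'shares'))
    assert os.path.isdir(os.path.join(tempdir, 'shares', 'incoming'))

# The mock records every call, so the test can also assert on call counts.
assert mockisdir.call_count == 2
```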
2669[checkpoint 7
2670wilcoxjg@gmail.com**20110706200820
2671 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2672] hunk ./src/allmydata/test/test_backends.py 164
2673         
2674         # Now test the allocated_size method.
2675         spaceint = self.s.allocated_size()
2676+        self.failUnlessReallyEqual(spaceint, 1)
2677 
2678     @mock.patch('os.path.exists')
2679     @mock.patch('os.path.getsize')
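Checkpoint 7 only tightens the allocated_size() check to expect exactly the 1 byte requested in remote_allocate_buckets. A hypothetical model of that accounting (AllocationTracker is not a real Tahoe-LAFS class; the real server sums the sizes reserved by its in-progress bucket writers):

```python
class AllocationTracker:
    """Hypothetical model of allocated_size(): sum the space promised
    to buckets that have been allocated but not yet closed."""
    def __init__(self):
        self._allocated = {}  # (storage_index, shnum) -> reserved bytes

    def allocate(self, storage_index, shnum, size):
        self._allocated[(storage_index, shnum)] = size

    def close_bucket(self, storage_index, shnum):
        # A closed bucket no longer counts against allocated space.
        self._allocated.pop((storage_index, shnum), None)

    def allocated_size(self):
        return sum(self._allocated.values())


t = AllocationTracker()
t.allocate('teststorage_index', 0, 1)  # mirrors the test's 1-byte allocation
assert t.allocated_size() == 1
t.close_bucket('teststorage_index', 0)
assert t.allocated_size() == 0
```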
2680[checkpoint8
2681wilcoxjg@gmail.com**20110706223126
2682 Ignore-this: 97336180883cb798b16f15411179f827
2683   The null backend is necessary for testing what happens when there is no effective limit on the backend's space.  It is a mock-like object.
2684] hunk ./src/allmydata/test/test_backends.py 32
2685                      'cutoff_date' : None,
2686                      'sharetypes' : None}
2687 
2688+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2689+    def setUp(self):
2690+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2691+
2692+    @mock.patch('os.mkdir')
2693+    @mock.patch('__builtin__.open')
2694+    @mock.patch('os.listdir')
2695+    @mock.patch('os.path.isdir')
2696+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2697+        """ Write a new share. """
2698+
2699+        # Now begin the test.
2700+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2701+        bs[0].remote_write(0, 'a')
2702+        self.failIf(mockisdir.called)
2703+        self.failIf(mocklistdir.called)
2704+        self.failIf(mockopen.called)
2705+        self.failIf(mockmkdir.called)
2706+
2707 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2708     @mock.patch('time.time')
2709     @mock.patch('os.mkdir')
2710
2711Context:
2712
2713[add Protovis.js-based download-status timeline visualization
2714Brian Warner <warner@lothar.com>**20110629222606
2715 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
2716 
2717 provide status overlap info on the webapi t=json output, add decode/decrypt
2718 rate tooltips, add zoomin/zoomout buttons
2719]
2720[add more download-status data, fix tests
2721Brian Warner <warner@lothar.com>**20110629222555
2722 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
2723]
2724[prepare for viz: improve DownloadStatus events
2725Brian Warner <warner@lothar.com>**20110629222542
2726 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
2727 
2728 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
2729]
2730[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
2731zooko@zooko.com**20110629185711
2732 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
2733]
2734[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
2735david-sarah@jacaranda.org**20110130235809
2736 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
2737]
2738[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
2739david-sarah@jacaranda.org**20110626054124
2740 Ignore-this: abb864427a1b91bd10d5132b4589fd90
2741]
2742[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
2743david-sarah@jacaranda.org**20110623205528
2744 Ignore-this: c63e23146c39195de52fb17c7c49b2da
2745]
2746[Rename test_package_initialization.py to (much shorter) test_import.py .
2747Brian Warner <warner@lothar.com>**20110611190234
2748 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
2749 
2750 The former name was making my 'ls' listings hard to read, by forcing them
2751 down to just two columns.
2752]
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently neither of the two authors (stercor, terrell), none of the three reviewers (warner, davidsarah, terrell), nor the committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
]
2759[wui: right-align the size column in the WUI
2760zooko@zooko.com**20110611153758
2761 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
2762 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
2763 fixes #1412
2764]
2765[docs: three minor fixes
2766zooko@zooko.com**20110610121656
2767 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
2768 CREDITS for arc for stats tweak
2769 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
2770 English usage tweak
2771]
2772[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
2773david-sarah@jacaranda.org**20110609223719
2774 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
2775]
2776[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
2777wilcoxjg@gmail.com**20110527120135
2778 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
2779 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
2780 NEWS.rst, stats.py: documentation of change to get_latencies
2781 stats.rst: now documents percentile modification in get_latencies
2782 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
2783 fixes #1392
2784]
2785[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
2786david-sarah@jacaranda.org**20110517011214
2787 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
2788]
2789[docs: convert NEWS to NEWS.rst and change all references to it.
2790david-sarah@jacaranda.org**20110517010255
2791 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
2792]
2793[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
2794david-sarah@jacaranda.org**20110512140559
2795 Ignore-this: 784548fc5367fac5450df1c46890876d
2796]
2797[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
2798david-sarah@jacaranda.org**20110130164923
2799 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
2800]
2801[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
2802zooko@zooko.com**20110128142006
2803 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
2804 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
2805]
2806[M-x whitespace-cleanup
2807zooko@zooko.com**20110510193653
2808 Ignore-this: dea02f831298c0f65ad096960e7df5c7
2809]
2810[docs: fix typo in running.rst, thanks to arch_o_median
2811zooko@zooko.com**20110510193633
2812 Ignore-this: ca06de166a46abbc61140513918e79e8
2813]
2814[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
2815david-sarah@jacaranda.org**20110204204902
2816 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
2817]
2818[relnotes.txt: forseeable -> foreseeable. refs #1342
2819david-sarah@jacaranda.org**20110204204116
2820 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
2821]
2822[replace remaining .html docs with .rst docs
2823zooko@zooko.com**20110510191650
2824 Ignore-this: d557d960a986d4ac8216d1677d236399
2825 Remove install.html (long since deprecated).
2826 Also replace some obsolete references to install.html with references to quickstart.rst.
2827 Fix some broken internal references within docs/historical/historical_known_issues.txt.
2828 Thanks to Ravi Pinjala and Patrick McDonald.
2829 refs #1227
2830]
2831[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
2832zooko@zooko.com**20110428055232
2833 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
2834]
2835[munin tahoe_files plugin: fix incorrect file count
2836francois@ctrlaltdel.ch**20110428055312
2837 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
2838 fixes #1391
2839]
2840[corrected "k must never be smaller than N" to "k must never be greater than N"
2841secorp@allmydata.org**20110425010308
2842 Ignore-this: 233129505d6c70860087f22541805eac
2843]
2844[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
2845david-sarah@jacaranda.org**20110411190738
2846 Ignore-this: 7847d26bc117c328c679f08a7baee519
2847]
2848[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
2849david-sarah@jacaranda.org**20110410155844
2850 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
2851]
2852[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
2853david-sarah@jacaranda.org**20110410155705
2854 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
2855]
2856[remove unused variable detected by pyflakes
2857zooko@zooko.com**20110407172231
2858 Ignore-this: 7344652d5e0720af822070d91f03daf9
2859]
2860[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
2861david-sarah@jacaranda.org**20110401202750
2862 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
2863]
2864[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
2865Brian Warner <warner@lothar.com>**20110325232511
2866 Ignore-this: d5307faa6900f143193bfbe14e0f01a
2867]
2868[control.py: remove all uses of s.get_serverid()
2869warner@lothar.com**20110227011203
2870 Ignore-this: f80a787953bd7fa3d40e828bde00e855
2871]
2872[web: remove some uses of s.get_serverid(), not all
2873warner@lothar.com**20110227011159
2874 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
2875]
2876[immutable/downloader/fetcher.py: remove all get_serverid() calls
2877warner@lothar.com**20110227011156
2878 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
2879]
2880[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
2881warner@lothar.com**20110227011153
2882 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
2883 
2884 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
2885 _shares_from_server dict was being popped incorrectly (using shnum as the
2886 index instead of serverid). I'm still thinking through the consequences of
2887 this bug. It was probably benign and really hard to detect. I think it would
2888 cause us to incorrectly believe that we're pulling too many shares from a
2889 server, and thus prefer a different server rather than asking for a second
2890 share from the first server. The diversity code is intended to spread out the
2891 number of shares simultaneously being requested from each server, but with
2892 this bug, it might be spreading out the total number of shares requested at
2893 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
2894 segment, so the effect doesn't last very long).
2895]
2896[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
2897warner@lothar.com**20110227011150
2898 Ignore-this: d8d56dd8e7b280792b40105e13664554
2899 
2900 test_download.py: create+check MyShare instances better, make sure they share
2901 Server objects, now that finder.py cares
2902]
2903[immutable/downloader/finder.py: reduce use of get_serverid(), one left
2904warner@lothar.com**20110227011146
2905 Ignore-this: 5785be173b491ae8a78faf5142892020
2906]
2907[immutable/offloaded.py: reduce use of get_serverid() a bit more
2908warner@lothar.com**20110227011142
2909 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
2910]
2911[immutable/upload.py: reduce use of get_serverid()
2912warner@lothar.com**20110227011138
2913 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
2914]
2915[immutable/checker.py: remove some uses of s.get_serverid(), not all
2916warner@lothar.com**20110227011134
2917 Ignore-this: e480a37efa9e94e8016d826c492f626e
2918]
2919[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
2920warner@lothar.com**20110227011132
2921 Ignore-this: 6078279ddf42b179996a4b53bee8c421
2922 MockIServer stubs
2923]
2924[upload.py: rearrange _make_trackers a bit, no behavior changes
2925warner@lothar.com**20110227011128
2926 Ignore-this: 296d4819e2af452b107177aef6ebb40f
2927]
2928[happinessutil.py: finally rename merge_peers to merge_servers
2929warner@lothar.com**20110227011124
2930 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
2931]
2932[test_upload.py: factor out FakeServerTracker
2933warner@lothar.com**20110227011120
2934 Ignore-this: 6c182cba90e908221099472cc159325b
2935]
2936[test_upload.py: server-vs-tracker cleanup
2937warner@lothar.com**20110227011115
2938 Ignore-this: 2915133be1a3ba456e8603885437e03
2939]
2940[happinessutil.py: server-vs-tracker cleanup
2941warner@lothar.com**20110227011111
2942 Ignore-this: b856c84033562d7d718cae7cb01085a9
2943]
2944[upload.py: more tracker-vs-server cleanup
2945warner@lothar.com**20110227011107
2946 Ignore-this: bb75ed2afef55e47c085b35def2de315
2947]
2948[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
2949warner@lothar.com**20110227011103
2950 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
2951]
2952[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
2953warner@lothar.com**20110227011100
2954 Ignore-this: 7ea858755cbe5896ac212a925840fe68
2955 
2956 No behavioral changes, just updating variable/method names and log messages.
2957 The effects outside these three files should be minimal: some exception
2958 messages changed (to say "server" instead of "peer"), and some internal class
2959 names were changed. A few things still use "peer" to minimize external
2960 changes, like UploadResults.timings["peer_selection"] and
2961 happinessutil.merge_peers, which can be changed later.
2962]
2963[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
2964warner@lothar.com**20110227011056
2965 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
2966]
2967[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
2968warner@lothar.com**20110227011051
2969 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
2970]
2971[test: increase timeout on a network test because Francois's ARM machine hit that timeout
2972zooko@zooko.com**20110317165909
2973 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
2974 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
2975]
2976[docs/configuration.rst: add a "Frontend Configuration" section
2977Brian Warner <warner@lothar.com>**20110222014323
2978 Ignore-this: 657018aa501fe4f0efef9851628444ca
2979 
2980 this points to docs/frontends/*.rst, which were previously underlinked
2981]
2982[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
2983"Brian Warner <warner@lothar.com>"**20110221061544
2984 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
2985]
2986[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
2987david-sarah@jacaranda.org**20110221015817
2988 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
2989]
2990[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
2991david-sarah@jacaranda.org**20110221020125
2992 Ignore-this: b0744ed58f161bf188e037bad077fc48
2993]
2994[Refactor StorageFarmBroker handling of servers
2995Brian Warner <warner@lothar.com>**20110221015804
2996 Ignore-this: 842144ed92f5717699b8f580eab32a51
2997 
2998 Pass around IServer instance instead of (peerid, rref) tuple. Replace
2999 "descriptor" with "server". Other replacements:
3000 
3001  get_all_servers -> get_connected_servers/get_known_servers
3002  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3003 
3004 This change still needs to be pushed further down: lots of code is now
3005 getting the IServer and then distributing (peerid, rref) internally.
3006 Instead, it ought to distribute the IServer internally and delay
3007 extracting a serverid or rref until the last moment.
3008 
3009 no_network.py was updated to retain parallelism.
3010]
3011[TAG allmydata-tahoe-1.8.2
3012warner@lothar.com**20110131020101]
3013Patch bundle hash:
3014d78adbbb6a55d1872ab518b4525b94b8415df205