Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
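  A minimal sketch of that mocking approach (assuming Python 2, the mock library, and
  Twisted's trial, which the tests below already use; the test name here is purely
  illustrative) intercepts the builtin open() so that no real files are touched:

    import mock
    from StringIO import StringIO
    from twisted.trial import unittest

    class ExampleMockedOpen(unittest.TestCase):
        @mock.patch('__builtin__.open')
        def test_open_is_intercepted(self, mockopen):
            # Serve an in-memory "file" instead of touching the disk.
            mockopen.side_effect = lambda fname, mode='r': StringIO('fake contents')
            f = open('some/path', 'rb')
            self.failUnlessEqual(f.read(), 'fake contents')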

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy; not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The null backend is necessary for testing a backend with unlimited space.  It is a mock-like object.
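    Roughly, such a backend reports unlimited available space and silently discards
    all writes.  A simplified sketch (based on the NullBackend and NullBucketWriter
    classes introduced in the patches below; the Referenceable/interface plumbing is
    omitted here):

      class NullBucketWriter:
          def remote_write(self, offset, data):
              return  # discard the written data

      class NullBackend:
          def get_available_space(self):
              return None  # None means "unlimited / unknown", per IStorageBackend
          def get_bucket_shares(self, storage_index):
              return set()  # never holds any shares
          def get_share(self, storage_index, sharenum):
              return None
          def make_bucket_writer(self, storage_index, shnum,
                                 max_space_per_bucket, lease_info, canary):
              return NullBucketWriter()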

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """Handle a report of corruption."""
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy; not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subcless of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 0)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockgetsize.called)
+        self.failIf(mockexists.called)
+
+
+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
hunk ./src/allmydata/test/test_backends.py 126
                 return StringIO()
         mockopen.side_effect = call_open
 
-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
-
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
 
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 134
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """Handle a report of corruption."""
+        """ Write a new share. """
 
         def call_listdir(dirname):
             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
hunk ./src/allmydata/test/test_backends.py 173
         mockopen.side_effect = call_open
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        print bs
         bs[0].remote_write(0, 'a')
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
 
hunk ./src/allmydata/test/test_backends.py 176
-
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 218
 
         self.failUnlessEqual(len(bs), 1)
         b = bs[0]
+        # These should match by definition; the next two cases cover behavior that is not completely unambiguous.
         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
         # If you try to read past the end you get as much data as is there.
         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
hunk ./src/allmydata/test/test_backends.py 224
         # If you start reading past the end of the file you get the empty string.
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
+
+
}
[a temp patch used as a snapshot
wilcoxjg@gmail.com**20110626052732
 Ignore-this: 95f05e314eaec870afa04c76d979aa44
] {
hunk ./docs/configuration.rst 637
   [storage]
   enabled = True
   readonly = True
-  sizelimit = 10000000000
 
 
   [helper]
hunk ./docs/garbage-collection.rst 16
 
 When a file or directory in the virtual filesystem is no longer referenced,
 the space that its shares occupied on each storage server can be freed,
-making room for other shares. Tahoe currently uses a garbage collection
+making room for other shares. Tahoe uses a garbage collection
 ("GC") mechanism to implement this space-reclamation process. Each share has
 one or more "leases", which are managed by clients who want the
 file/directory to be retained. The storage server accepts each share for a
hunk ./docs/garbage-collection.rst 34
 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
 If lease renewal occurs quickly and with 100% reliability, than any renewal
 time that is shorter than the lease duration will suffice, but a larger ratio
-of duration-over-renewal-time will be more robust in the face of occasional
+of lease duration to renewal time will be more robust in the face of occasional
 delays or failures.
 
 The current recommended values for a small Tahoe grid are to renew the leases
replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
hunk ./src/allmydata/client.py 260
             sharetypes.append("mutable")
         expiration_sharetypes = tuple(sharetypes)
 
+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
+            xyz 
+        xyz
         ss = StorageServer(storedir, self.nodeid,
                            reserved_space=reserved,
                            discard_storage=discard,
hunk ./src/allmydata/storage/crawler.py 234
         f = open(tmpfile, "wb")
         pickle.dump(self.state, f)
         f.close()
-        fileutil.move_into_place(tmpfile, self.statefile)
+        fileutil.move_into_place(tmpfile, self.statefname)
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
}
[snapshot of progress on backend implementation (not suitable for trunk)
wilcoxjg@gmail.com**20110626053244
 Ignore-this: 50c764af791c2b99ada8289546806a0a
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/das
move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
adddir ./src/allmydata/storage/backends/null
hunk ./src/allmydata/interfaces.py 270
         store that on disk.
         """
 
+class IStorageBackend(Interface):
+    """
+    Objects of this kind live on the server side and are used by the
+    storage server object.
+    """
+    def get_available_space(self, reserved_space):
+        """ Returns available space for share storage in bytes, or
+        None if this information is not available or if the available
+        space is unlimited.
+
+        If the backend is configured for read-only mode then this will
+        return 0.
+
+        reserved_space is how many bytes to subtract from the answer, so
+        you can pass how many bytes you would like to leave unused on this
+        filesystem as reserved_space. """
+
+    def get_bucket_shares(self):
+        """XXX"""
+
+    def get_share(self):
+        """XXX"""
+
+    def make_bucket_writer(self):
+        """XXX"""
+
+class IStorageBackendShare(Interface):
+    """
+    This object contains as much as all of the share data.  It is intended
+    for lazy evaluation such that in many use cases substantially less than
+    all of the share data will be accessed.
+    """
+    def is_complete(self):
+        """
+        Returns the share state, or None if the share does not exist.
+        """
+
 class IStorageBucketWriter(Interface):
     """
     Objects of this kind live on the client side.
hunk ./src/allmydata/interfaces.py 2492
 
 class EmptyPathnameComponentError(Exception):
     """The webapi disallows empty pathname components."""
+
+class IShareStore(Interface):
+    pass
+
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/das/__init__.py
addfile ./src/allmydata/storage/backends/das/core.py
hunk ./src/allmydata/storage/backends/das/core.py 1
+from allmydata.interfaces import IStorageBackend
+from allmydata.storage.backends.base import Backend
+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+from allmydata.util.assertutil import precondition
+
+import os, re, weakref, struct, time
+
+from foolscap.api import Referenceable
+from twisted.application import service
+
+from zope.interface import implements
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.util import fileutil, idlib, log, time_format
+import allmydata # for __full_version__
+
+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
+     create_mutable_sharefile
+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
+
+from zope.interface import implements
+
+class DASCore(Backend):
+    implements(IStorageBackend)
+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf(expiration_policy)
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefname = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = FSBucketCountingCrawler(statefname)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self, expiration_policy):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_shares(self, storage_index):
+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(finalstoragedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(finalstoragedir, f)
+                    yield FSBShare(filename, int(f))
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+        
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
+        return bw
+        
+
+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
+# and share data. The share data is accessed by RIBucketWriter.write and
+# RIBucketReader.read . The lease information is not accessible through these
+# interfaces.
+
+# The share file has the following layout:
+#  0x00: share file version number, four bytes, current version is 1
+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
+#  0x08: number of leases, four bytes big-endian
+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
+#  A+0x0c = B: first lease. Lease format is:
+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
+#   B+0x04: renew secret, 32 bytes (SHA256)
+#   B+0x24: cancel secret, 32 bytes (SHA256)
+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
+#   B+0x48: next lease, or end of record
+
+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
+# but it is still filled in by storage servers in case the storage server
+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
+# share file is moved from one storage server to another. The value stored in
+# this field is truncated, so if the actual share data length is >= 2**32,
+# then the value stored in this field will be the actual share data length
+# modulo 2**32.
+
+class ImmutableShare:
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum 
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
+            self._num_leases = 0
+        else:
+            f = open(self.fname, 'rb')
+            filesize = os.path.getsize(self.fname)
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.close()
+            if version != 1:
+                msg = "sharefile %s had version %d but we wanted 1" % \
+                      (self.fname, version)
+                raise UnknownImmutableContainerVersionError(msg)
+            self._num_leases = num_leases
+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+        self._data_offset = 0xc
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos))
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        return f.read(actuallength)
+
+    def write_share_data(self, offset, data):
+        length = len(data)
+        precondition(offset >= 0, offset)
+        if self._max_size is not None and offset+length > self._max_size:
+            raise DataTooLargeError(self._max_size, offset, length)
+        f = open(self.fname, 'rb+')
+        real_offset = self._data_offset+offset
+        f.seek(real_offset)
+        assert f.tell() == real_offset
+        f.write(data)
+        f.close()
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease_info):
+        f = open(self.fname, 'rb+')
+        num_leases = self._read_num_leases(f)
+        self._write_lease_record(f, num_leases, lease_info)
+        self._write_num_leases(f, num_leases+1)
+        f.close()
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file. Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            f = open(self.fname, 'rb+')
+            for i,lease in enumerate(leases):
+                self._write_lease_record(f, i, lease)
+            self._write_num_leases(f, len(leases))
+            self._truncate_leases(f, len(leases))
+            f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
hunk ./src/allmydata/storage/backends/das/expirer.py 2
 import time, os, pickle, struct
-from allmydata.storage.crawler import ShareCrawler
-from allmydata.storage.shares import get_share_file
+from allmydata.storage.crawler import FSShareCrawler
 from allmydata.storage.common import UnknownMutableContainerVersionError, \
      UnknownImmutableContainerVersionError
 from twisted.python import log as twlog
hunk ./src/allmydata/storage/backends/das/expirer.py 7
 
-class LeaseCheckingCrawler(ShareCrawler):
+class FSLeaseCheckingCrawler(FSShareCrawler):
     """I examine the leases on all shares, determining which are still valid
     and which have expired. I can remove the expired leases (if so
     configured), and the share will be deleted when the last lease is
hunk ./src/allmydata/storage/backends/das/expirer.py 50
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, statefile, historyfile,
-                 expiration_enabled, mode,
-                 override_lease_duration, # used if expiration_mode=="age"
-                 cutoff_date, # used if expiration_mode=="cutoff-date"
-                 sharetypes):
+    def __init__(self, statefile, historyfile, expiration_policy):
         self.historyfile = historyfile
hunk ./src/allmydata/storage/backends/das/expirer.py 52
-        self.expiration_enabled = expiration_enabled
-        self.mode = mode
+        self.expiration_enabled = expiration_policy['enabled']
+        self.mode = expiration_policy['mode']
         self.override_lease_duration = None
         self.cutoff_date = None
         if self.mode == "age":
hunk ./src/allmydata/storage/backends/das/expirer.py 57
-            assert isinstance(override_lease_duration, (int, type(None)))
-            self.override_lease_duration = override_lease_duration # seconds
+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
         elif self.mode == "cutoff-date":
hunk ./src/allmydata/storage/backends/das/expirer.py 60
-            assert isinstance(cutoff_date, int) # seconds-since-epoch
+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
             assert cutoff_date is not None
hunk ./src/allmydata/storage/backends/das/expirer.py 62
-            self.cutoff_date = cutoff_date
+            self.cutoff_date = expiration_policy['cutoff_date']
         else:
hunk ./src/allmydata/storage/backends/das/expirer.py 64
-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
-        self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, statefile)
+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
+        self.sharetypes_to_expire = expiration_policy['sharetypes']
+        FSShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/backends/das/expirer.py 156
 
     def process_share(self, sharefilename):
         # first, find out what kind of a share it is
-        sf = get_share_file(sharefilename)
+        f = open(sharefilename, "rb")
+        prefix = f.read(32)
+        f.close()
+        if prefix == MutableShareFile.MAGIC:
+            sf = MutableShareFile(sharefilename)
+        else:
+            # otherwise assume it's immutable
+            sf = FSBShare(sharefilename)
         sharetype = sf.sharetype
         now = time.time()
         s = self.stat(sharefilename)
addfile ./src/allmydata/storage/backends/null/__init__.py
addfile ./src/allmydata/storage/backends/null/core.py
hunk ./src/allmydata/storage/backends/null/core.py 1
+from allmydata.storage.backends.base import Backend
+
+class NullCore(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
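+        # None means "no limit": the server skips its remaining-space check
+        # when get_available_space returns None.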
+        return None
+
+    def get_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
hunk ./src/allmydata/storage/crawler.py 12
 class TimeSliceExceeded(Exception):
     pass
 
-class ShareCrawler(service.MultiService):
-    """A subcless of ShareCrawler is attached to a StorageServer, and
+class FSShareCrawler(service.MultiService):
+    """A subclass of FSShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
+    def __init__(self, statefname, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.backend = backend
+        self.statefname = statefname
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
         #                            of the last bucket to be processed, or
         #                            None if we are sleeping between cycles
         try:
-            f = open(self.statefile, "rb")
+            f = open(self.statefname, "rb")
             state = pickle.load(f)
             f.close()
         except EnvironmentError:
hunk ./src/allmydata/storage/crawler.py 230
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefile + ".tmp"
+        tmpfile = self.statefname + ".tmp"
         f = open(tmpfile, "wb")
         pickle.dump(self.state, f)
         f.close()
hunk ./src/allmydata/storage/crawler.py 433
         pass
 
 
-class BucketCountingCrawler(ShareCrawler):
+class FSBucketCountingCrawler(FSShareCrawler):
     """I keep track of how many buckets are being managed by this server.
     This is equivalent to the number of distributed files and directories for
     which I am providing storage. The actual number of files+directories in
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, statefile)
+    def __init__(self, statefname, num_sample_prefixes=1):
+        FSShareCrawler.__init__(self, statefname)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/immutable.py 14
 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
      DataTooLargeError
 
-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
-# and share data. The share data is accessed by RIBucketWriter.write and
-# RIBucketReader.read . The lease information is not accessible through these
-# interfaces.
-
-# The share file has the following layout:
-#  0x00: share file version number, four bytes, current version is 1
-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
-#  0x08: number of leases, four bytes big-endian
-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
-#  A+0x0c = B: first lease. Lease format is:
-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
-#   B+0x04: renew secret, 32 bytes (SHA256)
-#   B+0x24: cancel secret, 32 bytes (SHA256)
-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
-#   B+0x48: next lease, or end of record
-
-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
-# but it is still filled in by storage servers in case the storage server
-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
-# share file is moved from one storage server to another. The value stored in
-# this field is truncated, so if the actual share data length is >= 2**32,
-# then the value stored in this field will be the actual share data length
-# modulo 2**32.
-
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
-    sharetype = "immutable"
-
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than
-        max_size to be written to me. If create=True then max_size
-        must not be None. """
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.home = filename
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (filename, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
-
-    def unlink(self):
-        os.unlink(self.home)
-
-    def read_share_data(self, offset, length):
-        precondition(offset >= 0)
-        # Reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string.
-        seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.home)
-        actuallength = max(0, min(length, fsize-seekpos))
-        if actuallength == 0:
-            return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
-
-    def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
-
-    def _write_lease_record(self, f, lease_number, lease_info):
-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
-        f.seek(offset)
-        assert f.tell() == offset
-        f.write(lease_info.to_immutable_data())
-
-    def _read_num_leases(self, f):
-        f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
-        return num_leases
-
-    def _write_num_leases(self, f, num_leases):
-        f.seek(0x08)
-        f.write(struct.pack(">L", num_leases))
-
-    def _truncate_leases(self, f, num_leases):
-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
-
-    def get_leases(self):
-        """Yields a LeaseInfo instance for all leases."""
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
-
-    def add_lease(self, lease_info):
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
-
-    def renew_lease(self, renew_secret, new_expire_time):
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
-        raise IndexError("unable to renew non-existent lease")
-
-    def add_or_renew_lease(self, lease_info):
-        try:
-            self.renew_lease(lease_info.renew_secret,
-                             lease_info.expiration_time)
-        except IndexError:
-            self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-class NullBucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def remote_write(self, offset, data):
-        return
-
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 17
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 19
-        self.incominghome = incominghome
-        self.finalhome = finalhome
         self._max_size = max_size # don't allow the client to write more than this
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
hunk ./src/allmydata/storage/immutable.py 24
         self.closed = False
         self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
+        self._sharefile = immutableshare
         # also, add our lease to the file now, so that other ones can be
         # added by simultaneous uploaders
         self._sharefile.add_lease(lease_info)
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
-from allmydata.storage.expirer import LeaseCheckingCrawler
 
 from zope.interface import implements
 
hunk ./src/allmydata/storage/server.py 19
-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
-# be started and stopped.
-class Backend(service.MultiService):
-    implements(IStatsProducer)
-    def __init__(self):
-        service.MultiService.__init__(self)
-
-    def get_bucket_shares(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def get_share(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def make_bucket_writer(self):
-        """XXX"""
-        raise NotImplementedError
-
-class NullBackend(Backend):
-    def __init__(self):
-        Backend.__init__(self)
-
-    def get_available_space(self):
-        return None
-
-    def get_bucket_shares(self, storage_index):
-        return set()
-
-    def get_share(self, storage_index, sharenum):
-        return None
-
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
-
-class FSBackend(Backend):
-    def __init__(self, storedir, readonly=False, reserved_space=0):
-        Backend.__init__(self)
-
-        self._setup_storage(storedir, readonly, reserved_space)
-        self._setup_corruption_advisory()
-        self._setup_bucket_counter()
-        self._setup_lease_checkerf()
-
-    def _setup_storage(self, storedir, readonly, reserved_space):
-        self.storedir = storedir
-        self.readonly = readonly
-        self.reserved_space = int(reserved_space)
-        if self.reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umid="0wZ27w", level=log.UNUSUAL)
-
-        self.sharedir = os.path.join(self.storedir, "shares")
-        fileutil.make_dirs(self.sharedir)
-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
-        self._clean_incomplete()
-
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-        fileutil.make_dirs(self.incomingdir)
-
-    def _setup_corruption_advisory(self):
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(self.storedir,
-                                                    "corruption-advisories")
-
-    def _setup_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(statefile)
-        self.bucket_counter.setServiceParent(self)
-
-    def _setup_lease_checkerf(self):
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
-
-    def get_available_space(self):
-        if self.readonly:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
-    def get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 32
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
hunk ./src/allmydata/storage/server.py 35
-    LeaseCheckerClass = LeaseCheckingCrawler
 
     def __init__(self, nodeid, backend, reserved_space=0,
                  readonly_storage=False,
hunk ./src/allmydata/storage/server.py 38
-                 stats_provider=None,
-                 expiration_enabled=False,
-                 expiration_mode="age",
-                 expiration_override_lease_duration=None,
-                 expiration_cutoff_date=None,
-                 expiration_sharetypes=("mutable", "immutable")):
+                 stats_provider=None ):
         service.MultiService.__init__(self)
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
hunk ./src/allmydata/storage/server.py 217
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
-            alreadygot.add(shnum)
-            sf = ShareFile(fn)
-            sf.add_or_renew_lease(lease_info)
-
-        for shnum in sharenums:
-            share = self.backend.get_share(storage_index, shnum)
+        for share in self.backend.get_shares(storage_index):
+            alreadygot.add(share.shnum)
+            share.add_or_renew_lease(lease_info)
 
hunk ./src/allmydata/storage/server.py 221
-            if not share:
-                if (not limited) or (remaining_space >= max_space_per_bucket):
-                    # ok! we need to create the new share file.
-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                      max_space_per_bucket, lease_info, canary)
-                    bucketwriters[shnum] = bw
-                    self._active_writers[bw] = 1
-                    if limited:
-                        remaining_space -= max_space_per_bucket
-                else:
-                    # bummer! not enough space to accept this bucket
-                    pass
+        for shnum in (sharenums - alreadygot):
+            if (not limited) or (remaining_space >= max_space_per_bucket):
+                # XXX Should the following call happen in the StorageServer constructor instead?
+                self.backend.set_storage_server(self)
+                # OK: we need to create the new share file.
+                bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                                     max_space_per_bucket, lease_info, canary)
+                bucketwriters[shnum] = bw
+                self._active_writers[bw] = 1
+                if limited:
+                    remaining_space -= max_space_per_bucket
 
hunk ./src/allmydata/storage/server.py 232
-            elif share.is_complete():
-                # great! we already have it. easy.
-                pass
-            elif not share.is_complete():
-                # Note that we don't create BucketWriters for shnums that
-                # have a partial share (in incoming/), so if a second upload
-                # occurs while the first is still in progress, the second
-                # uploader will use different storage servers.
-                pass
+        # XXX Document later: shnums that already have a share (complete or
+        # partial) get no new BucketWriter here.
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 238
 
     def _iter_share_files(self, storage_index):
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self._get_shares(storage_index):
             f = open(filename, 'rb')
             header = f.read(32)
             f.close()
hunk ./src/allmydata/storage/server.py 318
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/storage/server.py 334
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_bucket_shares(storage_index).next()
+            shnum, filename = self._get_shares(storage_index).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
-    f = open(filename, "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        return MutableShareFile(filename)
-    # otherwise assume it's immutable
-    return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/common_util.py 20
 
 def flip_one_bit(s, offset=0, size=None):
     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
-    than offset+size. """
+    than offset+size. Return the new string. """
     if size is None:
         size=len(s)-offset
     i = randrange(offset, offset+size)
hunk ./src/allmydata/test/test_backends.py 7
 
 from allmydata.test.common_util import ReallyEqualMixin
 
-import mock
+import mock, os
 
 # This is the code that we're going to be testing.
hunk ./src/allmydata/test/test_backends.py 10
-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
+from allmydata.storage.server import StorageServer
+
+from allmydata.storage.backends.das.core import DASCore
+from allmydata.storage.backends.null.core import NullCore
+
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 22
 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+tempdir = 'teststoredir'
+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharefname = os.path.join(sharedirname, '0')
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 58
         filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
             else:
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
hunk ./src/allmydata/test/test_backends.py 124
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 131
-
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        expiration_policy = {'enabled' : False, 
+                             'mode' : 'age',
+                             'override_lease_duration' : None,
+                             'cutoff_date' : None,
+                             'sharetypes' : None}
+        testbackend = DASCore(tempdir, expiration_policy)
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
 
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 148
         """ Write a new share. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 178
 
         sharefile = MockFile()
         def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
             return sharefile
 
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 200
         StorageServer object. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
         mocklistdir.side_effect = call_listdir
}
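
For reference, a minimal sketch of driving the new expiration_policy signature
introduced above (the key names are exactly those read in
FSLeaseCheckingCrawler.__init__; the state-file paths are illustrative):

    from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler

    expiration_policy = {'enabled': True,
                         'mode': 'age',                  # or 'cutoff-date'
                         'override_lease_duration': 60,  # seconds; consulted when mode == 'age'
                         'cutoff_date': None,            # seconds-since-epoch; consulted when mode == 'cutoff-date'
                         'sharetypes': ('mutable', 'immutable')}

    lc = FSLeaseCheckingCrawler('storage/lease_checker.state',
                                'storage/lease_checker.history',
                                expiration_policy)
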
[checkpoint patch
wilcoxjg@gmail.com**20110626165715
 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
] {
hunk ./src/allmydata/storage/backends/das/core.py 21
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
hunk ./src/allmydata/storage/backends/das/core.py 27
 from zope.interface import implements
 
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
 class DASCore(Backend):
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
hunk ./src/allmydata/storage/backends/das/core.py 80
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def get_shares(self, storage_index):
-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
         try:
             for f in os.listdir(finalstoragedir):
hunk ./src/allmydata/storage/backends/das/core.py 86
                 if NUM_RE.match(f):
                     filename = os.path.join(finalstoragedir, f)
-                    yield FSBShare(filename, int(f))
+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
         except OSError:
             # Commonly caused by there being no buckets at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 95
         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
+
+    def set_storage_server(self, ss):
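+        # Keep a reference to the server so make_bucket_writer can hand it
+        # to BucketWriter.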
+        self.ss = ss
         
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
hunk ./src/allmydata/storage/server.py 29
 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
 # base-32 chars).
 
-# $SHARENUM matches this regex:
-NUM_RE=re.compile("^[0-9]+$")
 
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
}
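
For reference, the enumeration pattern that DASCore.get_shares now uses, as a
standalone sketch (NUM_RE and the error handling are taken from the hunk above;
the function name is illustrative):

    import os, re

    NUM_RE = re.compile("^[0-9]+$")  # $SHARENUM matches this regex

    def iter_share_numbers(finalstoragedir):
        # Yield the integer share numbers present in one storage-index directory.
        try:
            for f in os.listdir(finalstoragedir):
                if NUM_RE.match(f):
                    yield int(f)
        except OSError:
            # Commonly caused by there being no buckets at all.
            pass
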
[checkpoint4
wilcoxjg@gmail.com**20110628202202
 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
] {
hunk ./src/allmydata/storage/backends/das/core.py 96
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
+    def make_bucket_reader(self, share):
+        return BucketReader(self.ss, share)
+
     def set_storage_server(self, ss):
         self.ss = ss
         
hunk ./src/allmydata/storage/backends/das/core.py 138
         must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.shnum = shnum 
+        self.storage_index = storageindex
         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
         if create:
hunk ./src/allmydata/storage/backends/das/core.py 173
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
         self._data_offset = 0xc
 
+    def get_shnum(self):
+        return self.shnum
+
     def unlink(self):
         os.unlink(self.fname)
 
hunk ./src/allmydata/storage/backends/null/core.py 2
 from allmydata.storage.backends.base import Backend
+import os, stat, struct
+from allmydata.util.assertutil import precondition
+from allmydata.util.hashutil import constant_time_compare
+from allmydata.util import fileutil
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.common import storage_index_to_dir, DataTooLargeError, \
+     UnknownImmutableContainerVersionError
+from allmydata.storage.immutable import BucketWriter, BucketReader
 
 class NullCore(Backend):
     def __init__(self):
hunk ./src/allmydata/storage/backends/null/core.py 17
     def get_share(self, storage_index, sharenum):
         return None
 
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+        
+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
+
+    def set_storage_server(self, ss):
+        self.ss = ss
+
+class ImmutableShare:
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum 
+        self.storage_index = storageindex
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
+            self._num_leases = 0
+        else:
+            f = open(self.fname, 'rb')
+            filesize = os.path.getsize(self.fname)
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.close()
+            if version != 1:
+                msg = "sharefile %s had version %d but we wanted 1" % \
+                      (self.fname, version)
+                raise UnknownImmutableContainerVersionError(msg)
+            self._num_leases = num_leases
+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+        self._data_offset = 0xc
+
+    def get_shnum(self):
+        return self.shnum
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos))
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        return f.read(actuallength)
+
+    def write_share_data(self, offset, data):
+        length = len(data)
+        precondition(offset >= 0, offset)
+        if self._max_size is not None and offset+length > self._max_size:
+            raise DataTooLargeError(self._max_size, offset, length)
+        f = open(self.fname, 'rb+')
+        real_offset = self._data_offset+offset
+        f.seek(real_offset)
+        assert f.tell() == real_offset
+        f.write(data)
+        f.close()
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease_info):
+        f = open(self.fname, 'rb+')
+        num_leases = self._read_num_leases(f)
+        self._write_lease_record(f, num_leases, lease_info)
+        self._write_num_leases(f, num_leases+1)
+        f.close()
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            f = open(self.fname, 'rb+')
+            for i,lease in enumerate(leases):
+                self._write_lease_record(f, i, lease)
+            self._write_num_leases(f, len(leases))
+            self._truncate_leases(f, len(leases))
+            f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
hunk ./src/allmydata/storage/immutable.py 114
 class BucketReader(Referenceable):
     implements(RIBucketReader)
 
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
+    def __init__(self, ss, share):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 116
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
+        self._share_file = share
+        self.storage_index = share.storage_index
+        self.shnum = share.shnum
 
     def __repr__(self):
         return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/server.py 316
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_shares(storage_index):
-            bucketreaders[shnum] = BucketReader(self, filename,
-                                                storage_index, shnum)
+        self.backend.set_storage_server(self)
+        for share in self.backend.get_shares(storage_index):
+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
hunk ./src/allmydata/test/test_backends.py 25
 tempdir = 'teststoredir'
 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharefname = os.path.join(sharedirname, '0')
+expiration_policy = {'enabled' : False, 
+                     'mode' : 'age',
+                     'override_lease_duration' : None,
+                     'cutoff_date' : None,
+                     'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 43
         tries to read or write to the file system. """
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 74
         mockopen.side_effect = call_open
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 86
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 136
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
-        expiration_policy = {'enabled' : False, 
-                             'mode' : 'age',
-                             'override_lease_duration' : None,
-                             'cutoff_date' : None,
-                             'sharetypes' : None}
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
 
}
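
For reference, a standalone sketch of reading the v1 immutable share header that
ImmutableShare writes in the hunks above (the field layout is exactly the one in
the code; the function name is illustrative):

    import os, struct

    LEASE_SIZE = struct.calcsize(">L32s32sL")  # owner num, renew secret, cancel secret, expiration

    def read_share_header(fname):
        # Header: 0x00 version (4 bytes), 0x04 data length (saturated at
        # 2**32-1), 0x08 number of leases; share data starts at offset 0xc.
        f = open(fname, 'rb')
        try:
            (version, datalen, num_leases) = struct.unpack(">LLL", f.read(0xc))
        finally:
            f.close()
        lease_offset = os.path.getsize(fname) - (num_leases * LEASE_SIZE)
        return (version, datalen, num_leases, lease_offset)
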
[checkpoint5
wilcoxjg@gmail.com**20110705034626
 Ignore-this: 255780bd58299b0aa33c027e9d008262
] {
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+from twisted.application import service
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
hunk ./src/allmydata/storage/backends/null/core.py 19
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         
+        immutableshare = ImmutableShare() 
         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
 
     def set_storage_server(self, ss):
hunk ./src/allmydata/storage/backends/null/core.py 28
 class ImmutableShare:
     sharetype = "immutable"
 
-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than
-        max_size to be written to me. If create=True then max_size
-        must not be None. """
+    def __init__(self):
+        """ A stub share for the null backend: stores nothing and discards all writes. """
hunk ./src/allmydata/storage/backends/null/core.py 32
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.shnum = shnum 
-        self.storage_index = storageindex
-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.fname)
-            fileutil.make_dirs(os.path.dirname(self.fname))
-            f = open(self.fname, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.fname, 'rb')
-            filesize = os.path.getsize(self.fname)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (self.fname, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
+        pass
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/null/core.py 54
         return f.read(actuallength)
 
     def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.fname, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        pass
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/null/core.py 84
             if data:
                 yield LeaseInfo().from_immutable_data(data)
 
-    def add_lease(self, lease_info):
-        f = open(self.fname, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+    def add_lease(self, lease):
+        pass
 
     def renew_lease(self, renew_secret, new_expire_time):
         for i,lease in enumerate(self.get_leases()):
hunk ./src/allmydata/test/test_backends.py 32
                      'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """ This tests whether a server instance can be constructed
-        with a null backend. The server instance fails the test if it
-        tries to read or write to the file system. """
-
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-
-        # You passed!
-
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 53
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-        self.failIf(mocktime.called)
-
-        # You passed!
-
-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
-    def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
-        """ Write a new share. """
-
-        # Now begin the test.
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        bs[0].remote_write(0, 'a')
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
 
hunk ./src/allmydata/test/test_backends.py 62
-    @mock.patch('os.path.exists')
-    @mock.patch('os.path.getsize')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
-        """ This tests whether the code correctly finds and reads
-        shares written out by old (Tahoe-LAFS <= v1.8.2)
-        servers. There is a similar test in test_download, but that one
-        is from the perspective of the client and exercises a deeper
-        stack of code. This one is for exercising just the
-        StorageServer object. """
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 76
-        bs = self.s.remote_get_buckets('teststorage_index')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 78
-        self.failUnlessEqual(len(bs), 0)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockgetsize.called)
-        self.failIf(mockexists.called)
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
 
 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
hunk ./src/allmydata/test/test_backends.py 193
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
+
+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a file system backend instance can be
+        constructed. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
+
+        def call_open(fname, mode):
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+                return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
+        mockopen.side_effect = call_open
+
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
+
+        # Now begin the test.
+        DASCore('teststoredir', expiration_policy)
+
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
}
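
The tests above all rely on one idiom: each patched filesystem call gets a
side_effect function that whitelists exactly the paths the code under test may
touch and fails the test on anything else. A minimal sketch of the idiom in
isolation (the whitelisted path is illustrative):

    import mock
    from StringIO import StringIO
    from twisted.trial import unittest

    class MockingIdiom(unittest.TestCase):
        @mock.patch('__builtin__.open')
        def test_only_whitelisted_opens(self, mockopen):
            allowed = ['teststoredir/lease_checker.history']  # illustrative

            def call_open(fname, mode):
                if fname not in allowed:
                    self.fail("tried to open '%s' in mode '%s'" % (fname, mode))
                return StringIO()
            mockopen.side_effect = call_open

            # Any open() in this block routes through call_open and fails fast
            # unless the path is whitelisted.
            open('teststoredir/lease_checker.history', 'rb')
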
[checkpoint 6
wilcoxjg@gmail.com**20110706190824
 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
] {
hunk ./src/allmydata/interfaces.py 100
                          renew_secret=LeaseRenewSecret,
                          cancel_secret=LeaseCancelSecret,
                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
-                         allocated_size=Offset, canary=Referenceable):
+                         allocated_size=Offset, 
+                         canary=Referenceable):
         """
hunk ./src/allmydata/interfaces.py 103
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shares to be created or
                               increfed.
hunk ./src/allmydata/interfaces.py 105
-        @param sharenums: these are the share numbers (probably between 0 and
-                          99) that the sender is proposing to store on this
-                          server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect share refresh.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 109
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: Like renew_secret, but protects share decref.
+        @param sharenums: these are the share numbers (probably between 0 and
+                          99) that the sender is proposing to store on this
+                          server.
+        @param allocated_size: XXX The size of the shares the client wishes to store.
+        @param canary: If the canary is lost before close(), the shares are
                        deleted.
hunk ./src/allmydata/interfaces.py 116
+
         @return: tuple of (alreadygot, allocated), where alreadygot is what we
                  already have and allocated is what we hereby agree to accept.
                  New leases are added for shares in both lists.
hunk ./src/allmydata/interfaces.py 128
                   renew_secret=LeaseRenewSecret,
                   cancel_secret=LeaseCancelSecret):
         """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shares. If the renew_secret matches an
         existing lease, that lease will be renewed instead. If there is no
         bucket for the given storage_index, return silently. (note that in
         tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-from zope.interface import implements
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/test/test_backends.py 6
 from StringIO import StringIO
 
 from allmydata.test.common_util import ReallyEqualMixin
+from allmydata.util.assertutil import _assert
 
 import mock, os
 
hunk ./src/allmydata/test/test_backends.py 92
                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
+            else:
+                _assert(False, "The tester code doesn't recognize this case.")  
+
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
hunk ./src/allmydata/test/test_backends.py 109
 
         def call_listdir(dirname):
             self.failUnlessReallyEqual(dirname, sharedirname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 113
+        def call_isdir(dirname):
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
+            else:
+                return True
+
+        mockmkdir.side_effect = call_mkdir
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 156
             return sharefile
 
         mockopen.side_effect = call_open
+
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 161
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+        
+        # Now test the allocated_size method.
+        spaceint = self.s.allocated_size()
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
}
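
The reordered docstring now reads in the same order as the call shape the
tests in this bundle exercise. As a usage sketch (the server argument and
the mock canary are placeholders, not code from the tree):

import mock

def upload_one_share(s):
    """s is assumed to be a StorageServer; the secrets are 32-byte strings."""
    canary = mock.Mock()   # stands in for a foolscap Referenceable
    alreadygot, bucketwriters = s.remote_allocate_buckets(
        'teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, canary)
    # alreadygot: shnums the server already holds (their leases renewed);
    # bucketwriters: dict shnum -> BucketWriter for newly accepted shares.
    for shnum, bw in bucketwriters.items():
        bw.remote_write(0, 'a')   # write the share data at offset 0
        bw.remote_close()         # seal it: the share leaves incoming/
    return alreadygot, bucketwriters
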
[checkpoint 7
wilcoxjg@gmail.com**20110706200820
 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
] hunk ./src/allmydata/test/test_backends.py 164
         
         # Now test the allocated_size method.
         spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
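
allocated_size() itself never appears in this bundle; the assertion above
expects 1 because exactly one BucketWriter with allocated_size=1 is still
open. A sketch of that accounting, on the assumption that StorageServer
sums over its active writers:

class AllocationAccountingSketch(object):
    def __init__(self):
        self._active_writers = {}   # BucketWriter -> 1, as in server.py

    def allocated_size(self):
        # assumption: promised space is the sum over writers that have
        # not yet closed; bucket_writer_closed() drops a writer from
        # _active_writers, so sealed shares stop counting.
        space = 0
        for bw in self._active_writers:
            space += bw.allocated_size()
        return space
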
[checkpoint8
wilcoxjg@gmail.com**20110706223126
 Ignore-this: 97336180883cb798b16f15411179f827
   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
] hunk ./src/allmydata/test/test_backends.py 32
                      'cutoff_date' : None,
                      'sharetypes' : None}
 
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
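
NullCore's body is not shown in this bundle; from the way the test drives
it (no isdir, listdir, open, or mkdir may be called at all), a mock-like
backend along these lines is implied. The method names below are inferred
from how StorageServer uses its backend, so treat them as assumptions:

from twisted.application import service

class SketchNullBackend(service.MultiService):
    """Accepts every share and stores nothing, exercising the
    'unlimited space' path without touching the filesystem."""
    def set_storage_server(self, ss):
        self.ss = ss
    def get_available_space(self):
        return None                  # assumption: None means no limit
    def get_shares(self, storageindex):
        return []                    # never already holds a share
    def make_bucket_writer(self, storageindex, shnum,
                           max_space_per_bucket, lease_info, canary):
        # would wrap a share object whose write and close methods
        # discard their input instead of touching the filesystem
        raise NotImplementedError("sketch only")
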
[checkpoint 9
wilcoxjg@gmail.com**20110707042942
 Ignore-this: 75396571fd05944755a104a8fc38aaf6
] {
hunk ./src/allmydata/storage/backends/das/core.py 88
                     filename = os.path.join(finalstoragedir, f)
                     yield ImmutableShare(self.sharedir, storage_index, int(f))
         except OSError:
-            # Commonly caused by there being no buckets at all.
+            # Commonly caused by there being no shares at all.
             pass
         
     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
hunk ./src/allmydata/storage/backends/das/core.py 141
         self.storage_index = storageindex
         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
+        self.incomingdir = os.path.join(sharedir, 'incoming') 
+        si_dir = storage_index_to_dir(storageindex)
+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 177
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
         self._data_offset = 0xc
 
+    def close(self):
+        fileutil.make_dirs(os.path.dirname(self.finalhome))
+        fileutil.rename(self.incominghome, self.finalhome)
+        try:
+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
+            os.rmdir(os.path.dirname(self.incominghome))
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+        
+    def stat(self):
+        return os.stat(self.finalhome)[stat.ST_SIZE]
+
     def get_shnum(self):
         return self.shnum
 
hunk ./src/allmydata/storage/immutable.py 7
 
 from zope.interface import implements
 from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+from allmydata.util import base32, log
 from allmydata.util.assertutil import precondition
 from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/immutable.py 44
     def remote_close(self):
         precondition(not self.closed)
         start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
+        self._sharefile.close()
         self._sharefile = None
         self.closed = True
         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
hunk ./src/allmydata/storage/immutable.py 49
 
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
+        filelen = self._sharefile.stat()
         self.ss.bucket_writer_closed(self, filelen)
         self.ss.add_latency("close", time.time() - start)
         self.ss.count("close")
hunk ./src/allmydata/storage/server.py 45
         self._active_writers = weakref.WeakKeyDictionary()
         self.backend = backend
         self.backend.setServiceParent(self)
+        self.backend.set_storage_server(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
         self.latencies = {"allocate": [], # immutable
hunk ./src/allmydata/storage/server.py 220
 
         for shnum in (sharenums - alreadygot):
             if (not limited) or (remaining_space >= max_space_per_bucket):
-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
-                self.backend.set_storage_server(self)
                 bw = self.backend.make_bucket_writer(storage_index, shnum,
                                                      max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
hunk ./src/allmydata/test/test_backends.py 117
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-
+    
+    @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 124
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
+                             mockget_available_space):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 148
 
         mockmkdir.side_effect = call_mkdir
 
+        def call_get_available_space(storedir, reserved_space):
+            self.failUnlessReallyEqual(storedir, tempdir)
+            return 1
+
+        mockget_available_space.side_effect = call_get_available_space
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 188
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
-        
+
+        # What happens when there's not enough space for the client's request?
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+
         # Now test the allocated_size method.
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
}
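
The close() added to ImmutableShare leans on a property of the standard
library that the inherited comment calls out: os.rmdir refuses to remove a
non-empty directory. That property in isolation, for the case of a sibling
share still in flight:

import os, errno, tempfile

top = tempfile.mkdtemp()
bucketdir = os.path.join(top, 'ab', 'abcde')      # like incoming/ab/abcde
os.makedirs(bucketdir)
open(os.path.join(bucketdir, '5'), 'wb').close()  # share 5 still incoming

try:
    os.rmdir(bucketdir)       # refused: the directory is not empty
except OSError, e:
    assert e.errno in (errno.ENOTEMPTY, errno.EEXIST)  # platform-dependent

os.remove(os.path.join(bucketdir, '5'))
os.rmdir(bucketdir)           # now empty, so the cleanup succeeds
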
[checkpoint10
wilcoxjg@gmail.com**20110707172049
 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
] {
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
hunk ./src/allmydata/test/test_backends.py 25
+testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharefname = os.path.join(sharedirname, '0')
hunk ./src/allmydata/test/test_backends.py 37
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+        self.s = StorageServer(testnodeid, backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 99
         mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
hunk ./src/allmydata/test/test_backends.py 119
 
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-    
+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
+        
+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
     @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 129
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space):
+                             mockget_available_space, mockget_shares):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 139
         mocklistdir.side_effect = call_listdir
 
         def call_isdir(dirname):
+            #XXX Should there be any other tests here?
             self.failUnlessReallyEqual(dirname, sharedirname)
             return True
 
hunk ./src/allmydata/test/test_backends.py 159
 
         mockget_available_space.side_effect = call_get_available_space
 
+        mocktime.return_value = 0
+        class MockShare:
+            def __init__(self):
+                self.shnum = 1
+                
+            def add_or_renew_lease(elf, lease_info):
+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
+                
+
+        share = MockShare()
+        def call_get_shares(storageindex):
+            return [share] 
+
+        mockget_shares.side_effect = call_get_shares
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 199
             def tell(self):
                 return self.pos
 
-        mocktime.return_value = 0
 
         sharefile = MockFile()
         def call_open(fname, mode):
}
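
The secrets and share_data constants decode exactly against the formats
this backend reads back: a '>LLL' header, the share payload, then one
'>L32s32sL' lease record. Rebuilding share_file_data from those formats
(assuming the Tahoe v1 immutable layout used throughout this bundle):

import struct

renew_secret  = 'x' * 32
cancel_secret = 'y' * 32

header = struct.pack(">LLL", 1, 1, 1)  # version=1, legacy length field, num_leases=1
data   = 'a'                           # the single byte of share data
lease  = struct.pack(">L32s32sL",
                     0,                # owner_num
                     renew_secret,
                     cancel_secret,
                     31*24*60*60)      # expiration 31 days == '\x00(\xde\x80'

assert header + data + lease == (
    '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
    'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80')
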
[jacp 11
wilcoxjg@gmail.com**20110708213919
 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
] {
hunk ./src/allmydata/storage/backends/das/core.py 144
         self.incomingdir = os.path.join(sharedir, 'incoming') 
         si_dir = storage_index_to_dir(storageindex)
         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        #XXX  self.fname and self.finalhome need to be resolve/merged.
         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
         if create:
             # touch the file, so later callers will see that we're working on
hunk ./src/allmydata/storage/backends/das/core.py 208
         pass
         
     def stat(self):
-        return os.stat(self.finalhome)[stat.ST_SIZE]
+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/immutable.py 44
     def remote_close(self):
         precondition(not self.closed)
         start = time.time()
+
         self._sharefile.close()
hunk ./src/allmydata/storage/immutable.py 46
+        filelen = self._sharefile.stat()
         self._sharefile = None
hunk ./src/allmydata/storage/immutable.py 48
+
         self.closed = True
         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
 
hunk ./src/allmydata/storage/immutable.py 52
-        filelen = self._sharefile.stat()
         self.ss.bucket_writer_closed(self, filelen)
         self.ss.add_latency("close", time.time() - start)
         self.ss.count("close")
hunk ./src/allmydata/storage/server.py 220
 
         for shnum in (sharenums - alreadygot):
             if (not limited) or (remaining_space >= max_space_per_bucket):
-                bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                                     max_space_per_bucket, lease_info, canary)
+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
                 self._active_writers[bw] = 1
                 if limited:
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+renew_secret  = 'x'*32
+cancel_secret = 'y'*32
 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
hunk ./src/allmydata/test/test_backends.py 27
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharefname = os.path.join(sharedirname, '0')
+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+shareincomingname = os.path.join(sharedirincomingname, '0')
+sharefname = os.path.join(sharedirfinalname, '0')
+
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
                      'override_lease_duration' : None,
hunk ./src/allmydata/test/test_backends.py 123
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
-        
+
+    @mock.patch('allmydata.util.fileutil.rename')
+    @mock.patch('allmydata.util.fileutil.make_dirs')
+    @mock.patch('os.path.exists')
+    @mock.patch('os.stat')
     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
     @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 136
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space, mockget_shares):
+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
+                             mockmake_dirs, mockrename):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 141
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
hunk ./src/allmydata/test/test_backends.py 148
 
         def call_isdir(dirname):
             #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
             return True
 
         mockisdir.side_effect = call_isdir
hunk ./src/allmydata/test/test_backends.py 154
 
         def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
                 self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
             else:
                 return True
hunk ./src/allmydata/test/test_backends.py 208
                 return self.pos
 
 
-        sharefile = MockFile()
+        fobj = MockFile()
         def call_open(fname, mode):
             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
hunk ./src/allmydata/test/test_backends.py 211
-            return sharefile
+            return fobj
 
         mockopen.side_effect = call_open
 
hunk ./src/allmydata/test/test_backends.py 215
+        def call_make_dirs(dname):
+            self.failUnlessReallyEqual(dname, sharedirfinalname)
+            
+        mockmake_dirs.side_effect = call_make_dirs
+
+        def call_rename(src, dst):
+           self.failUnlessReallyEqual(src, shareincomingname)
+           self.failUnlessReallyEqual(dst, sharefname)
+            
+        mockrename.side_effect = call_rename
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+
+        mockexists.side_effect = call_exists
+
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 234
-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
+
+        bs[0].remote_close()
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 241
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
hunk ./src/allmydata/test/test_backends.py 244
-        spaceint = self.s.allocated_size()
-        self.failUnlessReallyEqual(spaceint, 1)
+        #self.failIf(mockexists.called, mockexists.call_args_list)
+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
+        #self.failIf(mockrename.called, mockrename.call_args_list)
+        #self.failIf(mockstat.called, mockstat.call_args_list)
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
}
[checkpoint12 testing correct behavior with regard to incoming and final
wilcoxjg@gmail.com**20110710191915
 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
] {
hunk ./src/allmydata/storage/backends/das/core.py 74
         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
         self.lease_checker.setServiceParent(self)
 
+    def get_incoming(self, storageindex):
+        return set((1,))
+
     def get_available_space(self):
         if self.readonly:
             return 0
hunk ./src/allmydata/storage/server.py 77
         """Return a dict, indexed by category, that contains a dict of
         latency numbers for each category. If there are sufficient samples
         for unambiguous interpretation, each dict will contain the
-        following keys: mean, 01_0_percentile, 10_0_percentile,
+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
         99_0_percentile, 99_9_percentile.  If there are insufficient
         samples for a given percentile to be interpreted unambiguously
hunk ./src/allmydata/storage/server.py 120
 
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
-        # contains numeric values.
+        # contains numeric or None values.
         stats = { 'storage_server.allocated': self.allocated_size(), }
         stats['storage_server.reserved_space'] = self.reserved_space
         for category,ld in self.get_latencies().items():
hunk ./src/allmydata/storage/server.py 185
         start = time.time()
         self.count("allocate")
         alreadygot = set()
+        incoming = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
 
         si_s = si_b2a(storage_index)
hunk ./src/allmydata/storage/server.py 219
             alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
 
-        for shnum in (sharenums - alreadygot):
+        # fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces
+        incoming = self.backend.get_incoming(storageindex)
+
+        for shnum in ((sharenums - alreadygot) - incoming):
             if (not limited) or (remaining_space >= max_space_per_bucket):
                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
hunk ./src/allmydata/storage/server.py 229
                 self._active_writers[bw] = 1
                 if limited:
                     remaining_space -= max_space_per_bucket
-
-        #XXX We SHOULD DOCUMENT LATER.
+            else:
+                # Bummer: not enough space to accept this share.
+                pass
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 323
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
-    def get_leases(self, storage_index):
+    def remote_get_incoming(self, storageindex):
+        incoming_share_set = self.backend.get_incoming(storageindex)
+        return incoming_share_set
+
+    def get_leases(self, storageindex):
         """Provide an iterator that yields all of the leases attached to this
         bucket. Each lease is returned as a LeaseInfo instance.
 
hunk ./src/allmydata/storage/server.py 337
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_shares(storage_index).next()
+            shnum, filename = self._get_shares(storageindex).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
hunk ./src/allmydata/test/test_backends.py 182
 
         share = MockShare()
         def call_get_shares(storageindex):
-            return [share] 
+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
+            return []#share] 
 
         mockget_shares.side_effect = call_get_shares
 
hunk ./src/allmydata/test/test_backends.py 222
         mockmake_dirs.side_effect = call_make_dirs
 
         def call_rename(src, dst):
-           self.failUnlessReallyEqual(src, shareincomingname)
-           self.failUnlessReallyEqual(dst, sharefname)
+            self.failUnlessReallyEqual(src, shareincomingname)
+            self.failUnlessReallyEqual(dst, sharefname)
             
         mockrename.side_effect = call_rename
 
hunk ./src/allmydata/test/test_backends.py 233
         mockexists.side_effect = call_exists
 
         # Now begin the test.
+
+        # XXX (0) ???  Fail unless something is not properly set-up?
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
hunk ./src/allmydata/test/test_backends.py 236
+
+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
+        # with the same si, until BucketWriter.remote_close() has been called.
+        # self.failIf(bsa)
+
+        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 247
+        # XXX (4a) Inspect final and fail unless share 0 is there.
+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 253
 
+        #  If there's something in self.alreadygot prior to remote_close() then fail.
         bs[0].remote_close()
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 260
         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
-        #self.failIf(mockexists.called, mockexists.call_args_list)
+        # self.failIf(mockexists.called, mockexists.call_args_list)
         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
}
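
The allocation loop introduced in this checkpoint is plain set arithmetic:
a share number gets a new BucketWriter only if it is neither already on
disk nor already in flight. A worked example of
(sharenums - alreadygot) - incoming:

sharenums  = set([0, 1, 2, 3])  # what the client proposes to store
alreadygot = set([1])           # complete shares found via get_shares()
incoming   = set([2])           # partial uploads found via get_incoming()

to_allocate = (sharenums - alreadygot) - incoming
assert to_allocate == set([0, 3])   # only these get new BucketWriters
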
[fix inconsistent naming of storage_index vs storageindex in storage/server.py
wilcoxjg@gmail.com**20110710195139
 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
] {
hunk ./src/allmydata/storage/server.py 220
             share.add_or_renew_lease(lease_info)
 
         # fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces
-        incoming = self.backend.get_incoming(storageindex)
+        incoming = self.backend.get_incoming(storage_index)
 
         for shnum in ((sharenums - alreadygot) - incoming):
             if (not limited) or (remaining_space >= max_space_per_bucket):
hunk ./src/allmydata/storage/server.py 323
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
-    def remote_get_incoming(self, storageindex):
-        incoming_share_set = self.backend.get_incoming(storageindex)
+    def remote_get_incoming(self, storage_index):
+        incoming_share_set = self.backend.get_incoming(storage_index)
         return incoming_share_set
 
hunk ./src/allmydata/storage/server.py 327
-    def get_leases(self, storageindex):
+    def get_leases(self, storage_index):
         """Provide an iterator that yields all of the leases attached to this
         bucket. Each lease is returned as a LeaseInfo instance.
 
hunk ./src/allmydata/storage/server.py 337
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_shares(storageindex).next()
+            shnum, filename = self._get_shares(storage_index).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
}
[adding comments to clarify what I'm about to do.
wilcoxjg@gmail.com**20110710220623
 Ignore-this: 44f97633c3eac1047660272e2308dd7c
] {
hunk ./src/allmydata/storage/backends/das/core.py 8
 
 import os, re, weakref, struct, time
 
-from foolscap.api import Referenceable
+#from foolscap.api import Referenceable
 from twisted.application import service
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 12
-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 219
             alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
 
-        # fill incoming with all shares that are incoming; use a set operation since there's no need to operate on individual pieces
+        # fill incoming with all shares that are incoming; use a set
+        # operation since there's no need to operate on individual pieces
         incoming = self.backend.get_incoming(storageindex)
 
         for shnum in ((sharenums - alreadygot) - incoming):
hunk ./src/allmydata/test/test_backends.py 245
         # with the same si, until BucketWriter.remote_close() has been called.
         # self.failIf(bsa)
 
-        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 246
-        # XXX (4a) Inspect final and fail unless share 0 is there.
-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 250
 
-        #  If there's something in self.alreadygot prior to remote_close() then fail.
+        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_close()
hunk ./src/allmydata/test/test_backends.py 252
+        # XXX (4a) Inspect final and fail unless share 0 is there.
+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
         # What happens when there's not enough space for the client's request?
         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
}
[branching back, no longer attempting to mock inside TestServerFSBackend
wilcoxjg@gmail.com**20110711190849
 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
] {
hunk ./src/allmydata/storage/backends/das/core.py 75
         self.lease_checker.setServiceParent(self)
 
     def get_incoming(self, storageindex):
-        return set((1,))
-
-    def get_available_space(self):
-        if self.readonly:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
+        """Return the set of incoming shnums."""
+        return set(os.listdir(self.incomingdir))
 
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 90
             # Commonly caused by there being no shares at all.
             pass
         
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
hunk ./src/allmydata/test/test_backends.py 27
 
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+basedir = os.path.join(tempdir, 'shares')
+baseincdir = os.path.join(basedir, 'incoming')
+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 shareincomingname = os.path.join(sharedirincomingname, '0')
 sharefname = os.path.join(sharedirfinalname, '0')
 
hunk ./src/allmydata/test/test_backends.py 142
                              mockmake_dirs, mockrename):
         """ Write a new share. """
 
-        def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-
-        mocklistdir.side_effect = call_listdir
-
-        def call_isdir(dirname):
-            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            return True
-
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
-                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
-            else:
-                return True
-
-        mockmkdir.side_effect = call_mkdir
-
-        def call_get_available_space(storedir, reserved_space):
-            self.failUnlessReallyEqual(storedir, tempdir)
-            return 1
-
-        mockget_available_space.side_effect = call_get_available_space
-
-        mocktime.return_value = 0
         class MockShare:
             def __init__(self):
                 self.shnum = 1
hunk ./src/allmydata/test/test_backends.py 152
                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-                
 
         share = MockShare()
hunk ./src/allmydata/test/test_backends.py 154
-        def call_get_shares(storageindex):
-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
-            return []#share] 
-
-        mockget_shares.side_effect = call_get_shares
 
         class MockFile:
             def __init__(self):
hunk ./src/allmydata/test/test_backends.py 176
             def tell(self):
                 return self.pos
 
-
         fobj = MockFile()
hunk ./src/allmydata/test/test_backends.py 177
+
+        directories = {}
+        def call_listdir(dirname):
+            if dirname not in directories:
+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
+            else:
+                return directories[dirname].get_contents()
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockDir:
+            def __init__(self, dirname):
+                self.name = dirname
+                self.contents = []
+    
+            def get_contents(self):
+                return self.contents
+
+        def call_isdir(dirname):
+            #XXX Should there be any other tests here?
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
+            if dirname in directories:
+                raise OSError(17, "File exists: '%s'" % dirname) 
+            elif dirname not in directories:
+                directories[dirname] = MockDir(dirname)
+                return True
+
+        mockmkdir.side_effect = call_mkdir
+
+        def call_get_available_space(storedir, reserved_space):
+            self.failUnlessReallyEqual(storedir, tempdir)
+            return 1
+
+        mockget_available_space.side_effect = call_get_available_space
+
+        mocktime.return_value = 0
+        def call_get_shares(storageindex):
+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
+            return []#share] 
+
+        mockget_shares.side_effect = call_get_shares
+
         def call_open(fname, mode):
             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
             return fobj
}
[checkpoint12 TestServerFSBackend no longer mocks filesystem
wilcoxjg@gmail.com**20110711193357
 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
] {
hunk ./src/allmydata/storage/backends/das/core.py 23
      create_mutable_sharefile
 from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 28
 
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
hunk ./src/allmydata/test/test_backends.py 126
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
 
-    @mock.patch('allmydata.util.fileutil.rename')
-    @mock.patch('allmydata.util.fileutil.make_dirs')
-    @mock.patch('os.path.exists')
-    @mock.patch('os.stat')
-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
-    @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 127
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
-                             mockmake_dirs, mockrename):
+    def test_write_share(self, mocktime):
         """ Write a new share. """
 
         class MockShare:
hunk ./src/allmydata/test/test_backends.py 143
 
         share = MockShare()
 
-        class MockFile:
-            def __init__(self):
-                self.buffer = ''
-                self.pos = 0
-            def write(self, instring):
-                begin = self.pos
-                padlen = begin - len(self.buffer)
-                if padlen > 0:
-                    self.buffer += '\x00' * padlen
-                end = self.pos + len(instring)
-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
-                self.pos = end
-            def close(self):
-                pass
-            def seek(self, pos):
-                self.pos = pos
-            def read(self, numberbytes):
-                return self.buffer[self.pos:self.pos+numberbytes]
-            def tell(self):
-                return self.pos
-
-        fobj = MockFile()
-
-        directories = {}
-        def call_listdir(dirname):
-            if dirname not in directories:
-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-            else:
-                return directories[dirname].get_contents()
-
-        mocklistdir.side_effect = call_listdir
-
-        class MockDir:
-            def __init__(self, dirname):
-                self.name = dirname
-                self.contents = []
-    
-            def get_contents(self):
-                return self.contents
-
-        def call_isdir(dirname):
-            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            return True
-
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
-                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
-            if dirname in directories:
-                raise OSError(17, "File exists: '%s'" % dirname) 
-            elif dirname not in directories:
-                directories[dirname] = MockDir(dirname)
-                return True
-
-        mockmkdir.side_effect = call_mkdir
-
-        def call_get_available_space(storedir, reserved_space):
-            self.failUnlessReallyEqual(storedir, tempdir)
-            return 1
-
-        mockget_available_space.side_effect = call_get_available_space
-
-        mocktime.return_value = 0
-        def call_get_shares(storageindex):
-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
-            return []#share] 
-
-        mockget_shares.side_effect = call_get_shares
-
-        def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
-            return fobj
-
-        mockopen.side_effect = call_open
-
-        def call_make_dirs(dname):
-            self.failUnlessReallyEqual(dname, sharedirfinalname)
-            
-        mockmake_dirs.side_effect = call_make_dirs
-
-        def call_rename(src, dst):
-            self.failUnlessReallyEqual(src, shareincomingname)
-            self.failUnlessReallyEqual(dst, sharefname)
-            
-        mockrename.side_effect = call_rename
-
-        def call_exists(fname):
-            self.failUnlessReallyEqual(fname, sharefname)
-
-        mockexists.side_effect = call_exists
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
}
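
With the filesystem mocks stripped out, the natural setUp is to hand
DASCore a real per-test directory. A sketch of where this is heading,
using trial's per-test mktemp (an assumption; the bundle does not show
the replacement setUp, and the other names are from the test module):

class TestServerFSBackendSketch(unittest.TestCase, ReallyEqualMixin):
    def setUp(self):
        # trial gives each test method its own scratch path, so the
        # backend can create storage/shares/... for real and the whole
        # tree is discarded afterwards
        self.workdir = self.mktemp()
        self.backend = DASCore(self.workdir, expiration_policy)
        self.s = StorageServer(testnodeid, backend=self.backend)
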
[JACP
wilcoxjg@gmail.com**20110711194407
 Ignore-this: b54745de777c4bb58d68d708f010bbb
] {
hunk ./src/allmydata/storage/backends/das/core.py 86
 
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
-        return set(os.listdir(self.incomingdir))
+        try:
+            incominglist = os.listdir(self.incomingdir)
+            print "incominglist: ", incominglist
+            return set(incominglist)
+        except OSError:
+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
+            pass
 
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-# storage/
-# storage/shares/incoming
-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
}
[testing get incoming
wilcoxjg@gmail.com**20110711210224
 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
] {
hunk ./src/allmydata/storage/backends/das/core.py 87
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
         try:
-            incominglist = os.listdir(self.incomingdir)
+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
+            incominglist = os.listdir(incomingsharesdir)
             print "incominglist: ", incominglist
             return set(incominglist)
         except OSError:
hunk ./src/allmydata/storage/backends/das/core.py 92
-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
-            pass
-
+            # XXX I'd like to make this more specific. If there are no shares at all.
+            return set()
+            
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
hunk ./src/allmydata/test/test_backends.py 149
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 152
-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
         # with the same si, until BucketWriter.remote_close() has been called.
         # self.failIf(bsa)
}
[ImmutableShareFile does not know its StorageIndex
wilcoxjg@gmail.com**20110711211424
 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
] {
hunk ./src/allmydata/storage/backends/das/core.py 112
             return 0
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 155
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
}
[get_incoming correctly reports the 0 share after it has arrived
wilcoxjg@gmail.com**20110712025157
 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
] {
hunk ./src/allmydata/storage/backends/das/core.py 1
+import os, re, weakref, struct, time, stat
+
 from allmydata.interfaces import IStorageBackend
 from allmydata.storage.backends.base import Backend
 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
hunk ./src/allmydata/storage/backends/das/core.py 8
 from allmydata.util.assertutil import precondition
 
-import os, re, weakref, struct, time
-
 #from foolscap.api import Referenceable
 from twisted.application import service
 
hunk ./src/allmydata/storage/backends/das/core.py 89
         try:
             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
             incominglist = os.listdir(incomingsharesdir)
-            print "incominglist: ", incominglist
-            return set(incominglist)
+            incomingshnums = [int(x) for x in incominglist]
+            return set(incomingshnums)
         except OSError:
             # XXX I'd like to make this more specific. If there are no shares at all.
             return set()
hunk ./src/allmydata/storage/backends/das/core.py 113
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 160
         max_size to be written to me. If create=True then max_size
         must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
-        self.shnum = shnum 
-        self.storage_index = storageindex
-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/das/core.py 161
-        self.incomingdir = os.path.join(sharedir, 'incoming') 
-        si_dir = storage_index_to_dir(storageindex)
-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-        #XXX  self.fname and self.finalhome need to be resolve/merged.
-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
+        self.incominghome = incominghome
+        self.finalhome = finalhome
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 166
-            assert not os.path.exists(self.fname)
-            fileutil.make_dirs(os.path.dirname(self.fname))
-            f = open(self.fname, 'wb')
+            assert not os.path.exists(self.finalhome)
+            fileutil.make_dirs(os.path.dirname(self.incominghome))
+            f = open(self.incominghome, 'wb')
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 183
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
-            f = open(self.fname, 'rb')
-            filesize = os.path.getsize(self.fname)
+            f = open(self.finalhome, 'rb')
+            filesize = os.path.getsize(self.finalhome)
             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
             f.close()
             if version != 1:
hunk ./src/allmydata/storage/backends/das/core.py 189
                 msg = "sharefile %s had version %d but we wanted 1" % \
-                      (self.fname, version)
+                      (self.finalhome, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/das/core.py 225
         pass
         
     def stat(self):
-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
+        return os.stat(self.finalhome)[stat.ST_SIZE]
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/das/core.py 232
 
     def unlink(self):
-        os.unlink(self.fname)
+        os.unlink(self.finalhome)
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/das/core.py 239
         # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.fname)
+        fsize = os.path.getsize(self.finalhome)
         actuallength = max(0, min(length, fsize-seekpos))
         if actuallength == 0:
             return ""
hunk ./src/allmydata/storage/backends/das/core.py 243
-        f = open(self.fname, 'rb')
+        f = open(self.finalhome, 'rb')
         f.seek(seekpos)
         return f.read(actuallength)
 
hunk ./src/allmydata/storage/backends/das/core.py 252
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.fname, 'rb+')
+        f = open(self.incominghome, 'rb+')
         real_offset = self._data_offset+offset
         f.seek(real_offset)
         assert f.tell() == real_offset
hunk ./src/allmydata/storage/backends/das/core.py 279
 
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
-        f = open(self.fname, 'rb')
+        f = open(self.finalhome, 'rb')
         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
         f.seek(self._lease_offset)
         for i in range(num_leases):
hunk ./src/allmydata/storage/backends/das/core.py 288
                 yield LeaseInfo().from_immutable_data(data)
 
     def add_lease(self, lease_info):
-        f = open(self.fname, 'rb+')
+        f = open(self.incominghome, 'rb+')
         num_leases = self._read_num_leases(f)
         self._write_lease_record(f, num_leases, lease_info)
         self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/core.py 301
                 if new_expire_time > lease.expiration_time:
                     # yes
                     lease.expiration_time = new_expire_time
-                    f = open(self.fname, 'rb+')
+                    f = open(self.finalhome, 'rb+')
                     self._write_lease_record(f, i, lease)
                     f.close()
                 return
hunk ./src/allmydata/storage/backends/das/core.py 336
             # the same order as they were added, so that if we crash while
             # doing this, we won't lose any non-cancelled leases.
             leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.fname, 'rb+')
+            f = open(self.finalhome, 'rb+')
             for i,lease in enumerate(leases):
                 self._write_lease_record(f, i, lease)
             self._write_num_leases(f, len(leases))
hunk ./src/allmydata/storage/backends/das/core.py 344
             f.close()
         space_freed = self.LEASE_SIZE * num_leases_removed
         if not len(leases):
-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
             self.unlink()
         return space_freed
hunk ./src/allmydata/test/test_backends.py 129
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
         """ Write a new share. """
-
-        class MockShare:
-            def __init__(self):
-                self.shnum = 1
-                
-            def add_or_renew_lease(elf, lease_info):
-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-
-        share = MockShare()
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
hunk ./src/allmydata/test/test_backends.py 143
         # self.failIf(bsa)
 
         bs[0].remote_write(0, 'a')
-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
 
hunk ./src/allmydata/test/test_backends.py 161
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
 
+    def test_handle_incoming(self):
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set((0,)))
+
+        bs[0].remote_close()
+        # get_incoming returns a fresh set each call, so re-fetch rather
+        # than re-checking the stale value captured before remote_close.
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 223
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
-
 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 271
         DASCore('teststoredir', expiration_policy)
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
}
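
For reference, the get_incoming behavior this patch settles on can be read
off the hunks above: list the storage index's directory under the incoming
tree, map the entries (which are named by share number) to ints, and treat a
missing directory as "no shares in flight". A sketch reconstructed under
those assumptions (self.incomingdir as in DASCore above; this is not the
committed code):

    import os
    from allmydata.storage.common import storage_index_to_dir

    def get_incoming(self, storageindex):
        # Return the set of share numbers (as ints) that have arrived in
        # the incoming tree but have not yet moved to their final home.
        incomingsharesdir = os.path.join(self.incomingdir,
                                         storage_index_to_dir(storageindex))
        try:
            return set(int(x) for x in os.listdir(incomingsharesdir))
        except OSError:
            # Missing directory: nothing is in flight. As the XXX in
            # core.py notes, checking errno.ENOENT specifically would be
            # better than swallowing every OSError.
            return set()

test_handle_incoming above exercises exactly this lifecycle: the set is
empty before allocation, contains 0 while bucket 0 is open, and is empty
again after remote_close.
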

Context:

[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
] 
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
] 
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
] 
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
] 
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
] 
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
] 
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
] 
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
] 
[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), the three reviewers (warner, davidsarah, terrell), or the committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
] 
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
] 
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
] 
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
] 
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
] 
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
] 
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
] 
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
] 
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
] 
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
] 
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
] 
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
] 
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
] 
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
] 
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
] 
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
] 
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
] 
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
] 
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
] 
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
] 
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
] 
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
] 
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
] 
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
] 
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
] 
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
] 
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
] 
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
] 
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
] 
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
] 
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
] 
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
] 
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
] 
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
] 
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
] 
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
] 
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
] 
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
] 
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
] 
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
] 
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
] 
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
] 
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
] 
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
] 
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had hung. But if the test still times out after we raise the timeout to an even more extravagant number, then we can be even more certain that it was never going to finish.
] 
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
] 
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
] 
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
] 
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
] 
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
] 
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101] 
Patch bundle hash:
a52fe37e2ebd452af957b1f9376edfb6a68d8a76
