Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
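The mocking strategy can be sketched as follows. This is a minimal illustration using modern Python's `unittest.mock` and `builtins.open` (the original tests used the standalone `mock` package and Python 2's `__builtin__.open`); `read_history` is an illustrative stand-in for the code under test:

```python
from io import StringIO
from unittest import mock

def read_history(path):
    # Stand-in for code under test that reads a state file from disk.
    f = open(path)
    try:
        return f.read()
    finally:
        f.close()

def call_open(fname, mode='r', *args, **kwargs):
    # Serve known filenames from memory; anything else "does not exist".
    if fname == 'testdir/lease_checker.history':
        return StringIO('history-contents')
    raise IOError(2, "No such file or directory: %r" % fname)

# Every open() inside the with-block is intercepted; no disk I/O occurs.
with mock.patch('builtins.open', side_effect=call_open):
    assert read_history('testdir/lease_checker.history') == 'history-contents'
```

The test can then assert both on what the code returns and on which filenames it tried to open, without ever touching a real directory.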

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy; not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The null backend is necessary to test unlimited space in a backend.  It is a mock-like object.
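Such a null backend is the null-object pattern: it reports no shares and no space limit, and discards writes. The later checkpoints implement it roughly like this (minimal sketch; the real classes derive from a `Backend` base class and the writer is a Foolscap `Referenceable`):

```python
class NullBucketWriter:
    """Discards whatever is written to it."""
    def remote_write(self, offset, data):
        return  # intentionally a no-op

class NullBackend:
    """Null-object backend: holds no shares and imposes no space limit."""
    def get_available_space(self):
        # None means "no API to ask"; the server treats it as unlimited.
        return None

    def get_bucket_shares(self, storage_index):
        return set()  # never holds any shares

    def get_share(self, storage_index, sharenum):
        return None

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return NullBucketWriter()
```

Because every method is a constant-time no-op, a test that constructs a StorageServer over this backend fails the moment the server touches the real filesystem.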

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass
  

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this (very incomplete) patch we started two major changes: the first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests; the second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
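The common-base-class refactoring might look something like this hypothetical sketch (names are illustrative, and it uses modern `unittest.mock` rather than the `mock` package the original tests import):

```python
from io import StringIO
from unittest import mock

class MockFileSystemTestCase:
    """Shared base: every test in a subclass gets an in-memory filesystem."""
    def setUp(self):
        # Table of fake file contents, keyed by path.
        self.files = {'testdir/lease_checker.history': ''}
        self._open_patcher = mock.patch('builtins.open',
                                        side_effect=self._call_open)
        self._open_patcher.start()

    def tearDown(self):
        # Restore the real open() so other tests are unaffected.
        self._open_patcher.stop()

    def _call_open(self, fname, mode='r', *args, **kwargs):
        if fname in self.files:
            return StringIO(self.files[fname])
        raise IOError(2, "No such file or directory: %r" % fname)
```

Each concrete test case then inherits setUp/tearDown and only populates `self.files` with whatever its scenario needs.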

Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
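On the get_available_space() misfeature: catching OSError and returning 0 made a misconfigured or missing store directory indistinguishable from a full disk. A before/after sketch using os.statvfs (Unix-only; the real code goes through allmydata.util.fileutil, so the function names here are illustrative):

```python
import os

def get_available_space_old(storedir, reserved_space):
    # Misfeature: any OSError (missing directory, bad permissions)
    # was silently reported as "0 bytes free", hiding real errors.
    try:
        s = os.statvfs(storedir)
        return max(0, s.f_frsize * s.f_bavail - reserved_space)
    except OSError:
        return 0

def get_available_space_new(storedir, reserved_space):
    # After the change: a broken storedir raises instead of
    # masquerading as a full disk.
    s = os.statvfs(storedir)
    return max(0, s.f_frsize * s.f_bavail - reserved_space)
```

With the old behavior a typo in the configured store directory would quietly turn the node into a read-only server; letting the exception propagate surfaces the misconfiguration immediately.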
  

Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
  * jacp17

Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
  * jacp18

Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
  * jacp19orso

Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
  * jacp19

Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
  * jacp20

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether writing a share via the storage server
+        produces the expected share file contents in the mocked
+        filesystem. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy; not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
     def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 0)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockgetsize.called)
+        self.failIf(mockexists.called)
+
+
+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
hunk ./src/allmydata/test/test_backends.py 126
                 return StringIO()
         mockopen.side_effect = call_open
 
-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
-
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
 
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 134
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """Handle a report of corruption."""
+        """ Write a new share. """
 
         def call_listdir(dirname):
             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
hunk ./src/allmydata/test/test_backends.py 173
         mockopen.side_effect = call_open
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        print bs
         bs[0].remote_write(0, 'a')
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
 
hunk ./src/allmydata/test/test_backends.py 176
-
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 218
 
         self.failUnlessEqual(len(bs), 1)
         b = bs[0]
+        # These should match by definition; the next two cases cover reads whose behavior is not completely unambiguous.
         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
         # If you try to read past the end you get the as much data as is there.
         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
hunk ./src/allmydata/test/test_backends.py 224
         # If you start reading past the end of the file you get the empty string.
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
+
+
}
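The tests in this patch isolate the server from the real filesystem by stubbing `os.*` and `open` with `mock.patch`. A minimal, self-contained sketch of the same technique, written against modern Python (`unittest.mock` is the current spelling of the standalone `mock` module used above); `count_state_files` is a hypothetical helper standing in for the code under test, not part of this patch:

```python
import os
import unittest
from unittest import mock

def count_state_files(statedir):
    # Hypothetical helper: counts the '*.state' files in a directory,
    # standing in for server code that scans the filesystem.
    return sum(1 for name in os.listdir(statedir) if name.endswith('.state'))

class TestNoRealFilesystem(unittest.TestCase):
    @mock.patch('os.listdir')
    def test_listdir_is_mocked(self, mocklistdir):
        # mock.patch intercepts os.listdir for the duration of the test,
        # so no real directory is ever read.
        mocklistdir.return_value = ['bucket_counter.state', 'lease_checker.history']
        self.assertEqual(count_state_files('teststoredir'), 1)
        mocklistdir.assert_called_once_with('teststoredir')
```

The same pattern scales to `open`, `os.mkdir`, and `os.path.isdir`, which is exactly how the `test_create_server_*` tests above verify that a backend touches the filesystem only in the prescribed ways.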
[a temp patch used as a snapshot
wilcoxjg@gmail.com**20110626052732
 Ignore-this: 95f05e314eaec870afa04c76d979aa44
] {
hunk ./docs/configuration.rst 637
   [storage]
   enabled = True
   readonly = True
-  sizelimit = 10000000000
 
 
   [helper]
hunk ./docs/garbage-collection.rst 16
 
 When a file or directory in the virtual filesystem is no longer referenced,
 the space that its shares occupied on each storage server can be freed,
-making room for other shares. Tahoe currently uses a garbage collection
+making room for other shares. Tahoe uses a garbage collection
 ("GC") mechanism to implement this space-reclamation process. Each share has
 one or more "leases", which are managed by clients who want the
 file/directory to be retained. The storage server accepts each share for a
hunk ./docs/garbage-collection.rst 34
 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
 If lease renewal occurs quickly and with 100% reliability, then any renewal
 time that is shorter than the lease duration will suffice, but a larger ratio
-of duration-over-renewal-time will be more robust in the face of occasional
+of lease duration to renewal time will be more robust in the face of occasional
 delays or failures.
 
 The current recommended values for a small Tahoe grid are to renew the leases
replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
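The duration-to-renewal-time tradeoff discussed in the hunk above can be made concrete with a little arithmetic. This sketch is my own illustration, not part of the docs: it counts how many consecutive renewal failures a lease survives when renewals are attempted periodically.

```python
def missed_renewals_tolerated(lease_duration, renewal_interval):
    """How many consecutive periodic renewal attempts can fail before a
    lease expires. Attempts happen at multiples of renewal_interval after
    the last successful renewal; the lease lives for lease_duration."""
    if renewal_interval >= lease_duration:
        return 0  # even one missed renewal lets the lease lapse
    # Number of attempts that land strictly inside the lease window;
    # all but the last of those may fail.
    attempts_within_lease = (lease_duration - 1) // renewal_interval
    return attempts_within_lease - 1
```

For example, a 31-day lease renewed weekly tolerates 3 consecutive failures, which is why a larger ratio of lease duration to renewal time is more robust.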
hunk ./src/allmydata/client.py 260
             sharetypes.append("mutable")
         expiration_sharetypes = tuple(sharetypes)
 
+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
+            xyz 
+        xyz
         ss = StorageServer(storedir, self.nodeid,
                            reserved_space=reserved,
                            discard_storage=discard,
hunk ./src/allmydata/storage/crawler.py 234
         f = open(tmpfile, "wb")
         pickle.dump(self.state, f)
         f.close()
-        fileutil.move_into_place(tmpfile, self.statefile)
+        fileutil.move_into_place(tmpfile, self.statefname)
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
}
[snapshot of progress on backend implementation (not suitable for trunk)
wilcoxjg@gmail.com**20110626053244
 Ignore-this: 50c764af791c2b99ada8289546806a0a
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/das
move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
adddir ./src/allmydata/storage/backends/null
hunk ./src/allmydata/interfaces.py 270
         store that on disk.
         """
 
+class IStorageBackend(Interface):
+    """
+    Objects of this kind live on the server side and are used by the
+    storage server object.
+    """
+    def get_available_space():
+        """ Returns available space for share storage in bytes, or
+        None if this information is not available or if the available
+        space is unlimited.
+
+        If the backend is configured for read-only mode then this will
+        return 0.
+
+        Any reserved space configured on the backend is subtracted
+        from the answer, so that the configured number of bytes can be
+        left unused on the underlying filesystem. """
+
+    def get_bucket_shares(storage_index):
+        """ Yields (shnum, pathname) tuples for the shares that this
+        backend holds for the given storage_index. In each tuple,
+        'shnum' is the integer form of the last component of
+        'pathname'. """
+
+    def get_share(storage_index, sharenum):
+        """ Returns the share object for the given storage_index and
+        share number, or None if no such share exists. """
+
+    def make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        """ Creates and returns a BucketWriter that will accept up to
+        max_space_per_bucket bytes of data for the given share. """
+
+class IStorageBackendShare(Interface):
+    """
+    This object provides access to all of the data of a single share.
+    It is intended to support lazy evaluation, since in many use cases
+    substantially less than all of the share data will be accessed.
+    """
+    def is_complete():
+        """
+        Returns the share state, or None if the share does not exist.
+        """
+
 class IStorageBucketWriter(Interface):
     """
     Objects of this kind live on the client side.
hunk ./src/allmydata/interfaces.py 2492
 
 class EmptyPathnameComponentError(Exception):
     """The webapi disallows empty pathname components."""
+
+class IShareStore(Interface):
+    pass
+
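The get_available_space contract in IStorageBackend above reduces to a few cases. This illustrative function (the name and signature are mine, not the patch's code) captures them: a read-only backend reports 0, an unknown free-space figure maps to None, and the reservation is subtracted with a floor of zero.

```python
def available_space(disk_free, reserved_space, readonly=False):
    """Illustrative only: the space a backend should report.

    disk_free=None models a platform with no statvfs(2) or
    GetDiskFreeSpaceEx support, or a backend with unlimited space.
    """
    if readonly:
        return 0
    if disk_free is None:
        return None  # information unavailable, or space is unlimited
    # Leave reserved_space bytes unused on the underlying filesystem.
    return max(0, disk_free - reserved_space)
```

A null backend corresponds to the `None` case, which is why it can model unlimited storage in the tests.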
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/das/__init__.py
addfile ./src/allmydata/storage/backends/das/core.py
hunk ./src/allmydata/storage/backends/das/core.py 1
+import os, re, weakref, struct, time
+
+from foolscap.api import Referenceable
+from twisted.application import service
+from zope.interface import implements
+
+import allmydata # for __full_version__
+from allmydata.interfaces import IStorageBackend, RIStorageServer, \
+     IStatsProducer, IShareStore
+from allmydata.util import fileutil, idlib, log, time_format
+from allmydata.util.assertutil import precondition
+from allmydata.storage.backends.base import Backend
+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
+     create_mutable_sharefile
+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
+
+# Share files within a bucket directory are named by their decimal share number.
+NUM_RE = re.compile("^[0-9]+$")
+
+class DASCore(Backend):
+    implements(IStorageBackend)
+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf(expiration_policy)
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefname = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = FSBucketCountingCrawler(statefname)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self, expiration_policy):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_shares(self, storage_index):
+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(finalstoragedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(finalstoragedir, f)
+                    yield FSBShare(filename, int(f))
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
+        return bw
+
+
+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
+# and share data. The share data is accessed by RIBucketWriter.write and
+# RIBucketReader.read . The lease information is not accessible through these
+# interfaces.
+
+# The share file has the following layout:
+#  0x00: share file version number, four bytes, current version is 1
+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
+#  0x08: number of leases, four bytes big-endian
+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
+#  A+0x0c = B: first lease. Lease format is:
+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
+#   B+0x04: renew secret, 32 bytes (SHA256)
+#   B+0x24: cancel secret, 32 bytes (SHA256)
+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
+#   B+0x48: next lease, or end of record
+
+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
+# but it is still filled in by storage servers in case the storage server
+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
+# share file is moved from one storage server to another. The value stored in
+# this field is truncated, so if the actual share data length is >= 2**32,
+# then the value stored in this field will be the actual share data length
+# modulo 2**32.
+
+class ImmutableShare:
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum 
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
+            self._num_leases = 0
+        else:
+            f = open(self.fname, 'rb')
+            filesize = os.path.getsize(self.fname)
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.close()
+            if version != 1:
+                msg = "sharefile %s had version %d but we wanted 1" % \
+                      (self.fname, version)
+                raise UnknownImmutableContainerVersionError(msg)
+            self._num_leases = num_leases
+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+        self._data_offset = 0xc
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos))
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        sharedata = f.read(actuallength)
+        f.close()
+        return sharedata
+
+    def write_share_data(self, offset, data):
+        length = len(data)
+        precondition(offset >= 0, offset)
+        if self._max_size is not None and offset+length > self._max_size:
+            raise DataTooLargeError(self._max_size, offset, length)
+        f = open(self.fname, 'rb+')
+        real_offset = self._data_offset+offset
+        f.seek(real_offset)
+        assert f.tell() == real_offset
+        f.write(data)
+        f.close()
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease_info):
+        f = open(self.fname, 'rb+')
+        num_leases = self._read_num_leases(f)
+        self._write_lease_record(f, num_leases, lease_info)
+        self._write_num_leases(f, num_leases+1)
+        f.close()
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            f = open(self.fname, 'rb+')
+            for i,lease in enumerate(leases):
+                self._write_lease_record(f, i, lease)
+            self._write_num_leases(f, len(leases))
+            self._truncate_leases(f, len(leases))
+            f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
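The 12-byte share-file header described in the layout comment above (version, saturated data length, and lease count, each four bytes big-endian) can be packed and parsed with `struct`. This is a standalone sketch; the function names are mine, not the patch's:

```python
import struct

HEADER = struct.Struct(">LLL")  # version, data length (saturated), lease count

def pack_header(version, data_length, num_leases):
    # The on-disk length field saturates at 2**32-1, as the footnote explains,
    # so a downgraded < v1.3.0 server can still serve the start of the share.
    return HEADER.pack(version, min(2**32 - 1, data_length), num_leases)

def parse_header(header_bytes):
    version, data_length, num_leases = HEADER.unpack(header_bytes[:HEADER.size])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return data_length, num_leases
```

`HEADER.size` is 0x0c, matching the `_data_offset` used by ImmutableShare above.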
hunk ./src/allmydata/storage/backends/das/expirer.py 2
 import time, os, pickle, struct
-from allmydata.storage.crawler import ShareCrawler
-from allmydata.storage.shares import get_share_file
+from allmydata.storage.crawler import FSShareCrawler
+from allmydata.storage.mutable import MutableShareFile
 from allmydata.storage.common import UnknownMutableContainerVersionError, \
      UnknownImmutableContainerVersionError
 from twisted.python import log as twlog
hunk ./src/allmydata/storage/backends/das/expirer.py 7
 
-class LeaseCheckingCrawler(ShareCrawler):
+class FSLeaseCheckingCrawler(FSShareCrawler):
     """I examine the leases on all shares, determining which are still valid
     and which have expired. I can remove the expired leases (if so
     configured), and the share will be deleted when the last lease is
hunk ./src/allmydata/storage/backends/das/expirer.py 50
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, statefile, historyfile,
-                 expiration_enabled, mode,
-                 override_lease_duration, # used if expiration_mode=="age"
-                 cutoff_date, # used if expiration_mode=="cutoff-date"
-                 sharetypes):
+    def __init__(self, statefile, historyfile, expiration_policy):
         self.historyfile = historyfile
hunk ./src/allmydata/storage/backends/das/expirer.py 52
-        self.expiration_enabled = expiration_enabled
-        self.mode = mode
+        self.expiration_enabled = expiration_policy['enabled']
+        self.mode = expiration_policy['mode']
         self.override_lease_duration = None
         self.cutoff_date = None
         if self.mode == "age":
hunk ./src/allmydata/storage/backends/das/expirer.py 57
-            assert isinstance(override_lease_duration, (int, type(None)))
-            self.override_lease_duration = override_lease_duration # seconds
+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
         elif self.mode == "cutoff-date":
hunk ./src/allmydata/storage/backends/das/expirer.py 60
-            assert isinstance(cutoff_date, int) # seconds-since-epoch
+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
-            assert cutoff_date is not None
hunk ./src/allmydata/storage/backends/das/expirer.py 62
-            self.cutoff_date = cutoff_date
+            self.cutoff_date = expiration_policy['cutoff_date']
         else:
hunk ./src/allmydata/storage/backends/das/expirer.py 64
-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
-        self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, statefile)
+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
+        self.sharetypes_to_expire = expiration_policy['sharetypes']
+        FSShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/backends/das/expirer.py 156
 
     def process_share(self, sharefilename):
         # first, find out what kind of a share it is
-        sf = get_share_file(sharefilename)
+        f = open(sharefilename, "rb")
+        prefix = f.read(32)
+        f.close()
+        if prefix == MutableShareFile.MAGIC:
+            sf = MutableShareFile(sharefilename)
+        else:
+            # otherwise assume it's immutable
+            sf = FSBShare(sharefilename)
         sharetype = sf.sharetype
         now = time.time()
         s = self.stat(sharefilename)
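The `process_share` dispatch above works by peeking at the first 32 bytes of the file: a mutable container announces itself with a magic prefix, and anything else is assumed to be an immutable share. A self-contained sketch of that dispatch (`classify_share` is my name; the real code compares against `MutableShareFile.MAGIC`):

```python
import io

def classify_share(f, mutable_magic):
    # Peek at the first 32 bytes; a matching magic prefix means a
    # mutable container, otherwise an immutable share is assumed.
    prefix = f.read(32)
    f.seek(0)  # leave the file positioned for the real reader
    return "mutable" if prefix == mutable_magic else "immutable"
```

The same peek-and-dispatch idea works on any seekable file object, which is why the expirer can open the share file once and hand it to whichever share class matches.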
addfile ./src/allmydata/storage/backends/null/__init__.py
addfile ./src/allmydata/storage/backends/null/core.py
hunk ./src/allmydata/storage/backends/null/core.py 1
+from allmydata.storage.backends.base import Backend
+
+class NullCore(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
hunk ./src/allmydata/storage/crawler.py 12
 class TimeSliceExceeded(Exception):
     pass
 
-class ShareCrawler(service.MultiService):
+class FSShareCrawler(service.MultiService):
     """A subcless of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
+    def __init__(self, statefname, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.backend = backend
+        self.statefname = statefname
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
         #                            of the last bucket to be processed, or
         #                            None if we are sleeping between cycles
         try:
-            f = open(self.statefile, "rb")
+            f = open(self.statefname, "rb")
             state = pickle.load(f)
             f.close()
         except EnvironmentError:
hunk ./src/allmydata/storage/crawler.py 230
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefile + ".tmp"
+        tmpfile = self.statefname + ".tmp"
         f = open(tmpfile, "wb")
         pickle.dump(self.state, f)
         f.close()
hunk ./src/allmydata/storage/crawler.py 433
         pass
 
 
-class BucketCountingCrawler(ShareCrawler):
+class FSBucketCountingCrawler(FSShareCrawler):
     """I keep track of how many buckets are being managed by this server.
     This is equivalent to the number of distributed files and directories for
     which I am providing storage. The actual number of files+directories in
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, statefile)
+    def __init__(self, statefname, num_sample_prefixes=1):
+        FSShareCrawler.__init__(self, statefname)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/immutable.py 14
 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
      DataTooLargeError
 
-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
-# and share data. The share data is accessed by RIBucketWriter.write and
-# RIBucketReader.read . The lease information is not accessible through these
-# interfaces.
-
-# The share file has the following layout:
-#  0x00: share file version number, four bytes, current version is 1
-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
-#  0x08: number of leases, four bytes big-endian
-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
-#  A+0x0c = B: first lease. Lease format is:
-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
-#   B+0x04: renew secret, 32 bytes (SHA256)
-#   B+0x24: cancel secret, 32 bytes (SHA256)
-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
-#   B+0x48: next lease, or end of record
-
-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
-# but it is still filled in by storage servers in case the storage server
-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
-# share file is moved from one storage server to another. The value stored in
-# this field is truncated, so if the actual share data length is >= 2**32,
-# then the value stored in this field will be the actual share data length
-# modulo 2**32.
-
-class ShareFile:
-    LEASE_SIZE = struct.calcsize(">L32s32sL")
-    sharetype = "immutable"
-
-    def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than
-        max_size to be written to me. If create=True then max_size
-        must not be None. """
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.home = filename
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.home)
-            fileutil.make_dirs(os.path.dirname(self.home))
-            f = open(self.home, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.home, 'rb')
-            filesize = os.path.getsize(self.home)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (filename, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
-
-    def unlink(self):
-        os.unlink(self.home)
-
-    def read_share_data(self, offset, length):
-        precondition(offset >= 0)
-        # Reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string.
-        seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.home)
-        actuallength = max(0, min(length, fsize-seekpos))
-        if actuallength == 0:
-            return ""
-        f = open(self.home, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
-
-    def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.home, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
-
-    def _write_lease_record(self, f, lease_number, lease_info):
-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
-        f.seek(offset)
-        assert f.tell() == offset
-        f.write(lease_info.to_immutable_data())
-
-    def _read_num_leases(self, f):
-        f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
-        return num_leases
-
-    def _write_num_leases(self, f, num_leases):
-        f.seek(0x08)
-        f.write(struct.pack(">L", num_leases))
-
-    def _truncate_leases(self, f, num_leases):
-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
-
-    def get_leases(self):
-        """Yields a LeaseInfo instance for all leases."""
-        f = open(self.home, 'rb')
-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-        f.seek(self._lease_offset)
-        for i in range(num_leases):
-            data = f.read(self.LEASE_SIZE)
-            if data:
-                yield LeaseInfo().from_immutable_data(data)
-
-    def add_lease(self, lease_info):
-        f = open(self.home, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
-
-    def renew_lease(self, renew_secret, new_expire_time):
-        for i,lease in enumerate(self.get_leases()):
-            if constant_time_compare(lease.renew_secret, renew_secret):
-                # yup. See if we need to update the owner time.
-                if new_expire_time > lease.expiration_time:
-                    # yes
-                    lease.expiration_time = new_expire_time
-                    f = open(self.home, 'rb+')
-                    self._write_lease_record(f, i, lease)
-                    f.close()
-                return
-        raise IndexError("unable to renew non-existent lease")
-
-    def add_or_renew_lease(self, lease_info):
-        try:
-            self.renew_lease(lease_info.renew_secret,
-                             lease_info.expiration_time)
-        except IndexError:
-            self.add_lease(lease_info)
-
-
-    def cancel_lease(self, cancel_secret):
-        """Remove a lease with the given cancel_secret. If the last lease is
-        cancelled, the file will be removed. Return the number of bytes that
-        were freed (by truncating the list of leases, and possibly by
-        deleting the file. Raise IndexError if there was no lease with the
-        given cancel_secret.
-        """
-
-        leases = list(self.get_leases())
-        num_leases_removed = 0
-        for i,lease in enumerate(leases):
-            if constant_time_compare(lease.cancel_secret, cancel_secret):
-                leases[i] = None
-                num_leases_removed += 1
-        if not num_leases_removed:
-            raise IndexError("unable to find matching lease to cancel")
-        if num_leases_removed:
-            # pack and write out the remaining leases. We write these out in
-            # the same order as they were added, so that if we crash while
-            # doing this, we won't lose any non-cancelled leases.
-            leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.home, 'rb+')
-            for i,lease in enumerate(leases):
-                self._write_lease_record(f, i, lease)
-            self._write_num_leases(f, len(leases))
-            self._truncate_leases(f, len(leases))
-            f.close()
-        space_freed = self.LEASE_SIZE * num_leases_removed
-        if not len(leases):
-            space_freed += os.stat(self.home)[stat.ST_SIZE]
-            self.unlink()
-        return space_freed
-class NullBucketWriter(Referenceable):
-    implements(RIBucketWriter)
-
-    def remote_write(self, offset, data):
-        return
-
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 17
-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 19
-        self.incominghome = incominghome
-        self.finalhome = finalhome
         self._max_size = max_size # don't allow the client to write more than this
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
hunk ./src/allmydata/storage/immutable.py 24
         self.closed = False
         self.throw_out_all_data = False
-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
+        self._sharefile = immutableshare
         # also, add our lease to the file now, so that other ones can be
         # added by simultaneous uploaders
         self._sharefile.add_lease(lease_info)
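As an aside, the 12-byte header that `ShareFile` writes (and that this patch preserves in the backend's `ImmutableShare`, via `struct.pack(">LLL", 1, min(2**32-1, max_size), 0)`) can be exercised in isolation. `pack_header`/`unpack_header` are hypothetical helper names, not part of the patch:

```python
import struct

# Version-1 immutable share-file header: 4-byte version, 4-byte saturated
# share-data length, 4-byte lease count, all big-endian (12 bytes total).
def pack_header(data_length, num_leases=0):
    return struct.pack(">LLL", 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(header):
    version, length, num_leases = struct.unpack(">LLL", header[:0xc])
    return version, length, num_leases

# A length too large for the field saturates rather than wrapping, so an
# old (< v1.3.0) server can still serve the first part of the share.
hdr = pack_header(2**40)
assert len(hdr) == 0xc
assert unpack_header(hdr) == (1, 2**32 - 1, 0)
```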
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
-from allmydata.storage.expirer import LeaseCheckingCrawler
 
 from zope.interface import implements
 
hunk ./src/allmydata/storage/server.py 19
-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
-# be started and stopped.
-class Backend(service.MultiService):
-    implements(IStatsProducer)
-    def __init__(self):
-        service.MultiService.__init__(self)
-
-    def get_bucket_shares(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def get_share(self):
-        """XXX"""
-        raise NotImplementedError
-
-    def make_bucket_writer(self):
-        """XXX"""
-        raise NotImplementedError
-
-class NullBackend(Backend):
-    def __init__(self):
-        Backend.__init__(self)
-
-    def get_available_space(self):
-        return None
-
-    def get_bucket_shares(self, storage_index):
-        return set()
-
-    def get_share(self, storage_index, sharenum):
-        return None
-
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
-
-class FSBackend(Backend):
-    def __init__(self, storedir, readonly=False, reserved_space=0):
-        Backend.__init__(self)
-
-        self._setup_storage(storedir, readonly, reserved_space)
-        self._setup_corruption_advisory()
-        self._setup_bucket_counter()
-        self._setup_lease_checkerf()
-
-    def _setup_storage(self, storedir, readonly, reserved_space):
-        self.storedir = storedir
-        self.readonly = readonly
-        self.reserved_space = int(reserved_space)
-        if self.reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umid="0wZ27w", level=log.UNUSUAL)
-
-        self.sharedir = os.path.join(self.storedir, "shares")
-        fileutil.make_dirs(self.sharedir)
-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
-        self._clean_incomplete()
-
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-        fileutil.make_dirs(self.incomingdir)
-
-    def _setup_corruption_advisory(self):
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(self.storedir,
-                                                    "corruption-advisories")
-
-    def _setup_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(statefile)
-        self.bucket_counter.setServiceParent(self)
-
-    def _setup_lease_checkerf(self):
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
-
-    def get_available_space(self):
-        if self.readonly:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
-    def get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 32
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
hunk ./src/allmydata/storage/server.py 35
-    LeaseCheckerClass = LeaseCheckingCrawler
 
     def __init__(self, nodeid, backend, reserved_space=0,
                  readonly_storage=False,
hunk ./src/allmydata/storage/server.py 38
-                 stats_provider=None,
-                 expiration_enabled=False,
-                 expiration_mode="age",
-                 expiration_override_lease_duration=None,
-                 expiration_cutoff_date=None,
-                 expiration_sharetypes=("mutable", "immutable")):
+                 stats_provider=None ):
         service.MultiService.__init__(self)
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
hunk ./src/allmydata/storage/server.py 217
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
-            alreadygot.add(shnum)
-            sf = ShareFile(fn)
-            sf.add_or_renew_lease(lease_info)
-
-        for shnum in sharenums:
-            share = self.backend.get_share(storage_index, shnum)
+        for share in self.backend.get_shares(storage_index):
+            alreadygot.add(share.shnum)
+            share.add_or_renew_lease(lease_info)
 
hunk ./src/allmydata/storage/server.py 221
-            if not share:
-                if (not limited) or (remaining_space >= max_space_per_bucket):
-                    # ok! we need to create the new share file.
-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                      max_space_per_bucket, lease_info, canary)
-                    bucketwriters[shnum] = bw
-                    self._active_writers[bw] = 1
-                    if limited:
-                        remaining_space -= max_space_per_bucket
-                else:
-                    # bummer! not enough space to accept this bucket
-                    pass
+        for shnum in (sharenums - alreadygot):
+            if (not limited) or (remaining_space >= max_space_per_bucket):
+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
+                self.backend.set_storage_server(self)
+                bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                                     max_space_per_bucket, lease_info, canary)
+                bucketwriters[shnum] = bw
+                self._active_writers[bw] = 1
+                if limited:
+                    remaining_space -= max_space_per_bucket
 
hunk ./src/allmydata/storage/server.py 232
-            elif share.is_complete():
-                # great! we already have it. easy.
-                pass
-            elif not share.is_complete():
-                # Note that we don't create BucketWriters for shnums that
-                # have a partial share (in incoming/), so if a second upload
-                # occurs while the first is still in progress, the second
-                # uploader will use different storage servers.
-                pass
+        #XXX TODO: document handling of already-complete shares and of partial shares left in incoming/ (see the comments removed above).
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
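The set-difference loop introduced above can be modelled without any storage machinery. This toy version (hypothetical names, not code from the patch) shows the intended behavior: only shnums absent from `alreadygot` get writers, and `remaining_space` is debited once per new bucket:

```python
# Toy model of the rewritten allocation path in remote_allocate_buckets:
# skip shares we already hold, create writers while space remains.
def allocate(sharenums, alreadygot, remaining_space, max_space_per_bucket):
    writers = []
    for shnum in sorted(sharenums - alreadygot):
        if remaining_space >= max_space_per_bucket:
            writers.append(shnum)
            remaining_space -= max_space_per_bucket
        # else: not enough space to accept this bucket; silently skip it
    return writers, remaining_space

w, left = allocate({0, 1, 2}, {1}, 250, 100)
assert w == [0, 2]      # shnum 1 was already got, so no writer for it
assert left == 50       # two new buckets debited 100 each
```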
hunk ./src/allmydata/storage/server.py 238
 
     def _iter_share_files(self, storage_index):
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self._get_shares(storage_index):
             f = open(filename, 'rb')
             header = f.read(32)
             f.close()
hunk ./src/allmydata/storage/server.py 318
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/storage/server.py 334
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_bucket_shares(storage_index).next()
+            shnum, filename = self._get_shares(storage_index).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
-    f = open(filename, "rb")
-    prefix = f.read(32)
-    f.close()
-    if prefix == MutableShareFile.MAGIC:
-        return MutableShareFile(filename)
-    # otherwise assume it's immutable
-    return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/common_util.py 20
 
 def flip_one_bit(s, offset=0, size=None):
     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
-    than offset+size. """
+    than offset+size. Return the new string. """
     if size is None:
         size=len(s)-offset
     i = randrange(offset, offset+size)
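The hunk above only amends the docstring of `flip_one_bit`. A self-contained version matching that documented contract, assuming the usual single-byte XOR implementation (the body is not shown in this patch), might look like:

```python
from random import randrange

def flip_one_bit(s, offset=0, size=None):
    """Flip one random bit of the string s, in a byte greater than or equal
    to offset and less than offset+size. Return the new string."""
    if size is None:
        size = len(s) - offset
    i = randrange(offset, offset + size)
    # XOR one randomly chosen bit in byte i; the result always differs from s.
    flipped = chr(ord(s[i]) ^ (1 << randrange(0, 8)))
    result = s[:i] + flipped + s[i + 1:]
    assert result != s, "flipping a bit must change the string"
    return result
```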
hunk ./src/allmydata/test/test_backends.py 7
 
 from allmydata.test.common_util import ReallyEqualMixin
 
-import mock
+import mock, os
 
 # This is the code that we're going to be testing.
hunk ./src/allmydata/test/test_backends.py 10
-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
+from allmydata.storage.server import StorageServer
+
+from allmydata.storage.backends.das.core import DASCore
+from allmydata.storage.backends.null.core import NullCore
+
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 22
 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+tempdir = 'teststoredir'
+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharefname = os.path.join(sharedirname, '0')
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 58
         filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
             else:
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
hunk ./src/allmydata/test/test_backends.py 124
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
-            if fname == 'testdir/bucket_counter.state':
-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
-            elif fname == 'testdir/lease_checker.state':
-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
-            elif fname == 'testdir/lease_checker.history':
+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 131
-
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        expiration_policy = {'enabled' : False, 
+                             'mode' : 'age',
+                             'override_lease_duration' : None,
+                             'cutoff_date' : None,
+                             'sharetypes' : None}
+        testbackend = DASCore(tempdir, expiration_policy)
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
 
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 148
         """ Write a new share. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            raise OSError(2, "No such file or directory: '%s'" % sharedirname)
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 178
 
         sharefile = MockFile()
         def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
             return sharefile
 
         mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 200
         StorageServer object. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
         mocklistdir.side_effect = call_listdir
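These tests patch `__builtin__.open` (Python 2); under Python 3 the same pattern targets `builtins.open`. A minimal self-contained illustration of the technique, with hypothetical helper names, where the crawler state files "exist" or raise the errno-2 IOError the code under test expects:

```python
from io import StringIO
from unittest import mock

def call_open(fname, mode):
    # Mirror the fake filesystem in the tests above: the history file
    # opens as an empty stream, everything else is "missing".
    if fname.endswith('lease_checker.history'):
        return StringIO()
    raise IOError(2, "No such file or directory: '%s'" % fname)

def open_history():
    # Code running inside this context never touches the real filesystem.
    with mock.patch('builtins.open', side_effect=call_open):
        return open('teststoredir/lease_checker.history', 'rb')
```

`mock.patch(..., side_effect=call_open)` routes every `open()` call through `call_open`, which is exactly how these tests constrain the server to the prescribed filesystem accesses.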
}
[checkpoint patch
wilcoxjg@gmail.com**20110626165715
 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
] {
hunk ./src/allmydata/storage/backends/das/core.py 21
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
hunk ./src/allmydata/storage/backends/das/core.py 27
 from zope.interface import implements
 
+# $SHARENUM matches this regex:
+NUM_RE=re.compile("^[0-9]+$")
+
 class DASCore(Backend):
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
hunk ./src/allmydata/storage/backends/das/core.py 80
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def get_shares(self, storage_index):
-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
         try:
             for f in os.listdir(finalstoragedir):
hunk ./src/allmydata/storage/backends/das/core.py 86
                 if NUM_RE.match(f):
                     filename = os.path.join(finalstoragedir, f)
-                    yield FSBShare(filename, int(f))
+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
         except OSError:
             # Commonly caused by there being no buckets at all.
             pass
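The `NUM_RE` filter used by `get_shares` above is easy to check by hand; against a throwaway directory listing it keeps only pure-digit names (the share numbers) and drops everything else:

```python
import re

# Same pattern the backend uses to pick share files out of a bucket directory.
NUM_RE = re.compile("^[0-9]+$")

listing = ['0', '2', '10', 'incoming', 'lease_checker.state', '3x']
shnums = sorted(int(f) for f in listing if NUM_RE.match(f))
assert shnums == [0, 2, 10]
```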
hunk ./src/allmydata/storage/backends/das/core.py 95
         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
+
+    def set_storage_server(self, ss):
+        self.ss = ss
         
 
 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
hunk ./src/allmydata/storage/server.py 29
 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
 # base-32 chars).
 
-# $SHARENUM matches this regex:
-NUM_RE=re.compile("^[0-9]+$")
 
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
}
[checkpoint4
wilcoxjg@gmail.com**20110628202202
 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
] {
hunk ./src/allmydata/storage/backends/das/core.py 96
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
+    def make_bucket_reader(self, share):
+        return BucketReader(self.ss, share)
+
     def set_storage_server(self, ss):
         self.ss = ss
         
hunk ./src/allmydata/storage/backends/das/core.py 138
         must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.shnum = shnum 
+        self.storage_index = storageindex
         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
         if create:
hunk ./src/allmydata/storage/backends/das/core.py 173
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
         self._data_offset = 0xc
 
+    def get_shnum(self):
+        return self.shnum
+
     def unlink(self):
         os.unlink(self.fname)
 
hunk ./src/allmydata/storage/backends/null/core.py 2
 from allmydata.storage.backends.base import Backend
+from allmydata.storage.immutable import BucketWriter, BucketReader
 
 class NullCore(Backend):
     def __init__(self):
hunk ./src/allmydata/storage/backends/null/core.py 17
     def get_share(self, storage_index, sharenum):
         return None
 
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        return NullBucketWriter()
+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+        #XXX 'immutableshare' is not yet defined in this checkpoint; construct one here before this is runnable.
+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
+
+    def set_storage_server(self, ss):
+        self.ss = ss
+
+class ImmutableShare:
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum 
+        self.storage_index = storageindex
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
+            self._num_leases = 0
+        else:
+            f = open(self.fname, 'rb')
+            filesize = os.path.getsize(self.fname)
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.close()
+            if version != 1:
+                msg = "sharefile %s had version %d but we wanted 1" % \
+                      (self.fname, version)
+                raise UnknownImmutableContainerVersionError(msg)
+            self._num_leases = num_leases
+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+        self._data_offset = 0xc
+
+    def get_shnum(self):
+        return self.shnum
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos))
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        return f.read(actuallength)
+
+    def write_share_data(self, offset, data):
+        length = len(data)
+        precondition(offset >= 0, offset)
+        if self._max_size is not None and offset+length > self._max_size:
+            raise DataTooLargeError(self._max_size, offset, length)
+        f = open(self.fname, 'rb+')
+        real_offset = self._data_offset+offset
+        f.seek(real_offset)
+        assert f.tell() == real_offset
+        f.write(data)
+        f.close()
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease_info):
+        f = open(self.fname, 'rb+')
+        num_leases = self._read_num_leases(f)
+        self._write_lease_record(f, num_leases, lease_info)
+        self._write_num_leases(f, num_leases+1)
+        f.close()
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        # pack and write out the remaining leases. We write these out in
+        # the same order as they were added, so that if we crash while
+        # doing this, we won't lose any non-cancelled leases.
+        leases = [l for l in leases if l] # remove the cancelled leases
+        f = open(self.fname, 'rb+')
+        for i, lease in enumerate(leases):
+            self._write_lease_record(f, i, lease)
+        self._write_num_leases(f, len(leases))
+        self._truncate_leases(f, len(leases))
+        f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
hunk ./src/allmydata/storage/immutable.py 114
 class BucketReader(Referenceable):
     implements(RIBucketReader)
 
-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
+    def __init__(self, ss, share):
         self.ss = ss
hunk ./src/allmydata/storage/immutable.py 116
-        self._share_file = ShareFile(sharefname)
-        self.storage_index = storage_index
-        self.shnum = shnum
+        self._share_file = share
+        self.storage_index = share.storage_index
+        self.shnum = share.shnum
 
     def __repr__(self):
         return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/server.py 316
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self.backend.get_shares(storage_index):
-            bucketreaders[shnum] = BucketReader(self, filename,
-                                                storage_index, shnum)
+        self.backend.set_storage_server(self)
+        for share in self.backend.get_shares(storage_index):
+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
hunk ./src/allmydata/test/test_backends.py 25
 tempdir = 'teststoredir'
 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharefname = os.path.join(sharedirname, '0')
+expiration_policy = {'enabled' : False, 
+                     'mode' : 'age',
+                     'override_lease_duration' : None,
+                     'cutoff_date' : None,
+                     'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 43
         tries to read or write to the file system. """
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 74
         mockopen.side_effect = call_open
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 86
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 136
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
         mockopen.side_effect = call_open
-        expiration_policy = {'enabled' : False, 
-                             'mode' : 'age',
-                             'override_lease_duration' : None,
-                             'cutoff_date' : None,
-                             'sharetypes' : None}
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
 
}
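The share-file header logic rewritten in the hunks above can be modeled in isolation as follows. This is an illustrative sketch, not the Tahoe-LAFS module itself: `LEASE_SIZE = 72` and the helper names are assumptions made for the example.

```python
import struct

# A stand-alone model of the immutable share-file layout handled above:
# twelve bytes of big-endian 32-bit fields (version, saturated data length,
# lease count), share data starting at offset 0xc, and fixed-size lease
# records at the tail of the file.
HEADER = ">LLL"
HEADER_SIZE = 0xc
LEASE_SIZE = 72  # assumed size of one serialized LeaseInfo record

def pack_header(max_size, num_leases=0):
    # The data-length field saturates at 2**32 - 1 so that pre-v1.3.0
    # servers can still read the first part of an oversized share.
    return struct.pack(HEADER, 1, min(2**32 - 1, max_size), num_leases)

def parse_header(header_bytes, filesize):
    version, _, num_leases = struct.unpack(HEADER, header_bytes[:HEADER_SIZE])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    # Leases sit at the end of the file, so their offset is derived from
    # the file size rather than stored in the header.
    lease_offset = filesize - num_leases * LEASE_SIZE
    return num_leases, lease_offset
```

Note that the second header field is dead weight kept only for downgrade compatibility, which is why `parse_header` discards it.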
[checkpoint5
wilcoxjg@gmail.com**20110705034626
 Ignore-this: 255780bd58299b0aa33c027e9d008262
] {
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+from twisted.application import service
+
+class Backend(service.MultiService):
+    def __init__(self):
+        service.MultiService.__init__(self)
hunk ./src/allmydata/storage/backends/null/core.py 19
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         
+        immutableshare = ImmutableShare() 
         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
 
     def set_storage_server(self, ss):
hunk ./src/allmydata/storage/backends/null/core.py 28
 class ImmutableShare:
     sharetype = "immutable"
 
-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+    def __init__(self):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
hunk ./src/allmydata/storage/backends/null/core.py 32
-        precondition((max_size is not None) or (not create), max_size, create)
-        self.shnum = shnum 
-        self.storage_index = storageindex
-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
-        self._max_size = max_size
-        if create:
-            # touch the file, so later callers will see that we're working on
-            # it. Also construct the metadata.
-            assert not os.path.exists(self.fname)
-            fileutil.make_dirs(os.path.dirname(self.fname))
-            f = open(self.fname, 'wb')
-            # The second field -- the four-byte share data length -- is no
-            # longer used as of Tahoe v1.3.0, but we continue to write it in
-            # there in case someone downgrades a storage server from >=
-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
-            # server to another, etc. We do saturation -- a share data length
-            # larger than 2**32-1 (what can fit into the field) is marked as
-            # the largest length that can fit into the field. That way, even
-            # if this does happen, the old < v1.3.0 server will still allow
-            # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
-            self._lease_offset = max_size + 0x0c
-            self._num_leases = 0
-        else:
-            f = open(self.fname, 'rb')
-            filesize = os.path.getsize(self.fname)
-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
-            f.close()
-            if version != 1:
-                msg = "sharefile %s had version %d but we wanted 1" % \
-                      (self.fname, version)
-                raise UnknownImmutableContainerVersionError(msg)
-            self._num_leases = num_leases
-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
-        self._data_offset = 0xc
+        pass
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/null/core.py 54
         return f.read(actuallength)
 
     def write_share_data(self, offset, data):
-        length = len(data)
-        precondition(offset >= 0, offset)
-        if self._max_size is not None and offset+length > self._max_size:
-            raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.fname, 'rb+')
-        real_offset = self._data_offset+offset
-        f.seek(real_offset)
-        assert f.tell() == real_offset
-        f.write(data)
-        f.close()
+        pass
 
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/null/core.py 84
             if data:
                 yield LeaseInfo().from_immutable_data(data)
 
-    def add_lease(self, lease_info):
-        f = open(self.fname, 'rb+')
-        num_leases = self._read_num_leases(f)
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
-        f.close()
+    def add_lease(self, lease):
+        pass
 
     def renew_lease(self, renew_secret, new_expire_time):
         for i,lease in enumerate(self.get_leases()):
hunk ./src/allmydata/test/test_backends.py 32
                      'sharetypes' : None}
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """ This tests whether a server instance can be constructed
-        with a null backend. The server instance fails the test if it
-        tries to read or write to the file system. """
-
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-
-        # You passed!
-
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 53
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
-        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
-
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
-        self.failIf(mocktime.called)
-
-        # You passed!
-
-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
-    def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
-        """ Write a new share. """
-
-        # Now begin the test.
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        bs[0].remote_write(0, 'a')
-        self.failIf(mockisdir.called)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockmkdir.called)
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
 
hunk ./src/allmydata/test/test_backends.py 62
-    @mock.patch('os.path.exists')
-    @mock.patch('os.path.getsize')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
-        """ This tests whether the code correctly finds and reads
-        shares written out by old (Tahoe-LAFS <= v1.8.2)
-        servers. There is a similar test in test_download, but that one
-        is from the perspective of the client and exercises a deeper
-        stack of code. This one is for exercising just the
-        StorageServer object. """
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 76
-        bs = self.s.remote_get_buckets('teststorage_index')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 78
-        self.failUnlessEqual(len(bs), 0)
-        self.failIf(mocklistdir.called)
-        self.failIf(mockopen.called)
-        self.failIf(mockgetsize.called)
-        self.failIf(mockexists.called)
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
 
 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
hunk ./src/allmydata/test/test_backends.py 193
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
+
+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a file system backend instance can be
+        constructed. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
+
+        def call_open(fname, mode):
+            if fname == os.path.join(tempdir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+                return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
+        mockopen.side_effect = call_open
+
+        def call_isdir(fname):
+            if fname == os.path.join(tempdir,'shares'):
+                return True
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return True
+            else:
+                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(fname, mode):
+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
+            self.failUnlessEqual(0777, mode)
+            if fname == tempdir:
+                return None
+            elif fname == os.path.join(tempdir,'shares'):
+                return None
+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+                return None
+            else:
+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+        mockmkdir.side_effect = call_mkdir
+
+        # Now begin the test.
+        DASCore('teststoredir', expiration_policy)
+
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
}
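The whitelisting pattern used by `call_open`, `call_isdir`, and `call_mkdir` above can be boiled down to a minimal sketch. This is a stand-alone illustration, not the test module itself; `load_state` is a hypothetical stand-in for backend code, and the original patches target Python 2 (`__builtin__.open`) while this sketch uses the Python 3 spelling.

```python
import io
import os
from unittest import mock  # the patch series uses the standalone `mock` package, same API

def load_state(storedir):
    # Stand-in for backend code under test that reads a state file from disk.
    return open(os.path.join(storedir, 'bucket_counter.state')).read()

def demo():
    # Whitelist exactly the filesystem calls the code under test is allowed
    # to make; anything unexpected fails loudly, so the test proves the
    # backend touches only the prescribed paths.
    def call_open(fname, mode='r'):
        if fname == os.path.join('teststoredir', 'bucket_counter.state'):
            return io.StringIO('fake state')
        raise AssertionError("unexpected open of %r in mode %r" % (fname, mode))

    with mock.patch('builtins.open', side_effect=call_open):
        return load_state('teststoredir')
```

Because `side_effect` is a function, each intercepted call is dispatched through the whitelist, and the mock's return value is whatever the whitelist returns.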
[checkpoint 6
wilcoxjg@gmail.com**20110706190824
 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
] {
hunk ./src/allmydata/interfaces.py 100
                          renew_secret=LeaseRenewSecret,
                          cancel_secret=LeaseCancelSecret,
                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
-                         allocated_size=Offset, canary=Referenceable):
+                         allocated_size=Offset, 
+                         canary=Referenceable):
         """
hunk ./src/allmydata/interfaces.py 103
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shares to be created or
                               increfed.
hunk ./src/allmydata/interfaces.py 105
-        @param sharenums: these are the share numbers (probably between 0 and
-                          99) that the sender is proposing to store on this
-                          server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect share refreshes.
                              This secret is generated by the client and
                              stored for later comparison by the server. Each
                              server is given a different secret.
hunk ./src/allmydata/interfaces.py 109
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: Like renew_secret, but protects share decrefs.
+        @param sharenums: these are the share numbers (probably between 0 and
+                          99) that the sender is proposing to store on this
+                          server.
+        @param allocated_size: XXX The size of the shares the client wishes to store.
+        @param canary: If the canary is lost before close(), the shares are
                        deleted.
hunk ./src/allmydata/interfaces.py 116
+
         @return: tuple of (alreadygot, allocated), where alreadygot is what we
                  already have and allocated is what we hereby agree to accept.
                  New leases are added for shares in both lists.
hunk ./src/allmydata/interfaces.py 128
                   renew_secret=LeaseRenewSecret,
                   cancel_secret=LeaseCancelSecret):
         """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shares. If the renew_secret matches an
         existing lease, that lease will be renewed instead. If there is no
         bucket for the given storage_index, return silently. (note that in
         tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-from zope.interface import implements
-
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/test/test_backends.py 6
 from StringIO import StringIO
 
 from allmydata.test.common_util import ReallyEqualMixin
+from allmydata.util.assertutil import _assert
 
 import mock, os
 
hunk ./src/allmydata/test/test_backends.py 92
                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
             elif fname == os.path.join(tempdir, 'lease_checker.history'):
                 return StringIO()
+            else:
+                _assert(False, "The tester code doesn't recognize this case.")  
+
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
hunk ./src/allmydata/test/test_backends.py 109
 
         def call_listdir(dirname):
             self.failUnlessReallyEqual(dirname, sharedirname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
 
hunk ./src/allmydata/test/test_backends.py 113
+        def call_isdir(dirname):
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+                self.fail("Server with FS backend tried to mkdir '%s' with mode %s" % (dirname, permissions))
+            else:
+                return True
+
+        mockmkdir.side_effect = call_mkdir
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 156
             return sharefile
 
         mockopen.side_effect = call_open
+
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 161
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+        
+        # Now test the allocated_size method.
+        spaceint = self.s.allocated_size()
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
}
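The lease bookkeeping carried through these checkpoints (`_write_lease_record`, `renew_lease`) rests on two small ideas: fixed-size records make a lease's file offset pure arithmetic, and renewal only ever moves the expiration time forward. A simplified stand-alone sketch, with `LEASE_SIZE = 72` and the tuple encoding as assumptions rather than the real wire format:

```python
LEASE_SIZE = 72  # assumed size of one serialized lease record

def lease_offset_for(lease_base, lease_number):
    # Records are fixed-size, so locating lease N needs no index structure.
    return lease_base + lease_number * LEASE_SIZE

def renew(leases, renew_secret, new_expire_time):
    """Return the index of the renewed lease, or raise IndexError.

    `leases` is a list of (renew_secret, expiration_time) pairs; the real
    code compares secrets with constant_time_compare to avoid timing leaks.
    """
    for i, (secret, expiration) in enumerate(leases):
        if secret == renew_secret:
            # Only extend the lease; a renewal never shortens it.
            if new_expire_time > expiration:
                leases[i] = (secret, new_expire_time)
            return i
    raise IndexError("unable to renew non-existent lease")
```

`add_or_renew_lease` in the patch is then just this function with the IndexError caught and turned into an append.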
[checkpoint 7
wilcoxjg@gmail.com**20110706200820
 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
] hunk ./src/allmydata/test/test_backends.py 164
         
         # Now test the allocated_size method.
         spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
[checkpoint8
wilcoxjg@gmail.com**20110706223126
 Ignore-this: 97336180883cb798b16f15411179f827
   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
] hunk ./src/allmydata/test/test_backends.py 32
                      'cutoff_date' : None,
                      'sharetypes' : None}
 
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
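Checkpoint8's null backend can be illustrated with a minimal no-op share object. This is a sketch of the idea only (the actual `NullCore`/`ImmutableShare` null code is not shown in full here): every storage operation succeeds without touching the filesystem, so the mocks in `test_write_share` can assert that `os.*` and `open` are never called.

```python
class NullShare:
    """A share that stores nothing -- the mock-like object behind the
    null backend, which simulates a server with unlimited space."""
    sharetype = "immutable"

    def __init__(self, storageindex=None, shnum=0):
        self.storage_index = storageindex
        self.shnum = shnum

    def get_shnum(self):
        return self.shnum

    def read_share_data(self, offset, length):
        return b""   # nothing is ever stored

    def write_share_data(self, offset, data):
        pass         # discard writes without any I/O

    def add_lease(self, lease_info):
        pass
```

A `BucketWriter` wrapping such a share accepts `remote_write` calls normally, which is exactly what the null-backend test exercises.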
[checkpoint 9
wilcoxjg@gmail.com**20110707042942
 Ignore-this: 75396571fd05944755a104a8fc38aaf6
] {
hunk ./src/allmydata/storage/backends/das/core.py 88
                     filename = os.path.join(finalstoragedir, f)
                     yield ImmutableShare(self.sharedir, storage_index, int(f))
         except OSError:
-            # Commonly caused by there being no buckets at all.
+            # Commonly caused by there being no shares at all.
             pass
         
     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
hunk ./src/allmydata/storage/backends/das/core.py 141
         self.storage_index = storageindex
         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
+        self.incomingdir = os.path.join(sharedir, 'incoming') 
+        si_dir = storage_index_to_dir(storageindex)
+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 177
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
         self._data_offset = 0xc
 
+    def close(self):
+        fileutil.make_dirs(os.path.dirname(self.finalhome))
+        fileutil.rename(self.incominghome, self.finalhome)
+        try:
+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
+            os.rmdir(os.path.dirname(self.incominghome))
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+        
+    def stat(self):
+        return os.stat(self.finalhome)[stat.ST_SIZE]
+
     def get_shnum(self):
         return self.shnum
 
hunk ./src/allmydata/storage/immutable.py 7
 
 from zope.interface import implements
 from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+from allmydata.util import base32, log
 from allmydata.util.assertutil import precondition
 from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/immutable.py 44
     def remote_close(self):
         precondition(not self.closed)
         start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
+        self._sharefile.close()
+        # stat() must run before we drop our reference to the share file.
+        filelen = self._sharefile.stat()
         self._sharefile = None
         self.closed = True
         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
hunk ./src/allmydata/storage/immutable.py 49
 
-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
         self.ss.bucket_writer_closed(self, filelen)
         self.ss.add_latency("close", time.time() - start)
         self.ss.count("close")
hunk ./src/allmydata/storage/server.py 45
         self._active_writers = weakref.WeakKeyDictionary()
         self.backend = backend
         self.backend.setServiceParent(self)
+        self.backend.set_storage_server(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
         self.latencies = {"allocate": [], # immutable
hunk ./src/allmydata/storage/server.py 220
 
         for shnum in (sharenums - alreadygot):
             if (not limited) or (remaining_space >= max_space_per_bucket):
-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
-                self.backend.set_storage_server(self)
                 bw = self.backend.make_bucket_writer(storage_index, shnum,
                                                      max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
hunk ./src/allmydata/test/test_backends.py 117
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-
+    
+    @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 124
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
+                             mockget_available_space):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 148
 
         mockmkdir.side_effect = call_mkdir
 
+        def call_get_available_space(storedir, reserved_space):
+            self.failUnlessReallyEqual(storedir, tempdir)
+            return 1
+
+        mockget_available_space.side_effect = call_get_available_space
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 188
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
-        
+
+        # What happens when there's not enough space for the client's request?
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+
         # Now test the allocated_size method.
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
}
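The hunks above intercept `__builtin__.open` and hand the code under test an in-memory file object. A self-contained sketch of that MockFile pattern, assembled from the fragments spread across the hunks (sparse writes past the current end are NUL-padded, matching what the tests expect of a share file):

```python
class MockFile(object):
    """In-memory stand-in for an on-disk share file.

    Writes at a position past the current end are padded with NUL bytes,
    so the buffer always reflects what a real sparse file write would
    leave on disk.
    """
    def __init__(self):
        self.buffer = ''
        self.pos = 0

    def write(self, instring):
        begin = self.pos
        padlen = begin - len(self.buffer)
        if padlen > 0:
            self.buffer += '\x00' * padlen
        self.buffer = (self.buffer[:begin] + instring
                       + self.buffer[begin + len(instring):])
        self.pos = begin + len(instring)

    def seek(self, pos):
        self.pos = pos

    def tell(self):
        return self.pos

    def close(self):
        pass
```

Installed as `mockopen.side_effect`, this lets the test assert on `fobj.buffer` instead of touching the real filesystem.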
[checkpoint10
wilcoxjg@gmail.com**20110707172049
 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
] {
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
hunk ./src/allmydata/test/test_backends.py 25
+testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharefname = os.path.join(sharedirname, '0')
hunk ./src/allmydata/test/test_backends.py 37
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+        self.s = StorageServer(testnodeid, backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 99
         mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
hunk ./src/allmydata/test/test_backends.py 119
 
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-    
+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
+        
+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
     @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 129
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space):
+                             mockget_available_space, mockget_shares):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 139
         mocklistdir.side_effect = call_listdir
 
         def call_isdir(dirname):
+            #XXX Should there be any other tests here?
             self.failUnlessReallyEqual(dirname, sharedirname)
             return True
 
hunk ./src/allmydata/test/test_backends.py 159
 
         mockget_available_space.side_effect = call_get_available_space
 
+        mocktime.return_value = 0
+        class MockShare:
+            def __init__(self):
+                self.shnum = 1
+                
+            def add_or_renew_lease(elf, lease_info):
+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
+                
+
+        share = MockShare()
+        def call_get_shares(storageindex):
+            return [share] 
+
+        mockget_shares.side_effect = call_get_shares
+
         class MockFile:
             def __init__(self):
                 self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 199
             def tell(self):
                 return self.pos
 
-        mocktime.return_value = 0
 
         sharefile = MockFile()
         def call_open(fname, mode):
}
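checkpoint10 rewrites `share_file_data` in terms of the two lease secrets. The constant matches what `storage.immutable.ShareFile` in Tahoe-LAFS v1.8.2 writes for a one-byte share with one lease; a sketch reconstructing the same bytes with `struct` (the field names are my reading of the format, not taken from the patch):

```python
import struct

renew_secret = b'x' * 32
cancel_secret = b'y' * 32
share_data = b'a'

# header: three big-endian u32s -- version, share-data length, lease count
header = struct.pack('>LLL', 1, len(share_data), 1)

# one lease record: owner number, the two 32-byte secrets, and an
# expiration time of 31 days past epoch 0 (mocktime returns 0 in the test)
lease = (struct.pack('>L', 0) + renew_secret + cancel_secret
         + struct.pack('>L', 31 * 24 * 60 * 60))

share_file_data = header + share_data + lease
```

The trailing `'\x00(\xde\x80'` in the test constant is just 31*24*60*60 = 2678400 seconds packed big-endian.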
[jacp 11
wilcoxjg@gmail.com**20110708213919
 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
] {
hunk ./src/allmydata/storage/backends/das/core.py 144
         self.incomingdir = os.path.join(sharedir, 'incoming') 
         si_dir = storage_index_to_dir(storageindex)
         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        #XXX  self.fname and self.finalhome need to be resolved/merged.
         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
         if create:
             # touch the file, so later callers will see that we're working on
hunk ./src/allmydata/storage/backends/das/core.py 208
         pass
         
     def stat(self):
-        return os.stat(self.finalhome)[stat.ST_SIZE]
+        return os.stat(self.finalhome).st_size
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/immutable.py 44
     def remote_close(self):
         precondition(not self.closed)
         start = time.time()
+
         self._sharefile.close()
hunk ./src/allmydata/storage/immutable.py 46
+        filelen = self._sharefile.stat()
         self._sharefile = None
hunk ./src/allmydata/storage/immutable.py 48
+
         self.closed = True
         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
 
hunk ./src/allmydata/storage/immutable.py 52
-        filelen = self._sharefile.stat()
         self.ss.bucket_writer_closed(self, filelen)
         self.ss.add_latency("close", time.time() - start)
         self.ss.count("close")
hunk ./src/allmydata/storage/server.py 220
 
         for shnum in (sharenums - alreadygot):
             if (not limited) or (remaining_space >= max_space_per_bucket):
-                bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                                     max_space_per_bucket, lease_info, canary)
+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
                 self._active_writers[bw] = 1
                 if limited:
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+renew_secret  = 'x'*32
+cancel_secret = 'y'*32
 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
 
hunk ./src/allmydata/test/test_backends.py 27
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharefname = os.path.join(sharedirname, '0')
+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+shareincomingname = os.path.join(sharedirincomingname, '0')
+sharefname = os.path.join(sharedirfinalname, '0')
+
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
                      'override_lease_duration' : None,
hunk ./src/allmydata/test/test_backends.py 123
         mockopen.side_effect = call_open
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
-        
+
+    @mock.patch('allmydata.util.fileutil.rename')
+    @mock.patch('allmydata.util.fileutil.make_dirs')
+    @mock.patch('os.path.exists')
+    @mock.patch('os.stat')
     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
     @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 136
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space, mockget_shares):
+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
+                             mockmake_dirs, mockrename):
         """ Write a new share. """
 
         def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 141
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
 
         mocklistdir.side_effect = call_listdir
hunk ./src/allmydata/test/test_backends.py 148
 
         def call_isdir(dirname):
             #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
             return True
 
         mockisdir.side_effect = call_isdir
hunk ./src/allmydata/test/test_backends.py 154
 
         def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
                 self.Fail
             else:
                 return True
hunk ./src/allmydata/test/test_backends.py 208
                 return self.pos
 
 
-        sharefile = MockFile()
+        fobj = MockFile()
         def call_open(fname, mode):
             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
hunk ./src/allmydata/test/test_backends.py 211
-            return sharefile
+            return fobj
 
         mockopen.side_effect = call_open
 
hunk ./src/allmydata/test/test_backends.py 215
+        def call_make_dirs(dname):
+            self.failUnlessReallyEqual(dname, sharedirfinalname)
+            
+        mockmake_dirs.side_effect = call_make_dirs
+
+        def call_rename(src, dst):
+           self.failUnlessReallyEqual(src, shareincomingname)
+           self.failUnlessReallyEqual(dst, sharefname)
+            
+        mockrename.side_effect = call_rename
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+
+        mockexists.side_effect = call_exists
+
         # Now begin the test.
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 234
-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
+
+        bs[0].remote_close()
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 241
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
hunk ./src/allmydata/test/test_backends.py 244
-        spaceint = self.s.allocated_size()
-        self.failUnlessReallyEqual(spaceint, 1)
+        #self.failIf(mockexists.called, mockexists.call_args_list)
+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
+        #self.failIf(mockrename.called, mockrename.call_args_list)
+        #self.failIf(mockstat.called, mockstat.call_args_list)
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
}
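jacp 11 mocks `fileutil.make_dirs`, `fileutil.rename`, and `os.path.exists` to check that closing a bucket promotes the share from the incoming directory into its final home. A minimal sketch of the flow those mocks assert (the concrete paths and the plain `os.makedirs`/`os.rename` calls are illustrative stand-ins for the fileutil helpers):

```python
import os
import tempfile

base = tempfile.mkdtemp()
si_dir = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')
shareincomingname = os.path.join(base, 'shares', 'incoming', si_dir, '0')
sharefname = os.path.join(base, 'shares', si_dir, '0')

# while the bucket is open, the share lives under shares/incoming/
os.makedirs(os.path.dirname(shareincomingname))
with open(shareincomingname, 'wb') as f:
    f.write(b'\x00' * 85)

# on remote_close(): create the final directory, then rename into place
os.makedirs(os.path.dirname(sharefname))
os.rename(shareincomingname, sharefname)
```

After the rename, `os.path.exists` is true only for the final name, which is exactly what `call_rename`/`call_exists` above are checking.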
[checkpoint12 testing correct behavior with regard to incoming and final
wilcoxjg@gmail.com**20110710191915
 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
] {
hunk ./src/allmydata/storage/backends/das/core.py 74
         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
         self.lease_checker.setServiceParent(self)
 
+    def get_incoming(self, storageindex):
+        return set((1,))
+
     def get_available_space(self):
         if self.readonly:
             return 0
hunk ./src/allmydata/storage/server.py 77
         """Return a dict, indexed by category, that contains a dict of
         latency numbers for each category. If there are sufficient samples
         for unambiguous interpretation, each dict will contain the
-        following keys: mean, 01_0_percentile, 10_0_percentile,
+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
         99_0_percentile, 99_9_percentile.  If there are insufficient
         samples for a given percentile to be interpreted unambiguously
hunk ./src/allmydata/storage/server.py 120
 
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
-        # contains numeric values.
+        # contains numeric, or None values.
         stats = { 'storage_server.allocated': self.allocated_size(), }
         stats['storage_server.reserved_space'] = self.reserved_space
         for category,ld in self.get_latencies().items():
hunk ./src/allmydata/storage/server.py 185
         start = time.time()
         self.count("allocate")
         alreadygot = set()
+        incoming = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
 
         si_s = si_b2a(storage_index)
hunk ./src/allmydata/storage/server.py 219
             alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
 
-        for shnum in (sharenums - alreadygot):
+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
+        incoming = self.backend.get_incoming(storageindex)
+
+        for shnum in ((sharenums - alreadygot) - incoming):
             if (not limited) or (remaining_space >= max_space_per_bucket):
                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
                 bucketwriters[shnum] = bw
hunk ./src/allmydata/storage/server.py 229
                 self._active_writers[bw] = 1
                 if limited:
                     remaining_space -= max_space_per_bucket
-
-        #XXX We SHOULD DOCUMENT LATER.
+            else:
+                # Bummer: not enough space to accept this share.
+                pass
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 323
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
-    def get_leases(self, storage_index):
+    def remote_get_incoming(self, storageindex):
+        incoming_share_set = self.backend.get_incoming(storageindex)
+        return incoming_share_set
+
+    def get_leases(self, storageindex):
         """Provide an iterator that yields all of the leases attached to this
         bucket. Each lease is returned as a LeaseInfo instance.
 
hunk ./src/allmydata/storage/server.py 337
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_shares(storage_index).next()
+            shnum, filename = self._get_shares(storageindex).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
hunk ./src/allmydata/test/test_backends.py 182
 
         share = MockShare()
         def call_get_shares(storageindex):
-            return [share] 
+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
+            return []#share] 
 
         mockget_shares.side_effect = call_get_shares
 
hunk ./src/allmydata/test/test_backends.py 222
         mockmake_dirs.side_effect = call_make_dirs
 
         def call_rename(src, dst):
-           self.failUnlessReallyEqual(src, shareincomingname)
-           self.failUnlessReallyEqual(dst, sharefname)
+            self.failUnlessReallyEqual(src, shareincomingname)
+            self.failUnlessReallyEqual(dst, sharefname)
             
         mockrename.side_effect = call_rename
 
hunk ./src/allmydata/test/test_backends.py 233
         mockexists.side_effect = call_exists
 
         # Now begin the test.
+
+        # XXX (0) ???  Fail unless something is not properly set-up?
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
hunk ./src/allmydata/test/test_backends.py 236
+
+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
+        # with the same si, until BucketWriter.remote_close() has been called.
+        # self.failIf(bsa)
+
+        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 247
+        # XXX (4a) Inspect final and fail unless share 0 is there.
+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 253
 
+        #  If there's something in self.alreadygot prior to remote_close() then fail.
         bs[0].remote_close()
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 260
         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
-        #self.failIf(mockexists.called, mockexists.call_args_list)
+        # self.failIf(mockexists.called, mockexists.call_args_list)
         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
}
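checkpoint12 changes the allocation loop to skip shares that are already in flight: `(sharenums - alreadygot) - incoming`. A small illustration of that set arithmetic (the concrete share numbers are made up):

```python
sharenums = set([0, 1, 2, 3])   # shares the client asked for
alreadygot = set([1])           # already finalized on disk
incoming = set([2])             # currently being uploaded by another writer

# only shares that are neither finalized nor in flight get a BucketWriter
to_allocate = (sharenums - alreadygot) - incoming
```

With these inputs, only shares 0 and 3 are allocated new writers.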
[fix inconsistent naming of storage_index vs storageindex in storage/server.py
wilcoxjg@gmail.com**20110710195139
 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
] {
hunk ./src/allmydata/storage/server.py 220
             share.add_or_renew_lease(lease_info)
 
         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
-        incoming = self.backend.get_incoming(storageindex)
+        incoming = self.backend.get_incoming(storage_index)
 
         for shnum in ((sharenums - alreadygot) - incoming):
             if (not limited) or (remaining_space >= max_space_per_bucket):
hunk ./src/allmydata/storage/server.py 323
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
-    def remote_get_incoming(self, storageindex):
-        incoming_share_set = self.backend.get_incoming(storageindex)
+    def remote_get_incoming(self, storage_index):
+        incoming_share_set = self.backend.get_incoming(storage_index)
         return incoming_share_set
 
hunk ./src/allmydata/storage/server.py 327
-    def get_leases(self, storageindex):
+    def get_leases(self, storage_index):
         """Provide an iterator that yields all of the leases attached to this
         bucket. Each lease is returned as a LeaseInfo instance.
 
hunk ./src/allmydata/storage/server.py 337
         # since all shares get the same lease data, we just grab the leases
         # from the first share
         try:
-            shnum, filename = self._get_shares(storageindex).next()
+            shnum, filename = self._get_shares(storage_index).next()
             sf = ShareFile(filename)
             return sf.get_leases()
         except StopIteration:
replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
}
[adding comments to clarify what I'm about to do.
wilcoxjg@gmail.com**20110710220623
 Ignore-this: 44f97633c3eac1047660272e2308dd7c
] {
hunk ./src/allmydata/storage/backends/das/core.py 8
 
 import os, re, weakref, struct, time
 
-from foolscap.api import Referenceable
+#from foolscap.api import Referenceable
 from twisted.application import service
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 12
-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 219
             alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
 
-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
+        # fill incoming with all shares that are in flight; use a set
+        # operation, since there's no need to operate on individual pieces
         incoming = self.backend.get_incoming(storageindex)
 
         for shnum in ((sharenums - alreadygot) - incoming):
hunk ./src/allmydata/test/test_backends.py 245
         # with the same si, until BucketWriter.remote_close() has been called.
         # self.failIf(bsa)
 
-        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 246
-        # XXX (4a) Inspect final and fail unless share 0 is there.
-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 250
 
-        #  If there's something in self.alreadygot prior to remote_close() then fail.
+        # XXX (3) Inspect final and fail unless there's nothing there.
         bs[0].remote_close()
hunk ./src/allmydata/test/test_backends.py 252
+        # XXX (4a) Inspect final and fail unless share 0 is there.
+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
         # What happens when there's not enough space for the client's request?
         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
}
[branching back, no longer attempting to mock inside TestServerFSBackend
wilcoxjg@gmail.com**20110711190849
 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
] {
hunk ./src/allmydata/storage/backends/das/core.py 75
         self.lease_checker.setServiceParent(self)
 
     def get_incoming(self, storageindex):
-        return set((1,))
-
-    def get_available_space(self):
-        if self.readonly:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
+        """Return the set of incoming shnums."""
+        return set(os.listdir(self.incomingdir))
 
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 90
             # Commonly caused by there being no shares at all.
             pass
         
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
hunk ./src/allmydata/test/test_backends.py 27
 
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+basedir = os.path.join(tempdir, 'shares')
+baseincdir = os.path.join(basedir, 'incoming')
+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 shareincomingname = os.path.join(sharedirincomingname, '0')
 sharefname = os.path.join(sharedirfinalname, '0')
 
hunk ./src/allmydata/test/test_backends.py 142
                              mockmake_dirs, mockrename):
         """ Write a new share. """
 
-        def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-
-        mocklistdir.side_effect = call_listdir
-
-        def call_isdir(dirname):
-            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            return True
-
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
-                self.Fail
-            else:
-                return True
-
-        mockmkdir.side_effect = call_mkdir
-
-        def call_get_available_space(storedir, reserved_space):
-            self.failUnlessReallyEqual(storedir, tempdir)
-            return 1
-
-        mockget_available_space.side_effect = call_get_available_space
-
-        mocktime.return_value = 0
         class MockShare:
             def __init__(self):
                 self.shnum = 1
hunk ./src/allmydata/test/test_backends.py 152
                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-                
 
         share = MockShare()
hunk ./src/allmydata/test/test_backends.py 154
-        def call_get_shares(storageindex):
-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
-            return []#share] 
-
-        mockget_shares.side_effect = call_get_shares
 
         class MockFile:
             def __init__(self):
hunk ./src/allmydata/test/test_backends.py 176
             def tell(self):
                 return self.pos
 
-
         fobj = MockFile()
hunk ./src/allmydata/test/test_backends.py 177
+
+        directories = {}
+        def call_listdir(dirname):
+            if dirname not in directories:
+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
+            else:
+                return directories[dirname].get_contents()
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockDir:
+            def __init__(self, dirname):
+                self.name = dirname
+                self.contents = []
+    
+            def get_contents(self):
+                return self.contents
+
+        def call_isdir(dirname):
+            #XXX Should there be any other tests here?
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+                self.Fail
+            if dirname in directories:
+                raise OSError(17, "File exists: '%s'" % dirname) 
+                self.Fail
+            elif dirname not in directories:
+                directories[dirname] = MockDir(dirname)
+                return True
+
+        mockmkdir.side_effect = call_mkdir
+
+        def call_get_available_space(storedir, reserved_space):
+            self.failUnlessReallyEqual(storedir, tempdir)
+            return 1
+
+        mockget_available_space.side_effect = call_get_available_space
+
+        mocktime.return_value = 0
+        def call_get_shares(storageindex):
+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
+            return []#share] 
+
+        mockget_shares.side_effect = call_get_shares
+
         def call_open(fname, mode):
             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
             return fobj
}
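The new `get_incoming` returns `set(os.listdir(self.incomingdir))`, i.e. the raw directory listing. One wrinkle worth noting: `os.listdir` yields strings, while the allocation loop subtracts this set from integer sharenums. A sketch of listing one storage index's incoming directory and normalizing the names to integer shnums (the `int()` conversion and the per-storage-index path are my additions, not in the patch):

```python
import os
import tempfile

base = tempfile.mkdtemp()
si_incoming = os.path.join(base, 'incoming', 'or',
                           'orsxg5dtorxxeylhmvpws3temv4a')
os.makedirs(si_incoming)
for shnum in ('0', '2'):
    open(os.path.join(si_incoming, shnum), 'w').close()

# normalize the listing to integers so set subtraction against the
# integer sharenums passed to remote_allocate_buckets behaves as intended
incoming = set(int(name) for name in os.listdir(si_incoming))
```

Without the conversion, `sharenums - incoming` would never remove anything, since `0 != '0'`.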
[checkpoint12 TestServerFSBackend no longer mocks filesystem
wilcoxjg@gmail.com**20110711193357
 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
] {
hunk ./src/allmydata/storage/backends/das/core.py 23
      create_mutable_sharefile
 from allmydata.storage.immutable import BucketWriter, BucketReader
 from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
 
 from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 28
 
+# storage/
+# storage/shares/incoming
+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
hunk ./src/allmydata/test/test_backends.py 126
         testbackend = DASCore(tempdir, expiration_policy)
         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
 
-    @mock.patch('allmydata.util.fileutil.rename')
-    @mock.patch('allmydata.util.fileutil.make_dirs')
-    @mock.patch('os.path.exists')
-    @mock.patch('os.stat')
-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
-    @mock.patch('allmydata.util.fileutil.get_available_space')
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 127
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
-                             mockmake_dirs, mockrename):
+    def test_write_share(self, mocktime):
         """ Write a new share. """
 
         class MockShare:
hunk ./src/allmydata/test/test_backends.py 143
 
         share = MockShare()
 
-        class MockFile:
-            def __init__(self):
-                self.buffer = ''
-                self.pos = 0
-            def write(self, instring):
-                begin = self.pos
-                padlen = begin - len(self.buffer)
-                if padlen > 0:
-                    self.buffer += '\x00' * padlen
-                end = self.pos + len(instring)
-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
-                self.pos = end
-            def close(self):
-                pass
-            def seek(self, pos):
-                self.pos = pos
-            def read(self, numberbytes):
-                return self.buffer[self.pos:self.pos+numberbytes]
-            def tell(self):
-                return self.pos
-
-        fobj = MockFile()
-
-        directories = {}
-        def call_listdir(dirname):
-            if dirname not in directories:
-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-            else:
-                return directories[dirname].get_contents()
-
-        mocklistdir.side_effect = call_listdir
-
-        class MockDir:
-            def __init__(self, dirname):
-                self.name = dirname
-                self.contents = []
-    
-            def get_contents(self):
-                return self.contents
-
-        def call_isdir(dirname):
-            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
-            return True
-
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
-                self.fail("unexpected mkdir of %r with permissions %r" % (dirname, permissions))
-            if dirname in directories:
-                raise OSError(17, "File exists: '%s'" % dirname)
-            else:
-                directories[dirname] = MockDir(dirname)
-                return True
-
-        mockmkdir.side_effect = call_mkdir
-
-        def call_get_available_space(storedir, reserved_space):
-            self.failUnlessReallyEqual(storedir, tempdir)
-            return 1
-
-        mockget_available_space.side_effect = call_get_available_space
-
-        mocktime.return_value = 0
-        def call_get_shares(storageindex):
-            # XXX Return [] or [share], depending on which case of get_shares is under test.
-            return []
-
-        mockget_shares.side_effect = call_get_shares
-
-        def call_open(fname, mode):
-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
-            return fobj
-
-        mockopen.side_effect = call_open
-
-        def call_make_dirs(dname):
-            self.failUnlessReallyEqual(dname, sharedirfinalname)
-            
-        mockmake_dirs.side_effect = call_make_dirs
-
-        def call_rename(src, dst):
-            self.failUnlessReallyEqual(src, shareincomingname)
-            self.failUnlessReallyEqual(dst, sharefname)
-            
-        mockrename.side_effect = call_rename
-
-        def call_exists(fname):
-            self.failUnlessReallyEqual(fname, sharefname)
-
-        mockexists.side_effect = call_exists
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
}
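The layout comment moved into backends/das/core.py above pins down how $START is derived. As a sanity check, here is a minimal standalone sketch of `storage_index_to_dir` (the real one lives in allmydata.storage.common); it assumes `si_b2a` is lowercase, unpadded RFC 3548 base-32, which matches the `or/orsxg5dtorxxeylhmvpws3temv4a` paths the tests expect for the storage index 'teststorage_index'.

```python
import base64
import os

def si_b2a(storageindex):
    # Stand-in for allmydata.storage.common.si_b2a: lowercase,
    # unpadded RFC 3548 base-32 (an assumption for this sketch).
    return base64.b32encode(storageindex).rstrip(b'=').lower().decode('ascii')

def storage_index_to_dir(storageindex):
    # $START is the first 2 base-32 chars, i.e. the first 10 bits
    # of the storage index.
    sia = si_b2a(storageindex)
    return os.path.join(sia[:2], sia)

# The storage index used throughout test_backends.py:
print(storage_index_to_dir(b'teststorage_index'))
# -> or/orsxg5dtorxxeylhmvpws3temv4a
```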
[JACP
wilcoxjg@gmail.com**20110711194407
 Ignore-this: b54745de777c4bb58d68d708f010bbb
] {
hunk ./src/allmydata/storage/backends/das/core.py 86
 
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
-        return set(os.listdir(self.incomingdir))
+        try:
+            incominglist = os.listdir(self.incomingdir)
+            print "incominglist: ", incominglist
+            return set(incominglist)
+        except OSError:
+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
+            pass
 
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/server.py 17
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
 
-# storage/
-# storage/shares/incoming
-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-
 class StorageServer(service.MultiService, Referenceable):
     implements(RIStorageServer, IStatsProducer)
     name = 'storage'
}
[testing get incoming
wilcoxjg@gmail.com**20110711210224
 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
] {
hunk ./src/allmydata/storage/backends/das/core.py 87
     def get_incoming(self, storageindex):
         """Return the set of incoming shnums."""
         try:
-            incominglist = os.listdir(self.incomingdir)
+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
+            incominglist = os.listdir(incomingsharesdir)
             print "incominglist: ", incominglist
             return set(incominglist)
         except OSError:
hunk ./src/allmydata/storage/backends/das/core.py 92
-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
-            pass
-
+            # XXX I'd like to make this more specific: an OSError here usually means there are no shares at all.
+            return set()
+            
     def get_shares(self, storage_index):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
hunk ./src/allmydata/test/test_backends.py 149
         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 152
-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
         # with the same si, until BucketWriter.remote_close() has been called.
         # self.failIf(bsa)
}
[ImmutableShareFile does not know its StorageIndex
wilcoxjg@gmail.com**20110711211424
 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
] {
hunk ./src/allmydata/storage/backends/das/core.py 112
             return 0
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 155
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
}
[get_incoming correctly reports the 0 share after it has arrived
wilcoxjg@gmail.com**20110712025157
 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
] {
hunk ./src/allmydata/storage/backends/das/core.py 1
+import os, re, weakref, struct, time, stat
+
 from allmydata.interfaces import IStorageBackend
 from allmydata.storage.backends.base import Backend
 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
hunk ./src/allmydata/storage/backends/das/core.py 8
 from allmydata.util.assertutil import precondition
 
-import os, re, weakref, struct, time
-
 #from foolscap.api import Referenceable
 from twisted.application import service
 
hunk ./src/allmydata/storage/backends/das/core.py 89
         try:
             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
             incominglist = os.listdir(incomingsharesdir)
-            print "incominglist: ", incominglist
-            return set(incominglist)
+            incomingshnums = [int(x) for x in incominglist]
+            return set(incomingshnums)
         except OSError:
             # XXX I'd like to make this more specific: an OSError here usually means there are no shares at all.
             return set()
hunk ./src/allmydata/storage/backends/das/core.py 113
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 160
         max_size to be written to me. If create=True then max_size
         must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
-        self.shnum = shnum 
-        self.storage_index = storageindex
-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
         self._max_size = max_size
hunk ./src/allmydata/storage/backends/das/core.py 161
-        self.incomingdir = os.path.join(sharedir, 'incoming') 
-        si_dir = storage_index_to_dir(storageindex)
-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-        #XXX  self.fname and self.finalhome need to be resolve/merged.
-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
+        self.incominghome = incominghome
+        self.finalhome = finalhome
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 166
-            assert not os.path.exists(self.fname)
-            fileutil.make_dirs(os.path.dirname(self.fname))
-            f = open(self.fname, 'wb')
+            assert not os.path.exists(self.finalhome)
+            fileutil.make_dirs(os.path.dirname(self.incominghome))
+            f = open(self.incominghome, 'wb')
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 183
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
-            f = open(self.fname, 'rb')
-            filesize = os.path.getsize(self.fname)
+            f = open(self.finalhome, 'rb')
+            filesize = os.path.getsize(self.finalhome)
             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
             f.close()
             if version != 1:
hunk ./src/allmydata/storage/backends/das/core.py 189
                 msg = "sharefile %s had version %d but we wanted 1" % \
-                      (self.fname, version)
+                      (self.finalhome, version)
                 raise UnknownImmutableContainerVersionError(msg)
             self._num_leases = num_leases
             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/das/core.py 225
         pass
         
     def stat(self):
-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
+        return os.stat(self.finalhome)[stat.ST_SIZE]
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/das/core.py 232
 
     def unlink(self):
-        os.unlink(self.fname)
+        os.unlink(self.finalhome)
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/das/core.py 239
         # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.fname)
+        fsize = os.path.getsize(self.finalhome)
         actuallength = max(0, min(length, fsize-seekpos))
         if actuallength == 0:
             return ""
hunk ./src/allmydata/storage/backends/das/core.py 243
-        f = open(self.fname, 'rb')
+        f = open(self.finalhome, 'rb')
         f.seek(seekpos)
         return f.read(actuallength)
 
hunk ./src/allmydata/storage/backends/das/core.py 252
         precondition(offset >= 0, offset)
         if self._max_size is not None and offset+length > self._max_size:
             raise DataTooLargeError(self._max_size, offset, length)
-        f = open(self.fname, 'rb+')
+        f = open(self.incominghome, 'rb+')
         real_offset = self._data_offset+offset
         f.seek(real_offset)
         assert f.tell() == real_offset
hunk ./src/allmydata/storage/backends/das/core.py 279
 
     def get_leases(self):
         """Yields a LeaseInfo instance for all leases."""
-        f = open(self.fname, 'rb')
+        f = open(self.finalhome, 'rb')
         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
         f.seek(self._lease_offset)
         for i in range(num_leases):
hunk ./src/allmydata/storage/backends/das/core.py 288
                 yield LeaseInfo().from_immutable_data(data)
 
     def add_lease(self, lease_info):
-        f = open(self.fname, 'rb+')
+        f = open(self.incominghome, 'rb+')
         num_leases = self._read_num_leases(f)
         self._write_lease_record(f, num_leases, lease_info)
         self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/core.py 301
                 if new_expire_time > lease.expiration_time:
                     # yes
                     lease.expiration_time = new_expire_time
-                    f = open(self.fname, 'rb+')
+                    f = open(self.finalhome, 'rb+')
                     self._write_lease_record(f, i, lease)
                     f.close()
                 return
hunk ./src/allmydata/storage/backends/das/core.py 336
             # the same order as they were added, so that if we crash while
             # doing this, we won't lose any non-cancelled leases.
             leases = [l for l in leases if l] # remove the cancelled leases
-            f = open(self.fname, 'rb+')
+            f = open(self.finalhome, 'rb+')
             for i,lease in enumerate(leases):
                 self._write_lease_record(f, i, lease)
             self._write_num_leases(f, len(leases))
hunk ./src/allmydata/storage/backends/das/core.py 344
             f.close()
         space_freed = self.LEASE_SIZE * num_leases_removed
         if not len(leases):
-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
             self.unlink()
         return space_freed
hunk ./src/allmydata/test/test_backends.py 129
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
         """ Write a new share. """
-
-        class MockShare:
-            def __init__(self):
-                self.shnum = 1
-                
-            def add_or_renew_lease(elf, lease_info):
-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-
-        share = MockShare()
-
         # Now begin the test.
 
         # XXX (0) ???  Fail unless something is not properly set-up?
hunk ./src/allmydata/test/test_backends.py 143
         # self.failIf(bsa)
 
         bs[0].remote_write(0, 'a')
-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
         spaceint = self.s.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
 
hunk ./src/allmydata/test/test_backends.py 161
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
 
+    def test_handle_incoming(self):
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        
+        incomingset = self.s.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set((0,)))
+
+        bs[0].remote_close()
+        self.failUnlessReallyEqual(self.s.backend.get_incoming('teststorage_index'), set())
+
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 223
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
-
 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
     @mock.patch('time.time')
     @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 271
         DASCore('teststoredir', expiration_policy)
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
}
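The get_incoming fix above (integer shnums, an empty set when the directory is missing) can be exercised standalone. The following is a hypothetical free-function version of DASCore.get_incoming, written for Python 3 to keep the sketch self-contained; the real method takes the storage index and derives the share directory via storage_index_to_dir.

```python
import os
import tempfile

def get_incoming(incomingdir, si_dir):
    """Return the set of incoming shnums found under incomingdir/si_dir."""
    try:
        entries = os.listdir(os.path.join(incomingdir, si_dir))
        # Each directory entry is a share number; report them as ints.
        return set(int(name) for name in entries)
    except OSError:
        # A missing directory simply means no shares are incoming.
        return set()

root = tempfile.mkdtemp()
print(get_incoming(root, 'or'))   # no incoming dir yet -> set()

os.makedirs(os.path.join(root, 'or'))
open(os.path.join(root, 'or', '0'), 'w').close()
print(get_incoming(root, 'or'))   # the 0 share has arrived -> {0}
```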
[jacp14
wilcoxjg@gmail.com**20110712061211
 Ignore-this: 57b86958eceeef1442b21cca14798a0f
] {
hunk ./src/allmydata/storage/backends/das/core.py 95
             # XXX I'd like to make this more specific: an OSError here usually means there are no shares at all.
             return set()
             
-    def get_shares(self, storage_index):
+    def get_shares(self, storageindex):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 97
-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
         try:
             for f in os.listdir(finalstoragedir):
                 if NUM_RE.match(f):
hunk ./src/allmydata/storage/backends/das/core.py 102
                     filename = os.path.join(finalstoragedir, f)
-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
+                    yield ImmutableShare(filename, storageindex, f)
         except OSError:
             # Commonly caused by there being no shares at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 115
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
 
hunk ./src/allmydata/storage/backends/das/core.py 155
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
hunk ./src/allmydata/storage/backends/das/core.py 160
         precondition((max_size is not None) or (not create), max_size, create)
+        self.storageindex = storageindex
         self._max_size = max_size
         self.incominghome = incominghome
         self.finalhome = finalhome
hunk ./src/allmydata/storage/backends/das/core.py 164
+        self.shnum = shnum
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 212
             # their children to know when they should do the rmdir. This
             # approach is simpler, but relies on os.rmdir refusing to delete
             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
             os.rmdir(os.path.dirname(self.incominghome))
             # we also delete the grandparent (prefix) directory, .../ab ,
             # again to avoid leaving directories lying around. This might
hunk ./src/allmydata/storage/immutable.py 93
     def __init__(self, ss, share):
         self.ss = ss
         self._share_file = share
-        self.storage_index = share.storage_index
+        self.storageindex = share.storageindex
         self.shnum = share.shnum
 
     def __repr__(self):
hunk ./src/allmydata/storage/immutable.py 98
         return "<%s %s %s>" % (self.__class__.__name__,
-                               base32.b2a_l(self.storage_index[:8], 60),
+                               base32.b2a_l(self.storageindex[:8], 60),
                                self.shnum)
 
     def remote_read(self, offset, length):
hunk ./src/allmydata/storage/immutable.py 110
 
     def remote_advise_corrupt_share(self, reason):
         return self.ss.remote_advise_corrupt_share("immutable",
-                                                   self.storage_index,
+                                                   self.storageindex,
                                                    self.shnum,
                                                    reason)
hunk ./src/allmydata/test/test_backends.py 20
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
 # with share data == 'a'.
-renew_secret  = 'x'*32
-cancel_secret = 'y'*32
-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+shareversionnumber = '\x00\x00\x00\x01'
+sharedatalength = '\x00\x00\x00\x01'
+numberofleases = '\x00\x00\x00\x01'
+shareinputdata = 'a'
+ownernumber = '\x00\x00\x00\x00'
+renewsecret  = 'x'*32
+cancelsecret = 'y'*32
+expirationtime = '\x00(\xde\x80'
+nextlease = ''
+containerdata = shareversionnumber + sharedatalength + numberofleases
+client_data = shareinputdata + ownernumber + renewsecret + \
+    cancelsecret + expirationtime + nextlease
+share_data = containerdata + client_data
+
 
 testnodeid = 'testnodeidxxxxxxxxxx'
 tempdir = 'teststoredir'
hunk ./src/allmydata/test/test_backends.py 52
 
 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
     def setUp(self):
-        self.s = StorageServer(testnodeid, backend=NullCore())
+        self.ss = StorageServer(testnodeid, backend=NullCore())
 
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 62
         """ Write a new share. """
 
         # Now begin the test.
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
         self.failIf(mockisdir.called)
         self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 133
                 _assert(False, "The tester code doesn't recognize this case.")  
 
         mockopen.side_effect = call_open
-        testbackend = DASCore(tempdir, expiration_policy)
-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
+        self.backend = DASCore(tempdir, expiration_policy)
+        self.ss = StorageServer(testnodeid, self.backend)
+        self.ssinf = StorageServer(testnodeid, self.backend)
 
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
hunk ./src/allmydata/test/test_backends.py 142
         """ Write a new share. """
         # Now begin the test.
 
-        # XXX (0) ???  Fail unless something is not properly set-up?
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        mocktime.return_value = 0
+        # Inspect incoming and fail unless it's empty.
+        incomingset = self.ss.backend.get_incoming('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, set())
+        
+        # Among other things, populate incoming with the sharenum: 0.
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 150
-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
+        
+        # Attempt to create a second share writer with the same share.
+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
hunk ./src/allmydata/test/test_backends.py 156
-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
+        # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si, until BucketWriter.remote_close() has been called.
hunk ./src/allmydata/test/test_backends.py 158
-        # self.failIf(bsa)
+        self.failIf(bsa)
 
hunk ./src/allmydata/test/test_backends.py 160
+        # Write 'a' to shnum 0. Only tested together with close and read.
         bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 162
-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
-        spaceint = self.s.allocated_size()
+
+        # Test allocated size. 
+        spaceint = self.ss.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
 
         # XXX (3) Inspect final and fail unless there's nothing there.
hunk ./src/allmydata/test/test_backends.py 168
+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
         bs[0].remote_close()
         # XXX (4a) Inspect final and fail unless share 0 is there.
hunk ./src/allmydata/test/test_backends.py 171
+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        #contents = sharesinfinal[0].read_share_data(0,999)
+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
         # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 177
-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
 
         # Now test the allocated_size method.
         # self.failIf(mockexists.called, mockexists.call_args_list)
hunk ./src/allmydata/test/test_backends.py 185
         #self.failIf(mockrename.called, mockrename.call_args_list)
         #self.failIf(mockstat.called, mockstat.call_args_list)
 
-    def test_handle_incoming(self):
-        incomingset = self.s.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set())
-
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        
-        incomingset = self.s.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set((0,)))
-
-        bs[0].remote_close()
-        self.failUnlessReallyEqual(self.s.backend.get_incoming('teststorage_index'), set())
-
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 208
             self.failUnless('r' in mode, mode)
             self.failUnless('b' in mode, mode)
 
-            return StringIO(share_file_data)
+            return StringIO(share_data)
         mockopen.side_effect = call_open
 
hunk ./src/allmydata/test/test_backends.py 211
-        datalen = len(share_file_data)
+        datalen = len(share_data)
         def call_getsize(fname):
             self.failUnlessReallyEqual(fname, sharefname)
             return datalen
hunk ./src/allmydata/test/test_backends.py 223
         mockexists.side_effect = call_exists
 
         # Now begin the test.
-        bs = self.s.remote_get_buckets('teststorage_index')
+        bs = self.ss.remote_get_buckets('teststorage_index')
 
         self.failUnlessEqual(len(bs), 1)
hunk ./src/allmydata/test/test_backends.py 226
-        b = bs[0]
+        b = bs['0']
         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
hunk ./src/allmydata/test/test_backends.py 228
-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
         # If you try to read past the end you get as much data as is there.
hunk ./src/allmydata/test/test_backends.py 230
-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
         # If you start reading past the end of the file you get the empty string.
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
}
[jacp14 or so
wilcoxjg@gmail.com**20110713060346
 Ignore-this: 7026810f60879d65b525d450e43ff87a
] {
hunk ./src/allmydata/storage/backends/das/core.py 102
             for f in os.listdir(finalstoragedir):
                 if NUM_RE.match(f):
                     filename = os.path.join(finalstoragedir, f)
-                    yield ImmutableShare(filename, storageindex, f)
+                    yield ImmutableShare(filename, storageindex, int(f))
         except OSError:
             # Commonly caused by there being no shares at all.
             pass
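The `int(f)` fix above matters because share files are named by their decimal share number, but sharenums elsewhere in the server are integers, so the string must be converted before comparisons work. A standalone sketch of the enumeration loop (directory layout and `NUM_RE` as in the patch; the helper name is illustrative):

```python
import os
import re
import tempfile

NUM_RE = re.compile("^[0-9]+$")  # share files are named by their integer shnum

def list_shnums(finalstoragedir):
    """Yield integer share numbers for files named like '0', '1', ...
    Non-numeric entries are skipped; a missing directory means no shares."""
    try:
        for f in os.listdir(finalstoragedir):
            if NUM_RE.match(f):
                yield int(f)  # int(), so comparisons against integer shnums work
    except OSError:
        # Commonly caused by there being no shares at all.
        return

d = tempfile.mkdtemp()
for name in ("0", "7", "README"):
    open(os.path.join(d, name), "w").close()
print(sorted(list_shnums(d)))                          # -> [0, 7]
print(list(list_shnums(os.path.join(d, "missing"))))   # -> []
```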
hunk ./src/allmydata/storage/backends/null/core.py 25
     def set_storage_server(self, ss):
         self.ss = ss
 
+    def get_incoming(self, storageindex):
+        return set()
+
 class ImmutableShare:
     sharetype = "immutable"
 
hunk ./src/allmydata/storage/immutable.py 19
 
     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
-        self._max_size = max_size # don't allow the client to write more than this
+        self._max_size = max_size # don't allow the client to write more than this
+
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
         self.closed = False
hunk ./src/allmydata/test/test_backends.py 135
         mockopen.side_effect = call_open
         self.backend = DASCore(tempdir, expiration_policy)
         self.ss = StorageServer(testnodeid, self.backend)
-        self.ssinf = StorageServer(testnodeid, self.backend)
+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
 
     @mock.patch('time.time')
     def test_write_share(self, mocktime):
hunk ./src/allmydata/test/test_backends.py 161
         # with the same si, until BucketWriter.remote_close() has been called.
         self.failIf(bsa)
 
-        # Write 'a' to shnum 0. Only tested together with close and read.
-        bs[0].remote_write(0, 'a')
-
         # Test allocated size. 
         spaceint = self.ss.allocated_size()
         self.failUnlessReallyEqual(spaceint, 1)
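The `allocated_size` assertion above expects 1 byte after a single one-byte allocation. A toy model of that accounting (class names and attributes are hypothetical simplifications; the real server sums `allocated_size()` over its active BucketWriters):

```python
class BucketWriter(object):
    """Hypothetical writer that only tracks the space promised to it."""
    def __init__(self, max_size):
        self._max_size = max_size  # don't allow the client to write more than this

    def allocated_size(self):
        return self._max_size

class Server(object):
    def __init__(self):
        self._active_writers = {}  # writer -> None, mirroring the patch's dict

    def allocated_size(self):
        # Space promised to every writer that hasn't been closed yet.
        return sum(bw.allocated_size() for bw in self._active_writers)

ss = Server()
ss._active_writers[BucketWriter(1)] = None
print(ss.allocated_size())  # -> 1
```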
hunk ./src/allmydata/test/test_backends.py 165
 
-        # XXX (3) Inspect final and fail unless there's nothing there.
+        # Write 'a' to shnum 0. Only tested together with close and read.
+        bs[0].remote_write(0, 'a')
+        
+        # Preclose: Inspect final, failUnless nothing there.
         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
         bs[0].remote_close()
hunk ./src/allmydata/test/test_backends.py 171
-        # XXX (4a) Inspect final and fail unless share 0 is there.
-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
-        #contents = sharesinfinal[0].read_share_data(0,999)
-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
 
hunk ./src/allmydata/test/test_backends.py 172
-        # What happens when there's not enough space for the client's request?
-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # Postclose: (Omnibus) failUnless written data is in final.
+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        contents = sharesinfinal[0].read_share_data(0,73)
+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
 
hunk ./src/allmydata/test/test_backends.py 177
-        # Now test the allocated_size method.
-        # self.failIf(mockexists.called, mockexists.call_args_list)
-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
-        #self.failIf(mockrename.called, mockrename.call_args_list)
-        #self.failIf(mockstat.called, mockstat.call_args_list)
+        # Cover interior of for share in get_shares loop.
+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        
+    @mock.patch('time.time')
+    @mock.patch('allmydata.util.fileutil.get_available_space')
+    def test_out_of_space(self, mockget_available_space, mocktime):
+        mocktime.return_value = 0
+        
+        def call_get_available_space(dir, reserve):
+            return 0
+
+        mockget_available_space.side_effect = call_get_available_space
+        
+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
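The new `test_out_of_space` relies on patching `fileutil.get_available_space` so it reports zero free bytes. The same `mock.patch` / `side_effect` pattern can be shown standalone (function and module names here are illustrative stand-ins, not the real Tahoe-LAFS ones):

```python
from unittest import mock

def get_available_space(whichdir, reserved_space):
    # Stand-in for a statvfs()-based probe; the test never calls the real one.
    raise NotImplementedError

def allocate(size, whichdir):
    # Refuse the request when the probe reports less free space than asked for.
    return get_available_space(whichdir, 0) >= size

with mock.patch(__name__ + ".get_available_space") as mockgas:
    mockgas.side_effect = lambda whichdir, reserve: 0        # simulate a full disk
    full_disk = allocate(1, "teststoredir")
    mockgas.side_effect = lambda whichdir, reserve: 2 ** 20  # plenty of room
    roomy_disk = allocate(1, "teststoredir")

print(full_disk, roomy_disk)  # -> False True
```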
hunk ./src/allmydata/test/test_backends.py 234
         bs = self.ss.remote_get_buckets('teststorage_index')
 
         self.failUnlessEqual(len(bs), 1)
-        b = bs['0']
+        b = bs[0]
         # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
         # If you try to read past the end you get as much data as is there.
}
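Both versions of the read test pin down the same boundary behavior: a read that extends past the end returns only the data present, and a read that starts beyond the end returns the empty string. A minimal sketch of a reader with those semantics (a hypothetical stand-in, not the real BucketReader):

```python
class ShareReader(object):
    """Hypothetical stand-in for the bucket reader exercised in the tests."""
    def __init__(self, data):
        self._data = data

    def remote_read(self, offset, length):
        # Python slicing already clamps to the available data, which is
        # exactly the behavior the tests assert.
        return self._data[offset:offset + length]

datalen = 36
b = ShareReader(b"a" * datalen)
print(len(b.remote_read(0, datalen)))       # -> 36
print(len(b.remote_read(0, datalen + 20)))  # -> 36  (only what's there)
print(b.remote_read(datalen + 1, 3))        # -> b''  (start past the end)
```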
[temporary work-in-progress patch to be unrecorded
zooko@zooko.com**20110714003008
 Ignore-this: 39ecb812eca5abe04274c19897af5b45
 tidy up a few tests, work done in pair-programming with Zancas
] {
hunk ./src/allmydata/storage/backends/das/core.py 65
         self._clean_incomplete()
 
     def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
+        fileutil.rmtree(self.incomingdir)
         fileutil.make_dirs(self.incomingdir)
 
     def _setup_corruption_advisory(self):
hunk ./src/allmydata/storage/immutable.py 1
-import os, stat, struct, time
+import os, time
 
 from foolscap.api import Referenceable
 
hunk ./src/allmydata/storage/server.py 1
-import os, re, weakref, struct, time
+import os, weakref, struct, time
 
 from foolscap.api import Referenceable
 from twisted.application import service
hunk ./src/allmydata/storage/server.py 7
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.interfaces import RIStorageServer, IStatsProducer
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 313
         self.add_latency("get", time.time() - start)
         return bucketreaders
 
-    def remote_get_incoming(self, storageindex):
-        incoming_share_set = self.backend.get_incoming(storageindex)
-        return incoming_share_set
-
     def get_leases(self, storageindex):
         """Provide an iterator that yields all of the leases attached to this
         bucket. Each lease is returned as a LeaseInfo instance.
hunk ./src/allmydata/test/test_backends.py 3
 from twisted.trial import unittest
 
+from twisted.path.filepath import FilePath
+
 from StringIO import StringIO
 
 from allmydata.test.common_util import ReallyEqualMixin
hunk ./src/allmydata/test/test_backends.py 38
 
 
 testnodeid = 'testnodeidxxxxxxxxxx'
-tempdir = 'teststoredir'
-basedir = os.path.join(tempdir, 'shares')
+storedir = 'teststoredir'
+storedirfp = FilePath(storedir)
+basedir = os.path.join(storedir, 'shares')
 baseincdir = os.path.join(basedir, 'incoming')
 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
hunk ./src/allmydata/test/test_backends.py 53
                      'cutoff_date' : None,
                      'sharetypes' : None}
 
-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
+    """ NullBackend is just for testing and executable documentation, so
+    this test is actually a test of StorageServer in which we're using
+    NullBackend as helper code for the test, rather than a test of
+    NullBackend. """
     def setUp(self):
         self.ss = StorageServer(testnodeid, backend=NullCore())
 
hunk ./src/allmydata/test/test_backends.py 62
     @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
hunk ./src/allmydata/test/test_backends.py 69
     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
         """ Write a new share. """
 
-        # Now begin the test.
         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         bs[0].remote_write(0, 'a')
         self.failIf(mockisdir.called)
hunk ./src/allmydata/test/test_backends.py 83
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """ This tests whether a server instance can be constructed
-        with a filesystem backend. To pass the test, it has to use the
-        filesystem in only the prescribed ways. """
+        """ This tests whether a server instance can be constructed with a
+        filesystem backend. To pass the test, it mustn't use the filesystem
+        outside of its configured storedir. """
 
         def call_open(fname, mode):
hunk ./src/allmydata/test/test_backends.py 88
-            if fname == os.path.join(tempdir,'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+            if fname == os.path.join(storedir, 'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.history'):
                 return StringIO()
             else:
hunk ./src/allmydata/test/test_backends.py 95
-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
+                fnamefp = FilePath(fname)
+                self.failUnless(storedirfp in fnamefp.parents(),
+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         def call_isdir(fname):
hunk ./src/allmydata/test/test_backends.py 101
-            if fname == os.path.join(tempdir,'shares'):
+            if fname == os.path.join(storedir, 'shares'):
                 return True
hunk ./src/allmydata/test/test_backends.py 103
-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
                 return True
             else:
                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
hunk ./src/allmydata/test/test_backends.py 109
         mockisdir.side_effect = call_isdir
 
+        mocklistdir.return_value = []
+
         def call_mkdir(fname, mode):
hunk ./src/allmydata/test/test_backends.py 112
-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
             self.failUnlessEqual(0777, mode)
hunk ./src/allmydata/test/test_backends.py 113
-            if fname == tempdir:
-                return None
-            elif fname == os.path.join(tempdir,'shares'):
-                return None
-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
-                return None
-            else:
-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+            self.failUnlessIn(fname, 
+                              [storedir,
+                               os.path.join(storedir, 'shares'),
+                               os.path.join(storedir, 'shares', 'incoming')], 
+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
         mockmkdir.side_effect = call_mkdir
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 121
-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
hunk ./src/allmydata/test/test_backends.py 126
 
-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
+    """ This tests both the StorageServer and the FS backend. """
     @mock.patch('__builtin__.open')
     def setUp(self, mockopen):
         def call_open(fname, mode):
hunk ./src/allmydata/test/test_backends.py 131
-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+            if fname == os.path.join(storedir, 'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.history'):
                 return StringIO()
             else:
                 _assert(False, "The tester code doesn't recognize this case.")  
hunk ./src/allmydata/test/test_backends.py 141
 
         mockopen.side_effect = call_open
-        self.backend = DASCore(tempdir, expiration_policy)
+        self.backend = DASCore(storedir, expiration_policy)
         self.ss = StorageServer(testnodeid, self.backend)
hunk ./src/allmydata/test/test_backends.py 143
-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
 
     @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 147
-    def test_write_share(self, mocktime):
-        """ Write a new share. """
-        # Now begin the test.
+    def test_write_and_read_share(self, mocktime):
+        """
+        Write a new share, read it, and test the server's (and FS backend's)
+        handling of simultaneous and successive attempts to write the same
+        share.
+        """
 
         mocktime.return_value = 0
         # Inspect incoming and fail unless it's empty.
hunk ./src/allmydata/test/test_backends.py 159
         incomingset = self.ss.backend.get_incoming('teststorage_index')
         self.failUnlessReallyEqual(incomingset, set())
         
-        # Among other things, populate incoming with the sharenum: 0.
+        # Populate incoming with the sharenum: 0.
         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
         # Inspect incoming and fail unless the sharenum: 0 is listed there.
hunk ./src/allmydata/test/test_backends.py 163
-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
         
hunk ./src/allmydata/test/test_backends.py 165
-        # Attempt to create a second share writer with the same share.
+        # Attempt to create a second share writer with the same sharenum.
         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
         # Show that no sharewriter results from a remote_allocate_buckets
hunk ./src/allmydata/test/test_backends.py 169
-        # with the same si, until BucketWriter.remote_close() has been called.
+        # with the same si and sharenum, until BucketWriter.remote_close()
+        # has been called.
         self.failIf(bsa)
 
         # Test allocated size. 
hunk ./src/allmydata/test/test_backends.py 187
         # Postclose: (Omnibus) failUnless written data is in final.
         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
         contents = sharesinfinal[0].read_share_data(0,73)
-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
+        self.failUnlessReallyEqual(contents, client_data)
 
hunk ./src/allmydata/test/test_backends.py 189
-        # Cover interior of for share in get_shares loop.
-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        # Exercise the case that the share we're asking to allocate is
+        # already (completely) uploaded.
+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         
     @mock.patch('time.time')
     @mock.patch('allmydata.util.fileutil.get_available_space')
hunk ./src/allmydata/test/test_backends.py 210
     @mock.patch('os.path.getsize')
     @mock.patch('__builtin__.open')
     @mock.patch('os.listdir')
-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
         """ This tests whether the code correctly finds and reads
         shares written out by old (Tahoe-LAFS <= v1.8.2)
         servers. There is a similar test in test_download, but that one
hunk ./src/allmydata/test/test_backends.py 219
         StorageServer object. """
 
         def call_listdir(dirname):
-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
         mocklistdir.side_effect = call_listdir
hunk ./src/allmydata/test/test_backends.py 226
 
         def call_open(fname, mode):
             self.failUnlessReallyEqual(fname, sharefname)
-            self.failUnless('r' in mode, mode)
+            self.failUnlessEqual(mode[0], 'r', mode)
             self.failUnless('b' in mode, mode)
 
             return StringIO(share_data)
hunk ./src/allmydata/test/test_backends.py 268
         filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
-            if fname == os.path.join(tempdir,'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
+            if fname == os.path.join(storedir,'bucket_counter.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.state'):
+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
+            elif fname == os.path.join(storedir, 'lease_checker.history'):
                 return StringIO()
             else:
                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
hunk ./src/allmydata/test/test_backends.py 279
         mockopen.side_effect = call_open
 
         def call_isdir(fname):
-            if fname == os.path.join(tempdir,'shares'):
+            if fname == os.path.join(storedir,'shares'):
                 return True
hunk ./src/allmydata/test/test_backends.py 281
-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+            elif fname == os.path.join(storedir,'shares', 'incoming'):
                 return True
             else:
                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
hunk ./src/allmydata/test/test_backends.py 290
         def call_mkdir(fname, mode):
             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
             self.failUnlessEqual(0777, mode)
-            if fname == tempdir:
+            if fname == storedir:
                 return None
hunk ./src/allmydata/test/test_backends.py 292
-            elif fname == os.path.join(tempdir,'shares'):
+            elif fname == os.path.join(storedir,'shares'):
                 return None
hunk ./src/allmydata/test/test_backends.py 294
-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
+            elif fname == os.path.join(storedir,'shares', 'incoming'):
                 return None
             else:
                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
hunk ./src/allmydata/util/fileutil.py 5
 Futz with files like a pro.
 """
 
-import sys, exceptions, os, stat, tempfile, time, binascii
+import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
 from twisted.python import log
 
hunk ./src/allmydata/util/fileutil.py 186
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
-def rm_dir(dirname):
+def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
     already gone, do nothing and return without raising an exception.  If this
hunk ./src/allmydata/util/fileutil.py 205
             else:
                 remove(fullname)
         os.rmdir(dirname)
-    except Exception, le:
-        # Ignore "No such file or directory"
-        if (not isinstance(le, OSError)) or le.args[0] != 2:
+    except EnvironmentError, le:
+        # Ignore "No such file or directory", collect any other exception.
+        if le.args[0] != errno.ENOENT:
             excs.append(le)
hunk ./src/allmydata/util/fileutil.py 209
+    except Exception, le:
+        excs.append(le)
 
     # Okay, now we've recursively removed everything, ignoring any "No
     # such file or directory" errors, and collecting any other errors.
hunk ./src/allmydata/util/fileutil.py 222
             raise OSError, "Failed to remove dir for unknown reason."
         raise OSError, excs
 
+def rm_dir(dirname):
+    # Renamed to be like shutil.rmtree and unlike rmdir.
+    return rmtree(dirname)
 
 def remove_if_possible(f):
     try:
}
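The fileutil hunks in this patch rename `rm_dir` to `rmtree` (to parallel `shutil.rmtree`) and narrow the exception handling so only ENOENT ("No such file or directory") is ignored. A condensed sketch of that idempotent-removal idea (simplified: the real fileutil walks the tree itself and collects non-benign errors before re-raising):

```python
import errno
import os
import shutil
import tempfile

def rmtree(dirname):
    """Remove a directory tree; if it is already gone, silently succeed."""
    try:
        shutil.rmtree(dirname)
    except EnvironmentError as e:
        if e.errno != errno.ENOENT:  # only 'No such file or directory' is benign
            raise

d = tempfile.mkdtemp()
open(os.path.join(d, "0"), "w").close()
rmtree(d)                  # removes the tree
rmtree(d)                  # second call is a no-op, not an error
print(os.path.exists(d))   # -> False
```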
[work in progress intended to be unrecorded and never committed to trunk
zooko@zooko.com**20110714212139
 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
 switch from os.path.join to filepath
 incomplete refactoring of common "stay in your subtree" tester code into a superclass
 
] {
hunk ./src/allmydata/test/test_backends.py 3
 from twisted.trial import unittest
 
-from twisted.path.filepath import FilePath
+from twisted.python.filepath import FilePath
 
 from StringIO import StringIO
 
hunk ./src/allmydata/test/test_backends.py 10
 from allmydata.test.common_util import ReallyEqualMixin
 from allmydata.util.assertutil import _assert
 
-import mock, os
+import mock
 
 # This is the code that we're going to be testing.
 from allmydata.storage.server import StorageServer
hunk ./src/allmydata/test/test_backends.py 25
 shareversionnumber = '\x00\x00\x00\x01'
 sharedatalength = '\x00\x00\x00\x01'
 numberofleases = '\x00\x00\x00\x01'
+
 shareinputdata = 'a'
 ownernumber = '\x00\x00\x00\x00'
 renewsecret  = 'x'*32
hunk ./src/allmydata/test/test_backends.py 39
 
 
 testnodeid = 'testnodeidxxxxxxxxxx'
-storedir = 'teststoredir'
-storedirfp = FilePath(storedir)
-basedir = os.path.join(storedir, 'shares')
-baseincdir = os.path.join(basedir, 'incoming')
-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-shareincomingname = os.path.join(sharedirincomingname, '0')
-sharefname = os.path.join(sharedirfinalname, '0')
+
+class TestFilesMixin(unittest.TestCase):
+    def setUp(self):
+        self.storedir = FilePath('teststoredir')
+        self.basedir = self.storedir.child('shares')
+        self.baseincdir = self.basedir.child('incoming')
+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.shareincomingname = self.sharedirincomingname.child('0')
+        self.sharefname = self.sharedirfinalname.child('0')
+
+    def call_open(self, fname, mode):
+        fnamefp = FilePath(fname)
+        if fnamefp == self.storedir.child('bucket_counter.state'):
+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
+        elif fnamefp == self.storedir.child('lease_checker.state'):
+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
+        elif fnamefp == self.storedir.child('lease_checker.history'):
+            return StringIO()
+        else:
+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
+
+    def call_isdir(self, fname):
+        fnamefp = FilePath(fname)
+        if fnamefp == self.storedir.child('shares'):
+            return True
+        elif fnamefp == self.storedir.child('shares').child('incoming'):
+            return True
+        else:
+            self.failUnless(self.storedir in fnamefp.parents(),
+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
+
+    def call_mkdir(self, fname, mode):
+        self.failUnlessEqual(0777, mode)
+        fnamefp = FilePath(fname)
+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
+
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        mocklistdir.return_value = []
+        mockmkdir.side_effect = self.call_mkdir
+        mockisdir.side_effect = self.call_isdir
+        mockopen.side_effect = self.call_open
+        
+        test_func()
+        
+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
 
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
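`_help_test_stay_in_your_subtree` above stacks four `mock.patch` decorators, and it is easy to get the argument order wrong: the decorator *nearest* the function supplies the *first* mock argument (bottom-up). A two-decorator demonstration of that ordering:

```python
from unittest import mock
import os

@mock.patch("os.mkdir")    # outermost -> last mock argument
@mock.patch("os.listdir")  # innermost -> first mock argument
def probe(mocklistdir, mockmkdir):
    mocklistdir.return_value = []
    assert os.listdir("anywhere") == []  # the mock answers, not the OS
    os.mkdir("never-created")            # recorded by the mock, not executed
    return mocklistdir, mockmkdir

mocklistdir, mockmkdir = probe()
print(mockmkdir.call_args)  # -> call('never-created')
```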
hunk ./src/allmydata/test/test_backends.py 123
         self.failIf(mockopen.called)
         self.failIf(mockmkdir.called)
 
-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
+    def test_create_server_fs_backend(self):
         """ This tests whether a server instance can be constructed with a
         filesystem backend. To pass the test, it mustn't use the filesystem
         outside of its configured storedir. """
hunk ./src/allmydata/test/test_backends.py 129
 
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir, 'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                fnamefp = FilePath(fname)
-                self.failUnless(storedirfp in fnamefp.parents(),
-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
-        mockopen.side_effect = call_open
+        def _f():
+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 132
-        def call_isdir(fname):
-            if fname == os.path.join(storedir, 'shares'):
-                return True
-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
-                return True
-            else:
-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
-        mockisdir.side_effect = call_isdir
-
-        mocklistdir.return_value = []
-
-        def call_mkdir(fname, mode):
-            self.failUnlessEqual(0777, mode)
-            self.failUnlessIn(fname, 
-                              [storedir,
-                               os.path.join(storedir, 'shares'),
-                               os.path.join(storedir, 'shares', 'incoming')], 
-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
-        mockmkdir.side_effect = call_mkdir
-
-        # Now begin the test.
-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
-
-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+        self._help_test_stay_in_your_subtree(_f)
 
 
 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
}
[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
zooko@zooko.com**20110715191500
 Ignore-this: af33336789041800761e80510ea2f583
 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
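 The conversion mentioned here replaces string-based path joins with path objects. As a rough stdlib analogy (my illustration, not code from this patch -- the patch itself uses twisted.python.filepath.FilePath), pathlib composes paths the same way FilePath.child() does:

 ```python
 import os.path
 from pathlib import PurePosixPath

 # String-based manipulation, as in the code being converted away from:
 joined = os.path.join("teststoredir", "shares", "incoming")

 # Object-based manipulation; FilePath("teststoredir").child("shares")
 # .child("incoming") behaves analogously to this pathlib sketch:
 incoming = PurePosixPath("teststoredir") / "shares" / "incoming"

 assert str(incoming) == joined.replace(os.sep, "/")
 ```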
] {
hunk ./src/allmydata/storage/backends/das/core.py 59
                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
                         umid="0wZ27w", level=log.UNUSUAL)
 
-        self.sharedir = os.path.join(self.storedir, "shares")
-        fileutil.make_dirs(self.sharedir)
-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self.sharedir = self.storedir.child("shares")
+        fileutil.fp_make_dirs(self.sharedir)
+        self.incomingdir = self.sharedir.child('incoming')
         self._clean_incomplete()
 
     def _clean_incomplete(self):
hunk ./src/allmydata/storage/backends/das/core.py 65
-        fileutil.rmtree(self.incomingdir)
-        fileutil.make_dirs(self.incomingdir)
+        fileutil.fp_remove(self.incomingdir)
+        fileutil.fp_make_dirs(self.incomingdir)
 
     def _setup_corruption_advisory(self):
         # we don't actually create the corruption-advisory dir until necessary
hunk ./src/allmydata/storage/backends/das/core.py 70
-        self.corruption_advisory_dir = os.path.join(self.storedir,
-                                                    "corruption-advisories")
+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
 
     def _setup_bucket_counter(self):
hunk ./src/allmydata/storage/backends/das/core.py 73
-        statefname = os.path.join(self.storedir, "bucket_counter.state")
+        statefname = self.storedir.child("bucket_counter.state")
         self.bucket_counter = FSBucketCountingCrawler(statefname)
         self.bucket_counter.setServiceParent(self)
 
hunk ./src/allmydata/storage/backends/das/core.py 78
     def _setup_lease_checkerf(self, expiration_policy):
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        statefile = self.storedir.child("lease_checker.state")
+        historyfile = self.storedir.child("lease_checker.history")
         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
         self.lease_checker.setServiceParent(self)
 
hunk ./src/allmydata/storage/backends/das/core.py 83
-    def get_incoming(self, storageindex):
+    def get_incoming_shnums(self, storageindex):
         """Return the set of incoming shnums."""
         try:
hunk ./src/allmydata/storage/backends/das/core.py 86
-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
-            incominglist = os.listdir(incomingsharesdir)
-            incomingshnums = [int(x) for x in incominglist]
-            return set(incomingshnums)
-        except OSError:
-            # XXX I'd like to make this more specific. If there are no shares at all.
-            return set()
+            
+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
+            return frozenset(incomingshnums)
+        except UnlistableError:
+            # There is no shares directory at all.
+            return frozenset()
             
     def get_shares(self, storageindex):
         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 96
-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
         try:
hunk ./src/allmydata/storage/backends/das/core.py 98
-            for f in os.listdir(finalstoragedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(finalstoragedir, f)
-                    yield ImmutableShare(filename, storageindex, int(f))
-        except OSError:
-            # Commonly caused by there being no shares at all.
+            for f in finalstoragedir.listdir():
+                if NUM_RE.match(f.basename):
+                    yield ImmutableShare(f, storageindex, int(f))
+        except UnlistableError:
+            # There is no shares directory at all.
             pass
         
     def get_available_space(self):
hunk ./src/allmydata/storage/backends/das/core.py 149
 # then the value stored in this field will be the actual share data length
 # modulo 2**32.
 
-class ImmutableShare:
+class ImmutableShare(object):
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
hunk ./src/allmydata/storage/backends/das/core.py 166
         if create:
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
-            assert not os.path.exists(self.finalhome)
-            fileutil.make_dirs(os.path.dirname(self.incominghome))
+            assert not finalhome.exists()
+            fp_make_dirs(self.incominghome)
             f = open(self.incominghome, 'wb')
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
hunk ./src/allmydata/storage/backends/das/core.py 316
         except IndexError:
             self.add_lease(lease_info)
 
-
     def cancel_lease(self, cancel_secret):
         """Remove a lease with the given cancel_secret. If the last lease is
         cancelled, the file will be removed. Return the number of bytes that
hunk ./src/allmydata/storage/common.py 19
 def si_a2b(ascii_storageindex):
     return base32.a2b(ascii_storageindex)
 
-def storage_index_to_dir(storageindex):
+def storage_index_to_dir(startfp, storageindex):
     sia = si_b2a(storageindex)
     return os.path.join(sia[:2], sia)
hunk ./src/allmydata/storage/server.py 210
 
         # fill incoming with all shares that are incoming use a set operation
         # since there's no need to operate on individual pieces
-        incoming = self.backend.get_incoming(storageindex)
+        incoming = self.backend.get_incoming_shnums(storageindex)
 
         for shnum in ((sharenums - alreadygot) - incoming):
             if (not limited) or (remaining_space >= max_space_per_bucket):
hunk ./src/allmydata/test/test_backends.py 5
 
 from twisted.python.filepath import FilePath
 
+from allmydata.util.log import msg
+
 from StringIO import StringIO
 
 from allmydata.test.common_util import ReallyEqualMixin
hunk ./src/allmydata/test/test_backends.py 42
 
 testnodeid = 'testnodeidxxxxxxxxxx'
 
-class TestFilesMixin(unittest.TestCase):
-    def setUp(self):
-        self.storedir = FilePath('teststoredir')
-        self.basedir = self.storedir.child('shares')
-        self.baseincdir = self.basedir.child('incoming')
-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
-        self.shareincomingname = self.sharedirincomingname.child('0')
-        self.sharefname = self.sharedirfinalname.child('0')
+class MockStat:
+    def __init__(self):
+        self.st_mode = None
 
hunk ./src/allmydata/test/test_backends.py 46
+class MockFiles(unittest.TestCase):
+    """ I simulate a filesystem that the code under test can use. I flag the
+    code under test if it reads or writes outside of its prescribed
+    subtree. I simulate just the parts of the filesystem that the current
+    implementation of DAS backend needs. """
     def call_open(self, fname, mode):
         fnamefp = FilePath(fname)
hunk ./src/allmydata/test/test_backends.py 53
+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
+
         if fnamefp == self.storedir.child('bucket_counter.state'):
             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
         elif fnamefp == self.storedir.child('lease_checker.state'):
hunk ./src/allmydata/test/test_backends.py 61
             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
         elif fnamefp == self.storedir.child('lease_checker.history'):
+            # This is separated out from the else clause below just because
+            # we know this particular file is going to be used by the
+            # current implementation of DAS backend, and we might want to
+            # use this information in this test in the future...
             return StringIO()
         else:
hunk ./src/allmydata/test/test_backends.py 67
-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
+            # Anything else you open inside your subtree appears to be an
+            # empty file.
+            return StringIO()
 
     def call_isdir(self, fname):
         fnamefp = FilePath(fname)
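The call_open mock above makes every permitted open() inside the subtree look like an empty file while flagging anything outside the storage tree. A minimal Python 3 sketch of the same side_effect technique (the patch targets Python 2's __builtin__.open; builtins.open is the modern spelling):

```python
import io
from unittest import mock

def fake_open(fname, mode="r", *args, **kwargs):
    # Anything opened inside the storage tree appears to be empty;
    # anything outside it is a test failure.
    if not fname.startswith("teststoredir"):
        raise AssertionError("opened outside storage tree: %r" % fname)
    return io.StringIO()

with mock.patch("builtins.open", side_effect=fake_open):
    f = open("teststoredir/lease_checker.history")
    assert f.read() == ""
```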
hunk ./src/allmydata/test/test_backends.py 73
-        if fnamefp == self.storedir.child('shares'):
+        return fnamefp.isdir()
+
+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
+
+        # The first two cases are separate from the else clause below just
+        # because we know that the current implementation of the DAS backend
+        # inspects these two directories and we might want to make use of
+        # that information in the tests in the future...
+        if fnamefp == self.storedir.child('shares'):
             return True
hunk ./src/allmydata/test/test_backends.py 84
-        elif fnamefp == self.storedir.child('shares').child('incoming'):
+        elif fnamefp == self.storedir.child('shares').child('incoming'):
             return True
         else:
hunk ./src/allmydata/test/test_backends.py 87
-            self.failUnless(self.storedir in fnamefp.parents(),
-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
+            # Anything else you open inside your subtree appears to be a
+            # directory.
+            return True
 
     def call_mkdir(self, fname, mode):
hunk ./src/allmydata/test/test_backends.py 92
-        self.failUnlessEqual(0777, mode)
         fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
hunk ./src/allmydata/test/test_backends.py 95
+        self.failUnlessEqual(0777, mode)
 
hunk ./src/allmydata/test/test_backends.py 97
+    def call_listdir(self, fname):
+        fnamefp = FilePath(fname)
+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
 
hunk ./src/allmydata/test/test_backends.py 102
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        mocklistdir.return_value = []
+    def call_stat(self, fname):
+        fnamefp = FilePath(fname)
+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
+
+        msg("%s.call_stat(%s)" % (self, fname,))
+        mstat = MockStat()
+        mstat.st_mode = 16893 # a directory
+        return mstat
+
+    def setUp(self):
+        msg( "%s.setUp()" % (self,))
+        self.storedir = FilePath('teststoredir')
+        self.basedir = self.storedir.child('shares')
+        self.baseincdir = self.basedir.child('incoming')
+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.shareincomingname = self.sharedirincomingname.child('0')
+        self.sharefname = self.sharedirfinalname.child('0')
+
+        self.mocklistdirp = mock.patch('os.listdir')
+        mocklistdir = self.mocklistdirp.__enter__()
+        mocklistdir.side_effect = self.call_listdir
+
+        self.mockmkdirp = mock.patch('os.mkdir')
+        mockmkdir = self.mockmkdirp.__enter__()
         mockmkdir.side_effect = self.call_mkdir
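The MockStat above hard-codes st_mode 16893; that magic number is just the directory type bit plus 0775 permissions, which the stat module can express symbolically (a small aside, not code from the patch):

```python
import stat

# 16893 == S_IFDIR | 0o775: a directory with rwxrwxr-x permissions,
# which is why call_stat() can stand in for any directory in the tree.
assert stat.S_IFDIR | 0o775 == 16893
assert stat.S_ISDIR(16893)
assert stat.filemode(16893) == "drwxrwxr-x"
```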
hunk ./src/allmydata/test/test_backends.py 129
+
+        self.mockisdirp = mock.patch('os.path.isdir')
+        mockisdir = self.mockisdirp.__enter__()
         mockisdir.side_effect = self.call_isdir
hunk ./src/allmydata/test/test_backends.py 133
+
+        self.mockopenp = mock.patch('__builtin__.open')
+        mockopen = self.mockopenp.__enter__()
         mockopen.side_effect = self.call_open
hunk ./src/allmydata/test/test_backends.py 137
-        mocklistdir.return_value = []
-        
-        test_func()
-        
-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
+        self.mockstatp = mock.patch('os.stat')
+        mockstat = self.mockstatp.__enter__()
+        mockstat.side_effect = self.call_stat
+
+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
+        mockfpstat = self.mockfpstatp.__enter__()
+        mockfpstat.side_effect = self.call_stat
+
+    def tearDown(self):
+        msg( "%s.tearDown()" % (self,))
+        self.mockfpstatp.__exit__()
+        self.mockstatp.__exit__()
+        self.mockopenp.__exit__()
+        self.mockisdirp.__exit__()
+        self.mockmkdirp.__exit__()
+        self.mocklistdirp.__exit__()
 
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
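The setUp above enters each mock.patch patcher with __enter__ and must carefully unwind them all in tearDown. A hedged alternative shape (my sketch, not the patch's code) uses patcher.start() with addCleanup(), which unwinds automatically even if setUp fails partway through:

```python
import os
import unittest
from unittest import mock

class MockFilesSketch(unittest.TestCase):
    def setUp(self):
        # addCleanup runs even if a later patcher in setUp raises,
        # so no hand-written tearDown is needed.
        for target, effect in [("os.listdir", lambda d: []),
                               ("os.path.isdir", lambda d: True)]:
            patcher = mock.patch(target, side_effect=effect)
            patcher.start()
            self.addCleanup(patcher.stop)

    def test_mocks_active(self):
        self.assertEqual(os.listdir("anywhere"), [])
        self.assertTrue(os.path.isdir("anywhere"))
```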
hunk ./src/allmydata/test/test_backends.py 184
         self.failIf(mockopen.called)
         self.failIf(mockmkdir.called)
 
-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
+class TestServerConstruction(MockFiles, ReallyEqualMixin):
     def test_create_server_fs_backend(self):
         """ This tests whether a server instance can be constructed with a
         filesystem backend. To pass the test, it mustn't use the filesystem
hunk ./src/allmydata/test/test_backends.py 190
         outside of its configured storedir. """
 
-        def _f():
-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
 
hunk ./src/allmydata/test/test_backends.py 192
-        self._help_test_stay_in_your_subtree(_f)
-
-
-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
-    """ This tests both the StorageServer xyz """
-    @mock.patch('__builtin__.open')
-    def setUp(self, mockopen):
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir, 'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                _assert(False, "The tester code doesn't recognize this case.")  
-
-        mockopen.side_effect = call_open
-        self.backend = DASCore(storedir, expiration_policy)
-        self.ss = StorageServer(testnodeid, self.backend)
-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
+    """ This tests both the StorageServer and the DAS backend together. """
+    def setUp(self):
+        MockFiles.setUp(self)
+        try:
+            self.backend = DASCore(self.storedir, expiration_policy)
+            self.ss = StorageServer(testnodeid, self.backend)
+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+        except:
+            MockFiles.tearDown(self)
+            raise
 
     @mock.patch('time.time')
     def test_write_and_read_share(self, mocktime):
hunk ./src/allmydata/util/fileutil.py 8
 import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
 from twisted.python import log
+from twisted.python.filepath import UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 187
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
+def fp_make_dirs(dirfp):
+    """
+    An idempotent version of FilePath.makedirs().  If the dir already
+    exists, do nothing and return without raising an exception.  If this
+    call creates the dir, return without raising an exception.  If there is
+    an error that prevents creation or if the directory gets deleted after
+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
+    exists, raise an exception.
+    """
+    log.msg( "xxx 0 %s" % (dirfp,))
+    tx = None
+    try:
+        dirfp.makedirs()
+    except OSError, x:
+        tx = x
+
+    if not dirfp.isdir():
+        if tx:
+            raise tx
+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
+
 def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
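fp_make_dirs above gives FilePath the same contract as the existing make_dirs: succeed if and only if the directory exists afterwards, regardless of who created it. The same idempotent pattern in stdlib terms (illustrative names, not the patch's code):

```python
import os
import tempfile

def make_dirs_idempotent(dirname):
    # Succeed if the directory exists afterwards, whether or not
    # this call was the one that created it.
    try:
        os.makedirs(dirname)
    except OSError:
        pass
    if not os.path.isdir(dirname):
        raise IOError("could not create directory: %s" % dirname)

target = os.path.join(tempfile.mkdtemp(), "a", "b")
make_dirs_idempotent(target)
make_dirs_idempotent(target)  # second call is a harmless no-op
assert os.path.isdir(target)
```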
hunk ./src/allmydata/util/fileutil.py 244
             raise OSError, "Failed to remove dir for unknown reason."
         raise OSError, excs
 
+def fp_remove(dirfp):
+    try:
+        dirfp.remove()
+    except UnlistableError, e:
+        if e.originalException.errno != errno.ENOENT:
+            raise
+
 def rm_dir(dirname):
     # Renamed to be like shutil.rmtree and unlike rmdir.
     return rmtree(dirname)
}
[another temporary patch for sharing work-in-progress
zooko@zooko.com**20110720055918
 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units as much as possible...)
 
] {
hunk ./src/allmydata/storage/backends/das/core.py 5
 
 from allmydata.interfaces import IStorageBackend
 from allmydata.storage.backends.base import Backend
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
 from allmydata.util.assertutil import precondition
 
 #from foolscap.api import Referenceable
hunk ./src/allmydata/storage/backends/das/core.py 10
 from twisted.application import service
+from twisted.python.filepath import UnlistableError
 
 from zope.interface import implements
 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
hunk ./src/allmydata/storage/backends/das/core.py 17
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/backends/das/core.py 41
 # $SHARENUM matches this regex:
 NUM_RE=re.compile("^[0-9]+$")
 
+def is_num(fp):
+    return NUM_RE.match(fp.basename())
+
 class DASCore(Backend):
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
hunk ./src/allmydata/storage/backends/das/core.py 58
         self.storedir = storedir
         self.readonly = readonly
         self.reserved_space = int(reserved_space)
-        if self.reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umid="0wZ27w", level=log.UNUSUAL)
-
         self.sharedir = self.storedir.child("shares")
         fileutil.fp_make_dirs(self.sharedir)
         self.incomingdir = self.sharedir.child('incoming')
hunk ./src/allmydata/storage/backends/das/core.py 62
         self._clean_incomplete()
+        if self.reserved_space and (self.get_available_space() is None):
+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                    umid="0wZ27w", level=log.UNUSUAL)
+
 
     def _clean_incomplete(self):
         fileutil.fp_remove(self.incomingdir)
hunk ./src/allmydata/storage/backends/das/core.py 87
         self.lease_checker.setServiceParent(self)
 
     def get_incoming_shnums(self, storageindex):
-        """Return the set of incoming shnums."""
+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
+        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
         try:
hunk ./src/allmydata/storage/backends/das/core.py 90
-            
-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
-            return frozenset(incomingshnums)
+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
+            shnums = [ int(fp.basename()) for fp in childfps ]
+            return frozenset(shnums)
         except UnlistableError:
             # There is no shares directory at all.
             return frozenset()
hunk ./src/allmydata/storage/backends/das/core.py 98
             
     def get_shares(self, storageindex):
-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
+        """ Generate ImmutableShare objects for shares we have for this
+        storageindex. ("Shares we have" means completed ones, excluding
+        incoming ones.)"""
         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
         try:
hunk ./src/allmydata/storage/backends/das/core.py 103
-            for f in finalstoragedir.listdir():
-                if NUM_RE.match(f.basename):
-                    yield ImmutableShare(f, storageindex, int(f))
+            for fp in finalstoragedir.children():
+                if is_num(fp):
+                    yield ImmutableShare(fp, storageindex)
         except UnlistableError:
             # There is no shares directory at all.
             pass
hunk ./src/allmydata/storage/backends/das/core.py 116
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
hunk ./src/allmydata/storage/backends/das/expirer.py 50
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, statefile, historyfile, expiration_policy):
-        self.historyfile = historyfile
+    def __init__(self, statefile, historyfp, expiration_policy):
+        self.historyfp = historyfp
         self.expiration_enabled = expiration_policy['enabled']
         self.mode = expiration_policy['mode']
         self.override_lease_duration = None
hunk ./src/allmydata/storage/backends/das/expirer.py 80
             self.state["cycle-to-date"].setdefault(k, so_far[k])
 
         # initialize history
-        if not os.path.exists(self.historyfile):
+        if not self.historyfp.exists():
             history = {} # cyclenum -> dict
hunk ./src/allmydata/storage/backends/das/expirer.py 82
-            f = open(self.historyfile, "wb")
-            pickle.dump(history, f)
-            f.close()
+            self.historyfp.setContent(pickle.dumps(history))
 
     def create_empty_cycle_dict(self):
         recovered = self.create_empty_recovered_dict()
hunk ./src/allmydata/storage/backends/das/expirer.py 305
         # copy() needs to become a deepcopy
         h["space-recovered"] = s["space-recovered"].copy()
 
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
         history[cycle] = h
         while len(history) > 10:
             oldcycles = sorted(history.keys())
hunk ./src/allmydata/storage/backends/das/expirer.py 310
             del history[oldcycles[0]]
-        f = open(self.historyfile, "wb")
-        pickle.dump(history, f)
-        f.close()
+        self.historyfp.setContent(pickle.dumps(history))
 
     def get_state(self):
         """In addition to the crawler state described in
hunk ./src/allmydata/storage/backends/das/expirer.py 379
         progress = self.get_progress()
 
         state = ShareCrawler.get_state(self) # does a shallow copy
-        history = pickle.load(open(self.historyfile, "rb"))
+        history = pickle.loads(self.historyfp.getContent())
         state["history"] = history
 
         if not progress["cycle-in-progress"]:
hunk ./src/allmydata/storage/common.py 19
 def si_a2b(ascii_storageindex):
     return base32.a2b(ascii_storageindex)
 
-def storage_index_to_dir(startfp, storageindex):
+def si_dir(startfp, storageindex):
     sia = si_b2a(storageindex)
hunk ./src/allmydata/storage/common.py 21
-    return os.path.join(sia[:2], sia)
+    return startfp.child(sia[:2]).child(sia)
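si_dir above builds Tahoe's two-level share layout: a two-character prefix directory followed by the full base32-encoded storage index. A hedged stdlib approximation (Tahoe's si_b2a produces lowercase, unpadded base32; base64.b32encode is close enough to show the shape):

```python
import base64

def si_dir_parts(storageindex):
    # Approximate si_b2a: lowercase base32 without '=' padding.
    sia = base64.b32encode(storageindex).decode("ascii").rstrip("=").lower()
    # The prefix directory is the first two characters of the index,
    # which spreads shares across many small directories.
    return sia[:2], sia

prefix, full = si_dir_parts(b"\x00" * 16)
assert (prefix, len(full)) == ("aa", 26)
```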
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, statefname, allowed_cpu_percentage=None):
+    def __init__(self, statefp, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.statefname = statefname
+        self.statefp = statefp
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
         #                            of the last bucket to be processed, or
         #                            None if we are sleeping between cycles
         try:
-            f = open(self.statefname, "rb")
-            state = pickle.load(f)
-            f.close()
+            state = pickle.loads(self.statefp.getContent())
         except EnvironmentError:
             state = {"version": 1,
                      "last-cycle-finished": None,
hunk ./src/allmydata/storage/crawler.py 228
         else:
             last_complete_prefix = self.prefixes[lcpi]
         self.state["last-complete-prefix"] = last_complete_prefix
-        tmpfile = self.statefname + ".tmp"
-        f = open(tmpfile, "wb")
-        pickle.dump(self.state, f)
-        f.close()
-        fileutil.move_into_place(tmpfile, self.statefname)
+        self.statefp.setContent(pickle.dumps(self.state))
 
     def startService(self):
         # arrange things to look like we were just sleeping, so
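The hunk above replaces an explicit write-to-".tmp"-then-move dance with statefp.setContent(); Twisted's setContent() itself writes via a temporary sibling file and renames it into place, preserving the crash-safety property. For reference, the stdlib version of that pattern (illustrative function name, not the patch's code):

```python
import os
import pickle
import tempfile

def save_state_atomically(state, statefname):
    # Write to a temporary file in the same directory, then rename it
    # over the target, so readers never observe a half-written state.
    dirname = os.path.dirname(statefname) or "."
    fd, tmpname = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmpname, statefname)  # atomic on POSIX

statefname = os.path.join(tempfile.mkdtemp(), "crawler.state")
save_state_atomically({"version": 1}, statefname)
with open(statefname, "rb") as f:
    assert pickle.load(f) == {"version": 1}
```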
hunk ./src/allmydata/storage/crawler.py 440
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, statefname, num_sample_prefixes=1):
-        FSShareCrawler.__init__(self, statefname)
+    def __init__(self, statefp, num_sample_prefixes=1):
+        FSShareCrawler.__init__(self, statefp)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/server.py 11
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.common import si_b2a, si_a2b, si_dir
+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/server.py 173
         # to a particular owner.
         start = time.time()
         self.count("allocate")
-        alreadygot = set()
         incoming = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
 
hunk ./src/allmydata/storage/server.py 199
             remaining_space -= self.allocated_size()
         # self.readonly_storage causes remaining_space <= 0
 
-        # fill alreadygot with all shares that we have, not just the ones
+        # Fill alreadygot with all shares that we have, not just the ones
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
hunk ./src/allmydata/storage/server.py 202
-        # file, they'll want us to hold leases for this file.
+        # file, they'll want us to hold leases for all the shares of it.
+        alreadygot = set()
         for share in self.backend.get_shares(storageindex):
hunk ./src/allmydata/storage/server.py 205
-            alreadygot.add(share.shnum)
             share.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 206
+            alreadygot.add(share.shnum)
 
hunk ./src/allmydata/storage/server.py 208
-        # fill incoming with all shares that are incoming use a set operation
-        # since there's no need to operate on individual pieces
+        # all share numbers that are incoming
         incoming = self.backend.get_incoming_shnums(storageindex)
 
         for shnum in ((sharenums - alreadygot) - incoming):
hunk ./src/allmydata/storage/server.py 282
             total_space_freed += sf.cancel_lease(cancel_secret)
 
         if found_buckets:
-            storagedir = os.path.join(self.sharedir,
-                                      storage_index_to_dir(storageindex))
-            if not os.listdir(storagedir):
-                os.rmdir(storagedir)
+            storagedir = si_dir(self.sharedir, storageindex)
+            fp_rmdir_if_empty(storagedir)
 
         if self.stats_provider:
             self.stats_provider.count('storage_server.bytes_freed',
hunk ./src/allmydata/test/test_backends.py 52
     subtree. I simulate just the parts of the filesystem that the current
     implementation of DAS backend needs. """
     def call_open(self, fname, mode):
+        assert isinstance(fname, basestring), fname
         fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
hunk ./src/allmydata/test/test_backends.py 104
                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
 
     def call_stat(self, fname):
+        assert isinstance(fname, basestring), fname
         fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
hunk ./src/allmydata/test/test_backends.py 217
 
         mocktime.return_value = 0
         # Inspect incoming and fail unless it's empty.
-        incomingset = self.ss.backend.get_incoming('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, set())
+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
+        self.failUnlessReallyEqual(incomingset, frozenset())
         
         # Populate incoming with the sharenum: 0.
hunk ./src/allmydata/test/test_backends.py 221
-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
         # Inspect incoming and fail unless the sharenum: 0 is listed there.
hunk ./src/allmydata/test/test_backends.py 224
-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
         
         # Attempt to create a second share writer with the same sharenum.
hunk ./src/allmydata/test/test_backends.py 227
-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
         # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si and sharenum, until BucketWriter.remote_close()
hunk ./src/allmydata/test/test_backends.py 280
         StorageServer object. """
 
         def call_listdir(dirname):
+            precondition(isinstance(dirname, basestring), dirname)
             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
             return ['0']
 
hunk ./src/allmydata/test/test_backends.py 287
         mocklistdir.side_effect = call_listdir
 
         def call_open(fname, mode):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             self.failUnlessEqual(mode[0], 'r', mode)
             self.failUnless('b' in mode, mode)
hunk ./src/allmydata/test/test_backends.py 297
 
         datalen = len(share_data)
         def call_getsize(fname):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             return datalen
         mockgetsize.side_effect = call_getsize
hunk ./src/allmydata/test/test_backends.py 303
 
         def call_exists(fname):
+            precondition(isinstance(fname, basestring), fname)
             self.failUnlessReallyEqual(fname, sharefname)
             return True
         mockexists.side_effect = call_exists
hunk ./src/allmydata/test/test_backends.py 321
         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
 
 
-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
-    @mock.patch('time.time')
-    @mock.patch('os.mkdir')
-    @mock.patch('__builtin__.open')
-    @mock.patch('os.listdir')
-    @mock.patch('os.path.isdir')
-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
+    def test_create_fs_backend(self):
         """ This tests whether a file system backend instance can be
         constructed. To pass the test, it has to use the
         filesystem in only the prescribed ways. """
hunk ./src/allmydata/test/test_backends.py 327
 
-        def call_open(fname, mode):
-            if fname == os.path.join(storedir,'bucket_counter.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.state'):
-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
-            elif fname == os.path.join(storedir, 'lease_checker.history'):
-                return StringIO()
-            else:
-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
-        mockopen.side_effect = call_open
-
-        def call_isdir(fname):
-            if fname == os.path.join(storedir,'shares'):
-                return True
-            elif fname == os.path.join(storedir,'shares', 'incoming'):
-                return True
-            else:
-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
-        mockisdir.side_effect = call_isdir
-
-        def call_mkdir(fname, mode):
-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
-            self.failUnlessEqual(0777, mode)
-            if fname == storedir:
-                return None
-            elif fname == os.path.join(storedir,'shares'):
-                return None
-            elif fname == os.path.join(storedir,'shares', 'incoming'):
-                return None
-            else:
-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
-        mockmkdir.side_effect = call_mkdir
-
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 328
-        DASCore('teststoredir', expiration_policy)
-
-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
-
+        DASCore(self.storedir, expiration_policy)
hunk ./src/allmydata/util/fileutil.py 7
 
 import errno, sys, exceptions, os, stat, tempfile, time, binascii
 
+from allmydata.util.assertutil import precondition
+
 from twisted.python import log
hunk ./src/allmydata/util/fileutil.py 10
-from twisted.python.filepath import UnlistableError
+from twisted.python.filepath import FilePath, UnlistableError
 
 from pycryptopp.cipher.aes import AES
 
hunk ./src/allmydata/util/fileutil.py 210
             raise tx
         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
 
+def fp_rmdir_if_empty(dirfp):
+    """ Remove the directory if it is empty. """
+    try:
+        os.rmdir(dirfp.path)
+    except OSError, e:
+        if e.errno != errno.ENOTEMPTY:
+            raise
+    else:
+        dirfp.changed()
+
 def rmtree(dirname):
     """
     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
hunk ./src/allmydata/util/fileutil.py 257
         raise OSError, excs
 
 def fp_remove(dirfp):
+    """
+    An idempotent version of shutil.rmtree().  If the dir is already gone,
+    do nothing and return without raising an exception.  If this call
+    removes the dir, return without raising an exception.  If there is an
+    error that prevents removal or if the directory gets created again by
+    someone else after this deletes it and before this checks that it is
+    gone, raise an exception.
+    """
     try:
         dirfp.remove()
     except UnlistableError, e:
hunk ./src/allmydata/util/fileutil.py 270
         if e.originalException.errno != errno.ENOENT:
             raise
+    except OSError, e:
+        if e.errno != errno.ENOENT:
+            raise
 
 def rm_dir(dirname):
     # Renamed to be like shutil.rmtree and unlike rmdir.
hunk ./src/allmydata/util/fileutil.py 387
         import traceback
         traceback.print_exc()
 
-def get_disk_stats(whichdir, reserved_space=0):
+def get_disk_stats(whichdirfp, reserved_space=0):
     """Return disk statistics for the storage disk, in the form of a dict
     with the following fields.
       total:            total bytes on disk
hunk ./src/allmydata/util/fileutil.py 408
     you can pass how many bytes you would like to leave unused on this
     filesystem as reserved_space.
     """
+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
 
     if have_GetDiskFreeSpaceExW:
         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
hunk ./src/allmydata/util/fileutil.py 419
         n_free_for_nonroot = c_ulonglong(0)
         n_total            = c_ulonglong(0)
         n_free_for_root    = c_ulonglong(0)
-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
                                                byref(n_total),
                                                byref(n_free_for_root))
         if retval == 0:
hunk ./src/allmydata/util/fileutil.py 424
             raise OSError("Windows error %d attempting to get disk statistics for %r"
-                          % (GetLastError(), whichdir))
+                          % (GetLastError(), whichdirfp.path))
         free_for_nonroot = n_free_for_nonroot.value
         total            = n_total.value
         free_for_root    = n_free_for_root.value
hunk ./src/allmydata/util/fileutil.py 433
         # <http://docs.python.org/library/os.html#os.statvfs>
         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
-        s = os.statvfs(whichdir)
+        s = os.statvfs(whichdirfp.path)
 
         # on my mac laptop:
         #  statvfs(2) is a wrapper around statfs(2).
hunk ./src/allmydata/util/fileutil.py 460
              'avail': avail,
            }
 
-def get_available_space(whichdir, reserved_space):
+def get_available_space(whichdirfp, reserved_space):
     """Returns available space for share storage in bytes, or None if no
     API to get this information is available.
 
hunk ./src/allmydata/util/fileutil.py 472
     you can pass how many bytes you would like to leave unused on this
     filesystem as reserved_space.
     """
+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
     try:
hunk ./src/allmydata/util/fileutil.py 474
-        return get_disk_stats(whichdir, reserved_space)['avail']
+        return get_disk_stats(whichdirfp, reserved_space)['avail']
     except AttributeError:
         return None
hunk ./src/allmydata/util/fileutil.py 477
-    except EnvironmentError:
-        log.msg("OS call to get disk statistics failed")
-        return 0
}
[jacp16 or so
wilcoxjg@gmail.com**20110722070036
 Ignore-this: 7548785cad146056eede9a16b93b569f
] {
merger 0.0 (
hunk ./src/allmydata/_auto_deps.py 21
-    "Twisted >= 2.4.0",
+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
+    # support asynchronous close.
+    "Twisted >= 10.1.0",
hunk ./src/allmydata/_auto_deps.py 21
-    "Twisted >= 2.4.0",
+    "Twisted >= 11.0",
)
hunk ./src/allmydata/storage/backends/das/core.py 2
 import os, re, weakref, struct, time, stat
+from twisted.application import service
+from twisted.python.filepath import UnlistableError
+from twisted.python.filepath import FilePath
+from zope.interface import implements
 
hunk ./src/allmydata/storage/backends/das/core.py 7
+import allmydata # for __full_version__
 from allmydata.interfaces import IStorageBackend
 from allmydata.storage.backends.base import Backend
hunk ./src/allmydata/storage/backends/das/core.py 10
-from allmydata.storage.common import si_b2a, si_a2b, si_dir
+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/storage/backends/das/core.py 12
-
-#from foolscap.api import Referenceable
-from twisted.application import service
-from twisted.python.filepath import UnlistableError
-
-from zope.interface import implements
 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
 from allmydata.util import fileutil, idlib, log, time_format
hunk ./src/allmydata/storage/backends/das/core.py 14
-import allmydata # for __full_version__
-
-from allmydata.storage.common import si_b2a, si_a2b, si_dir
-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/backends/das/core.py 21
 from allmydata.storage.crawler import FSBucketCountingCrawler
 from allmydata.util.hashutil import constant_time_compare
 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
-
-from zope.interface import implements
+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
 
 # storage/
 # storage/shares/incoming
hunk ./src/allmydata/storage/backends/das/core.py 49
         self._setup_lease_checkerf(expiration_policy)
 
     def _setup_storage(self, storedir, readonly, reserved_space):
+        precondition(isinstance(storedir, FilePath), storedir)
         self.storedir = storedir
         self.readonly = readonly
         self.reserved_space = int(reserved_space)
hunk ./src/allmydata/storage/backends/das/core.py 83
 
     def get_incoming_shnums(self, storageindex):
         """ Return a frozenset of the shnum (as ints) of incoming shares. """
-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
+        incomingdir = si_si2dir(self.incomingdir, storageindex)
         try:
             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
             shnums = [ int(fp.basename) for fp in childfps ]
hunk ./src/allmydata/storage/backends/das/core.py 96
         """ Generate ImmutableShare objects for shares we have for this
         storageindex. ("Shares we have" means completed ones, excluding
         incoming ones.)"""
-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
         try:
             for fp in finalstoragedir.children():
                 if is_num(fp):
hunk ./src/allmydata/storage/backends/das/core.py 111
         return fileutil.get_available_space(self.storedir, self.reserved_space)
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
hunk ./src/allmydata/storage/backends/null/core.py 18
         return None
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
-        
-        immutableshare = ImmutableShare() 
+        immutableshare = ImmutableShare()
         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
 
     def set_storage_server(self, ss):
hunk ./src/allmydata/storage/backends/null/core.py 24
         self.ss = ss
 
-    def get_incoming(self, storageindex):
-        return set()
+    def get_incoming_shnums(self, storageindex):
+        return frozenset()
 
 class ImmutableShare:
     sharetype = "immutable"
hunk ./src/allmydata/storage/common.py 19
 def si_a2b(ascii_storageindex):
     return base32.a2b(ascii_storageindex)
 
-def si_dir(startfp, storageindex):
+def si_si2dir(startfp, storageindex):
     sia = si_b2a(storageindex)
     return startfp.child(sia[:2]).child(sia)
hunk ./src/allmydata/storage/immutable.py 20
     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
         self.ss = ss
         self._max_size = max_size # don't allow the client to write more than this
-
         self._canary = canary
         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
         self.closed = False
hunk ./src/allmydata/storage/lease.py 17
 
     def get_expiration_time(self):
         return self.expiration_time
+
     def get_grant_renew_time_time(self):
         # hack, based upon fixed 31day expiration period
         return self.expiration_time - 31*24*60*60
hunk ./src/allmydata/storage/lease.py 21
+
     def get_age(self):
         return time.time() - self.get_grant_renew_time_time()
 
hunk ./src/allmydata/storage/lease.py 32
          self.expiration_time) = struct.unpack(">L32s32sL", data)
         self.nodeid = None
         return self
+
     def to_immutable_data(self):
         return struct.pack(">L32s32sL",
                            self.owner_num,
hunk ./src/allmydata/storage/lease.py 45
                            int(self.expiration_time),
                            self.renew_secret, self.cancel_secret,
                            self.nodeid)
+
     def from_mutable_data(self, data):
         (self.owner_num,
          self.expiration_time,
hunk ./src/allmydata/storage/server.py 11
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
-from allmydata.storage.common import si_b2a, si_a2b, si_dir
-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/server.py 88
             else:
                 stats["mean"] = None
 
-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
                              (0.999, "99_9_percentile", 1000)]
 
             for percentile, percentilestring, minnumtoobserve in orderstatlist:
hunk ./src/allmydata/storage/server.py 231
             header = f.read(32)
             f.close()
             if header[:32] == MutableShareFile.MAGIC:
+                # XXX  Can I exploit this code?
                 sf = MutableShareFile(filename, self)
                 # note: if the share has been migrated, the renew_lease()
                 # call will throw an exception, with information to help the
hunk ./src/allmydata/storage/server.py 237
                 # client update the lease.
             elif header[:4] == struct.pack(">L", 1):
+                # Check if version number is "1".
+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
                 sf = ShareFile(filename)
             else:
                 continue # non-sharefile
hunk ./src/allmydata/storage/server.py 285
             total_space_freed += sf.cancel_lease(cancel_secret)
 
         if found_buckets:
-            storagedir = si_dir(self.sharedir, storageindex)
+            # XXX  Yikes looks like code that shouldn't be in the server!
+            storagedir = si_si2dir(self.sharedir, storageindex)
             fp_rmdir_if_empty(storagedir)
 
         if self.stats_provider:
hunk ./src/allmydata/storage/server.py 301
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-
     def remote_get_buckets(self, storageindex):
         start = time.time()
         self.count("get")
hunk ./src/allmydata/storage/server.py 329
         except StopIteration:
             return iter([])
 
+    #  XXX  As far as Zancas' grockery has gotten.
     def remote_slot_testv_and_readv_and_writev(self, storageindex,
                                                secrets,
                                                test_and_write_vectors,
hunk ./src/allmydata/storage/server.py 338
         self.count("writev")
         si_s = si_b2a(storageindex)
         log.msg("storage: slot_writev %s" % si_s)
-        si_dir = storage_index_to_dir(storageindex)
+        
         (write_enabler, renew_secret, cancel_secret) = secrets
         # shares exist if there is a file for them
hunk ./src/allmydata/storage/server.py 341
-        bucketdir = os.path.join(self.sharedir, si_dir)
+        bucketdir = si_si2dir(self.sharedir, storageindex)
         shares = {}
         if os.path.isdir(bucketdir):
             for sharenum_s in os.listdir(bucketdir):
hunk ./src/allmydata/storage/server.py 430
         si_s = si_b2a(storageindex)
         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
                      facility="tahoe.storage", level=log.OPERATIONAL)
-        si_dir = storage_index_to_dir(storageindex)
         # shares exist if there is a file for them
hunk ./src/allmydata/storage/server.py 431
-        bucketdir = os.path.join(self.sharedir, si_dir)
+        bucketdir = si_si2dir(self.sharedir, storageindex)
         if not os.path.isdir(bucketdir):
             self.add_latency("readv", time.time() - start)
             return {}
hunk ./src/allmydata/test/test_backends.py 2
 from twisted.trial import unittest
-
 from twisted.python.filepath import FilePath
hunk ./src/allmydata/test/test_backends.py 3
-
 from allmydata.util.log import msg
hunk ./src/allmydata/test/test_backends.py 4
-
 from StringIO import StringIO
hunk ./src/allmydata/test/test_backends.py 5
-
 from allmydata.test.common_util import ReallyEqualMixin
 from allmydata.util.assertutil import _assert
hunk ./src/allmydata/test/test_backends.py 7
-
 import mock
 
 # This is the code that we're going to be testing.
hunk ./src/allmydata/test/test_backends.py 11
 from allmydata.storage.server import StorageServer
-
 from allmydata.storage.backends.das.core import DASCore
 from allmydata.storage.backends.null.core import NullCore
 
hunk ./src/allmydata/test/test_backends.py 14
-
-# The following share file contents was generated with
+# The following share file content was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 16
-# with share data == 'a'.
+# with share data == 'a'. The total size of this input 
+# is 85 bytes.
 shareversionnumber = '\x00\x00\x00\x01'
 sharedatalength = '\x00\x00\x00\x01'
 numberofleases = '\x00\x00\x00\x01'
hunk ./src/allmydata/test/test_backends.py 21
-
 shareinputdata = 'a'
 ownernumber = '\x00\x00\x00\x00'
 renewsecret  = 'x'*32
hunk ./src/allmydata/test/test_backends.py 31
 client_data = shareinputdata + ownernumber + renewsecret + \
     cancelsecret + expirationtime + nextlease
 share_data = containerdata + client_data
-
-
 testnodeid = 'testnodeidxxxxxxxxxx'
 
 class MockStat:
hunk ./src/allmydata/test/test_backends.py 105
         mstat.st_mode = 16893 # a directory
         return mstat
 
+    def call_get_available_space(self, storedir, reservedspace):
+        # The input vector has an input size of 85.
+        return 85 - reservedspace
+
+    def call_exists(self):
+        # I'm only called in the ImmutableShareFile constructor.
+        return False
+
     def setUp(self):
         msg( "%s.setUp()" % (self,))
         self.storedir = FilePath('teststoredir')
hunk ./src/allmydata/test/test_backends.py 147
         mockfpstat = self.mockfpstatp.__enter__()
         mockfpstat.side_effect = self.call_stat
 
+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
+        mockget_available_space = self.mockget_available_space.__enter__()
+        mockget_available_space.side_effect = self.call_get_available_space
+
+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
+        mockfpexists = self.mockfpexists.__enter__()
+        mockfpexists.side_effect = self.call_exists
+
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
hunk ./src/allmydata/test/test_backends.py 157
+        self.mockfpexists.__exit__()
+        self.mockget_available_space.__exit__()
         self.mockfpstatp.__exit__()
         self.mockstatp.__exit__()
         self.mockopenp.__exit__()
hunk ./src/allmydata/test/test_backends.py 166
         self.mockmkdirp.__exit__()
         self.mocklistdirp.__exit__()
 
+
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
                      'override_lease_duration' : None,
hunk ./src/allmydata/test/test_backends.py 182
         self.ss = StorageServer(testnodeid, backend=NullCore())
 
     @mock.patch('os.mkdir')
-
     @mock.patch('__builtin__.open')
     @mock.patch('os.listdir')
     @mock.patch('os.path.isdir')
hunk ./src/allmydata/test/test_backends.py 201
         filesystem backend. To pass the test, it mustn't use the filesystem
         outside of its configured storedir. """
 
-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
 
 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
     """ This tests both the StorageServer and the DAS backend together. """
hunk ./src/allmydata/test/test_backends.py 205
+    
     def setUp(self):
         MockFiles.setUp(self)
         try:
hunk ./src/allmydata/test/test_backends.py 211
             self.backend = DASCore(self.storedir, expiration_policy)
             self.ss = StorageServer(testnodeid, self.backend)
-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
         except:
             MockFiles.tearDown(self)
             raise
hunk ./src/allmydata/test/test_backends.py 233
         # Populate incoming with the sharenum: 0.
         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
         
         # Attempt to create a second share writer with the same sharenum.
         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
hunk ./src/allmydata/test/test_backends.py 257
 
         # Postclose: (Omnibus) failUnless written data is in final.
         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
-        contents = sharesinfinal[0].read_share_data(0,73)
+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
+        contents = sharesinfinal[0].read_share_data(0, 73)
         self.failUnlessReallyEqual(contents, client_data)
 
         # Exercise the case that the share we're asking to allocate is
hunk ./src/allmydata/test/test_backends.py 276
         mockget_available_space.side_effect = call_get_available_space
         
         
-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
 
     @mock.patch('os.path.exists')
     @mock.patch('os.path.getsize')
}
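In the server.py hunks above, `remote_allocate_buckets` only creates a `BucketWriter` for `(sharenums - alreadygot) - incoming`: requested share numbers that are neither already stored nor already being uploaded by another client (per `get_incoming_shnums`, which returns a frozenset). A minimal sketch of that set arithmetic, with a hypothetical helper name:

```python
def shnums_needing_writers(requested, alreadygot, incoming):
    """Share numbers from `requested` that are neither already held in
    final storage (`alreadygot`) nor currently in the incoming directory
    being written by someone else (`incoming`)."""
    return (frozenset(requested) - frozenset(alreadygot)) - frozenset(incoming)
```

Using frozensets here matches the patch's shift from `set` to `frozenset` in the backend API (`get_incoming_shnums`) and in test_backends.py.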
[jacp17
wilcoxjg@gmail.com**20110722203244
 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
] {
hunk ./src/allmydata/storage/backends/das/core.py 14
 from allmydata.util.assertutil import precondition
 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
 from allmydata.util import fileutil, idlib, log, time_format
+from allmydata.util.fileutil import fp_make_dirs
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
hunk ./src/allmydata/storage/backends/das/core.py 19
 from allmydata.storage.immutable import BucketWriter, BucketReader
-from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.util.hashutil import constant_time_compare
hunk ./src/allmydata/storage/backends/das/core.py 21
-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
 
 # storage/
hunk ./src/allmydata/storage/backends/das/core.py 43
     implements(IStorageBackend)
     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
         Backend.__init__(self)
-
         self._setup_storage(storedir, readonly, reserved_space)
         self._setup_corruption_advisory()
         self._setup_bucket_counter()
hunk ./src/allmydata/storage/backends/das/core.py 72
 
     def _setup_bucket_counter(self):
         statefname = self.storedir.child("bucket_counter.state")
-        self.bucket_counter = FSBucketCountingCrawler(statefname)
+        self.bucket_counter = BucketCountingCrawler(statefname)
         self.bucket_counter.setServiceParent(self)
 
     def _setup_lease_checkerf(self, expiration_policy):
hunk ./src/allmydata/storage/backends/das/core.py 78
         statefile = self.storedir.child("lease_checker.state")
         historyfile = self.storedir.child("lease_checker.history")
-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
         self.lease_checker.setServiceParent(self)
 
     def get_incoming_shnums(self, storageindex):
hunk ./src/allmydata/storage/backends/das/core.py 168
             # it. Also construct the metadata.
             assert not finalhome.exists()
             fp_make_dirs(self.incominghome)
-            f = open(self.incominghome, 'wb')
+            f = self.incominghome.child(str(self.shnum))
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 178
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            f.close()
+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            #f.close()
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/das/core.py 261
         f.write(data)
         f.close()
 
-    def _write_lease_record(self, f, lease_number, lease_info):
+    def _write_lease_record(self, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
         f.seek(offset)
         assert f.tell() == offset
hunk ./src/allmydata/storage/backends/das/core.py 290
                 yield LeaseInfo().from_immutable_data(data)
 
     def add_lease(self, lease_info):
-        f = open(self.incominghome, 'rb+')
+        self.incominghome, 'rb+')
         num_leases = self._read_num_leases(f)
         self._write_lease_record(f, num_leases, lease_info)
         self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/expirer.py 1
-import time, os, pickle, struct
-from allmydata.storage.crawler import FSShareCrawler
+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
+from allmydata.storage.crawler import ShareCrawler
 from allmydata.storage.common import UnknownMutableContainerVersionError, \
      UnknownImmutableContainerVersionError
 from twisted.python import log as twlog
hunk ./src/allmydata/storage/backends/das/expirer.py 7
 
-class FSLeaseCheckingCrawler(FSShareCrawler):
+class LeaseCheckingCrawler(ShareCrawler):
     """I examine the leases on all shares, determining which are still valid
     and which have expired. I can remove the expired leases (if so
     configured), and the share will be deleted when the last lease is
hunk ./src/allmydata/storage/backends/das/expirer.py 66
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
         self.sharetypes_to_expire = expiration_policy['sharetypes']
-        FSShareCrawler.__init__(self, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/crawler.py 1
-
 import os, time, struct
 import cPickle as pickle
 from twisted.internet import reactor
hunk ./src/allmydata/storage/crawler.py 11
 class TimeSliceExceeded(Exception):
     pass
 
-class FSShareCrawler(service.MultiService):
-    """A subcless of ShareCrawler is attached to a StorageServer, and
+class ShareCrawler(service.MultiService):
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 426
         pass
 
 
-class FSBucketCountingCrawler(FSShareCrawler):
+class BucketCountingCrawler(ShareCrawler):
     """I keep track of how many buckets are being managed by this server.
     This is equivalent to the number of distributed files and directories for
     which I am providing storage. The actual number of files+directories in
hunk ./src/allmydata/storage/crawler.py 440
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
     def __init__(self, statefp, num_sample_prefixes=1):
-        FSShareCrawler.__init__(self, statefp)
+        ShareCrawler.__init__(self, statefp)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/test/test_backends.py 113
         # I'm only called in the ImmutableShareFile constructor.
         return False
 
+    def call_setContent(self, inputstring):
+        # XXX Good enough for expirer, not sure about elsewhere... 
+        return True 
+
     def setUp(self):
         msg( "%s.setUp()" % (self,))
         self.storedir = FilePath('teststoredir')
hunk ./src/allmydata/test/test_backends.py 159
         mockfpexists = self.mockfpexists.__enter__()
         mockfpexists.side_effect = self.call_exists
 
+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
+        mocksetContent = self.mocksetContent.__enter__()
+        mocksetContent.side_effect = self.call_setContent
+
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
hunk ./src/allmydata/test/test_backends.py 165
+        self.mocksetContent.__exit__()
         self.mockfpexists.__exit__()
         self.mockget_available_space.__exit__()
         self.mockfpstatp.__exit__()
}
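jacp17 above grows the `setUp`/`tearDown` pair by one more manually entered patcher (`mocksetContent`), calling `__enter__()` by hand and `__exit__()` in `tearDown`. A sketch of the equivalent pattern with `unittest.mock` (names illustrative): `patcher.start()` plays the role of `__enter__()`, and `addCleanup(patcher.stop)` plays the role of the `__exit__()` call in `tearDown`, with the bonus that cleanup runs even if `setUp` fails partway:

```python
import os
import unittest
from unittest import mock

class PatcherDemo(unittest.TestCase):
    # Illustrative: patch os.listdir the way MockFiles patches filesystem
    # calls, but via start()/addCleanup(stop) instead of __enter__/__exit__.
    def setUp(self):
        patcher = mock.patch('os.listdir')
        mocklistdir = patcher.start()         # equivalent to __enter__()
        mocklistdir.side_effect = self.call_listdir
        self.addCleanup(patcher.stop)         # equivalent to __exit__() in tearDown

    def call_listdir(self, path):
        return ['0']                          # every directory "contains" share 0

    def test_listdir_is_mocked(self):
        self.assertEqual(os.listdir('anywhere'), ['0'])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatcherDemo))
```

After the run, the patch has been undone, so `os.listdir` is the real function again.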
[jacp18
wilcoxjg@gmail.com**20110723031915
 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
] {
hunk ./src/allmydata/_auto_deps.py 21
     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
 
-    "Twisted >= 2.4.0",
+v v v v v v v
+    "Twisted >= 11.0",
+*************
+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
+    # support asynchronous close.
+    "Twisted >= 10.1.0",
+^ ^ ^ ^ ^ ^ ^
 
     # foolscap < 0.5.1 had a performance bug which spent
     # O(N**2) CPU for transferring large mutable files
hunk ./src/allmydata/storage/backends/das/core.py 168
             # it. Also construct the metadata.
             assert not finalhome.exists()
             fp_make_dirs(self.incominghome)
-            f = self.incominghome.child(str(self.shnum))
+            f = self.incominghome
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 178
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
-            #f.close()
+            print 'f: ',f
+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/das/core.py 263
 
     def _write_lease_record(self, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
-        f.seek(offset)
-        assert f.tell() == offset
-        f.write(lease_info.to_immutable_data())
+        fh = f.open()
+        try:
+            fh.seek(offset)
+            assert fh.tell() == offset
+            fh.write(lease_info.to_immutable_data())
+        finally:
+            fh.close()
 
     def _read_num_leases(self, f):
hunk ./src/allmydata/storage/backends/das/core.py 272
-        f.seek(0x08)
-        (num_leases,) = struct.unpack(">L", f.read(4))
+        fh = f.open()
+        try:
+            fh.seek(0x08)
+            ro = fh.read(4)
+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
+            (num_leases,) = struct.unpack(">L", ro)
+        finally:
+            fh.close()
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/backends/das/core.py 283
-        f.seek(0x08)
-        f.write(struct.pack(">L", num_leases))
+        fh = f.open()
+        try:
+            fh.seek(0x08)
+            fh.write(struct.pack(">L", num_leases))
+        finally:
+            fh.close()
 
     def _truncate_leases(self, f, num_leases):
         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/das/core.py 304
                 yield LeaseInfo().from_immutable_data(data)
 
     def add_lease(self, lease_info):
-        self.incominghome, 'rb+')
-        num_leases = self._read_num_leases(f)
+        f = self.incominghome
+        num_leases = self._read_num_leases(self.incominghome)
         self._write_lease_record(f, num_leases, lease_info)
         self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/core.py 308
-        f.close()
-
+        
     def renew_lease(self, renew_secret, new_expire_time):
         for i,lease in enumerate(self.get_leases()):
             if constant_time_compare(lease.renew_secret, renew_secret):
hunk ./src/allmydata/test/test_backends.py 33
 share_data = containerdata + client_data
 testnodeid = 'testnodeidxxxxxxxxxx'
 
+
 class MockStat:
     def __init__(self):
         self.st_mode = None
hunk ./src/allmydata/test/test_backends.py 43
     code under test if it reads or writes outside of its prescribed
     subtree. I simulate just the parts of the filesystem that the current
     implementation of DAS backend needs. """
+
+    def setUp(self):
+        msg( "%s.setUp()" % (self,))
+        self.storedir = FilePath('teststoredir')
+        self.basedir = self.storedir.child('shares')
+        self.baseincdir = self.basedir.child('incoming')
+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
+        self.shareincomingname = self.sharedirincomingname.child('0')
+        self.sharefilename = self.sharedirfinalname.child('0')
+        self.sharefilecontents = StringIO(share_data)
+
+        self.mocklistdirp = mock.patch('os.listdir')
+        mocklistdir = self.mocklistdirp.__enter__()
+        mocklistdir.side_effect = self.call_listdir
+
+        self.mockmkdirp = mock.patch('os.mkdir')
+        mockmkdir = self.mockmkdirp.__enter__()
+        mockmkdir.side_effect = self.call_mkdir
+
+        self.mockisdirp = mock.patch('os.path.isdir')
+        mockisdir = self.mockisdirp.__enter__()
+        mockisdir.side_effect = self.call_isdir
+
+        self.mockopenp = mock.patch('__builtin__.open')
+        mockopen = self.mockopenp.__enter__()
+        mockopen.side_effect = self.call_open
+
+        self.mockstatp = mock.patch('os.stat')
+        mockstat = self.mockstatp.__enter__()
+        mockstat.side_effect = self.call_stat
+
+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
+        mockfpstat = self.mockfpstatp.__enter__()
+        mockfpstat.side_effect = self.call_stat
+
+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
+        mockget_available_space = self.mockget_available_space.__enter__()
+        mockget_available_space.side_effect = self.call_get_available_space
+
+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
+        mockfpexists = self.mockfpexists.__enter__()
+        mockfpexists.side_effect = self.call_exists
+
+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
+        mocksetContent = self.mocksetContent.__enter__()
+        mocksetContent.side_effect = self.call_setContent
+
     def call_open(self, fname, mode):
         assert isinstance(fname, basestring), fname
         fnamefp = FilePath(fname)
hunk ./src/allmydata/test/test_backends.py 107
             # current implementation of DAS backend, and we might want to
             # use this information in this test in the future...
             return StringIO()
+        elif fnamefp == self.shareincomingname:
+            print "repr(fnamefp): ", repr(fnamefp)
         else:
             # Anything else you open inside your subtree appears to be an
             # empty file.
hunk ./src/allmydata/test/test_backends.py 168
         # XXX Good enough for expirer, not sure about elsewhere... 
         return True 
 
-    def setUp(self):
-        msg( "%s.setUp()" % (self,))
-        self.storedir = FilePath('teststoredir')
-        self.basedir = self.storedir.child('shares')
-        self.baseincdir = self.basedir.child('incoming')
-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
-        self.shareincomingname = self.sharedirincomingname.child('0')
-        self.sharefname = self.sharedirfinalname.child('0')
-
-        self.mocklistdirp = mock.patch('os.listdir')
-        mocklistdir = self.mocklistdirp.__enter__()
-        mocklistdir.side_effect = self.call_listdir
-
-        self.mockmkdirp = mock.patch('os.mkdir')
-        mockmkdir = self.mockmkdirp.__enter__()
-        mockmkdir.side_effect = self.call_mkdir
-
-        self.mockisdirp = mock.patch('os.path.isdir')
-        mockisdir = self.mockisdirp.__enter__()
-        mockisdir.side_effect = self.call_isdir
-
-        self.mockopenp = mock.patch('__builtin__.open')
-        mockopen = self.mockopenp.__enter__()
-        mockopen.side_effect = self.call_open
-
-        self.mockstatp = mock.patch('os.stat')
-        mockstat = self.mockstatp.__enter__()
-        mockstat.side_effect = self.call_stat
-
-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
-        mockfpstat = self.mockfpstatp.__enter__()
-        mockfpstat.side_effect = self.call_stat
-
-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
-        mockget_available_space = self.mockget_available_space.__enter__()
-        mockget_available_space.side_effect = self.call_get_available_space
-
-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
-        mockfpexists = self.mockfpexists.__enter__()
-        mockfpexists.side_effect = self.call_exists
-
-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
-        mocksetContent = self.mocksetContent.__enter__()
-        mocksetContent.side_effect = self.call_setContent
 
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
hunk ./src/allmydata/test/test_backends.py 239
         handling of simultaneous and successive attempts to write the same
         share.
         """
-
         mocktime.return_value = 0
         # Inspect incoming and fail unless it's empty.
         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
}
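jacp18 above reworks `_read_num_leases`/`_write_num_leases` to open the FilePath and seek to offset 0x08. Those offsets come from the `">LLL"` header packed at share creation; a small worked example using the same field values the patch writes:

```python
import struct

# Immutable share-file header: three big-endian 32-bit fields --
# version, the (obsolete as of Tahoe v1.3.0) data-length field, and
# the lease count.
HEADER = struct.Struct(">LLL")
max_size = 73
header = HEADER.pack(1, min(2**32 - 1, max_size), 0)

version, obsolete_len, num_leases = HEADER.unpack(header)
# The lease count occupies bytes 0x08..0x0b, which is why the
# _read_num_leases/_write_num_leases hunks seek to 0x08 and use ">L".
(count_at_8,) = struct.unpack(">L", header[0x08:0x08 + 4])
```

This also explains the `_lease_offset = max_size + 0x0c` line: lease records start right after the 12-byte header plus the share data region.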
[jacp19orso
wilcoxjg@gmail.com**20110724034230
 Ignore-this: f001093c467225c289489636a61935fe
] {
hunk ./src/allmydata/_auto_deps.py 21
     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
 
-v v v v v v v
-    "Twisted >= 11.0",
-*************
+
     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
     # support asynchronous close.
hunk ./src/allmydata/_auto_deps.py 26
     "Twisted >= 10.1.0",
-^ ^ ^ ^ ^ ^ ^
+
 
     # foolscap < 0.5.1 had a performance bug which spent
     # O(N**2) CPU for transferring large mutable files
hunk ./src/allmydata/storage/backends/das/core.py 153
     LEASE_SIZE = struct.calcsize(">L32s32sL")
     sharetype = "immutable"
 
-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
         """ If max_size is not None then I won't allow more than
         max_size to be written to me. If create=True then max_size
         must not be None. """
hunk ./src/allmydata/storage/backends/das/core.py 167
             # touch the file, so later callers will see that we're working on
             # it. Also construct the metadata.
             assert not finalhome.exists()
-            fp_make_dirs(self.incominghome)
-            f = self.incominghome
+            fp_make_dirs(self.incominghome.parent())
             # The second field -- the four-byte share data length -- is no
             # longer used as of Tahoe v1.3.0, but we continue to write it in
             # there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 177
             # the largest length that can fit into the field. That way, even
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
-            print 'f: ',f
-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/das/core.py 182
             f = open(self.finalhome, 'rb')
-            filesize = os.path.getsize(self.finalhome)
             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
             f.close()
hunk ./src/allmydata/storage/backends/das/core.py 184
+            filesize = self.finalhome.getsize()
             if version != 1:
                 msg = "sharefile %s had version %d but we wanted 1" % \
                       (self.finalhome, version)
hunk ./src/allmydata/storage/backends/das/core.py 259
         f.write(data)
         f.close()
 
-    def _write_lease_record(self, lease_number, lease_info):
+    def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
         fh = f.open()
hunk ./src/allmydata/storage/backends/das/core.py 262
+        print fh
         try:
             fh.seek(offset)
             assert fh.tell() == offset
hunk ./src/allmydata/storage/backends/das/core.py 271
             fh.close()
 
     def _read_num_leases(self, f):
-        fh = f.open()
+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
         try:
             fh.seek(0x08)
             ro = fh.read(4)
hunk ./src/allmydata/storage/backends/das/core.py 275
-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
             (num_leases,) = struct.unpack(">L", ro)
         finally:
             fh.close()
hunk ./src/allmydata/storage/backends/das/core.py 302
                 yield LeaseInfo().from_immutable_data(data)
 
     def add_lease(self, lease_info):
-        f = self.incominghome
         num_leases = self._read_num_leases(self.incominghome)
hunk ./src/allmydata/storage/backends/das/core.py 303
-        self._write_lease_record(f, num_leases, lease_info)
-        self._write_num_leases(f, num_leases+1)
+        self._write_lease_record(self.incominghome, num_leases, lease_info)
+        self._write_num_leases(self.incominghome, num_leases+1)
         
     def renew_lease(self, renew_secret, new_expire_time):
         for i,lease in enumerate(self.get_leases()):
hunk ./src/allmydata/test/test_backends.py 52
         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
         self.shareincomingname = self.sharedirincomingname.child('0')
-        self.sharefilename = self.sharedirfinalname.child('0')
-        self.sharefilecontents = StringIO(share_data)
+        self.sharefinalname = self.sharedirfinalname.child('0')
 
hunk ./src/allmydata/test/test_backends.py 54
-        self.mocklistdirp = mock.patch('os.listdir')
-        mocklistdir = self.mocklistdirp.__enter__()
-        mocklistdir.side_effect = self.call_listdir
+        # Make patcher, patch, and make effects for fs using functions.
+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
 
hunk ./src/allmydata/test/test_backends.py 59
-        self.mockmkdirp = mock.patch('os.mkdir')
-        mockmkdir = self.mockmkdirp.__enter__()
-        mockmkdir.side_effect = self.call_mkdir
+        #self.mockmkdirp = mock.patch('os.mkdir')
+        #mockmkdir = self.mockmkdirp.__enter__()
+        #mockmkdir.side_effect = self.call_mkdir
 
hunk ./src/allmydata/test/test_backends.py 63
-        self.mockisdirp = mock.patch('os.path.isdir')
+        self.mockisdirp = mock.patch('FilePath.isdir')
         mockisdir = self.mockisdirp.__enter__()
         mockisdir.side_effect = self.call_isdir
 
hunk ./src/allmydata/test/test_backends.py 67
-        self.mockopenp = mock.patch('__builtin__.open')
+        self.mockopenp = mock.patch('FilePath.open')
         mockopen = self.mockopenp.__enter__()
         mockopen.side_effect = self.call_open
 
hunk ./src/allmydata/test/test_backends.py 71
-        self.mockstatp = mock.patch('os.stat')
+        self.mockstatp = mock.patch('filepath.stat')
         mockstat = self.mockstatp.__enter__()
         mockstat.side_effect = self.call_stat
 
hunk ./src/allmydata/test/test_backends.py 91
         mocksetContent = self.mocksetContent.__enter__()
         mocksetContent.side_effect = self.call_setContent
 
+    #  The behavior of mocked filesystem using functions
     def call_open(self, fname, mode):
         assert isinstance(fname, basestring), fname
         fnamefp = FilePath(fname)
hunk ./src/allmydata/test/test_backends.py 109
             # use this information in this test in the future...
             return StringIO()
         elif fnamefp == self.shareincomingname:
-            print "repr(fnamefp): ", repr(fnamefp)
+            self.incomingsharefilecontents.closed = False
+            return self.incomingsharefilecontents
         else:
             # Anything else you open inside your subtree appears to be an
             # empty file.
hunk ./src/allmydata/test/test_backends.py 152
         fnamefp = FilePath(fname)
         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
-
         msg("%s.call_stat(%s)" % (self, fname,))
         mstat = MockStat()
         mstat.st_mode = 16893 # a directory
hunk ./src/allmydata/test/test_backends.py 166
         return False
 
     def call_setContent(self, inputstring):
-        # XXX Good enough for expirer, not sure about elsewhere... 
-        return True 
-
+        self.incomingsharefilecontents = StringIO(inputstring)
 
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
}
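The checkpoints above and below migrate `os.*` calls to Twisted `FilePath` methods (`fp_make_dirs(...parent())`, `setContent`, `moveTo`, `getsize`). Sketched here with `pathlib` stand-ins rather than Twisted's `FilePath`, so the mapping is visible without a Twisted install; the directory names are illustrative:

```python
import struct
import tempfile
from pathlib import Path

# pathlib stand-ins for the FilePath calls these checkpoints introduce:
#   fp_make_dirs(fp.parent())        -> fp.parent.mkdir(parents=True)
#   fp.setContent(data)              -> fp.write_bytes(data)
#   incominghome.moveTo(finalhome)   -> incoming.replace(final)
#   fp.getsize()                     -> fp.stat().st_size
root = Path(tempfile.mkdtemp())
incoming = root / 'shares' / 'incoming' / 'or' / 'orsxg5' / '0'
final = root / 'shares' / 'or' / 'orsxg5' / '0'

incoming.parent.mkdir(parents=True)                   # fp_make_dirs(incominghome.parent())
incoming.write_bytes(struct.pack(">LLL", 1, 73, 0))   # incominghome.setContent(...)
final.parent.mkdir(parents=True)                      # fp_make_dirs(finalhome.parent())
incoming.replace(final)                               # incominghome.moveTo(finalhome)
size = final.stat().st_size                           # finalhome.getsize()
```

The move-then-cleanup sequence mirrors `ImmutableShare.close()`: write into `incoming/`, atomically move to the final location, then remove any now-empty parent directories.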
[jacp19
wilcoxjg@gmail.com**20110727080553
 Ignore-this: 851b1ebdeeee712abfbda557af142726
] {
hunk ./src/allmydata/storage/backends/das/core.py 1
-import os, re, weakref, struct, time, stat
+import re, weakref, struct, time, stat
 from twisted.application import service
 from twisted.python.filepath import UnlistableError
hunk ./src/allmydata/storage/backends/das/core.py 4
+from twisted.python import filepath
 from twisted.python.filepath import FilePath
 from zope.interface import implements
 
hunk ./src/allmydata/storage/backends/das/core.py 50
         self._setup_lease_checkerf(expiration_policy)
 
     def _setup_storage(self, storedir, readonly, reserved_space):
-        precondition(isinstance(storedir, FilePath))  
+        precondition(isinstance(storedir, FilePath), storedir, FilePath)  
         self.storedir = storedir
         self.readonly = readonly
         self.reserved_space = int(reserved_space)
hunk ./src/allmydata/storage/backends/das/core.py 195
         self._data_offset = 0xc
 
     def close(self):
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
+        fileutil.fp_make_dirs(self.finalhome.parent())
+        self.incominghome.moveTo(self.finalhome)
         try:
             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
             # We try to delete the parent (.../ab/abcde) to avoid leaving
hunk ./src/allmydata/storage/backends/das/core.py 209
             # their children to know when they should do the rmdir. This
             # approach is simpler, but relies on os.rmdir refusing to delete
             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            #print "os.path.dirname(self.incominghome): "
-            #print os.path.dirname(self.incominghome)
-            os.rmdir(os.path.dirname(self.incominghome))
+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
             # we also delete the grandparent (prefix) directory, .../ab ,
             # again to avoid leaving directories lying around. This might
             # fail if there is another bucket open that shares a prefix (like
hunk ./src/allmydata/storage/backends/das/core.py 214
             # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
             # we leave the great-grandparent (incoming/) directory in place.
         except EnvironmentError:
             # ignore the "can't rmdir because the directory is not empty"
hunk ./src/allmydata/storage/backends/das/core.py 224
         pass
         
     def stat(self):
-        return os.stat(self.finalhome)[stat.ST_SIZE]
-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
 
     def get_shnum(self):
         return self.shnum
hunk ./src/allmydata/storage/backends/das/core.py 230
 
     def unlink(self):
-        os.unlink(self.finalhome)
+        self.finalhome.remove()
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/das/core.py 237
         # Reads beyond the end of the data are truncated. Reads that start
         # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
-        fsize = os.path.getsize(self.finalhome)
+        fsize = self.finalhome.getsize()
         actuallength = max(0, min(length, fsize-seekpos))
         if actuallength == 0:
             return ""
hunk ./src/allmydata/storage/backends/das/core.py 241
-        f = open(self.finalhome, 'rb')
-        f.seek(seekpos)
-        return f.read(actuallength)
+        try:
+            fh = open(self.finalhome, 'rb')
+            fh.seek(seekpos)
+            sharedata = fh.read(actuallength)
+        finally:
+            fh.close()
+        return sharedata
 
     def write_share_data(self, offset, data):
         length = len(data)
hunk ./src/allmydata/storage/backends/das/core.py 264
     def _write_lease_record(self, f, lease_number, lease_info):
         offset = self._lease_offset + lease_number * self.LEASE_SIZE
         fh = f.open()
-        print fh
         try:
             fh.seek(offset)
             assert fh.tell() == offset
hunk ./src/allmydata/storage/backends/das/core.py 269
             fh.write(lease_info.to_immutable_data())
         finally:
+            print dir(fh)
             fh.close()
 
     def _read_num_leases(self, f):
hunk ./src/allmydata/storage/backends/das/core.py 273
-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
+        fh = f.open() #XXX  Should be mocking FilePath.open()
         try:
             fh.seek(0x08)
             ro = fh.read(4)
hunk ./src/allmydata/storage/backends/das/core.py 280
             (num_leases,) = struct.unpack(">L", ro)
         finally:
             fh.close()
+            print "end of _read_num_leases"
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/crawler.py 6
 from twisted.internet import reactor
 from twisted.application import service
 from allmydata.storage.common import si_b2a
-from allmydata.util import fileutil
 
 class TimeSliceExceeded(Exception):
     pass
hunk ./src/allmydata/storage/crawler.py 478
             old_cycle,buckets = self.state["storage-index-samples"][prefix]
             if old_cycle != cycle:
                 del self.state["storage-index-samples"][prefix]
-
hunk ./src/allmydata/test/test_backends.py 1
+import os
 from twisted.trial import unittest
 from twisted.python.filepath import FilePath
 from allmydata.util.log import msg
hunk ./src/allmydata/test/test_backends.py 9
 from allmydata.test.common_util import ReallyEqualMixin
 from allmydata.util.assertutil import _assert
 import mock
+from mock import Mock
 
 # This is the code that we're going to be testing.
 from allmydata.storage.server import StorageServer
hunk ./src/allmydata/test/test_backends.py 40
     def __init__(self):
         self.st_mode = None
 
+class MockFilePath:
+    def __init__(self, PathString):
+        self.PathName = PathString
+    def child(self, ChildString):
+        return MockFilePath(os.path.join(self.PathName, ChildString))
+    def parent(self):
+        return MockFilePath(os.path.dirname(self.PathName))
+    def makedirs(self):
+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
+        pass
+    def isdir(self):
+        return True
+    def remove(self):
+        pass
+    def children(self):
+        return []
+    def exists(self):
+        return False
+    def setContent(self, ContentString):
+        self.File = MockFile(ContentString)
+    def open(self):
+        return self.File.open()
+
+class MockFile:
+    def __init__(self, ContentString):
+        self.Contents = ContentString
+        self.position = 0
+    def open(self):
+        return self
+    def close(self):
+        pass
+    def seek(self, position):
+        self.position = position
+    def read(self, amount):
+        # Return the requested slice and advance, like a real file handle;
+        # _read_num_leases depends on read() actually yielding bytes.
+        data = self.Contents[self.position:self.position + amount]
+        self.position += amount
+        return data
+
+
+class MockBCC:
+    def setServiceParent(self, Parent):
+        pass
+
+class MockLCC:
+    def setServiceParent(self, Parent):
+        pass
+
 class MockFiles(unittest.TestCase):
     """ I simulate a filesystem that the code under test can use. I flag the
     code under test if it reads or writes outside of its prescribed
hunk ./src/allmydata/test/test_backends.py 91
     implementation of DAS backend needs. """
 
     def setUp(self):
+        # Make patcher, patch, and make effects for fs using functions.
         msg( "%s.setUp()" % (self,))
hunk ./src/allmydata/test/test_backends.py 93
-        self.storedir = FilePath('teststoredir')
+        self.storedir = MockFilePath('teststoredir')
         self.basedir = self.storedir.child('shares')
         self.baseincdir = self.basedir.child('incoming')
         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
hunk ./src/allmydata/test/test_backends.py 101
         self.shareincomingname = self.sharedirincomingname.child('0')
         self.sharefinalname = self.sharedirfinalname.child('0')
 
-        # Make patcher, patch, and make effects for fs using functions.
-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
-
-        #self.mockmkdirp = mock.patch('os.mkdir')
-        #mockmkdir = self.mockmkdirp.__enter__()
-        #mockmkdir.side_effect = self.call_mkdir
-
-        self.mockisdirp = mock.patch('FilePath.isdir')
-        mockisdir = self.mockisdirp.__enter__()
-        mockisdir.side_effect = self.call_isdir
+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
+        FakePath = self.FilePathFake.__enter__()
 
hunk ./src/allmydata/test/test_backends.py 104
-        self.mockopenp = mock.patch('FilePath.open')
-        mockopen = self.mockopenp.__enter__()
-        mockopen.side_effect = self.call_open
+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
+        FakeBCC = self.BCountingCrawler.__enter__()
+        FakeBCC.side_effect = self.call_FakeBCC
 
hunk ./src/allmydata/test/test_backends.py 108
-        self.mockstatp = mock.patch('filepath.stat')
-        mockstat = self.mockstatp.__enter__()
-        mockstat.side_effect = self.call_stat
+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
+        FakeLCC.side_effect = self.call_FakeLCC
 
hunk ./src/allmydata/test/test_backends.py 112
-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
-        mockfpstat = self.mockfpstatp.__enter__()
-        mockfpstat.side_effect = self.call_stat
+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
+        GetSpace = self.get_available_space.__enter__()
+        GetSpace.side_effect = self.call_get_available_space
 
hunk ./src/allmydata/test/test_backends.py 116
-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
-        mockget_available_space = self.mockget_available_space.__enter__()
-        mockget_available_space.side_effect = self.call_get_available_space
+    def call_FakeBCC(self, StateFile):
+        return MockBCC()
 
hunk ./src/allmydata/test/test_backends.py 119
-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
-        mockfpexists = self.mockfpexists.__enter__()
-        mockfpexists.side_effect = self.call_exists
-
-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
-        mocksetContent = self.mocksetContent.__enter__()
-        mocksetContent.side_effect = self.call_setContent
-
-    #  The behavior of mocked filesystem using functions
-    def call_open(self, fname, mode):
-        assert isinstance(fname, basestring), fname
-        fnamefp = FilePath(fname)
-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
-
-        if fnamefp == self.storedir.child('bucket_counter.state'):
-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
-        elif fnamefp == self.storedir.child('lease_checker.state'):
-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
-        elif fnamefp == self.storedir.child('lease_checker.history'):
-            # This is separated out from the else clause below just because
-            # we know this particular file is going to be used by the
-            # current implementation of DAS backend, and we might want to
-            # use this information in this test in the future...
-            return StringIO()
-        elif fnamefp == self.shareincomingname:
-            self.incomingsharefilecontents.closed = False
-            return self.incomingsharefilecontents
-        else:
-            # Anything else you open inside your subtree appears to be an
-            # empty file.
-            return StringIO()
-
-    def call_isdir(self, fname):
-        fnamefp = FilePath(fname)
-        return fnamefp.isdir()
-
-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
-
-        # The first two cases are separate from the else clause below just
-        # because we know that the current implementation of the DAS backend
-        # inspects these two directories and we might want to make use of
-        # that information in the tests in the future...
-        if self == self.storedir.child('shares'):
-            return True
-        elif self == self.storedir.child('shares').child('incoming'):
-            return True
-        else:
-            # Anything else you open inside your subtree appears to be a
-            # directory.
-            return True
-
-    def call_mkdir(self, fname, mode):
-        fnamefp = FilePath(fname)
-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
-        self.failUnlessEqual(0777, mode)
+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
+        return MockLCC()
 
     def call_listdir(self, fname):
         fnamefp = FilePath(fname)
hunk ./src/allmydata/test/test_backends.py 150
 
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
-        self.mocksetContent.__exit__()
-        self.mockfpexists.__exit__()
-        self.mockget_available_space.__exit__()
-        self.mockfpstatp.__exit__()
-        self.mockstatp.__exit__()
-        self.mockopenp.__exit__()
-        self.mockisdirp.__exit__()
-        self.mockmkdirp.__exit__()
-        self.mocklistdirp.__exit__()
-
+        FakePath = self.FilePathFake.__exit__()        
+        FakeBCC = self.BCountingCrawler.__exit__()
 
 expiration_policy = {'enabled' : False, 
                      'mode' : 'age',
hunk ./src/allmydata/test/test_backends.py 222
         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
         
         # Attempt to create a second share writer with the same sharenum.
-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
 
         # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si and sharenum, until BucketWriter.remote_close()
hunk ./src/allmydata/test/test_backends.py 227
         # has been called.
-        self.failIf(bsa)
+        # self.failIf(bsa)
 
         # Test allocated size. 
hunk ./src/allmydata/test/test_backends.py 230
-        spaceint = self.ss.allocated_size()
-        self.failUnlessReallyEqual(spaceint, 1)
+        # spaceint = self.ss.allocated_size()
+        # self.failUnlessReallyEqual(spaceint, 1)
 
         # Write 'a' to shnum 0. Only tested together with close and read.
hunk ./src/allmydata/test/test_backends.py 234
-        bs[0].remote_write(0, 'a')
+        # bs[0].remote_write(0, 'a')
         
         # Preclose: Inspect final, failUnless nothing there.
hunk ./src/allmydata/test/test_backends.py 237
-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
-        bs[0].remote_close()
+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
+        # bs[0].remote_close()
 
         # Postclose: (Omnibus) failUnless written data is in final.
hunk ./src/allmydata/test/test_backends.py 241
-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
-        contents = sharesinfinal[0].read_share_data(0, 73)
-        self.failUnlessReallyEqual(contents, client_data)
+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
+        # contents = sharesinfinal[0].read_share_data(0, 73)
+        # self.failUnlessReallyEqual(contents, client_data)
 
         # Exercise the case that the share we're asking to allocate is
         # already (completely) uploaded.
hunk ./src/allmydata/test/test_backends.py 248
-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
         
     @mock.patch('time.time')
     @mock.patch('allmydata.util.fileutil.get_available_space')
}
[jacp20
wilcoxjg@gmail.com**20110728072514
 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
] {
hunk ./src/allmydata/storage/backends/das/core.py 52
     def _setup_storage(self, storedir, readonly, reserved_space):
         precondition(isinstance(storedir, FilePath), storedir, FilePath)  
         self.storedir = storedir
+        print "self.storedir: ", self.storedir
         self.readonly = readonly
         self.reserved_space = int(reserved_space)
         self.sharedir = self.storedir.child("shares")
hunk ./src/allmydata/storage/backends/das/core.py 85
 
     def get_incoming_shnums(self, storageindex):
         """ Return a frozenset of the shnum (as ints) of incoming shares. """
-        incomingdir = si_si2dir(self.incomingdir, storageindex)
+        print "self.incomingdir.children(): ", self.incomingdir.children()
+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
+        print "incomingthissi.children(): ", incomingthissi.children()
         try:
hunk ./src/allmydata/storage/backends/das/core.py 90
-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
             shnums = [ int(fp.basename) for fp in childfps ]
             return frozenset(shnums)
         except UnlistableError:
hunk ./src/allmydata/storage/backends/das/core.py 117
 
     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
         return bw
hunk ./src/allmydata/storage/backends/das/core.py 183
             # if this does happen, the old < v1.3.0 server will still allow
             # clients to read the first part of the share.
             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
+            print "wrote incoming share header"
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
         else:
hunk ./src/allmydata/storage/backends/das/core.py 274
             assert fh.tell() == offset
             fh.write(lease_info.to_immutable_data())
         finally:
-            print dir(fh)
             fh.close()
 
     def _read_num_leases(self, f):
hunk ./src/allmydata/storage/backends/das/core.py 284
             (num_leases,) = struct.unpack(">L", ro)
         finally:
             fh.close()
-            print "end of _read_num_leases"
         return num_leases
 
     def _write_num_leases(self, f, num_leases):
hunk ./src/allmydata/storage/common.py 21
 
 def si_si2dir(startfp, storageindex):
     sia = si_b2a(storageindex)
-    return startfp.child(sia[:2]).child(sia)
+    print "si_si2dir: sia =", sia
+    print "si_si2dir: startfp =", startfp
+    print "si_si2dir: startfp.pathname =", startfp.pathname
+    newfp = startfp.child(sia[:2])
+    print "si_si2dir: created child for prefix", sia[:2]
+    return newfp.child(sia)
hunk ./src/allmydata/test/test_backends.py 5
 from twisted.trial import unittest
 from twisted.python.filepath import FilePath
 from allmydata.util.log import msg
-from StringIO import StringIO
+from tempfile import TemporaryFile
 from allmydata.test.common_util import ReallyEqualMixin
 from allmydata.util.assertutil import _assert
 import mock
hunk ./src/allmydata/test/test_backends.py 34
     cancelsecret + expirationtime + nextlease
 share_data = containerdata + client_data
 testnodeid = 'testnodeidxxxxxxxxxx'
+fakefilepaths = {}
 
 
 class MockStat:
hunk ./src/allmydata/test/test_backends.py 41
     def __init__(self):
         self.st_mode = None
 
+
 class MockFilePath:
hunk ./src/allmydata/test/test_backends.py 43
-    def __init__(self, PathString):
-        self.PathName = PathString
-    def child(self, ChildString):
-        return MockFilePath(os.path.join(self.PathName, ChildString))
+    def __init__(self, pathstring):
+        self.pathname = pathstring
+        self.spawn = {}
+        self.antecedent = os.path.dirname(self.pathname)
+    def child(self, childstring):
+        arg2child = os.path.join(self.pathname, childstring)
+        print "arg2child: ", arg2child
+        if fakefilepaths.has_key(arg2child):
+            child = fakefilepaths[arg2child]
+            print "reusing existing fake filepath:", arg2child
+        else:
+            child = MockFilePath(arg2child)
+        return child
     def parent(self):
hunk ./src/allmydata/test/test_backends.py 57
-        return MockFilePath(os.path.dirname(self.PathName))
+        if fakefilepaths.has_key(self.antecedent):
+            parent = fakefilepaths[self.antecedent]
+        else:
+            parent = MockFilePath(self.antecedent)
+        return parent
+    def children(self):
+        childrenfromffs = frozenset(fakefilepaths.values()) 
+        return list(childrenfromffs | frozenset(self.spawn.values()))  
     def makedirs(self):
         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
         pass
hunk ./src/allmydata/test/test_backends.py 72
         return True
     def remove(self):
         pass
-    def children(self):
-        return []
     def exists(self):
         return False
hunk ./src/allmydata/test/test_backends.py 74
-    def setContent(self, ContentString):
-        self.File = MockFile(ContentString)
     def open(self):
         return self.File.open()
hunk ./src/allmydata/test/test_backends.py 76
+    def setparents(self):
+        antecedents = []
+        def f(fps, antecedents):
+            newfps = os.path.split(fps)[0] 
+            if newfps:
+                antecedents.append(newfps)
+                f(newfps, antecedents)
+        f(self.pathname, antecedents)
+        for fps in antecedents:
+            if not fakefilepaths.has_key(fps):
+                fakefilepaths[fps] = MockFilePath(fps)
+    def setContent(self, contentstring):
+        print "I am self.pathname: ", self.pathname
+        fakefilepaths[self.pathname] = self
+        self.File = MockFile(contentstring)
+        self.setparents()
+    def create(self):
+        fakefilepaths[self.pathname] = self 
+        self.setparents()
+            
 
 class MockFile:
hunk ./src/allmydata/test/test_backends.py 98
-    def __init__(self, ContentString):
-        self.Contents = ContentString
+    def __init__(self, contentstring):
+        self.buffer = contentstring
+        self.pos = 0
     def open(self):
         return self
hunk ./src/allmydata/test/test_backends.py 103
+    def write(self, instring):
+        begin = self.pos
+        padlen = begin - len(self.buffer)
+        if padlen > 0:
+            self.buffer += '\x00' * padlen
+        end = self.pos + len(instring)
+        self.buffer = self.buffer[:begin] + instring + self.buffer[end:]
+        self.pos = end
     def close(self):
         pass
hunk ./src/allmydata/test/test_backends.py 113
-    def seek(self, position):
-        pass
-    def read(self, amount):
-        pass
+    def seek(self, pos):
+        self.pos = pos
+    def read(self, numberbytes):
+        return self.buffer[self.pos:self.pos+numberbytes]
+    def tell(self):
+        return self.pos
 
 
 class MockBCC:
hunk ./src/allmydata/test/test_backends.py 125
     def setServiceParent(self, Parent):
         pass
 
+
 class MockLCC:
     def setServiceParent(self, Parent):
         pass
hunk ./src/allmydata/test/test_backends.py 130
 
+
 class MockFiles(unittest.TestCase):
     """ I simulate a filesystem that the code under test can use. I flag the
     code under test if it reads or writes outside of its prescribed
hunk ./src/allmydata/test/test_backends.py 193
         return False
 
     def call_setContent(self, inputstring):
-        self.incomingsharefilecontents = StringIO(inputstring)
+        f = TemporaryFile()
+        f.write(inputstring)
+        f.seek(0)
+        self.incomingsharefilecontents = f
 
     def tearDown(self):
         msg( "%s.tearDown()" % (self,))
hunk ./src/allmydata/test/test_backends.py 206
                      'cutoff_date' : None,
                      'sharetypes' : None}
 
+
 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
     """ NullBackend is just for testing and executable documentation, so
     this test is actually a test of StorageServer in which we're using
hunk ./src/allmydata/test/test_backends.py 229
         self.failIf(mockopen.called)
         self.failIf(mockmkdir.called)
 
+
 class TestServerConstruction(MockFiles, ReallyEqualMixin):
     def test_create_server_fs_backend(self):
         """ This tests whether a server instance can be constructed with a
hunk ./src/allmydata/test/test_backends.py 238
 
         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
 
+
 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
     """ This tests both the StorageServer and the DAS backend together. """
     
hunk ./src/allmydata/test/test_backends.py 262
         """
         mocktime.return_value = 0
         # Inspect incoming and fail unless it's empty.
-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
-        self.failUnlessReallyEqual(incomingset, frozenset())
+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
+        # self.failUnlessReallyEqual(incomingset, frozenset())
         
         # Populate incoming with the sharenum: 0.
         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
hunk ./src/allmydata/test/test_backends.py 269
 
         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
         
         # Attempt to create a second share writer with the same sharenum.
         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
hunk ./src/allmydata/test/test_backends.py 274
 
+        # print bsa
         # Show that no sharewriter results from a remote_allocate_buckets
         # with the same si and sharenum, until BucketWriter.remote_close()
         # has been called.
hunk ./src/allmydata/test/test_backends.py 339
             self.failUnlessEqual(mode[0], 'r', mode)
             self.failUnless('b' in mode, mode)
 
-            return StringIO(share_data)
+            f = TemporaryFile()
+            f.write(share_data)
+            f.seek(0)
+            return f
         mockopen.side_effect = call_open
 
         datalen = len(share_data)
}

Context:

[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
david-sarah@jacaranda.org**20110721234941
 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
] 
[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
david-sarah@jacaranda.org**20110722000320
 Ignore-this: 55cd558b791526113db3f83c00ec328a
] 
[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
david-sarah@jacaranda.org**20110721233658
 Ignore-this: 81b41745477163c9b39c0b59db91cc62
] 
[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
david-sarah@jacaranda.org**20110722035402
 Ignore-this: 5d03f544c4154f088e26c7107494bf39
] 
[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
david-sarah@jacaranda.org**20110722024907
 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
] 
[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
david-sarah@jacaranda.org**20110718005949
 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
] 
[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
david-sarah@jacaranda.org**20110717194315
 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
] 
[README.txt: say that quickstart.rst is in the docs directory.
david-sarah@jacaranda.org**20110717192400
 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
] 
[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
zooko@zooko.com**20110717114226
 Ignore-this: df222120d41447ce4102616921626c82
 fixes #1383
] 
[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
david-sarah@jacaranda.org**20110716181813
 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
] 
[docs: add missing link in NEWS.rst
zooko@zooko.com**20110712153307
 Ignore-this: be7b7eb81c03700b739daa1027d72b35
] 
[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
zooko@zooko.com**20110712153229
 Ignore-this: 723c4f9e2211027c79d711715d972c5
 Also remove a couple of vestigial references to figleaf, which is long gone.
 fixes #1409 (remove contrib/fuse)
] 
[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
 
 provide status overlap info on the webapi t=json output, add decode/decrypt
 rate tooltips, add zoomin/zoomout buttons
] 
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
] 
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
 
 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
] 
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
] 
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
] 
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
 Ignore-this: abb864427a1b91bd10d5132b4589fd90
] 
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
 Ignore-this: c63e23146c39195de52fb17c7c49b2da
] 
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
] 
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
] 
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
] 
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
] 
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
] 
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
] 
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
] 
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
] 
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
] 
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
] 
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
] 
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
] 
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
] 
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
] 
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
] 
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
] 
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
] 
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
] 
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
] 
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
] 
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
] 
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
] 
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
] 
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
] 
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
] 
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
] 
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
] 
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
] 
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might have been spreading out the total number of shares
 requested overall, not just the number requested simultaneously. (Note that
 SegmentFetcher is scoped to a single segment, so the effect doesn't last very
 long.)
] 
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
] 
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
] 
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
] 
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
] 
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
] 
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
] 
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
] 
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
] 
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
] 
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
] 
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
] 
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
] 
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
] 
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
] 
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
] 
[test_client.py, upload.py: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
] 
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number, then if the test still fails we can be even more certain that it was never going to finish.
] 
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
] 
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
] 
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
] 
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
] 
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
] 
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101] 
Patch bundle hash:
5112625929162114ea48588e65726436e5c6a7c0
