Ticket #778: tests.txt

File tests.txt, 165.3 KB (added by kevan, at 2010-03-19T05:24:17Z)

Tests updated to be current.

Sat Oct 17 18:30:13 PDT 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.

Fri Oct 30 02:19:08 PDT 2009  "Kevan Carstensen" <kevan@isnotajoke.com>
  * Refactor some behavior into a mixin, and add tests for the behavior described in #778

Tue Nov  3 19:36:02 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter tests to use the new form of set_shareholders

Tue Nov  3 19:42:32 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Minor tweak to an existing test -- make the first server read-write, instead of read-only

Wed Nov  4 03:13:24 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add a test for upload.shares_by_server

Wed Nov  4 03:28:49 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add more tests for comment:53 in ticket #778

Sun Nov  8 16:37:35 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers

Mon Nov 16 11:23:34 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Re-work 'test_upload.py' to be more readable; add more tests for #778

Sun Nov 22 17:20:08 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Add tests for the behavior described in #834.

Fri Dec  4 20:34:53 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
  * Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.

Thu Jan  7 10:13:25 PST 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Alter various unit tests to work with the new happy behavior

Thu Mar 18 22:06:53 PDT 2010  Kevan Carstensen <kevan@isnotajoke.com>
  * Revisions of the #778 tests, per reviewers' comments

  - Fix comments and confusing naming.
  - Add tests for the new error messages suggested by David-Sarah
    and Zooko.
  - Alter existing tests for new error messages.
  - Make sure that the tests continue to work with the trunk.
  - Add a test for a mutual disjointness assertion that I added to
    upload.servers_of_happiness.
  - Fix the comments to correctly reflect read-only status.
  - Add a test for an edge case in should_add_server.
  - Add an assertion to make sure that share redistribution works as it
    should.
  - Alter tests to work with revised servers_of_happiness semantics.
  - Remove tests for should_add_server, since that function no longer exists.
  - Alter tests to know about merge_peers, and to use it before calling
    servers_of_happiness.
  - Add tests for merge_peers.
  - Add Zooko's puzzles to the tests.


New patches:

[Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
Kevan Carstensen <kevan@isnotajoke.com>**20091018013013
 Ignore-this: e12cd7c4ddeb65305c5a7e08df57c754
] {
hunk ./src/allmydata/test/no_network.py 219
             c.setServiceParent(self)
             self.clients.append(c)
 
-    def make_server(self, i):
+    def make_server(self, i, readonly=False):
         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
         serverdir = os.path.join(self.basedir, "servers",
                                  idlib.shortnodeid_b2a(serverid))
hunk ./src/allmydata/test/no_network.py 224
         fileutil.make_dirs(serverdir)
-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats())
+        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
+                           readonly_storage=readonly)
         return ss
 
     def add_server(self, i, ss):
}
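
A minimal sketch of how a test might use the new readonly flag; the test
method name is hypothetical, and set_up_grid/self.g/mktemp come from
GridTestMixin elsewhere in the suite:

    # Hypothetical GridTestMixin-based usage of the patched make_server:
    def test_with_readonly_server(self):
        self.basedir = self.mktemp()
        self.set_up_grid(num_clients=1, num_servers=1)
        ss = self.g.make_server(1, readonly=True)  # server refuses new shares
        self.g.add_server(1, ss)
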
[Refactor some behavior into a mixin, and add tests for the behavior described in #778
"Kevan Carstensen" <kevan@isnotajoke.com>**20091030091908
 Ignore-this: a6f9797057ca135579b249af3b2b66ac
] {
hunk ./src/allmydata/test/test_upload.py 2
 
-import os
+import os, shutil
 from cStringIO import StringIO
 from twisted.trial import unittest
 from twisted.python.failure import Failure
hunk ./src/allmydata/test/test_upload.py 12
 
 import allmydata # for __full_version__
 from allmydata import uri, monitor, client
-from allmydata.immutable import upload
+from allmydata.immutable import upload, encode
 from allmydata.interfaces import FileTooLargeError, NoSharesError, \
      NotEnoughSharesError
 from allmydata.util.assertutil import precondition
hunk ./src/allmydata/test/test_upload.py 20
 from no_network import GridTestMixin
 from common_util import ShouldFailMixin
 from allmydata.storage_client import StorageFarmBroker
+from allmydata.storage.server import storage_index_to_dir
 
 MiB = 1024*1024
 
hunk ./src/allmydata/test/test_upload.py 91
 class ServerError(Exception):
     pass
 
+class SetDEPMixin:
+    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
+        p = {"k": k,
+             "happy": happy,
+             "n": n,
+             "max_segment_size": max_segsize,
+             }
+        self.node.DEFAULT_ENCODING_PARAMETERS = p
+
 class FakeStorageServer:
     def __init__(self, mode):
         self.mode = mode
hunk ./src/allmydata/test/test_upload.py 247
     u = upload.FileHandle(fh, convergence=None)
     return uploader.upload(u)
 
-class GoodServer(unittest.TestCase, ShouldFailMixin):
+class GoodServer(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def setUp(self):
         self.node = FakeClient(mode="good")
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 254
         self.u.running = True
         self.u.parent = self.node
 
-    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
-        p = {"k": k,
-             "happy": happy,
-             "n": n,
-             "max_segment_size": max_segsize,
-             }
-        self.node.DEFAULT_ENCODING_PARAMETERS = p
-
     def _check_small(self, newuri, size):
         u = uri.from_string(newuri)
         self.failUnless(isinstance(u, uri.LiteralFileURI))
hunk ./src/allmydata/test/test_upload.py 377
         d.addCallback(self._check_large, SIZE_LARGE)
         return d
 
-class ServerErrors(unittest.TestCase, ShouldFailMixin):
+class ServerErrors(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
     def make_node(self, mode, num_servers=10):
         self.node = FakeClient(mode, num_servers)
         self.u = upload.Uploader()
hunk ./src/allmydata/test/test_upload.py 677
         d.addCallback(_done)
         return d
 
-class EncodingParameters(GridTestMixin, unittest.TestCase):
+class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
+    ShouldFailMixin):
+    def _do_upload_with_broken_servers(self, servers_to_break):
+        """
+        I act like a normal upload, but before I send the results of
+        Tahoe2PeerSelector to the Encoder, I break the first servers_to_break
+        PeerTrackers in the used_peers part of the return result.
+        """
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        broker = self.g.clients[0].storage_broker
+        sh     = self.g.clients[0]._secret_holder
+        data = upload.Data("data" * 10000, convergence="")
+        data.encoding_param_k = 3
+        data.encoding_param_happy = 4
+        data.encoding_param_n = 10
+        uploadable = upload.EncryptAnUploadable(data)
+        encoder = encode.Encoder()
+        encoder.set_encrypted_uploadable(uploadable)
+        status = upload.UploadStatus()
+        selector = upload.Tahoe2PeerSelector("dglev", "test", status)
+        storage_index = encoder.get_param("storage_index")
+        share_size = encoder.get_param("share_size")
+        block_size = encoder.get_param("block_size")
+        num_segments = encoder.get_param("num_segments")
+        d = selector.get_shareholders(broker, sh, storage_index,
+                                      share_size, block_size, num_segments,
+                                      10, 4)
+        def _have_shareholders((used_peers, already_peers)):
+            assert servers_to_break <= len(used_peers)
+            for index in xrange(servers_to_break):
+                server = list(used_peers)[index]
+                for share in server.buckets.keys():
+                    server.buckets[share].abort()
+            buckets = {}
+            for peer in used_peers:
+                buckets.update(peer.buckets)
+            encoder.set_shareholders(buckets)
+            d = encoder.start()
+            return d
+        d.addCallback(_have_shareholders)
+        return d
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        ss = self.g.make_server(server_number, readonly)
+        self.g.add_server(server_number, ss)
+        if share_number:
+            # Copy share i from the directory associated with the first
+            # storage server to the directory associated with this one.
+            old_share_location = self.shares[share_number][2]
+            new_share_location = os.path.join(ss.storedir, "shares")
+            si = uri.from_string(self.uri).get_storage_index()
+            new_share_location = os.path.join(new_share_location,
+                                              storage_index_to_dir(si))
+            if not os.path.exists(new_share_location):
+                os.makedirs(new_share_location)
+            new_share_location = os.path.join(new_share_location,
+                                              str(share_number))
+            shutil.copy(old_share_location, new_share_location)
+            shares = self.find_shares(self.uri)
+            # Make sure that the storage server has the share.
+            self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                            in shares)
+
+    def _setup_and_upload(self):
+        """
+        I set up a NoNetworkGrid with a single server and client,
+        upload a file to it, store its uri in self.uri, and store its
+        sharedata in self.shares.
+        """
+        self.set_up_grid(num_clients=1, num_servers=1)
+        client = self.g.clients[0]
+        client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        data = upload.Data("data" * 10000, convergence="")
+        self.data = data
+        d = client.upload(data)
+        def _store_uri(ur):
+            self.uri = ur.uri
+        d.addCallback(_store_uri)
+        d.addCallback(lambda ign:
+            self.find_shares(self.uri))
+        def _store_shares(shares):
+            self.shares = shares
+        d.addCallback(_store_shares)
+        return d
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 784
         d.addCallback(_check)
         return d
 
+    def _setUp(self, ns):
+        # Used by test_happy_semantics and test_prexisting_share_behavior
+        # to set up the grid.
+        self.node = FakeClient(mode="good", num_servers=ns)
+        self.u = upload.Uploader()
+        self.u.running = True
+        self.u.parent = self.node
+
+    def test_happy_semantics(self):
+        self._setUp(2)
+        DATA = upload.Data("kittens" * 10000, convergence="")
+        # These parameters are unsatisfiable with the client that we've made
+        # -- we'll use them to test that the semantics work correctly.
+        self.set_encoding_parameters(k=3, happy=5, n=10)
+        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(5 were requested)",
+                            self.u.upload, DATA)
+        # Let's reset the client to have 10 servers
+        d.addCallback(lambda ign:
+            self._setUp(10))
+        # These parameters are satisfiable with the client we've made.
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # this should work
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        # Let's reset the client to have 7 servers
+        # (this is less than n, but more than h)
+        d.addCallback(lambda ign:
+            self._setUp(7))
+        # These encoding parameters should still be satisfiable with our
+        # client setup
+        d.addCallback(lambda ign:
+            self.set_encoding_parameters(k=3, happy=5, n=10))
+        # This, then, should work.
+        d.addCallback(lambda ign:
+            self.u.upload(DATA))
+        return d
+
+    def test_problem_layouts(self):
+        self.basedir = self.mktemp()
+        # This scenario is at
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:52
+        #
+        # The scenario in comment:52 proposes that we have a layout
+        # like:
+        # server 1: share 1
+        # server 2: share 1
+        # server 3: share 1
+        # server 4: shares 2 - 10
+        # To get access to the shares, we will first upload to one
+        # server, which will then have shares 1 - 10. We'll then
+        # add three new servers, configure them to not accept any new
+        # shares, then write share 1 directly into the serverdir of each.
+        # Then each of servers 1 - 3 will report that they have share 1,
+        # and will not accept any new share, while server 4 will report that
+        # it has shares 2 - 10 and will accept new shares.
+        # We'll then set 'happy' = 4, and see that an upload fails
+        # (as it should)
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3, 0, True))
+        # Remove the first share from server 0.
+        def _remove_share_0():
+            share_location = self.shares[0][2]
+            os.remove(share_location)
+        d.addCallback(lambda ign:
+            _remove_share_0())
+        # Set happy = 4 in the client.
+        def _prepare():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(lambda ign:
+            _prepare())
+        # Uploading data should fail
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 1 servers "
+                            "(4 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+
+
+        # This scenario is at
+        # http://allmydata.org/trac/tahoe/ticket/778#comment:53
+        #
+        # Set up the grid to have one server
+        def _change_basedir(ign):
+            self.basedir = self.mktemp()
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        # We want to have a layout like this:
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        # server 4: shares 1 - 10
+        # (this is an expansion of Zooko's example because it is easier
+        #  to code, but it will fail in the same way)
+        # To start, we'll create a server with shares 1-10 of the data
+        # we're about to upload.
+        # Next, we'll add three new servers to our NoNetworkGrid. We'll add
+        # one share from our initial upload to each of these.
+        # The counterintuitive ordering of the share numbers is to deal with
+        # the permuting of these servers -- distributing the shares this
+        # way ensures that the Tahoe2PeerSelector sees them in the order
+        # described above.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        # So, we now have the following layout:
+        # server 0: shares 1 - 10
+        # server 1: share 0
+        # server 2: share 1
+        # server 3: share 2
+        # We want to change the 'happy' parameter in the client to 4.
+        # We then want to feed the upload process a list of peers that
+        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # Ideally, a reupload of our original data should work.
+        def _reset_encoding_parameters(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        # We need this to get around the fact that the old Data
+        # instance already has a happy parameter set.
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_dropped_servers_in_encoder(self):
+        def _set_basedir(ign=None):
+            self.basedir = self.mktemp()
+        _set_basedir()
+        d = self._setup_and_upload()
+        # Add 5 servers, with one share each from the original
+        # Add a readonly server
+        def _do_server_setup(ign):
+            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(2)
+            self._add_server_with_share(3)
+            self._add_server_with_share(4)
+            self._add_server_with_share(5)
+        d.addCallback(_do_server_setup)
+        # remove the original server
+        # (necessary to ensure that the Tahoe2PeerSelector will distribute
+        #  all the shares)
+        def _remove_server(ign):
+            server = self.g.servers_by_number[0]
+            self.g.remove_server(server.my_nodeid)
+        d.addCallback(_remove_server)
+        # This should succeed.
+        d.addCallback(lambda ign:
+            self._do_upload_with_broken_servers(1))
+        # Now, do the same thing over again, but drop 2 servers instead
+        # of 1. This should fail.
+        d.addCallback(_set_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(_do_server_setup)
+        d.addCallback(_remove_server)
+        d.addCallback(lambda ign:
+            self.shouldFail(NotEnoughSharesError,
+                            "test_dropped_server_in_encoder", "",
+                            self._do_upload_with_broken_servers, 2))
+        return d
+
+
+    def test_servers_with_unique_shares(self):
+        # servers_with_unique_shares expects a dict of
+        # shnum => peerid as a preexisting shares argument.
+        test1 = {
+                 1 : "server1",
+                 2 : "server2",
+                 3 : "server3",
+                 4 : "server4"
+                }
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(4, len(unique_servers))
+        for server in ["server1", "server2", "server3", "server4"]:
+            self.failUnlessIn(server, unique_servers)
+        test1[4] = "server1"
+        # Now there should only be 3 unique servers.
+        unique_servers = upload.servers_with_unique_shares(test1)
+        self.failUnlessEqual(3, len(unique_servers))
+        for server in ["server1", "server2", "server3"]:
+            self.failUnlessIn(server, unique_servers)
+        # servers_with_unique_shares expects a set of PeerTracker
+        # instances as a used_peers argument, but only uses the peerid
+        # instance variable to assess uniqueness. So we feed it some fake
+        # PeerTrackers whose only important characteristic is that they
+        # have peerid set to something.
+        class FakePeerTracker:
+            pass
+        trackers = []
+        for server in ["server5", "server6", "server7", "server8"]:
+            t = FakePeerTracker()
+            t.peerid = server
+            trackers.append(t)
+        # Recall that there are 3 unique servers in test1. Since none of
+        # those overlap with the ones in trackers, we should get 7 back
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
+        expected_servers.remove("server4")
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+        # Now add an overlapping server to trackers.
+        t = FakePeerTracker()
+        t.peerid = "server1"
+        trackers.append(t)
+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
+        self.failUnlessEqual(7, len(unique_servers))
+        for server in expected_servers:
+            self.failUnlessIn(server, unique_servers)
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
}
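
For reference, a sketch of how the new SetDEPMixin is meant to be used;
the class name and the parameter values here are illustrative, not part
of the patch:

    # Any test class that provides self.node can mix in SetDEPMixin and
    # swap the client's encoding defaults in a single call:
    class ExampleTest(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
        def setUp(self):
            self.node = FakeClient(mode="good")
        def test_custom_parameters(self):
            self.set_encoding_parameters(k=3, happy=5, n=10)
            # self.node.DEFAULT_ENCODING_PARAMETERS now holds k/happy/n.
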
[Alter tests to use the new form of set_shareholders
Kevan Carstensen <kevan@isnotajoke.com>**20091104033602
 Ignore-this: 3deac11fc831618d11441317463ef830
] {
hunk ./src/allmydata/test/test_encode.py 301
                     (NUM_SEGMENTS-1)*segsize, len(data), NUM_SEGMENTS*segsize)
 
            shareholders = {}
+            servermap = {}
            for shnum in range(NUM_SHARES):
                peer = FakeBucketReaderWriterProxy()
                shareholders[shnum] = peer
hunk ./src/allmydata/test/test_encode.py 305
+                servermap[shnum] = str(shnum)
                all_shareholders.append(peer)
hunk ./src/allmydata/test/test_encode.py 307
-            e.set_shareholders(shareholders)
+            e.set_shareholders(shareholders, servermap)
            return e.start()
        d.addCallback(_ready)

merger 0.0 (
hunk ./src/allmydata/test/test_encode.py 462
-            all_peers = []
hunk ./src/allmydata/test/test_encode.py 463
+            servermap = {}
)
hunk ./src/allmydata/test/test_encode.py 467
                mode = bucket_modes.get(shnum, "good")
                peer = FakeBucketReaderWriterProxy(mode)
                shareholders[shnum] = peer
-            e.set_shareholders(shareholders)
+                servermap[shnum] = str(shnum)
+            e.set_shareholders(shareholders, servermap)
            return e.start()
        d.addCallback(_ready)
        def _sent(res):
hunk ./src/allmydata/test/test_upload.py 711
                for share in server.buckets.keys():
                    server.buckets[share].abort()
            buckets = {}
+            servermap = already_peers.copy()
            for peer in used_peers:
                buckets.update(peer.buckets)
hunk ./src/allmydata/test/test_upload.py 714
-            encoder.set_shareholders(buckets)
+                for bucket in peer.buckets:
+                    servermap[bucket] = peer.peerid
+            encoder.set_shareholders(buckets, servermap)
            d = encoder.start()
            return d
        d.addCallback(_have_shareholders)
hunk ./src/allmydata/test/test_upload.py 933
        _set_basedir()
        d = self._setup_and_upload()
        # Add 5 servers, with one share each from the original
-        # Add a readonly server
        def _do_server_setup(ign):
            self._add_server_with_share(1, 1, True)
            self._add_server_with_share(2)
}
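
The shape of the new two-argument call, condensed from the hunks above:
shareholders maps share numbers to bucket proxies, and the new servermap
argument maps each share number to the peerid holding it (these tests use
str(shnum) as a stand-in peerid):

    shareholders = {}
    servermap = {}
    for shnum in range(NUM_SHARES):
        shareholders[shnum] = FakeBucketReaderWriterProxy()
        servermap[shnum] = str(shnum)  # peerid stand-in
    e.set_shareholders(shareholders, servermap)
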
[Minor tweak to an existing test -- make the first server read-write, instead of read-only
Kevan Carstensen <kevan@isnotajoke.com>**20091104034232
 Ignore-this: a951a46c93f7f58dd44d93d8623b2aee
] hunk ./src/allmydata/test/test_upload.py 934
         d = self._setup_and_upload()
         # Add 5 servers, with one share each from the original
         def _do_server_setup(ign):
-            self._add_server_with_share(1, 1, True)
+            self._add_server_with_share(1, 1)
             self._add_server_with_share(2)
             self._add_server_with_share(3)
             self._add_server_with_share(4)
[Add a test for upload.shares_by_server
Kevan Carstensen <kevan@isnotajoke.com>**20091104111324
 Ignore-this: f9802e82d6982a93e00f92e0b276f018
] hunk ./src/allmydata/test/test_upload.py 1013
             self.failUnlessIn(server, unique_servers)
 
 
+    def test_shares_by_server(self):
+        test = {
+                    1 : "server1",
+                    2 : "server2",
+                    3 : "server3",
+                    4 : "server4"
+               }
+        shares_by_server = upload.shares_by_server(test)
+        self.failUnlessEqual(set([1]), shares_by_server["server1"])
+        self.failUnlessEqual(set([2]), shares_by_server["server2"])
+        self.failUnlessEqual(set([3]), shares_by_server["server3"])
+        self.failUnlessEqual(set([4]), shares_by_server["server4"])
+        test1 = {
+                    1 : "server1",
+                    2 : "server1",
+                    3 : "server1",
+                    4 : "server2",
+                    5 : "server2"
+                }
+        shares_by_server = upload.shares_by_server(test1)
+        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
+        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
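
In other words, shares_by_server inverts a shnum-to-peerid mapping into a
peerid-to-set-of-shnums mapping, as the assertions above pin down:

    mapping = {1: "server1", 2: "server1", 3: "server2"}
    by_server = upload.shares_by_server(mapping)
    # by_server == {"server1": set([1, 2]), "server2": set([3])}
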
[Add more tests for comment:53 in ticket #778
Kevan Carstensen <kevan@isnotajoke.com>**20091104112849
 Ignore-this: 3bb2edd299a944cc9586e14d5d83ec8c
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
-    def _add_server_with_share(self, server_number, share_number=None,
-                               readonly=False):
+    def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
         ss = self.g.make_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 727
         self.g.add_server(server_number, ss)
+
+    def _add_server_with_share(self, server_number, share_number=None,
+                               readonly=False):
+        self._add_server(server_number, readonly)
         if share_number:
hunk ./src/allmydata/test/test_upload.py 732
-            # Copy share i from the directory associated with the first
-            # storage server to the directory associated with this one.
-            old_share_location = self.shares[share_number][2]
-            new_share_location = os.path.join(ss.storedir, "shares")
-            si = uri.from_string(self.uri).get_storage_index()
-            new_share_location = os.path.join(new_share_location,
-                                              storage_index_to_dir(si))
-            if not os.path.exists(new_share_location):
-                os.makedirs(new_share_location)
-            new_share_location = os.path.join(new_share_location,
-                                              str(share_number))
-            shutil.copy(old_share_location, new_share_location)
-            shares = self.find_shares(self.uri)
-            # Make sure that the storage server has the share.
-            self.failUnless((share_number, ss.my_nodeid, new_share_location)
-                            in shares)
+            self._copy_share_to_server(share_number, server_number)
+
+    def _copy_share_to_server(self, share_number, server_number):
+        ss = self.g.servers_by_number[server_number]
+        # Copy share i from the directory associated with the first
+        # storage server to the directory associated with this one.
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        assert self.shares, "I tried to find shares at self.shares, but failed"
+        old_share_location = self.shares[share_number][2]
+        new_share_location = os.path.join(ss.storedir, "shares")
+        si = uri.from_string(self.uri).get_storage_index()
+        new_share_location = os.path.join(new_share_location,
+                                          storage_index_to_dir(si))
+        if not os.path.exists(new_share_location):
+            os.makedirs(new_share_location)
+        new_share_location = os.path.join(new_share_location,
+                                          str(share_number))
+        shutil.copy(old_share_location, new_share_location)
+        shares = self.find_shares(self.uri)
+        # Make sure that the storage server has the share.
+        self.failUnless((share_number, ss.my_nodeid, new_share_location)
+                        in shares)
+
 
     def _setup_and_upload(self):
         """
hunk ./src/allmydata/test/test_upload.py 917
         d.addCallback(lambda ign:
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
-        # server 0: shares 1 - 10
+        # server 0: shares 0 - 9
         # server 1: share 0
         # server 2: share 1
         # server 3: share 2
hunk ./src/allmydata/test/test_upload.py 934
         # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+
+
+        # This scenario is basically comment:53, but with the order reversed;
+        # this means that the Tahoe2PeerSelector sees
+        # server 0: shares 1-10
+        # server 1: share 1
+        # server 2: share 2
+        # server 3: share 3
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2))
+        # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        # Remove the first server, and add a placeholder with share 0
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0))
+        # Now try uploading.
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the same thing, but with empty servers after the first one
+        # We want to make sure that Tahoe2PeerSelector will redistribute
+        # shares as necessary, not simply discover an existing layout.
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=3))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=1))
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server(server_number=0))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        # Try the following layout
+        # server 0: shares 1-10
+        # server 1: share 1, read-only
+        # server 2: share 2, read-only
+        # server 3: share 3, read-only
+        d.addCallback(_change_basedir)
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        # Copy all of the other shares to server number 2
+        d.addCallback(_copy_shares)
+        # Remove server 0, and add another in its place
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=0, share_number=0,
+                                        readonly=True))
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
         return d
 
 
}
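
The layout-building recipe these tests repeat, reduced to its essentials
(server and share numbers are illustrative):

    # One helper call per single-share server...
    d.addCallback(lambda ign:
        self._add_server_with_share(server_number=2, share_number=0))
    # ...then bulk-copy the remaining shares so one server holds 0-9:
    def _copy_shares(ign):
        for i in xrange(1, 10):
            self._copy_share_to_server(i, 2)
    d.addCallback(_copy_shares)
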
[Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
Kevan Carstensen <kevan@isnotajoke.com>**20091109003735
 Ignore-this: 12f9b4cff5752fca7ed32a6ebcff6446
] hunk ./src/allmydata/test/test_upload.py 1125
         self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
 
 
+    def test_existing_share_detection(self):
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        # Our final setup should look like this:
+        # server 1: shares 1 - 10, read-only
+        # server 2: empty
+        # server 3: empty
+        # server 4: empty
+        # The purpose of this test is to make sure that the peer selector
+        # knows about the shares on server 1, even though it is read-only.
+        # It used to simply filter these out, which would cause the test
+        # to fail when servers_of_happiness = 4.
+        d.addCallback(lambda ign:
+            self._add_server_with_share(1, 0, True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(3))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(4))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 1)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _prepare_client(ign):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+        d.addCallback(_prepare_client)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
     def _set_up_nodes_extra_config(self, clientdir):
         cfgfn = os.path.join(clientdir, "tahoe.cfg")
         oldcfg = open(cfgfn, "r").read()
[Re-work 'test_upload.py' to be more readable; add more tests for #778
Kevan Carstensen <kevan@isnotajoke.com>**20091116192334
 Ignore-this: 7e8565f92fe51dece5ae28daf442d659
] {
hunk ./src/allmydata/test/test_upload.py 722
         d.addCallback(_have_shareholders)
         return d
 
+
     def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         assert self.shares, "I tried to find shares at self.shares, but failed"
hunk ./src/allmydata/test/test_upload.py 729
         ss = self.g.make_server(server_number, readonly)
         self.g.add_server(server_number, ss)
 
+
     def _add_server_with_share(self, server_number, share_number=None,
                                readonly=False):
         self._add_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 733
-        if share_number:
+        if share_number is not None:
             self._copy_share_to_server(share_number, server_number)
 
hunk ./src/allmydata/test/test_upload.py 736
+
     def _copy_share_to_server(self, share_number, server_number):
         ss = self.g.servers_by_number[server_number]
         # Copy share i from the directory associated with the first
hunk ./src/allmydata/test/test_upload.py 752
             os.makedirs(new_share_location)
         new_share_location = os.path.join(new_share_location,
                                           str(share_number))
-        shutil.copy(old_share_location, new_share_location)
+        if old_share_location != new_share_location:
+            shutil.copy(old_share_location, new_share_location)
         shares = self.find_shares(self.uri)
         # Make sure that the storage server has the share.
         self.failUnless((share_number, ss.my_nodeid, new_share_location)
hunk ./src/allmydata/test/test_upload.py 782
         d.addCallback(_store_shares)
         return d
 
+
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
hunk ./src/allmydata/test/test_upload.py 802
         d.addCallback(_check)
         return d
 
+
     def _setUp(self, ns):
         # Used by test_happy_semantics and test_prexisting_share_behavior
         # to set up the grid.
hunk ./src/allmydata/test/test_upload.py 811
         self.u.running = True
         self.u.parent = self.node
 
+
     def test_happy_semantics(self):
         self._setUp(2)
         DATA = upload.Data("kittens" * 10000, convergence="")
hunk ./src/allmydata/test/test_upload.py 844
             self.u.upload(DATA))
         return d
 
-    def test_problem_layouts(self):
-        self.basedir = self.mktemp()
+
+    def test_problem_layout_comment_52(self):
+        def _basedir():
+            self.basedir = self.mktemp()
+        _basedir()
         # This scenario is at
         # http://allmydata.org/trac/tahoe/ticket/778#comment:52
         #
hunk ./src/allmydata/test/test_upload.py 890
         # Uploading data should fail
         d.addCallback(lambda client:
             self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
-                            "shares could only be placed on 1 servers "
+                            "shares could only be placed on 2 servers "
                             "(4 were requested)",
                             client.upload, upload.Data("data" * 10000,
                                                        convergence="")))
hunk ./src/allmydata/test/test_upload.py 895
 
+        # Do comment:52, but like this:
+        # server 2: empty
+        # server 3: share 0, read-only
+        # server 1: share 0, read-only
+        # server 0: shares 0-9
+        d.addCallback(lambda ign:
+            _basedir())
+        d.addCallback(lambda ign:
+            self._setup_and_upload())
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=0,
+                                        readonly=True))
+        def _prepare2():
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
+            return client
+        d.addCallback(lambda ign:
+            _prepare2())
+        d.addCallback(lambda client:
+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
+                            "shares could only be placed on 2 servers "
+                            "(3 were requested)",
+                            client.upload, upload.Data("data" * 10000,
+                                                       convergence="")))
+        return d
+
 
hunk ./src/allmydata/test/test_upload.py 927
+    def test_problem_layout_comment_53(self):
         # This scenario is at
         # http://allmydata.org/trac/tahoe/ticket/778#comment:53
         #
hunk ./src/allmydata/test/test_upload.py 934
         # Set up the grid to have one server
         def _change_basedir(ign):
             self.basedir = self.mktemp()
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
-        # We want to have a layout like this:
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
-        # server 4: shares 1 - 10
-        # (this is an expansion of Zooko's example because it is easier
-        #  to code, but it will fail in the same way)
-        # To start, we'll create a server with shares 1-10 of the data
-        # we're about to upload.
+        _change_basedir(None)
+        d = self._setup_and_upload()
+        # We start by uploading all of the shares to one server (which has
+        # already been done above).
         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
         # one share from our initial upload to each of these.
         # The counterintuitive ordering of the share numbers is to deal with
hunk ./src/allmydata/test/test_upload.py 952
             self._add_server_with_share(server_number=3, share_number=1))
         # So, we now have the following layout:
         # server 0: shares 0 - 9
-        # server 1: share 0
-        # server 2: share 1
-        # server 3: share 2
+        # server 1: share 2
+        # server 2: share 0
+        # server 3: share 1
         # We want to change the 'happy' parameter in the client to 4.
hunk ./src/allmydata/test/test_upload.py 956
-        # We then want to feed the upload process a list of peers that
-        # server 0 is at the front of, so we trigger Zooko's scenario.
+        # The Tahoe2PeerSelector will see the peers permuted as:
+        # 2, 3, 1, 0
         # Ideally, a reupload of our original data should work.
hunk ./src/allmydata/test/test_upload.py 959
-        def _reset_encoding_parameters(ign):
+        def _reset_encoding_parameters(ign, happy=4):
             client = self.g.clients[0]
hunk ./src/allmydata/test/test_upload.py 961
-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
             return client
         d.addCallback(_reset_encoding_parameters)
hunk ./src/allmydata/test/test_upload.py 964
-        # We need this to get around the fact that the old Data
-        # instance already has a happy parameter set.
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
 
hunk ./src/allmydata/test/test_upload.py 970
 
         # This scenario is basically comment:53, but with the order reversed;
         # this means that the Tahoe2PeerSelector sees
-        # server 0: shares 1-10
-        # server 1: share 1
-        # server 2: share 2
-        # server 3: share 3
+        # server 2: shares 1-10
+        # server 3: share 1
+        # server 1: share 2
+        # server 4: share 3
         d.addCallback(_change_basedir)
         d.addCallback(lambda ign:
             self._setup_and_upload())
hunk ./src/allmydata/test/test_upload.py 992
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0))
+            self._add_server_with_share(server_number=4, share_number=0))
         # Now try uploading.
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
hunk ./src/allmydata/test/test_upload.py 1013
         d.addCallback(lambda ign:
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server(server_number=0))
+            self._add_server(server_number=4))
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1017
+        return d
+
+
+    def test_happiness_with_some_readonly_peers(self):
         # Try the following layout
hunk ./src/allmydata/test/test_upload.py 1022
-        # server 0: shares 1-10
-        # server 1: share 1, read-only
-        # server 2: share 2, read-only
-        # server 3: share 3, read-only
-        d.addCallback(_change_basedir)
-        d.addCallback(lambda ign:
-            self._setup_and_upload())
+        # server 2: shares 0-9
+        # server 4: share 0, read-only
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
        d.addCallback(lambda ign:
             self._add_server_with_share(server_number=2, share_number=0))
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1037
             self._add_server_with_share(server_number=1, share_number=2,
                                         readonly=True))
         # Copy all of the other shares to server number 2
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
         d.addCallback(_copy_shares)
         # Remove server 0, and add another in its place
         d.addCallback(lambda ign:
hunk ./src/allmydata/test/test_upload.py 1045
             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
         d.addCallback(lambda ign:
-            self._add_server_with_share(server_number=0, share_number=0,
+            self._add_server_with_share(server_number=4, share_number=0,
                                         readonly=True))
hunk ./src/allmydata/test/test_upload.py 1047
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
+        d.addCallback(_reset_encoding_parameters)
+        d.addCallback(lambda client:
+            client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
+
+
+    def test_happiness_with_all_readonly_peers(self):
+        # server 3: share 1, read-only
+        # server 1: share 2, read-only
+        # server 2: shares 0-9, read-only
+        # server 4: share 0, read-only
+        # The idea with this test is to make sure that the survey of
+        # read-only peers doesn't undercount servers of happiness
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload()
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=4, share_number=0,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=3, share_number=1,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=1, share_number=2,
+                                        readonly=True))
+        d.addCallback(lambda ign:
+            self._add_server_with_share(server_number=2, share_number=0,
+                                        readonly=True))
+        def _copy_shares(ign):
+            for i in xrange(1, 10):
+                self._copy_share_to_server(i, 2)
+        d.addCallback(_copy_shares)
+        d.addCallback(lambda ign:
+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
+        def _reset_encoding_parameters(ign, happy=4):
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
+            return client
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
hunk ./src/allmydata/test/test_upload.py 1099
1099             self.basedir = self.mktemp()
1100         _set_basedir()
1101         d = self._setup_and_upload();
1102-        # Add 5 servers, with one share each from the original
1103+        # Add 5 servers
1104         def _do_server_setup(ign):
1105hunk ./src/allmydata/test/test_upload.py 1101
1106-            self._add_server_with_share(1, 1)
1107+            self._add_server_with_share(1)
1108             self._add_server_with_share(2)
1109             self._add_server_with_share(3)
1110             self._add_server_with_share(4)
1111hunk ./src/allmydata/test/test_upload.py 1126
1112         d.addCallback(_remove_server)
1113         d.addCallback(lambda ign:
1114             self.shouldFail(NotEnoughSharesError,
1115-                            "test_dropped_server_in_encoder", "",
1116+                            "test_dropped_servers_in_encoder",
1117+                            "lost too many servers during upload "
1118+                            "(still have 3, want 4)",
1119+                            self._do_upload_with_broken_servers, 2))
1120+        # Now do the same thing over again, but make some of the servers
1121+        # readonly, break some of the ones that aren't, and make sure that
1122+        # happiness accounting is preserved.
1123+        d.addCallback(_set_basedir)
1124+        d.addCallback(lambda ign:
1125+            self._setup_and_upload())
1126+        def _do_server_setup_2(ign):
1127+            self._add_server_with_share(1)
1128+            self._add_server_with_share(2)
1129+            self._add_server_with_share(3)
1130+            self._add_server_with_share(4, 7, readonly=True)
1131+            self._add_server_with_share(5, 8, readonly=True)
1132+        d.addCallback(_do_server_setup_2)
1133+        d.addCallback(_remove_server)
1134+        d.addCallback(lambda ign:
1135+            self._do_upload_with_broken_servers(1))
1136+        d.addCallback(_set_basedir)
1137+        d.addCallback(lambda ign:
1138+            self._setup_and_upload())
1139+        d.addCallback(_do_server_setup_2)
1140+        d.addCallback(_remove_server)
1141+        d.addCallback(lambda ign:
1142+            self.shouldFail(NotEnoughSharesError,
1143+                            "test_dropped_servers_in_encoder",
1144+                            "lost too many servers during upload "
1145+                            "(still have 3, want 4)",
1146                             self._do_upload_with_broken_servers, 2))
1147         return d
1148 
1149hunk ./src/allmydata/test/test_upload.py 1179
1150         self.failUnlessEqual(3, len(unique_servers))
1151         for server in ["server1", "server2", "server3"]:
1152             self.failUnlessIn(server, unique_servers)
1153-        # servers_with_unique_shares expects a set of PeerTracker
1154-        # instances as a used_peers argument, but only uses the peerid
1155-        # instance variable to assess uniqueness. So we feed it some fake
1156-        # PeerTrackers whose only important characteristic is that they
1157-        # have peerid set to something.
1158+        # servers_with_unique_shares expects to receive some object with
1159+        # a peerid attribute. So we make a FakePeerTracker whose only
1160+        # job is to have a peerid attribute.
1161         class FakePeerTracker:
1162             pass
1163         trackers = []
1164hunk ./src/allmydata/test/test_upload.py 1185
1165-        for server in ["server5", "server6", "server7", "server8"]:
1166+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
1167             t = FakePeerTracker()
1168             t.peerid = server
1169hunk ./src/allmydata/test/test_upload.py 1188
1170+            t.buckets = [i]
1171             trackers.append(t)
1172         # Recall that there are 3 unique servers in test1. Since none of
1173         # those overlap with the ones in trackers, we should get 7 back
1174hunk ./src/allmydata/test/test_upload.py 1201
1175         # Now add an overlapping server to trackers.
1176         t = FakePeerTracker()
1177         t.peerid = "server1"
1178+        t.buckets = [1]
1179         trackers.append(t)
1180         unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
1181         self.failUnlessEqual(7, len(unique_servers))
1182hunk ./src/allmydata/test/test_upload.py 1207
1183         for server in expected_servers:
1184             self.failUnlessIn(server, unique_servers)
1185+        test = {}
1186+        unique_servers = upload.servers_with_unique_shares(test)
1187+        self.failUnlessEqual(0, len(test))
1188 
1189 
1190     def test_shares_by_server(self):
1191hunk ./src/allmydata/test/test_upload.py 1213
1192-        test = {
1193-                    1 : "server1",
1194-                    2 : "server2",
1195-                    3 : "server3",
1196-                    4 : "server4"
1197-               }
1198+        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
1199         shares_by_server = upload.shares_by_server(test)
1200         self.failUnlessEqual(set([1]), shares_by_server["server1"])
1201         self.failUnlessEqual(set([2]), shares_by_server["server2"])
1202hunk ./src/allmydata/test/test_upload.py 1267
1203         return d
1204 
1205 
1206+    def test_should_add_server(self):
1207+        shares = dict([(i, "server%d" % i) for i in xrange(10)])
1208+        self.failIf(upload.should_add_server(shares, "server1", 4))
1209+        shares[4] = "server1"
1210+        self.failUnless(upload.should_add_server(shares, "server4", 4))
1211+        shares = {}
1212+        self.failUnless(upload.should_add_server(shares, "server1", 1))
1213+
1214+
1215     def _set_up_nodes_extra_config(self, clientdir):
1216         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1217         oldcfg = open(cfgfn, "r").read()
1218}
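
The test_shares_by_server test in the patch above pins down a simple
inversion of the servermap: at this point in the series the map runs
shnum -> peerid, and shares_by_server turns it into peerid -> set(shnums).
A minimal sketch of that inversion (shares_by_server_sketch is a
hypothetical stand-in, not the project code):

    def shares_by_server_sketch(servermap):
        # invert {shnum: peerid} into {peerid: set(shnums)}
        result = {}
        for shnum, peerid in servermap.items():
            result.setdefault(peerid, set()).add(shnum)
        return result

    # mirrors the style of assertion used in test_shares_by_server above
    assert shares_by_server_sketch({1: "server1", 2: "server1"}) == \
           {"server1": set([1, 2])}
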
1219[Add tests for the behavior described in #834.
1220Kevan Carstensen <kevan@isnotajoke.com>**20091123012008
1221 Ignore-this: d8e0aa0f3f7965ce9b5cea843c6d6f9f
1222] {
1223hunk ./src/allmydata/test/test_encode.py 12
1224 from allmydata.util.assertutil import _assert
1225 from allmydata.util.consumer import MemoryConsumer
1226 from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
1227-     NotEnoughSharesError, IStorageBroker
1228+     NotEnoughSharesError, IStorageBroker, UploadHappinessError
1229 from allmydata.monitor import Monitor
1230 import common_util as testutil
1231 
1232hunk ./src/allmydata/test/test_encode.py 794
1233         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1234         def _done(res):
1235             self.failUnless(isinstance(res, Failure))
1236-            self.failUnless(res.check(NotEnoughSharesError), res)
1237+            self.failUnless(res.check(UploadHappinessError), res)
1238         d.addBoth(_done)
1239         return d
1240 
1241hunk ./src/allmydata/test/test_encode.py 805
1242         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1243         def _done(res):
1244             self.failUnless(isinstance(res, Failure))
1245-            self.failUnless(res.check(NotEnoughSharesError))
1246+            self.failUnless(res.check(UploadHappinessError))
1247         d.addBoth(_done)
1248         return d
1249hunk ./src/allmydata/test/test_upload.py 13
1250 import allmydata # for __full_version__
1251 from allmydata import uri, monitor, client
1252 from allmydata.immutable import upload, encode
1253-from allmydata.interfaces import FileTooLargeError, NoSharesError, \
1254-     NotEnoughSharesError
1255+from allmydata.interfaces import FileTooLargeError, UploadHappinessError
1256 from allmydata.util.assertutil import precondition
1257 from allmydata.util.deferredutil import DeferredListShouldSucceed
1258 from no_network import GridTestMixin
1259hunk ./src/allmydata/test/test_upload.py 402
1260 
1261     def test_first_error_all(self):
1262         self.make_node("first-fail")
1263-        d = self.shouldFail(NoSharesError, "first_error_all",
1264+        d = self.shouldFail(UploadHappinessError, "first_error_all",
1265                             "peer selection failed",
1266                             upload_data, self.u, DATA)
1267         def _check((f,)):
1268hunk ./src/allmydata/test/test_upload.py 434
1269 
1270     def test_second_error_all(self):
1271         self.make_node("second-fail")
1272-        d = self.shouldFail(NotEnoughSharesError, "second_error_all",
1273+        d = self.shouldFail(UploadHappinessError, "second_error_all",
1274                             "peer selection failed",
1275                             upload_data, self.u, DATA)
1276         def _check((f,)):
1277hunk ./src/allmydata/test/test_upload.py 452
1278         self.u.parent = self.node
1279 
1280     def _should_fail(self, f):
1281-        self.failUnless(isinstance(f, Failure) and f.check(NoSharesError), f)
1282+        self.failUnless(isinstance(f, Failure) and f.check(UploadHappinessError), f)
1283 
1284     def test_data_large(self):
1285         data = DATA
1286hunk ./src/allmydata/test/test_upload.py 817
1287         # These parameters are unsatisfiable with the client that we've made
1288         # -- we'll use them to test that the semnatics work correctly.
1289         self.set_encoding_parameters(k=3, happy=5, n=10)
1290-        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1291+        d = self.shouldFail(UploadHappinessError, "test_happy_semantics",
1292                             "shares could only be placed on 2 servers "
1293                             "(5 were requested)",
1294                             self.u.upload, DATA)
1295hunk ./src/allmydata/test/test_upload.py 888
1296             _prepare())
1297         # Uploading data should fail
1298         d.addCallback(lambda client:
1299-            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1300+            self.shouldFail(UploadHappinessError, "test_happy_semantics",
1301                             "shares could only be placed on 2 servers "
1302                             "(4 were requested)",
1303                             client.upload, upload.Data("data" * 10000,
1304hunk ./src/allmydata/test/test_upload.py 918
1305         d.addCallback(lambda ign:
1306             _prepare2())
1307         d.addCallback(lambda client:
1308-            self.shouldFail(NotEnoughSharesError, "test_happy_sematics",
1309+            self.shouldFail(UploadHappinessError, "test_happy_sematics",
1310                             "shares could only be placed on 2 servers "
1311                             "(3 were requested)",
1312                             client.upload, upload.Data("data" * 10000,
1313hunk ./src/allmydata/test/test_upload.py 1124
1314         d.addCallback(_do_server_setup)
1315         d.addCallback(_remove_server)
1316         d.addCallback(lambda ign:
1317-            self.shouldFail(NotEnoughSharesError,
1318+            self.shouldFail(UploadHappinessError,
1319                             "test_dropped_servers_in_encoder",
1320                             "lost too many servers during upload "
1321                             "(still have 3, want 4)",
1322hunk ./src/allmydata/test/test_upload.py 1151
1323         d.addCallback(_do_server_setup_2)
1324         d.addCallback(_remove_server)
1325         d.addCallback(lambda ign:
1326-            self.shouldFail(NotEnoughSharesError,
1327+            self.shouldFail(UploadHappinessError,
1328                             "test_dropped_servers_in_encoder",
1329                             "lost too many servers during upload "
1330                             "(still have 3, want 4)",
1331hunk ./src/allmydata/test/test_upload.py 1275
1332         self.failUnless(upload.should_add_server(shares, "server1", 1))
1333 
1334 
1335+    def test_exception_messages_during_peer_selection(self):
1336+        # server 1: readonly, no shares
1337+        # server 2: readonly, no shares
1338+        # server 3: readonly, no shares
1339+        # server 4: readonly, no shares
1340+        # server 5: readonly, no shares
1341+        # This will fail, but we want to make sure that the log messages
1342+        # are informative about why it has failed.
1343+        self.basedir = self.mktemp()
1344+        d = self._setup_and_upload()
1345+        d.addCallback(lambda ign:
1346+            self._add_server_with_share(server_number=1, readonly=True))
1347+        d.addCallback(lambda ign:
1348+            self._add_server_with_share(server_number=2, readonly=True))
1349+        d.addCallback(lambda ign:
1350+            self._add_server_with_share(server_number=3, readonly=True))
1351+        d.addCallback(lambda ign:
1352+            self._add_server_with_share(server_number=4, readonly=True))
1353+        d.addCallback(lambda ign:
1354+            self._add_server_with_share(server_number=5, readonly=True))
1355+        d.addCallback(lambda ign:
1356+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1357+        def _reset_encoding_parameters(ign):
1358+            client = self.g.clients[0]
1359+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1360+            return client
1361+        d.addCallback(_reset_encoding_parameters)
1362+        d.addCallback(lambda client:
1363+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1364+                            "peer selection failed for <Tahoe2PeerSelector "
1365+                            "for upload dglev>: placed 0 shares out of 10 "
1366+                            "total (10 homeless), want to place on 4 servers,"
1367+                            " sent 5 queries to 5 peers, 0 queries placed "
1368+                            "some shares, 5 placed none "
1369+                            "(of which 5 placed none due to the server being "
1370+                            "full and 0 placed none due to an error)",
1371+                            client.upload,
1372+                            upload.Data("data" * 10000, convergence="")))
1373+
1374+
1375+        # server 1: readonly, no shares
1376+        # server 2: broken, no shares
1377+        # server 3: readonly, no shares
1378+        # server 4: readonly, no shares
1379+        # server 5: readonly, no shares
1380+        def _reset(ign):
1381+            self.basedir = self.mktemp()
1382+        d.addCallback(_reset)
1383+        d.addCallback(lambda ign:
1384+            self._setup_and_upload())
1385+        d.addCallback(lambda ign:
1386+            self._add_server_with_share(server_number=1, readonly=True))
1387+        d.addCallback(lambda ign:
1388+            self._add_server_with_share(server_number=2))
1389+        def _break_server_2(ign):
1390+            server = self.g.servers_by_number[2].my_nodeid
1391+            # We have to break the server in servers_by_id,
1392+            # because the ones in servers_by_number isn't wrapped,
1393+            # and doesn't look at its broken attribute
1394+            self.g.servers_by_id[server].broken = True
1395+        d.addCallback(_break_server_2)
1396+        d.addCallback(lambda ign:
1397+            self._add_server_with_share(server_number=3, readonly=True))
1398+        d.addCallback(lambda ign:
1399+            self._add_server_with_share(server_number=4, readonly=True))
1400+        d.addCallback(lambda ign:
1401+            self._add_server_with_share(server_number=5, readonly=True))
1402+        d.addCallback(lambda ign:
1403+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1404+        def _reset_encoding_parameters(ign):
1405+            client = self.g.clients[0]
1406+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1407+            return client
1408+        d.addCallback(_reset_encoding_parameters)
1409+        d.addCallback(lambda client:
1410+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1411+                            "peer selection failed for <Tahoe2PeerSelector "
1412+                            "for upload dglev>: placed 0 shares out of 10 "
1413+                            "total (10 homeless), want to place on 4 servers,"
1414+                            " sent 5 queries to 5 peers, 0 queries placed "
1415+                            "some shares, 5 placed none "
1416+                            "(of which 4 placed none due to the server being "
1417+                            "full and 1 placed none due to an error)",
1418+                            client.upload,
1419+                            upload.Data("data" * 10000, convergence="")))
1420+        return d
1421+
1422+
1423     def _set_up_nodes_extra_config(self, clientdir):
1424         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1425         oldcfg = open(cfgfn, "r").read()
1426}
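
The exception-message tests added above pin down the full text of the
peer-selection failure report. As a rough illustration of how such a
message can be assembled from the selector's counters (a sketch only;
the function and parameter names are hypothetical, not Tahoe2PeerSelector
internals):

    def selection_failure_msg(placed, total, homeless, happy, queries,
                              peers, placed_some, placed_none,
                              full_count, error_count):
        # builds the message format asserted by the tests above
        return ("placed %d shares out of %d total (%d homeless), "
                "want to place on %d servers, "
                "sent %d queries to %d peers, "
                "%d queries placed some shares, %d placed none "
                "(of which %d placed none due to the server being full "
                "and %d placed none due to an error)"
                % (placed, total, homeless, happy, queries, peers,
                   placed_some, placed_none, full_count, error_count))

    # the first scenario above: five read-only servers, nothing placed
    assert "5 placed none" in selection_failure_msg(0, 10, 10, 4,
                                                    5, 5, 0, 5, 5, 0)
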
1427[Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
1428Kevan Carstensen <kevan@isnotajoke.com>**20091205043453
1429 Ignore-this: 83f4bc50c697d21b5f4e2a4cd91862ca
1430] {
1431replace ./src/allmydata/test/test_encode.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1432replace ./src/allmydata/test/test_upload.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1433}
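
The two "replace" directives above are darcs token replaces: every
occurrence of UploadHappinessError that stands alone as a token drawn
from the character class [A-Za-z_0-9] is renamed, while longer
identifiers that merely contain the string are left untouched. A rough
Python equivalent (a sketch of the semantics, not how darcs does it):

    import re

    def token_replace(text, old, new, token_chars="A-Za-z_0-9"):
        # replace old only where it is not embedded in a longer token
        pattern = "(?<![%s])%s(?![%s])" % (token_chars, re.escape(old),
                                           token_chars)
        return re.sub(pattern, new, text)

    assert token_replace("raise UploadHappinessError()",
                         "UploadHappinessError",
                         "UploadUnhappinessError") \
           == "raise UploadUnhappinessError()"
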
1434[Alter various unit tests to work with the new happy behavior
1435Kevan Carstensen <kevan@isnotajoke.com>**20100107181325
1436 Ignore-this: 132032bbf865e63a079f869b663be34a
1437] {
1438hunk ./src/allmydata/test/common.py 941
1439             # We need multiple segments to test crypttext hash trees that are
1440             # non-trivial (i.e. they have more than just one hash in them).
1441             cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
1442+            # Tests that need to test servers of happiness using this should
1443+            # set their own value for happy -- the default (7) breaks stuff.
1444+            cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1445             d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence=""))
1446             def _after_upload(u):
1447                 filecap = u.uri
1448hunk ./src/allmydata/test/test_checker.py 283
1449         self.basedir = "checker/AddLease/875"
1450         self.set_up_grid(num_servers=1)
1451         c0 = self.g.clients[0]
1452+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1453         self.uris = {}
1454         DATA = "data" * 100
1455         d = c0.upload(Data(DATA, convergence=""))
1456hunk ./src/allmydata/test/test_system.py 93
1457         d = self.set_up_nodes()
1458         def _check_connections(res):
1459             for c in self.clients:
1460+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1461                 all_peerids = c.get_storage_broker().get_all_serverids()
1462                 self.failUnlessEqual(len(all_peerids), self.numclients)
1463                 sb = c.storage_broker
1464hunk ./src/allmydata/test/test_system.py 205
1465                                                       add_to_sparent=True))
1466         def _added(extra_node):
1467             self.extra_node = extra_node
1468+            self.extra_node.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1469         d.addCallback(_added)
1470 
1471         HELPER_DATA = "Data that needs help to upload" * 1000
1472hunk ./src/allmydata/test/test_system.py 705
1473         self.basedir = "system/SystemTest/test_filesystem"
1474         self.data = LARGE_DATA
1475         d = self.set_up_nodes(use_stats_gatherer=True)
1476+        def _new_happy_semantics(ign):
1477+            for c in self.clients:
1478+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1479+        d.addCallback(_new_happy_semantics)
1480         d.addCallback(self._test_introweb)
1481         d.addCallback(self.log, "starting publish")
1482         d.addCallback(self._do_publish1)
1483hunk ./src/allmydata/test/test_system.py 1129
1484         d.addCallback(self.failUnlessEqual, "new.txt contents")
1485         # and again with something large enough to use multiple segments,
1486         # and hopefully trigger pauseProducing too
1487+        def _new_happy_semantics(ign):
1488+            for c in self.clients:
1489+                # these get reset somewhere, so set them again here.
1490+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1491+        d.addCallback(_new_happy_semantics)
1492         d.addCallback(lambda res: self.PUT(public + "/subdir3/big.txt",
1493                                            "big" * 500000)) # 1.5MB
1494         d.addCallback(lambda res: self.GET(public + "/subdir3/big.txt"))
1495hunk ./src/allmydata/test/test_upload.py 178
1496 
1497 class FakeClient:
1498     DEFAULT_ENCODING_PARAMETERS = {"k":25,
1499-                                   "happy": 75,
1500+                                   "happy": 25,
1501                                    "n": 100,
1502                                    "max_segment_size": 1*MiB,
1503                                    }
1504hunk ./src/allmydata/test/test_upload.py 316
1505         data = self.get_data(SIZE_LARGE)
1506         segsize = int(SIZE_LARGE / 2.5)
1507         # we want 3 segments, since that's not a power of two
1508-        self.set_encoding_parameters(25, 75, 100, segsize)
1509+        self.set_encoding_parameters(25, 25, 100, segsize)
1510         d = upload_data(self.u, data)
1511         d.addCallback(extract_uri)
1512         d.addCallback(self._check_large, SIZE_LARGE)
1513hunk ./src/allmydata/test/test_upload.py 395
1514     def test_first_error(self):
1515         mode = dict([(0,"good")] + [(i,"first-fail") for i in range(1,10)])
1516         self.make_node(mode)
1517+        self.set_encoding_parameters(k=25, happy=1, n=50)
1518         d = upload_data(self.u, DATA)
1519         d.addCallback(extract_uri)
1520         d.addCallback(self._check_large, SIZE_LARGE)
1521hunk ./src/allmydata/test/test_upload.py 513
1522 
1523         self.make_client()
1524         data = self.get_data(SIZE_LARGE)
1525-        self.set_encoding_parameters(50, 75, 100)
1526+        # if there are 50 peers, then happy needs to be <= 50
1527+        self.set_encoding_parameters(50, 50, 100)
1528         d = upload_data(self.u, data)
1529         d.addCallback(extract_uri)
1530         d.addCallback(self._check_large, SIZE_LARGE)
1531hunk ./src/allmydata/test/test_upload.py 560
1532 
1533         self.make_client()
1534         data = self.get_data(SIZE_LARGE)
1535-        self.set_encoding_parameters(100, 150, 200)
1536+        # if there are 50 peers, then happy should be no more than 50 if
1537+        # we want this to work.
1538+        self.set_encoding_parameters(100, 50, 200)
1539         d = upload_data(self.u, data)
1540         d.addCallback(extract_uri)
1541         d.addCallback(self._check_large, SIZE_LARGE)
1542hunk ./src/allmydata/test/test_upload.py 580
1543 
1544         self.make_client(3)
1545         data = self.get_data(SIZE_LARGE)
1546-        self.set_encoding_parameters(3, 5, 10)
1547+        self.set_encoding_parameters(3, 3, 10)
1548         d = upload_data(self.u, data)
1549         d.addCallback(extract_uri)
1550         d.addCallback(self._check_large, SIZE_LARGE)
1551hunk ./src/allmydata/test/test_web.py 4073
1552         self.basedir = "web/Grid/exceptions"
1553         self.set_up_grid(num_clients=1, num_servers=2)
1554         c0 = self.g.clients[0]
1555+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 2
1556         self.fileurls = {}
1557         DATA = "data" * 100
1558         d = c0.create_dirnode()
1559}
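
The common thread in these edits is that the new semantics make "happy"
a real constraint: an upload with encoding parameters (k, happy, n) can
only succeed if shares end up on at least "happy" distinct servers, so
each test grid needs happy lowered to at most its server count. A
trivial restatement of that rule (a sketch of the assumed constraint,
not project code):

    def happy_is_satisfiable(num_writable_servers, happy):
        # "if there are 50 peers, then happy needs to be <= 50"
        return happy <= num_writable_servers

    assert happy_is_satisfiable(50, 50)
    assert not happy_is_satisfiable(50, 75)  # the old default of 75 failed
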
1560[Revisions of the #778 tests, per reviewers' comments
1561Kevan Carstensen <kevan@isnotajoke.com>**20100319050653
1562 Ignore-this: 617307cec6bde9427211354e0e58734d
1563 
1564 - Fix comments and confusing naming.
1565 - Add tests for the new error messages suggested by David-Sarah
1566   and Zooko.
1567 - Alter existing tests for new error messages.
1568 - Make sure that the tests continue to work with the trunk.
1569 - Add a test for a mutual disjointedness assertion that I added to
1570   upload.servers_of_happiness.
1571 - Fix the comments to correctly reflect read-onlyness
1572 - Add a test for an edge case in should_add_server
1573 - Add an assertion to make sure that share redistribution works as it
1574   should
1575 - Alter tests to work with revised servers_of_happiness semantics
1576 - Remove tests for should_add_server, since that function no longer exists.
1577 - Alter tests to know about merge_peers, and to use it before calling
1578   servers_of_happiness.
1579 - Add tests for merge_peers.
1580 - Add Zooko's puzzles to the tests.
1581 
1582] {
1583hunk ./src/allmydata/test/test_encode.py 28
1584 class FakeBucketReaderWriterProxy:
1585     implements(IStorageBucketWriter, IStorageBucketReader)
1586     # these are used for both reading and writing
1587-    def __init__(self, mode="good"):
1588+    def __init__(self, mode="good", peerid="peer"):
1589         self.mode = mode
1590         self.blocks = {}
1591         self.plaintext_hashes = []
1592hunk ./src/allmydata/test/test_encode.py 36
1593         self.block_hashes = None
1594         self.share_hashes = None
1595         self.closed = False
1596+        self.peerid = peerid
1597 
1598     def get_peerid(self):
1599hunk ./src/allmydata/test/test_encode.py 39
1600-        return "peerid"
1601+        return self.peerid
1602 
1603     def _start(self):
1604         if self.mode == "lost-early":
1605hunk ./src/allmydata/test/test_encode.py 306
1606             for shnum in range(NUM_SHARES):
1607                 peer = FakeBucketReaderWriterProxy()
1608                 shareholders[shnum] = peer
1609-                servermap[shnum] = str(shnum)
1610+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
1611                 all_shareholders.append(peer)
1612             e.set_shareholders(shareholders, servermap)
1613             return e.start()
1614hunk ./src/allmydata/test/test_encode.py 463
1615         def _ready(res):
1616             k,happy,n = e.get_param("share_counts")
1617             assert n == NUM_SHARES # else we'll be completely confused
1618-            all_peers = []
1619+            servermap = {}
1620             for shnum in range(NUM_SHARES):
1621                 mode = bucket_modes.get(shnum, "good")
1622hunk ./src/allmydata/test/test_encode.py 466
1623-                peer = FakeBucketReaderWriterProxy(mode)
1624+                peer = FakeBucketReaderWriterProxy(mode, "peer%d" % shnum)
1625                 shareholders[shnum] = peer
1626hunk ./src/allmydata/test/test_encode.py 468
1627-                servermap[shnum] = str(shnum)
1628+                servermap.setdefault(shnum, set()).add(peer.get_peerid())
1629             e.set_shareholders(shareholders, servermap)
1630             return e.start()
1631         d.addCallback(_ready)
1632hunk ./src/allmydata/test/test_upload.py 16
1633 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
1634 from allmydata.util.assertutil import precondition
1635 from allmydata.util.deferredutil import DeferredListShouldSucceed
1636+from allmydata.util.happinessutil import servers_of_happiness, \
1637+                                         shares_by_server, merge_peers
1638 from no_network import GridTestMixin
1639 from common_util import ShouldFailMixin
1640 from allmydata.storage_client import StorageFarmBroker
1641hunk ./src/allmydata/test/test_upload.py 708
1642         num_segments = encoder.get_param("num_segments")
1643         d = selector.get_shareholders(broker, sh, storage_index,
1644                                       share_size, block_size, num_segments,
1645-                                      10, 4)
1646+                                      10, 3, 4)
1647         def _have_shareholders((used_peers, already_peers)):
1648             assert servers_to_break <= len(used_peers)
1649             for index in xrange(servers_to_break):
1650hunk ./src/allmydata/test/test_upload.py 720
1651             for peer in used_peers:
1652                 buckets.update(peer.buckets)
1653                 for bucket in peer.buckets:
1654-                    servermap[bucket] = peer.peerid
1655+                    servermap.setdefault(bucket, set()).add(peer.peerid)
1656             encoder.set_shareholders(buckets, servermap)
1657             d = encoder.start()
1658             return d
1659hunk ./src/allmydata/test/test_upload.py 764
1660         self.failUnless((share_number, ss.my_nodeid, new_share_location)
1661                         in shares)
1662 
1663+    def _setup_grid(self):
1664+        """
1665+        I set up a NoNetworkGrid with a single server and client.
1666+        """
1667+        self.set_up_grid(num_clients=1, num_servers=1)
1668 
1669     def _setup_and_upload(self):
1670         """
1671hunk ./src/allmydata/test/test_upload.py 776
1672         upload a file to it, store its uri in self.uri, and store its
1673         sharedata in self.shares.
1674         """
1675-        self.set_up_grid(num_clients=1, num_servers=1)
1676+        self._setup_grid()
1677         client = self.g.clients[0]
1678         client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1679         data = upload.Data("data" * 10000, convergence="")
1680hunk ./src/allmydata/test/test_upload.py 814
1681 
1682 
1683     def _setUp(self, ns):
1684-        # Used by test_happy_semantics and test_prexisting_share_behavior
1685+        # Used by test_happy_semantics and test_preexisting_share_behavior
1686         # to set up the grid.
1687         self.node = FakeClient(mode="good", num_servers=ns)
1688         self.u = upload.Uploader()
1689hunk ./src/allmydata/test/test_upload.py 825
1690     def test_happy_semantics(self):
1691         self._setUp(2)
1692         DATA = upload.Data("kittens" * 10000, convergence="")
1693-        # These parameters are unsatisfiable with the client that we've made
1694-        # -- we'll use them to test that the semnatics work correctly.
1695+        # These parameters are unsatisfiable with only 2 servers.
1696         self.set_encoding_parameters(k=3, happy=5, n=10)
1697         d = self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1698hunk ./src/allmydata/test/test_upload.py 828
1699-                            "shares could only be placed on 2 servers "
1700-                            "(5 were requested)",
1701+                            "shares could only be placed or found on 2 "
1702+                            "server(s). We were asked to place shares on "
1703+                            "at least 5 server(s) such that any 3 of them "
1704+                            "have enough shares to recover the file",
1705                             self.u.upload, DATA)
1706         # Let's reset the client to have 10 servers
1707         d.addCallback(lambda ign:
1708hunk ./src/allmydata/test/test_upload.py 836
1709             self._setUp(10))
1710-        # These parameters are satisfiable with the client we've made.
1711+        # These parameters are satisfiable with 10 servers.
1712         d.addCallback(lambda ign:
1713             self.set_encoding_parameters(k=3, happy=5, n=10))
1714hunk ./src/allmydata/test/test_upload.py 839
1715-        # this should work
1716         d.addCallback(lambda ign:
1717             self.u.upload(DATA))
1718         # Let's reset the client to have 7 servers
1719hunk ./src/allmydata/test/test_upload.py 845
1720         # (this is less than n, but more than h)
1721         d.addCallback(lambda ign:
1722             self._setUp(7))
1723-        # These encoding parameters should still be satisfiable with our
1724-        # client setup
1725+        # These parameters are satisfiable with 7 servers.
1726         d.addCallback(lambda ign:
1727             self.set_encoding_parameters(k=3, happy=5, n=10))
1728hunk ./src/allmydata/test/test_upload.py 848
1729-        # This, then, should work.
1730         d.addCallback(lambda ign:
1731             self.u.upload(DATA))
1732         return d
1733hunk ./src/allmydata/test/test_upload.py 862
1734         #
1735         # The scenario in comment:52 proposes that we have a layout
1736         # like:
1737-        # server 1: share 1
1738-        # server 2: share 1
1739-        # server 3: share 1
1740-        # server 4: shares 2 - 10
1741+        # server 0: shares 1 - 9
1742+        # server 1: share 0, read-only
1743+        # server 2: share 0, read-only
1744+        # server 3: share 0, read-only
1745         # To get access to the shares, we will first upload to one
1746hunk ./src/allmydata/test/test_upload.py 867
1747-        # server, which will then have shares 1 - 10. We'll then
1748+        # server, which will then have shares 0 - 9. We'll then
1749         # add three new servers, configure them to not accept any new
1750hunk ./src/allmydata/test/test_upload.py 869
1751-        # shares, then write share 1 directly into the serverdir of each.
1752-        # Then each of servers 1 - 3 will report that they have share 1,
1753-        # and will not accept any new share, while server 4 will report that
1754-        # it has shares 2 - 10 and will accept new shares.
1755+        # shares, then write share 0 directly into the serverdir of each,
1756+        # and then remove share 0 from server 0 in the same way.
1757+        # Then each of servers 1 - 3 will report that they have share 0,
1758+        # and will not accept any new share, while server 0 will report that
1759+        # it has shares 1 - 9 and will accept new shares.
1760         # We'll then set 'happy' = 4, and see that an upload fails
1761         # (as it should)
1762         d = self._setup_and_upload()
1763hunk ./src/allmydata/test/test_upload.py 878
1764         d.addCallback(lambda ign:
1765-            self._add_server_with_share(1, 0, True))
1766+            self._add_server_with_share(server_number=1, share_number=0,
1767+                                        readonly=True))
1768         d.addCallback(lambda ign:
1769hunk ./src/allmydata/test/test_upload.py 881
1770-            self._add_server_with_share(2, 0, True))
1771+            self._add_server_with_share(server_number=2, share_number=0,
1772+                                        readonly=True))
1773         d.addCallback(lambda ign:
1774hunk ./src/allmydata/test/test_upload.py 884
1775-            self._add_server_with_share(3, 0, True))
1776+            self._add_server_with_share(server_number=3, share_number=0,
1777+                                        readonly=True))
1778         # Remove the first share from server 0.
1779hunk ./src/allmydata/test/test_upload.py 887
1780-        def _remove_share_0():
1781+        def _remove_share_0_from_server_0():
1782             share_location = self.shares[0][2]
1783             os.remove(share_location)
1784         d.addCallback(lambda ign:
1785hunk ./src/allmydata/test/test_upload.py 891
1786-            _remove_share_0())
1787+            _remove_share_0_from_server_0())
1788         # Set happy = 4 in the client.
1789         def _prepare():
1790             client = self.g.clients[0]
1791hunk ./src/allmydata/test/test_upload.py 901
1792             _prepare())
1793         # Uploading data should fail
1794         d.addCallback(lambda client:
1795-            self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1796-                            "shares could only be placed on 2 servers "
1797-                            "(4 were requested)",
1798+            self.shouldFail(UploadUnhappinessError,
1799+                            "test_problem_layout_comment_52_test_1",
1800+                            "shares could be placed or found on 4 server(s), "
1801+                            "but they are not spread out evenly enough to "
1802+                            "ensure that any 3 of these servers would have "
1803+                            "enough shares to recover the file. "
1804+                            "We were asked to place shares on at "
1805+                            "least 4 servers such that any 3 of them have "
1806+                            "enough shares to recover the file",
1807                             client.upload, upload.Data("data" * 10000,
1808                                                        convergence="")))
1809 
1810hunk ./src/allmydata/test/test_upload.py 932
1811                                         readonly=True))
1812         def _prepare2():
1813             client = self.g.clients[0]
1814-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
1815+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1816             return client
1817         d.addCallback(lambda ign:
1818             _prepare2())
1819hunk ./src/allmydata/test/test_upload.py 937
1820         d.addCallback(lambda client:
1821-            self.shouldFail(UploadUnhappinessError, "test_happy_sematics",
1822-                            "shares could only be placed on 2 servers "
1823-                            "(3 were requested)",
1824+            self.shouldFail(UploadUnhappinessError,
1825+                            "test_problem_layout_comment_52_test_2",
1826+                            "shares could only be placed on 3 server(s) such "
1827+                            "that any 3 of them have enough shares to recover "
1828+                            "the file, but we were asked to use at least 4 "
1829+                            "such servers.",
1830                             client.upload, upload.Data("data" * 10000,
1831                                                        convergence="")))
1832         return d
1833hunk ./src/allmydata/test/test_upload.py 956
1834         def _change_basedir(ign):
1835             self.basedir = self.mktemp()
1836         _change_basedir(None)
1837-        d = self._setup_and_upload()
1838-        # We start by uploading all of the shares to one server (which has
1839-        # already been done above).
1840+        # We start by uploading all of the shares to one server.
1841         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
1842         # one share from our initial upload to each of these.
1843         # The counterintuitive ordering of the share numbers is to deal with
1844hunk ./src/allmydata/test/test_upload.py 962
1845         # the permuting of these servers -- distributing the shares this
1846         # way ensures that the Tahoe2PeerSelector sees them in the order
1847-        # described above.
1848+        # described below.
1849+        d = self._setup_and_upload()
1850         d.addCallback(lambda ign:
1851             self._add_server_with_share(server_number=1, share_number=2))
1852         d.addCallback(lambda ign:
1853hunk ./src/allmydata/test/test_upload.py 975
1854         # server 1: share 2
1855         # server 2: share 0
1856         # server 3: share 1
1857-        # We want to change the 'happy' parameter in the client to 4.
1858+        # We change the 'happy' parameter in the client to 4.
1859         # The Tahoe2PeerSelector will see the peers permuted as:
1860         # 2, 3, 1, 0
1861         # Ideally, a reupload of our original data should work.
1862hunk ./src/allmydata/test/test_upload.py 988
1863             client.upload(upload.Data("data" * 10000, convergence="")))
1864 
1865 
1866-        # This scenario is basically comment:53, but with the order reversed;
1867-        # this means that the Tahoe2PeerSelector sees
1868-        # server 2: shares 1-10
1869-        # server 3: share 1
1870-        # server 1: share 2
1871-        # server 4: share 3
1872+        # This scenario is basically comment:53, but changed so that the
1873+        # Tahoe2PeerSelector sees the server with all of the shares before
1874+        # any of the other servers.
1875+        # The layout is:
1876+        # server 2: shares 0 - 9
1877+        # server 3: share 0
1878+        # server 1: share 1
1879+        # server 4: share 2
1880+        # The Tahoe2PeerSelector sees the peers permuted as:
1881+        # 2, 3, 1, 4
1882+        # Note that server 0 has been replaced by server 4; this makes it
1883+        # easier to ensure that the last server seen by Tahoe2PeerSelector
1884+        # has only one share.
1885         d.addCallback(_change_basedir)
1886         d.addCallback(lambda ign:
1887             self._setup_and_upload())
1888hunk ./src/allmydata/test/test_upload.py 1012
1889             self._add_server_with_share(server_number=1, share_number=2))
1890         # Copy all of the other shares to server number 2
1891         def _copy_shares(ign):
1892-            for i in xrange(1, 10):
1893+            for i in xrange(0, 10):
1894                 self._copy_share_to_server(i, 2)
1895         d.addCallback(_copy_shares)
1896         # Remove the first server, and add a placeholder with share 0
1897hunk ./src/allmydata/test/test_upload.py 1024
1898         d.addCallback(_reset_encoding_parameters)
1899         d.addCallback(lambda client:
1900             client.upload(upload.Data("data" * 10000, convergence="")))
1901+
1902+
1903         # Try the same thing, but with empty servers after the first one
1904         # We want to make sure that Tahoe2PeerSelector will redistribute
1905         # shares as necessary, not simply discover an existing layout.
1906hunk ./src/allmydata/test/test_upload.py 1029
1907+        # The layout is:
1908+        # server 2: shares 0 - 9
1909+        # server 3: empty
1910+        # server 1: empty
1911+        # server 4: empty
1912         d.addCallback(_change_basedir)
1913         d.addCallback(lambda ign:
1914             self._setup_and_upload())
1915hunk ./src/allmydata/test/test_upload.py 1043
1916             self._add_server(server_number=3))
1917         d.addCallback(lambda ign:
1918             self._add_server(server_number=1))
1919+        d.addCallback(lambda ign:
1920+            self._add_server(server_number=4))
1921         d.addCallback(_copy_shares)
1922         d.addCallback(lambda ign:
1923             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1924hunk ./src/allmydata/test/test_upload.py 1048
1925-        d.addCallback(lambda ign:
1926-            self._add_server(server_number=4))
1927         d.addCallback(_reset_encoding_parameters)
1928         d.addCallback(lambda client:
1929             client.upload(upload.Data("data" * 10000, convergence="")))
1930hunk ./src/allmydata/test/test_upload.py 1051
1931+        # Make sure that only as many shares as necessary to satisfy
1932+        # servers of happiness were pushed.
1933+        d.addCallback(lambda results:
1934+            self.failUnlessEqual(results.pushed_shares, 3))
1935         return d
1936 
1937 
1938hunk ./src/allmydata/test/test_upload.py 1133
1939 
1940 
1941     def test_dropped_servers_in_encoder(self):
1942+        # The Encoder does its own "servers_of_happiness" check if it
1943+        # happens to lose a bucket during an upload (it assumes that
1944+        # the layout presented to it satisfies "servers_of_happiness"
1945+        # until a failure occurs)
1946+        #
1947+        # This test simulates an upload where servers break after peer
1948+        # selection, but before they are written to.
1949         def _set_basedir(ign=None):
1950             self.basedir = self.mktemp()
1951         _set_basedir()
1952hunk ./src/allmydata/test/test_upload.py 1146
1953         d = self._setup_and_upload();
1954         # Add 5 servers
1955         def _do_server_setup(ign):
1956-            self._add_server_with_share(1)
1957-            self._add_server_with_share(2)
1958-            self._add_server_with_share(3)
1959-            self._add_server_with_share(4)
1960-            self._add_server_with_share(5)
1961+            self._add_server_with_share(server_number=1)
1962+            self._add_server_with_share(server_number=2)
1963+            self._add_server_with_share(server_number=3)
1964+            self._add_server_with_share(server_number=4)
1965+            self._add_server_with_share(server_number=5)
1966         d.addCallback(_do_server_setup)
1967         # remove the original server
1968         # (necessary to ensure that the Tahoe2PeerSelector will distribute
1969hunk ./src/allmydata/test/test_upload.py 1159
1970             server = self.g.servers_by_number[0]
1971             self.g.remove_server(server.my_nodeid)
1972         d.addCallback(_remove_server)
1973-        # This should succeed.
1974+        # This should succeed; we still have 4 servers, and the
1975+        # happiness of the upload is 4.
1976         d.addCallback(lambda ign:
1977             self._do_upload_with_broken_servers(1))
1978         # Now, do the same thing over again, but drop 2 servers instead
1979hunk ./src/allmydata/test/test_upload.py 1164
1980-        # of 1. This should fail.
1981+        # of 1. This should fail, because servers_of_happiness is 4 and
1982+        # we can't satisfy that.
1983         d.addCallback(_set_basedir)
1984         d.addCallback(lambda ign:
1985             self._setup_and_upload())
1986hunk ./src/allmydata/test/test_upload.py 1175
1987             self.shouldFail(UploadUnhappinessError,
1988                             "test_dropped_servers_in_encoder",
1989                             "lost too many servers during upload "
1990-                            "(still have 3, want 4)",
1991+                            "(happiness is now 3, but we wanted 4)",
1992                             self._do_upload_with_broken_servers, 2))
1993         # Now do the same thing over again, but make some of the servers
1994         # readonly, break some of the ones that aren't, and make sure that
1995hunk ./src/allmydata/test/test_upload.py 1202
1996             self.shouldFail(UploadUnhappinessError,
1997                             "test_dropped_servers_in_encoder",
1998                             "lost too many servers during upload "
1999-                            "(still have 3, want 4)",
2000+                            "(happiness is now 3, but we wanted 4)",
2001                             self._do_upload_with_broken_servers, 2))
2002         return d
2003 
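
As the comments above describe, the Encoder re-checks happiness itself
when a shareholder is lost mid-upload. A minimal sketch of that abort
decision, matching the error text the test asserts (the function and the
locally defined exception are stand-ins, not the Encoder's code; the
real UploadUnhappinessError lives in allmydata.interfaces):

    class UploadUnhappinessError(Exception):
        pass

    def check_happiness_after_loss(current_happiness, wanted):
        # mirrors "lost too many servers during upload
        # (happiness is now 3, but we wanted 4)" from the test above
        if current_happiness < wanted:
            raise UploadUnhappinessError(
                "lost too many servers during upload "
                "(happiness is now %d, but we wanted %d)"
                % (current_happiness, wanted))
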
2004hunk ./src/allmydata/test/test_upload.py 1207
2005 
2006-    def test_servers_with_unique_shares(self):
2007-        # servers_with_unique_shares expects a dict of
2008-        # shnum => peerid as a preexisting shares argument.
2009+    def test_merge_peers(self):
2010+        # merge_peers merges a set of used_peers (PeerTrackers) into
2011+        # a dict of shnum -> set(peerids) mappings.
2012+        shares = {
2013+                    1 : set(["server1"]),
2014+                    2 : set(["server2"]),
2015+                    3 : set(["server3"]),
2016+                    4 : set(["server4", "server5"]),
2017+                    5 : set(["server1", "server2"]),
2018+                 }
2019+        # given an empty used_peers set, it should just
2020+        # return the first argument unchanged.
2021+        self.failUnlessEqual(shares, merge_peers(shares, set([])))
2022+        class FakePeerTracker:
2023+            pass
2024+        trackers = []
2025+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
2026+            t = FakePeerTracker()
2027+            t.peerid = server
2028+            t.buckets = [i]
2029+            trackers.append(t)
2030+        expected = {
2031+                    1 : set(["server1"]),
2032+                    2 : set(["server2"]),
2033+                    3 : set(["server3"]),
2034+                    4 : set(["server4", "server5"]),
2035+                    5 : set(["server1", "server2", "server5"]),
2036+                    6 : set(["server6"]),
2037+                    7 : set(["server7"]),
2038+                    8 : set(["server8"]),
2039+                   }
2040+        self.failUnlessEqual(expected, merge_peers(shares, set(trackers)))
2041+        shares2 = {}
2042+        expected = {
2043+                    5 : set(["server5"]),
2044+                    6 : set(["server6"]),
2045+                    7 : set(["server7"]),
2046+                    8 : set(["server8"]),
2047+                   }
2048+        self.failUnlessEqual(expected, merge_peers(shares2, set(trackers)))
2049+        shares3 = {}
2050+        trackers = []
2051+        expected = {}
2052+        for (i, server) in [(i, "server%d" % i) for i in xrange(10)]:
2053+            shares3[i] = set([server])
2054+            t = FakePeerTracker()
2055+            t.peerid = server
2056+            t.buckets = [i]
2057+            trackers.append(t)
2058+            expected[i] = set([server])
2059+        self.failUnlessEqual(expected, merge_peers(shares3, set(trackers)))
2060+
2061+
2062+    def test_servers_of_happiness_utility_function(self):
2063+        # These tests are concerned with the servers_of_happiness()
2064+        # utility function, and its underlying matching algorithm. Other
2065+        # aspects of the servers_of_happiness behavior are tested
2066+        # elsewhere. These tests exist to ensure that
2067+        # servers_of_happiness doesn't under- or over-count the happiness
2068+        # value for given inputs.
2069+
2070+        # servers_of_happiness expects a dict of
2071+        # shnum => set(peerids) as a preexisting shares argument.
2072         test1 = {
2073hunk ./src/allmydata/test/test_upload.py 1271
2074-                 1 : "server1",
2075-                 2 : "server2",
2076-                 3 : "server3",
2077-                 4 : "server4"
2078+                 1 : set(["server1"]),
2079+                 2 : set(["server2"]),
2080+                 3 : set(["server3"]),
2081+                 4 : set(["server4"])
2082                 }
2083hunk ./src/allmydata/test/test_upload.py 1276
2084-        unique_servers = upload.servers_with_unique_shares(test1)
2085-        self.failUnlessEqual(4, len(unique_servers))
2086-        for server in ["server1", "server2", "server3", "server4"]:
2087-            self.failUnlessIn(server, unique_servers)
2088-        test1[4] = "server1"
2089-        # Now there should only be 3 unique servers.
2090-        unique_servers = upload.servers_with_unique_shares(test1)
2091-        self.failUnlessEqual(3, len(unique_servers))
2092-        for server in ["server1", "server2", "server3"]:
2093-            self.failUnlessIn(server, unique_servers)
2094-        # servers_with_unique_shares expects to receive some object with
2095-        # a peerid attribute. So we make a FakePeerTracker whose only
2096-        # job is to have a peerid attribute.
2097+        happy = servers_of_happiness(test1)
2098+        self.failUnlessEqual(4, happy)
2099+        test1[4] = set(["server1"])
2100+        # We've added a duplicate server, so now servers_of_happiness
2101+        # should be 3 instead of 4.
2102+        happy = servers_of_happiness(test1)
2103+        self.failUnlessEqual(3, happy)
2104+        # The second argument of merge_peers should be a set of
2105+        # objects with peerid and buckets as attributes. In actual use,
2106+        # these will be PeerTracker instances, but for testing it is fine
2107+        # to make a FakePeerTracker whose only job is to hold those
2108+        # two instance variables.
2109         class FakePeerTracker:
2110             pass
2111         trackers = []
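
test_merge_peers above pins down the merge semantics: fold each
tracker's buckets into the existing shnum -> set(peerids) map. A compact
sketch of a function with that behavior (an illustration of the
semantics the test expects, not the happinessutil implementation):

    def merge_peers_sketch(existing, used_peers):
        # copy defensively (an assumption; the test only checks the
        # returned value, not whether the input is mutated)
        merged = dict((shnum, set(peerids))
                      for (shnum, peerids) in existing.items())
        for tracker in used_peers:
            for shnum in tracker.buckets:
                merged.setdefault(shnum, set()).add(tracker.peerid)
        return merged

With an empty used_peers set the input comes back unchanged, which is
the first thing test_merge_peers checks.
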
2112hunk ./src/allmydata/test/test_upload.py 1296
2113             t.peerid = server
2114             t.buckets = [i]
2115             trackers.append(t)
2116-        # Recall that there are 3 unique servers in test1. Since none of
2117-        # those overlap with the ones in trackers, we should get 7 back
2118-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
2119-        self.failUnlessEqual(7, len(unique_servers))
2120-        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
2121-        expected_servers.remove("server4")
2122-        for server in expected_servers:
2123-            self.failUnlessIn(server, unique_servers)
2124-        # Now add an overlapping server to trackers.
2125+        # Recall that test1 is a server layout with servers_of_happiness
2126+        # = 3.  Since there isn't any overlap between the shnum ->
2127+        # set([peerid]) correspondences in test1 and those in trackers,
2128+        # the result here should be 7.
2129+        test2 = merge_peers(test1, set(trackers))
2130+        happy = servers_of_happiness(test2)
2131+        self.failUnlessEqual(7, happy)
2132+        # Now add an overlapping server to trackers. This is redundant,
2133+        # so it should not cause the previously reported happiness value
2134+        # to change.
2135         t = FakePeerTracker()
2136         t.peerid = "server1"
2137         t.buckets = [1]
2138hunk ./src/allmydata/test/test_upload.py 1310
2139         trackers.append(t)
2140-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
2141-        self.failUnlessEqual(7, len(unique_servers))
2142-        for server in expected_servers:
2143-            self.failUnlessIn(server, unique_servers)
2144+        test2 = merge_peers(test1, set(trackers))
2145+        happy = servers_of_happiness(test2)
2146+        self.failUnlessEqual(7, happy)
2147         test = {}
2148hunk ./src/allmydata/test/test_upload.py 1314
2149-        unique_servers = upload.servers_with_unique_shares(test)
2150-        self.failUnlessEqual(0, len(test))
2151+        happy = servers_of_happiness(test)
2152+        self.failUnlessEqual(0, happy)
2153+        # Test a more substantial overlap between the trackers and the
2154+        # existing assignments.
2155+        test = {
2156+            1 : set(['server1']),
2157+            2 : set(['server2']),
2158+            3 : set(['server3']),
2159+            4 : set(['server4']),
2160+        }
2161+        trackers = []
2162+        t = FakePeerTracker()
2163+        t.peerid = 'server5'
2164+        t.buckets = [4]
2165+        trackers.append(t)
2166+        t = FakePeerTracker()
2167+        t.peerid = 'server6'
2168+        t.buckets = [3, 5]
2169+        trackers.append(t)
2170+        # The value returned by servers_of_happiness is the size
2171+        # of a maximum matching in the bipartite graph that
2172+        # servers_of_happiness() makes between peerids and share
2173+        # numbers. It should find something like this:
2174+        # (server 1, share 1)
2175+        # (server 2, share 2)
2176+        # (server 3, share 3)
2177+        # (server 5, share 4)
2178+        # (server 6, share 5)
2179+        #
2180+        # and, since there are 5 edges in this matching, it should
2181+        # return 5.
2182+        test2 = merge_peers(test, set(trackers))
2183+        happy = servers_of_happiness(test2)
2184+        self.failUnlessEqual(5, happy)
2185+        # Zooko's first puzzle:
2186+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:156)
2187+        #
2188+        # server 1: shares 0, 1
2189+        # server 2: shares 1, 2
2190+        # server 3: share 2
2191+        #
2192+        # This should yield happiness of 3.
2193+        test = {
2194+            0 : set(['server1']),
2195+            1 : set(['server1', 'server2']),
2196+            2 : set(['server2', 'server3']),
2197+        }
2198+        self.failUnlessEqual(3, servers_of_happiness(test))
2199+        # Zooko's second puzzle:
2200+        # (from http://allmydata.org/trac/tahoe-lafs/ticket/778#comment:158)
2201+        #
2202+        # server 1: shares 0, 1
2203+        # server 2: share 1
2204+        #
2205+        # This should yield happiness of 2.
2206+        test = {
2207+            0 : set(['server1']),
2208+            1 : set(['server1', 'server2']),
2209+        }
2210+        self.failUnlessEqual(2, servers_of_happiness(test))
2211 
2212 
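
The comments above describe servers_of_happiness() as the size of a
maximum matching in the bipartite graph between share numbers and
peerids. A compact augmenting-path sketch of that computation, checked
against Zooko's two puzzles from the test (a sketch of the described
algorithm, not the happinessutil implementation):

    def happiness_sketch(sharemap):
        # sharemap: shnum -> set(peerids)
        match = {}  # peerid -> shnum currently matched to it
        def try_place(shnum, seen):
            # search for an augmenting path that places shnum
            for peerid in sharemap[shnum]:
                if peerid in seen:
                    continue
                seen.add(peerid)
                if peerid not in match or try_place(match[peerid], seen):
                    match[peerid] = shnum
                    return True
            return False
        return sum(1 for shnum in sharemap if try_place(shnum, set()))

    # Zooko's first puzzle: happiness should be 3
    assert happiness_sketch({0: set(["server1"]),
                             1: set(["server1", "server2"]),
                             2: set(["server2", "server3"])}) == 3
    # Zooko's second puzzle: happiness should be 2
    assert happiness_sketch({0: set(["server1"]),
                             1: set(["server1", "server2"])}) == 2
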
2213     def test_shares_by_server(self):
2214hunk ./src/allmydata/test/test_upload.py 1377
2215-        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
2216-        shares_by_server = upload.shares_by_server(test)
2217-        self.failUnlessEqual(set([1]), shares_by_server["server1"])
2218-        self.failUnlessEqual(set([2]), shares_by_server["server2"])
2219-        self.failUnlessEqual(set([3]), shares_by_server["server3"])
2220-        self.failUnlessEqual(set([4]), shares_by_server["server4"])
2221+        test = dict([(i, set(["server%d" % i])) for i in xrange(1, 5)])
2222+        sbs = shares_by_server(test)
2223+        self.failUnlessEqual(set([1]), sbs["server1"])
2224+        self.failUnlessEqual(set([2]), sbs["server2"])
2225+        self.failUnlessEqual(set([3]), sbs["server3"])
2226+        self.failUnlessEqual(set([4]), sbs["server4"])
2227         test1 = {
2228hunk ./src/allmydata/test/test_upload.py 1384
2229-                    1 : "server1",
2230-                    2 : "server1",
2231-                    3 : "server1",
2232-                    4 : "server2",
2233-                    5 : "server2"
2234+                    1 : set(["server1"]),
2235+                    2 : set(["server1"]),
2236+                    3 : set(["server1"]),
2237+                    4 : set(["server2"]),
2238+                    5 : set(["server2"])
2239                 }
2240hunk ./src/allmydata/test/test_upload.py 1390
2241-        shares_by_server = upload.shares_by_server(test1)
2242-        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
2243-        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
2244+        sbs = shares_by_server(test1)
2245+        self.failUnlessEqual(set([1, 2, 3]), sbs["server1"])
2246+        self.failUnlessEqual(set([4, 5]), sbs["server2"])
2247+        # This should fail unless the peerid part of the mapping is a set
2248+        test2 = {1: "server1"}
2249+        self.shouldFail(AssertionError,
2250+                       "test_shares_by_server",
2251+                       "",
2252+                       shares_by_server, test2)
2253 
2254 
2255     def test_existing_share_detection(self):
2256hunk ./src/allmydata/test/test_upload.py 1405
2257         self.basedir = self.mktemp()
2258         d = self._setup_and_upload()
2259         # Our final setup should look like this:
2260-        # server 1: shares 1 - 10, read-only
2261+        # server 1: shares 0 - 9, read-only
2262         # server 2: empty
2263         # server 3: empty
2264         # server 4: empty
2265hunk ./src/allmydata/test/test_upload.py 1437
2266         return d
2267 
2268 
2269-    def test_should_add_server(self):
2270-        shares = dict([(i, "server%d" % i) for i in xrange(10)])
2271-        self.failIf(upload.should_add_server(shares, "server1", 4))
2272-        shares[4] = "server1"
2273-        self.failUnless(upload.should_add_server(shares, "server4", 4))
2274-        shares = {}
2275-        self.failUnless(upload.should_add_server(shares, "server1", 1))
2276-
2277-
2278     def test_exception_messages_during_peer_selection(self):
2279hunk ./src/allmydata/test/test_upload.py 1438
2280-        # server 1: readonly, no shares
2281-        # server 2: readonly, no shares
2282-        # server 3: readonly, no shares
2283-        # server 4: readonly, no shares
2284-        # server 5: readonly, no shares
2285+        # server 1: read-only, no shares
2286+        # server 2: read-only, no shares
2287+        # server 3: read-only, no shares
2288+        # server 4: read-only, no shares
2289+        # server 5: read-only, no shares
2290         # This will fail, but we want to make sure that the log messages
2291         # are informative about why it has failed.
2292         self.basedir = self.mktemp()
2293hunk ./src/allmydata/test/test_upload.py 1468
2294             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2295                             "peer selection failed for <Tahoe2PeerSelector "
2296                             "for upload dglev>: placed 0 shares out of 10 "
2297-                            "total (10 homeless), want to place on 4 servers,"
2298-                            " sent 5 queries to 5 peers, 0 queries placed "
2299+                            "total (10 homeless), want to place shares on at "
2300+                            "least 4 servers such that any 3 of them have "
2301+                            "enough shares to recover the file, "
2302+                            "sent 5 queries to 5 peers, 0 queries placed "
2303                             "some shares, 5 placed none "
2304                             "(of which 5 placed none due to the server being "
2305                             "full and 0 placed none due to an error)",
2306hunk ./src/allmydata/test/test_upload.py 1479
2307                             upload.Data("data" * 10000, convergence="")))
2308 
2309 
2310-        # server 1: readonly, no shares
2311+        # server 1: read-only, no shares
2312         # server 2: broken, no shares
2313hunk ./src/allmydata/test/test_upload.py 1481
2314-        # server 3: readonly, no shares
2315-        # server 4: readonly, no shares
2316-        # server 5: readonly, no shares
2317+        # server 3: read-only, no shares
2318+        # server 4: read-only, no shares
2319+        # server 5: read-only, no shares
2320         def _reset(ign):
2321             self.basedir = self.mktemp()
2322         d.addCallback(_reset)
2323hunk ./src/allmydata/test/test_upload.py 1496
2324         def _break_server_2(ign):
2325             server = self.g.servers_by_number[2].my_nodeid
2326             # We have to break the server in servers_by_id,
2327-            # because the ones in servers_by_number isn't wrapped,
2328-            # and doesn't look at its broken attribute
2329+            # because the one in servers_by_number isn't wrapped,
2330+            # and doesn't look at its broken attribute when answering
2331+            # queries.
2332             self.g.servers_by_id[server].broken = True
2333         d.addCallback(_break_server_2)
2334         d.addCallback(lambda ign:
2335hunk ./src/allmydata/test/test_upload.py 1509
2336             self._add_server_with_share(server_number=5, readonly=True))
2337         d.addCallback(lambda ign:
2338             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
2339-        def _reset_encoding_parameters(ign):
2340+        def _reset_encoding_parameters(ign, happy=4):
2341             client = self.g.clients[0]
2342hunk ./src/allmydata/test/test_upload.py 1511
2343-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
2344+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
2345             return client
2346         d.addCallback(_reset_encoding_parameters)
2347         d.addCallback(lambda client:
2348hunk ./src/allmydata/test/test_upload.py 1518
2349             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2350                             "peer selection failed for <Tahoe2PeerSelector "
2351                             "for upload dglev>: placed 0 shares out of 10 "
2352-                            "total (10 homeless), want to place on 4 servers,"
2353-                            " sent 5 queries to 5 peers, 0 queries placed "
2354+                            "total (10 homeless), want to place shares on at "
2355+                            "least 4 servers such that any 3 of them have "
2356+                            "enough shares to recover the file, "
2357+                            "sent 5 queries to 5 peers, 0 queries placed "
2358                             "some shares, 5 placed none "
2359                             "(of which 4 placed none due to the server being "
2360                             "full and 1 placed none due to an error)",
2361hunk ./src/allmydata/test/test_upload.py 1527
2362                             client.upload,
2363                             upload.Data("data" * 10000, convergence="")))
2364+        # server 0, server 1 = empty, accepting shares
2365+        # This should place all of the shares, but still fail with happy=4.
2366+        # We want to make sure that the exception message is worded correctly.
2367+        d.addCallback(_reset)
2368+        d.addCallback(lambda ign:
2369+            self._setup_grid())
2370+        d.addCallback(lambda ign:
2371+            self._add_server_with_share(server_number=1))
2372+        d.addCallback(_reset_encoding_parameters)
2373+        d.addCallback(lambda client:
2374+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2375+                            "shares could only be placed or found on 2 "
2376+                            "server(s). We were asked to place shares on at "
2377+                            "least 4 server(s) such that any 3 of them have "
2378+                            "enough shares to recover the file.",
2379+                            client.upload, upload.Data("data" * 10000,
2380+                                                       convergence="")))
2381+        # servers 0 - 4 = empty, accepting shares
2382+        # This too should place all the shares, and this too should fail,
2383+        # but since the effective happiness is more than the k encoding
2384+        # parameter, it should trigger a different error message than the one
2385+        # above.
2386+        d.addCallback(_reset)
2387+        d.addCallback(lambda ign:
2388+            self._setup_grid())
2389+        d.addCallback(lambda ign:
2390+            self._add_server_with_share(server_number=1))
2391+        d.addCallback(lambda ign:
2392+            self._add_server_with_share(server_number=2))
2393+        d.addCallback(lambda ign:
2394+            self._add_server_with_share(server_number=3))
2395+        d.addCallback(lambda ign:
2396+            self._add_server_with_share(server_number=4))
2397+        d.addCallback(_reset_encoding_parameters, happy=7)
2398+        d.addCallback(lambda client:
2399+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2400+                            "shares could only be placed on 5 server(s) such "
2401+                            "that any 3 of them have enough shares to recover "
2402+                            "the file, but we were asked to use at least 7 "
2403+                            "such servers.",
2404+                            client.upload, upload.Data("data" * 10000,
2405+                                                       convergence="")))
2406+        # server 0: shares 0 - 9
2407+        # server 1: share 0, read-only
2408+        # server 2: share 0, read-only
2409+        # server 3: share 0, read-only
2410+        # This should place all of the shares, but fail with happy=7.
2411+        # Since the number of servers with shares is more than the number
2412+        # necessary to reconstitute the file, this will trigger a different
2413+        # error message than either of those above.
2414+        d.addCallback(_reset)
2415+        d.addCallback(lambda ign:
2416+            self._setup_and_upload())
2417+        d.addCallback(lambda ign:
2418+            self._add_server_with_share(server_number=1, share_number=0,
2419+                                        readonly=True))
2420+        d.addCallback(lambda ign:
2421+            self._add_server_with_share(server_number=2, share_number=0,
2422+                                        readonly=True))
2423+        d.addCallback(lambda ign:
2424+            self._add_server_with_share(server_number=3, share_number=0,
2425+                                        readonly=True))
2426+        d.addCallback(_reset_encoding_parameters, happy=7)
2427+        d.addCallback(lambda client:
2428+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2429+                            "shares could be placed or found on 4 server(s), "
2430+                            "but they are not spread out evenly enough to "
2431+                            "ensure that any 3 of these servers would have "
2432+                            "enough shares to recover the file. We were asked "
2433+                            "to place shares on at least 7 servers such that "
2434+                            "any 3 of them have enough shares to recover the "
2435+                            "file",
2436+                            client.upload, upload.Data("data" * 10000,
2437+                                                       convergence="")))
2438         return d
2439 
2440 
2441}
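
For readers tracing the three failure scenarios above: the happiness value these messages report can be read as the size of a maximum bipartite matching between servers and shares (the quantity upload.servers_of_happiness computes after merge_peers). The following standalone sketch is illustrative only -- effective_happiness is a hypothetical helper, not the shipped implementation:

    def effective_happiness(server_to_shares):
        # Happiness = size of a maximum matching between servers and
        # shares: the largest number of servers that can each be
        # credited with one distinct share.
        matched = {}  # share number -> server credited with it
        def try_assign(server, seen):
            for share in server_to_shares.get(server, ()):
                if share in seen:
                    continue
                seen.add(share)
                owner = matched.get(share)
                # Credit the share to this server if it is unclaimed,
                # or if its current owner can shift to another share.
                if owner is None or try_assign(owner, seen):
                    matched[share] = server
                    return True
            return False
        count = 0
        for server in server_to_shares:
            if try_assign(server, set()):
                count += 1
        return count

    # The third scenario: server 0 holds shares 0-9, servers 1-3 hold
    # only share 0.  Four servers hold shares, yet only two can be
    # matched to distinct shares, so happiness is 2 < k=3, which is
    # what triggers the "not spread out evenly enough" message.
    layout = dict([(0, set(range(10))), (1, set([0])),
                   (2, set([0])), (3, set([0]))])
    assert effective_happiness(layout) == 2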
2442
2443Context:
2444
2445[setup: add licensing declaration for setuptools (noticed by the FSF compliance folks)
2446zooko@zooko.com**20100309184415
2447 Ignore-this: 2dfa7d812d65fec7c72ddbf0de609ccb
2448]
2449[setup: fix error in licensing declaration from Shawn Willden, as noted by the FSF compliance division
2450zooko@zooko.com**20100309163736
2451 Ignore-this: c0623d27e469799d86cabf67921a13f8
2452]
2453[CREDITS to Jacob Appelbaum
2454zooko@zooko.com**20100304015616
2455 Ignore-this: 70db493abbc23968fcc8db93f386ea54
2456]
2457[desert-island-build-with-proper-versions
2458jacob@appelbaum.net**20100304013858]
2459[docs: a few small edits to try to guide newcomers through the docs
2460zooko@zooko.com**20100303231902
2461 Ignore-this: a6aab44f5bf5ad97ea73e6976bc4042d
2462 These edits were suggested by my watching over Jake Appelbaum's shoulder as he completely ignored/skipped/missed install.html and also as he decided that debian.txt wouldn't help him with basic installation. Then I threw in a few docs edits that have been sitting around in my sandbox asking to be committed for months.
2463]
2464[TAG allmydata-tahoe-1.6.1
2465david-sarah@jacaranda.org**20100228062314
2466 Ignore-this: eb5f03ada8ea953ee7780e7fe068539
2467]
2468[Change install.html to reference 1.6.1 instead of 1.6.0
2469david-sarah@jacaranda.org**20100228061941
2470 Ignore-this: 4738440e66a12dcf2cadf968fba5337
2471]
2472[docs: fix the asymptotic network performance of mutable file download in performance.txt, rename the howto-make-a-release file
2473zooko@zooko.com**20100228061439
2474 Ignore-this: c983b2fa7864f717ec17fb556f8a95d2
2475]
2476[Change code that gives a base32 SI or an empty string to be more straightforward. (#948)
2477david-sarah@jacaranda.org**20100227065551
2478 Ignore-this: ba2b0eb430635fcfb09faeca5046ed21
2479]
2480[Additional test for DIR2-LIT directories in test_web.py, fixed version (#948)
2481david-sarah@jacaranda.org**20100225041824
2482 Ignore-this: 86d710f438439f27aa372b84411af011
2483]
2484[Updates to NEWS for 1.6.1
2485david-sarah@jacaranda.org**20100224081542
2486 Ignore-this: ae1ca1892d7013bcb5f54f201459632
2487]
2488[Additional fixes for DIR2-LIT More Info page and deep-check/manifest operations (#948)
2489david-sarah@jacaranda.org**20100224080220
2490 Ignore-this: 3b431b712f380b5476231ebd99648a7f
2491]
2492[directories: add DIR2-LIT directories to test_deepcheck.py (#948)
2493david-sarah@jacaranda.org**20100224075433
2494 Ignore-this: ed1dcbe45870f5efae0ebbcdff677a4b
2495]
2496[dirnode: add tests of literal dirnodes (current and fix for #948)
2497david-sarah@jacaranda.org**20100224043345
2498 Ignore-this: f18cd17d72ed2495a646fa6c3af42aa1
2499]
2500[Additional fix for abbrev_si, with test
2501david-sarah@jacaranda.org**20100222033652
2502 Ignore-this: 7dc1c7031cd395fb4ec0a5aa96e69a10
2503]
2504[Additions to test_web.py for #948
2505david-sarah@jacaranda.org**20100222025352
2506 Ignore-this: b99be703923efc75db75894a05e6a527
2507]
2508[Change direct accesses to an_uri.storage_index to calls to .get_storage_index() (fixes #948)
2509david-sarah@jacaranda.org**20100222024504
2510 Ignore-this: 91f6fccb5fd9456aa0e02d312f902928
2511]
2512[Tweak to 'tahoe ls --help' output (#837)
2513david-sarah@jacaranda.org**20100224030231
2514 Ignore-this: 9c86ff8ee1f2c9b8a4f6e205a58905f
2515]
2516[Test behaviour of 'tahoe ls' for unknown objects (#837)
2517david-sarah@jacaranda.org**20100224025913
2518 Ignore-this: b999f6239796a90cadb41e8650aa3782
2519]
2520[Improve behaviour of 'tahoe ls' for unknown objects, addressing kevan's comments
2521david-sarah@jacaranda.org**20100220061313
2522 Ignore-this: 6205025c477f1c999473a4ae67e1c83
2523]
2524[docs: update relnotes.txt for v1.6.1
2525zooko@zooko.com**20100224065755
2526 Ignore-this: 6d078e94425462ac8d074e3e7c82da28
2527]
2528[docs: NEWS and relnotes-short.txt and CREDITS for v1.6.1
2529zooko@zooko.com**20100224065231
2530 Ignore-this: 41c056ae48c639e5a934d4c1983bc118
2531]
2532[misc/coverage.el: improve filename matching
2533Brian Warner <warner@lothar.com>**20100224044757
2534 Ignore-this: 8d9fb1d2a71e01370da006a2fef04346
2535]
2536[test_util.py: improve coverage of util.time_format
2537Brian Warner <warner@lothar.com>**20100224044637
2538 Ignore-this: bd93495132fe73a9c117d35c1a4e2d72
2539]
2540[docs/performance.txt: split out CPU from network, expand on mutable costs
2541Brian Warner <warner@lothar.com>**20100224043813
2542 Ignore-this: 4779e78ca0eed1dcbd1652e6287219f1
2543]
2544[docs/FTP: the Twisted patch (t3462) has landed, will be in the next release
2545Brian Warner <warner@lothar.com>**20100223210402
2546 Ignore-this: ddc5c8da8c95d8c19380d8c7ecbaf18
2547]
2548[Change OphandleTable to use a deterministic clock, so we can test it
2549Kevan Carstensen <kevan@isnotajoke.com>**20100220210713
2550 Ignore-this: a7437f4eda359bdfa243bd534f23bf52
2551 
2552 To test the changes for #577, we need a deterministic way to simulate
2553 the passage of long periods of time. twisted.internet.task.Clock seems,
2554 from my Googling, to be the way to go for this functionality. I changed
2555 a few things so that OphandleTable would use twisted.internet.task.Clock
2556 when testing:
2557 
2558   * WebishServer.__init__ now takes an optional 'clock' parameter,
2559     which it passes to the root.Root instance it creates.
2560   * root.Root.__init__ now takes an optional 'clock' parameter, which it
2561     passes to the OphandleTable.__init__ method.
2562   * OphandleTable.__init__ now takes an optional 'clock' parameter. If
2563     it is provided, and it isn't None, its callLater method will be used
2564     to schedule ophandle expirations (as opposed to using
2565     reactor.callLater, which is what OphandleTable does normally).
2566   * The WebMixin object in test_web.py now sets a self.clock parameter,
2567     which is a twisted.internet.task.Clock that it feeds to the
2568     WebishServer it creates.
2569 
2570 Tests using the WebMixin can control the passage of time in
2571 OphandleTable by accessing self.clock.
2572]
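
To illustrate the deterministic-clock technique described in the entry above (a standalone sketch, not code from the patch): twisted.internet.task.Clock offers the reactor's callLater interface, but time only passes when the test calls advance().

    from twisted.internet import task

    clock = task.Clock()
    expired = []
    # OphandleTable would schedule expirations via clock.callLater when
    # the optional 'clock' parameter is supplied.
    clock.callLater(3600, expired.append, "ophandle")
    clock.advance(3599)
    assert expired == []
    clock.advance(1)          # now one full hour has "passed"
    assert expired == ["ophandle"]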
2573[Add tests for the ophandle expiration behavior in #577
2574Kevan Carstensen <kevan@isnotajoke.com>**20100221010455
2575 Ignore-this: 87a435108999c24920354b58fd78353f
2576]
2577[Update docs/frontends/webapi.txt to reflect the new expiration times in #577
2578Kevan Carstensen <kevan@isnotajoke.com>**20100221010716
2579 Ignore-this: cefee2ba800c285ae4148fe2dff39a3b
2580]
2581[Increase ophandle expiration times, per #577
2582Kevan Carstensen <kevan@isnotajoke.com>**20100221010512
2583 Ignore-this: 247f61fe8855a0c76fef3777a957f495
2584]
2585[More cleanups to test_cli using new utilities for reading and writing files.
2586david-sarah@jacaranda.org**20100206013855
2587 Ignore-this: 9fd2294406b346bfe9144fff6a61f789
2588]
2589[Fix race conditions and missing callback in allmydata.test.test_cli.Cp.test_copy_using_filecap, add utilities for one-liner reading and writing of files, and fix cases in test_cli where files were not being closed after writing.
2590david-sarah@jacaranda.org**20100206013727
2591 Ignore-this: 49da6c33190d526a4ae84c472f04d5f4
2592]
2593[setup: comment-out the dependency on pycrypto, see #953
2594zooko@zooko.com**20100215050844
2595 Ignore-this: 2751120921ff35b8189d8fcd896da149
2596]
2597[Add tests for #939
2598Kevan Carstensen <kevan@isnotajoke.com>**20100212062137
2599 Ignore-this: 5459e8c64ba76cca70aa720e68549637
2600]
2601[Alter CLI utilities to handle nonexistent aliases better
2602Kevan Carstensen <kevan@isnotajoke.com>**20100211024318
2603 Ignore-this: e698ea4a57f5fe27c24336581ca0cf65
2604]
2605[web/storage.py: display total-seen on the last-complete-cycle line. For #940.
2606Brian Warner <warner@lothar.com>**20100208002010
2607 Ignore-this: c0ed860f3e9628d3171d2b055d96c5aa
2608]
2609[adding pycrypto to the auto dependencies
2610secorp@allmydata.com**20100206054314
2611 Ignore-this: b873fc00a6a5b001d30d479e6053cf2f
2612]
2613[docs running.html - "tahoe run ." does not work with the current installation, replaced with "tahoe start ."
2614secorp@allmydata.com**20100206165320
2615 Ignore-this: fdb2dcb0e417d303cd43b1951a4f8c03
2616]
2617[code coverage: replace figleaf with coverage.py, should work on py2.6 now.
2618Brian Warner <warner@lothar.com>**20100203165421
2619 Ignore-this: 46ab590360be6a385cb4fc4e68b6b42c
2620 
2621 It still lacks the right HTML report (the builtin report is very pretty, but
2622 lacks the "lines uncovered" numbers that I want), and the half-finished
2623 delta-from-last-run measurements.
2624]
2625[More comprehensive changes and ticket references for NEWS
2626david-sarah@jacaranda.org**20100202061256
2627 Ignore-this: 696cf0106e8a7fd388afc5b55fba8a1b
2628]
2629[docs: install.html: link into Python 2.5.5 download page
2630zooko@zooko.com**20100202065852
2631 Ignore-this: 1a9471b8175b7de5741d8445a7ede29d
2632]
2633[TAG allmydata-tahoe-1.6.0
2634zooko@zooko.com**20100202061125
2635 Ignore-this: dee6ade7ac1452cf5d1d9c69a8146d84
2636]
2637[docs: install.html: recommend Python 2.5 (because I can build extension modules for it with mingw), architecture.txt: point out that our Proof of Retrievability feature is client-side-only
2638zooko@zooko.com**20100202053842
2639 Ignore-this: e33fd413a91771c77b17d7de0f215bea
2640]
2641[architecture.txt: remove trailing whitespace, wrap lines: no content changes
2642Brian Warner <warner@lothar.com>**20100202055304
2643 Ignore-this: 1662f37d1162858ac2619db27bcc411f
2644]
2645[docs: a couple of small edits to release notes (thanks Peter)
2646zooko@zooko.com**20100202054832
2647 Ignore-this: 1d0963c43ff19c92775b124c49c8a88a
2648]
2649[docs: CREDITS: where due
2650zooko@zooko.com**20100202053831
2651 Ignore-this: 11646dd603ac715ae8277a4bb9562215
2652]
2653[docs: a few small edits to performance.txt and README
2654zooko@zooko.com**20100202052750
2655 Ignore-this: bf8b1b7438e8fb6da09eec9713c78533
2656]
2657[docs: a few edits to architecture.txt, most significantly highlighting "future work" to avoid confusing it with the current version, and adding a "future work" about a random-sampling Proof of Retrievability verifier
2658zooko@zooko.com**20100202045117
2659 Ignore-this: 81122b3042ea9ee6bc12e795c2386d59
2660]
2661[docs: a few edits and updates to relnotes.txt, relnotes-short.txt, and NEWS in preparation for v1.6.0
2662zooko@zooko.com**20100202043222
2663 Ignore-this: d90c644fa61d78e33cbdf0be428bb07a
2664]
2665[Document leakage of cap URLs via phishing filters in known_issues.txt
2666david-sarah@jacaranda.org**20100202015238
2667 Ignore-this: 78e668dbca77c0e3a73e10c0b74cf024
2668]
2669[docs: updates to relnotes.txt, NEWS, architecture, historical_known_issues, install.html, etc.
2670zooko@zooko.com**20100201181809
2671 Ignore-this: f4fc924652af746862c8ee4d9ba97bf6
2672]
2673[immutable: downloader accepts notifications of buckets even if those notifications arrive after he has begun downloading shares.
2674zooko@zooko.com**20100201061610
2675 Ignore-this: 5b09709f27603a3157eba7ba70028955
 2676 This can be useful if one of the shares he has already begun downloading fails. See #287 for discussion. This fixes the part of #287 that was a regression caused by #928; namely, it fixes fail-over in case a share is corrupted (or the server returns an error or disconnects). It does not fix the related issue mentioned in #287 where a server hangs and doesn't reply to requests for blocks.
2677 
2678]
2679[tests: don't require tahoe to run with no noise if we are using an old twisted that emits DeprecationWarnings
2680zooko@zooko.com**20100201052323
2681 Ignore-this: 69668c772cce612a0c6936a2195ebd2a
2682]
2683[Use if instead of assert to check for twisted ftp patch
2684david-sarah@jacaranda.org**20100127015529
2685 Ignore-this: 66959d946bd1a835ece6f074e75086b2
2686]
2687[tests: stop being surprised that Nevow no longer prints out warnings when it tries to find its static files
2688zooko@zooko.com**20100201041144
2689 Ignore-this: 77b4ac383165d98dfe2a9008ce794742
2690 Unless we are using a sufficiently new version of Nevow, in which case if it prints out warnings then this is a hard test failure. :-)
2691]
2692[cli: suppress DeprecationWarnings emitted from importing nevow and twisted. Fixes #859
2693david-sarah@jacaranda.org**20100201004429
2694 Ignore-this: 22d7216921cd5f04381c0194ed501bbe
2695]
2696[Fill in 'docs/performance.txt' with some performance information
2697Kevan Carstensen <kevan@isnotajoke.com>**20100202005914
2698 Ignore-this: c66b255b2bd2e7e11f5707b25e7b38be
2699]
2700[Improvements to test_unknownnode to cover invalid cap URIs with known prefixes
2701david-sarah@jacaranda.org**20100130063908
2702 Ignore-this: e1a298942c21207473e418ea5efd6276
2703]
2704[Fix invalid trailing commas in JSON example
2705david-sarah@jacaranda.org**20100129201742
2706 Ignore-this: d99e0a8ead4fafabf39a1daf11ec450b
2707]
2708[Improvements to test_hung_server, and fix for status updates in download.py
2709david-sarah@jacaranda.org**20100130064303
2710 Ignore-this: dd889c643afdcf0f86d55855aafda6ad
2711]
2712[immutable: fix bug in tests, change line-endings to unix style, add comment
2713zooko@zooko.com**20100129184237
2714 Ignore-this: f6bd875fe974c55c881e05eddf8d3436
2715]
2716[New tests for #928
2717david-sarah@jacaranda.org**20100129123845
2718 Ignore-this: 5c520f40141f0d9c000ffb05a4698995
2719]
2720[immutable: download from the first servers which provide at least K buckets instead of waiting for all servers to reply
2721zooko@zooko.com**20100127233417
2722 Ignore-this: c855355a40d96827e1d0c469a8d8ab3f
 2723 This should put an end to the phenomenon I've been seeing in which a single hung server can cause all downloads on a grid to hang.  Also it should speed up all downloads by (a) not waiting for responses to queries that it doesn't need, and (b) downloading shares from the servers which answered the initial query the fastest.
 2724 Also, do not count how many buckets you've gotten when deciding whether the download has enough shares or not -- instead count how many buckets for *unique* shares you've gotten.  This appears to fix a slightly weird behavior in the current download code in which receiving >= K different buckets all for the same sharenumber would make it think it had enough to download the file when in fact it hadn't.
2725 This patch needs tests before it is actually ready for trunk.
2726]
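
To illustrate the counting change described in the entry above, a hypothetical helper (not the patch's actual code): readiness should hinge on covering k distinct share numbers, not on receiving k buckets.

    def have_enough_shares(sharenums_by_server, k):
        # Count distinct share numbers across all responses; >= k
        # buckets that all hold the same share are not enough.
        unique = set()
        for shnums in sharenums_by_server.values():
            unique.update(shnums)
        return len(unique) >= k

    # Three servers all offering share 7 cannot satisfy k=3 ...
    assert not have_enough_shares({"s1": [7], "s2": [7], "s3": [7]}, 3)
    # ... but three different shares can.
    assert have_enough_shares({"s1": [7], "s2": [2], "s3": [5]}, 3)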
2727[Eliminate 'foo if test else bar' syntax that isn't supported by Python 2.4
2728david-sarah@jacaranda.org**20100129035210
2729 Ignore-this: 70eafd487b4b6299beedd63b4a54a0c
2730]
2731[Fix example JSON in webapi.txt that cannot occur in practice
2732david-sarah@jacaranda.org**20100129032742
2733 Ignore-this: 361a1ba663d77169aeef93caef870097
2734]
2735[Add mutable field to t=json output for unknown nodes, when mutability is known
2736david-sarah@jacaranda.org**20100129031424
2737 Ignore-this: 1516d63559bdfeb6355485dff0f5c04e
2738]
2739[Show -IMM and -RO suffixes for types of immutable and read-only unknown nodes in directory listings
2740david-sarah@jacaranda.org**20100128220800
2741 Ignore-this: dc5c17c0a566398f88e4303c41321e66
2742]
2743[Fix inaccurate comment in test_mutant_dirnodes_are_omitted
2744david-sarah@jacaranda.org**20100128202456
2745 Ignore-this: 9fa17ed7feac9e4d084f1b2338c76fca
2746]
2747[docs: update relnotes.txt for Tahoe-LAFS v1.6
2748zooko@zooko.com**20100128171257
2749 Ignore-this: 920df92152aead69ef861b9b2e8ff218
2750]
2751[Address comments by Kevan on 833 and add test for stripping spaces
2752david-sarah@jacaranda.org**20100127230642
2753 Ignore-this: de36aeaf4afb3ba05dbeb49a5e9a6b26
2754]
2755[Miscellaneous documentation, test, and code formatting tweaks.
2756david-sarah@jacaranda.org**20100127070309
2757 Ignore-this: 84ca7e4bb7c64221ae2c61144ef5edef
2758]
2759[Prevent mutable objects from being retrieved from an immutable directory, and associated forward-compatibility improvements.
2760david-sarah@jacaranda.org**20100127064430
2761 Ignore-this: 5ef6a3554cf6bef0bf0712cc7d6c0252
2762]
2763[test_runner: cleanup, refactor common code into a non-executable method
2764Brian Warner <warner@lothar.com>**20100127224040
2765 Ignore-this: 4cb4aada87777771f688edfd8129ffca
2766 
2767 Having both test_node() and test_client() (one of which calls the other) felt
2768 confusing to me, so I changed it to have test_node(), test_client(), and a
2769 common do_create() helper method.
2770]
2771[scripts/runner.py: simplify David-Sarah's clever grouped-commands usage trick
2772Brian Warner <warner@lothar.com>**20100127223758
2773 Ignore-this: 70877ebf06ae59f32960b0aa4ce1d1ae
2774]
2775[tahoe backup: skip all symlinks, with warning. Fixes #850, addresses #641.
2776Brian Warner <warner@lothar.com>**20100127223517
2777 Ignore-this: ab5cf05158d32a575ca8efc0f650033f
2778]
2779[NEWS: update with all recent user-visible changes
2780Brian Warner <warner@lothar.com>**20100127222209
2781 Ignore-this: 277d24568018bf4f3fb7736fda64eceb
2782]
2783["tahoe backup": fix --exclude-vcs docs to include Git
2784Brian Warner <warner@lothar.com>**20100127201044
2785 Ignore-this: 756a58dde21bdc65aa62b81803605b5
2786]
2787[docs: fix references to --no-storage, explanation of [storage] section
2788Brian Warner <warner@lothar.com>**20100127200956
2789 Ignore-this: f4be1763a585e1ac6299a4f1b94a59e0
2790]
2791[docs: further CREDITS level-ups for Nils, Kevan, David-Sarah
2792zooko@zooko.com**20100126170021
2793 Ignore-this: 1e513e85cf7b7abf57f056e6d7544b38
2794]
2795[Patch to accept t=set-children as well as t=set_children
2796david-sarah@jacaranda.org**20100124030020
2797 Ignore-this: 2c061f12af817cdf77feeeb64098ec3a
2798]
2799[Fix boodlegrid use of set_children
2800david-sarah@jacaranda.org**20100126063414
2801 Ignore-this: 3aa2d4836f76303b2bacecd09611f999
2802]
2803[ftpd: clearer error message if Twisted needs a patch (by Nils Durner)
2804zooko@zooko.com**20100126143411
2805 Ignore-this: 440e6831ae6da5135c1edd081c93871f
2806]
2807[Add 'docs/performance.txt', which (for the moment) describes mutable file performance issues
2808Kevan Carstensen <kevan@isnotajoke.com>**20100115204500
2809 Ignore-this: ade4e500217db2509aee35aacc8c5dbf
2810]
2811[docs: more CREDITS for François, Kevan, and David-Sarah
2812zooko@zooko.com**20100126132133
2813 Ignore-this: f37d4977c13066fcac088ba98a31b02e
2814]
2815[tahoe_backup.py: display warnings on errors instead of stopping the whole backup. Fix #729.
2816francois@ctrlaltdel.ch**20100120094249
2817 Ignore-this: 7006ea4b0910b6d29af6ab4a3997a8f9
2818 
2819 This patch displays a warning to the user in two cases:
2820   
2821   1. When special files like symlinks, fifos, devices, etc. are found in the
2822      local source.
2823   
 2824   2. If files or directories are not readable by the user running the 'tahoe
2825      backup' command.
2826 
2827 In verbose mode, the number of skipped files and directories is printed at the
2828 end of the backup.
2829 
2830 Exit status returned by 'tahoe backup':
2831 
2832   - 0 everything went fine
2833   - 1 the backup failed
2834   - 2 files were skipped during the backup
2835 
2836]
2837[Warn about test failures due to setting FLOG* env vars
2838david-sarah@jacaranda.org**20100124220629
2839 Ignore-this: 1c25247ca0f0840390a1b7259a9f4a3c
2840]
2841[Message saying that we couldn't find bin/tahoe should say where we looked
2842david-sarah@jacaranda.org**20100116204556
2843 Ignore-this: 1068576fd59ea470f1e19196315d1bb
2844]
2845[Change running.html to describe 'tahoe run'
2846david-sarah@jacaranda.org**20100112044409
2847 Ignore-this: 23ad0114643ce31b56e19bb14e011e4f
2848]
2849[cli: merge the better version of David-Sarah's split-usage-and-help patch with the earlier version that I mistakenly committed
2850zooko@zooko.com**20100126044559
2851 Ignore-this: 284d188e13b7901013cbb650168e6447
2852]
2853[Split tahoe --help options into groups.
2854david-sarah@jacaranda.org**20100112043935
2855 Ignore-this: 610f9c41b00e6863e3cd047379733e3a
2856]
2857[cli: split usage strings into groups (patch by David-Sarah Hopwood)
2858zooko@zooko.com**20100126043921
2859 Ignore-this: 51928d266a7292b873f87f7d53c9a01e
2860]
2861[Add create-node CLI command, and make create-client equivalent to create-node --no-storage (fixes #760)
2862david-sarah@jacaranda.org**20100116052055
2863 Ignore-this: 47d08b18c69738685e13ff365738d5a
2864]
2865[Remove replace= parameter to mkdir-immutable and mkdir-with-children
2866david-sarah@jacaranda.org**20100124224325
2867 Ignore-this: 25207bcc946c0c43d9528718e76ba7b
2868]
2869[contrib/fuse/runtests.py: Fix #888, configure settings in tahoe.cfg and don't treat warnings as failure
2870francois@ctrlaltdel.ch**20100109123010
2871 Ignore-this: 2590d44044acd7dfa3690c416cae945c
2872 
2873 Fix a few bitrotten pieces in the FUSE test script.  It now configures tahoe
2874 node settings by editing tahoe.cfg which is the new supported method.
2875 
 2876 It also tolerates warnings issued by the mount command; the cause of these
 2877 warnings is the same as in #876 (contrib/fuse/runtests.py doesn't tolerate
 2878 deprecation warnings).
2879 
2880]
2881[Fix webapi t=mkdir with multpart/form-data, as on the Welcome page. Closes #919.
2882Brian Warner <warner@lothar.com>**20100121065052
2883 Ignore-this: 1f20ea0a0f1f6d6c1e8e14f193a92c87
2884]
2885[tahoe_add_alias.py: minor refactoring
2886Brian Warner <warner@lothar.com>**20100115064220
2887 Ignore-this: 29910e81ad11209c9e493d65fd2dab9b
2888]
2889[test_dirnode.py: reduce scope of a Client instance, suggested by Kevan.
2890Brian Warner <warner@lothar.com>**20100115062713
2891 Ignore-this: b35efd9e6027e43de6c6f509bfb4ccaa
2892]
2893[test_provisioning: STAN is not always a list. Fix by David-Sarah Hopwood.
2894Brian Warner <warner@lothar.com>**20100115014632
2895 Ignore-this: 9989de7f1e00907706d2b63153138219
2896]
2897[web/directory.py mkdir-immutable: hush pyflakes, add TODO for #903 behavior
2898Brian Warner <warner@lothar.com>**20100114222804
2899 Ignore-this: 717cd3b9a1c8aeee76938c9641db7356
2900]
2901[hush pyflakes-0.4.0 warnings: slightly less-trivial fixes. Closes #900.
2902Brian Warner <warner@lothar.com>**20100114221719
2903 Ignore-this: f774f4637e256ad55502659413a811a8
2904 
2905 This includes one fix (in test_web) which was testing the wrong thing.
2906]
2907[hush pyflakes-0.4.0 warnings: remove trivial unused variables. For #900.
2908Brian Warner <warner@lothar.com>**20100114221529
2909 Ignore-this: e96106c8f1a99fbf93306fbfe9a294cf
2910]
2911[tahoe add-alias/create-alias: don't corrupt non-newline-terminated alias
2912Brian Warner <warner@lothar.com>**20100114210246
2913 Ignore-this: 9c994792e53a85159d708760a9b1b000
2914 file. Closes #741.
2915]
2916[change docs and --help to use "grid" instead of "virtual drive": closes #892.
2917Brian Warner <warner@lothar.com>**20100114201119
2918 Ignore-this: a20d4a4dcc4de4e3b404ff72d40fc29b
2919 
2920 Thanks to David-Sarah Hopwood for the patch.
2921]
2922[backupdb.txt: fix ST_CTIME reference
2923Brian Warner <warner@lothar.com>**20100114194052
2924 Ignore-this: 5a189c7a1181b07dd87f0a08ea31b6d3
2925]
2926[client.py: fix/update comments on KeyGenerator
2927Brian Warner <warner@lothar.com>**20100113004226
2928 Ignore-this: 2208adbb3fd6a911c9f44e814583cabd
2929]
2930[Clean up log.err calls, for one of the issues in #889.
2931Brian Warner <warner@lothar.com>**20100112013343
2932 Ignore-this: f58455ce15f1fda647c5fb25d234d2db
2933 
2934 allmydata.util.log.err() either takes a Failure as the first positional
2935 argument, or takes no positional arguments and must be invoked in an
2936 exception handler. Fixed its signature to match both foolscap.logging.log.err
2937 and twisted.python.log.err . Included a brief unit test.
2938]
2939[tidy up DeadReferenceError handling, ignore them in add_lease calls
2940Brian Warner <warner@lothar.com>**20100112000723
2941 Ignore-this: 72f1444e826fd0b9db6d318f89603c38
2942 
2943 Stop checking separately for ConnectionDone/ConnectionLost, since those have
2944 been folded into DeadReferenceError since foolscap-0.3.1 . Write
2945 rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve
2946 code coverage.
2947]
2948[NEWS: improve "tahoe backup" notes, mention first-backup-after-upgrade duration
2949Brian Warner <warner@lothar.com>**20100111190132
2950 Ignore-this: 10347c590b3375964579ba6c2b0edb4f
2951 
2952 Thanks to Francois Deppierraz for the suggestion.
2953]
2954[test_repairer: add (commented-out) test_each_byte, to see exactly what the
2955Brian Warner <warner@lothar.com>**20100110203552
2956 Ignore-this: 8e84277d5304752edeff052b97821815
2957 Verifier misses
2958 
2959 The results (described in #819) match our expectations: it misses corruption
2960 in unused share fields and in most container fields (which are only visible
2961 to the storage server, not the client). 1265 bytes of a 2753 byte
 2962 share (hosting a 56-byte file with an artificially small segment size) are
2963 unused, mostly in the unused tail of the overallocated UEB space (765 bytes),
2964 and the allocated-but-unwritten plaintext_hash_tree (480 bytes).
2965]
2966[repairer: fix some wrong offsets in the randomized verifier tests, debugged by Brian
2967zooko@zooko.com**20100110203721
2968 Ignore-this: 20604a609db8706555578612c1c12feb
2969 fixes #819
2970]
2971[test_repairer: fix colliding basedir names, which caused test inconsistencies
2972Brian Warner <warner@lothar.com>**20100110084619
2973 Ignore-this: b1d56dd27e6ab99a7730f74ba10abd23
2974]
2975[repairer: add deterministic test for #819, mark as TODO
2976zooko@zooko.com**20100110013619
2977 Ignore-this: 4cb8bb30b25246de58ed2b96fa447d68
2978]
2979[contrib/fuse/runtests.py: Tolerate the tahoe CLI returning deprecation warnings
2980francois@ctrlaltdel.ch**20100109175946
2981 Ignore-this: 419c354d9f2f6eaec03deb9b83752aee
2982 
 2983 Depending on the versions of external libraries such as Twisted or Foolscap,
2984 the tahoe CLI can display deprecation warnings on stdout.  The tests should
2985 not interpret those warnings as a failure if the node is in fact correctly
2986 started.
2987   
2988 See http://allmydata.org/trac/tahoe/ticket/859 for an example of deprecation
2989 warnings.
2990 
2991 fixes #876
2992]
2993[contrib: fix fuse_impl_c to use new Python API
2994zooko@zooko.com**20100109174956
2995 Ignore-this: 51ca1ec7c2a92a0862e9b99e52542179
2996 original patch by Thomas Delaet, fixed by François, reviewed by Brian, committed by me
2997]
2998[docs: CREDITS: add David-Sarah to the CREDITS file
2999zooko@zooko.com**20100109060435
3000 Ignore-this: 896062396ad85f9d2d4806762632f25a
3001]
3002[mutable/publish: don't loop() right away upon DeadReferenceError. Closes #877
3003Brian Warner <warner@lothar.com>**20100102220841
3004 Ignore-this: b200e707b3f13aa8251981362b8a3e61
3005 
3006 The bug was that a disconnected server could cause us to re-enter the initial
3007 loop() call, sending multiple queries to a single server, provoking an
3008 incorrect UCWE. To fix it, stall the loop() with an eventual.fireEventually()
3009]
3010[immutable/checker.py: oops, forgot some imports. Also hush pyflakes.
3011Brian Warner <warner@lothar.com>**20091229233909
3012 Ignore-this: 4d61bd3f8113015a4773fd4768176e51
3013]
3014[mutable repair: return successful=False when numshares<k (thus repair fails),
3015Brian Warner <warner@lothar.com>**20091229233746
3016 Ignore-this: d881c3275ff8c8bee42f6a80ca48441e
3017 instead of weird errors. Closes #874 and #786.
3018 
3019 Previously, if the file had 0 shares, this would raise TypeError as it tried
3020 to call download_version(None). If the file had some shares but fewer than
3021 'k', it would incorrectly raise MustForceRepairError.
3022 
3023 Added get_successful() to the IRepairResults API, to give repair() a place to
3024 report non-code-bug problems like this.
3025]
3026[node.py/interfaces.py: minor docs fixes
3027Brian Warner <warner@lothar.com>**20091229230409
3028 Ignore-this: c86ad6342ef0f95d50639b4f99cd4ddf
3029]
3030[NEWS: fix 1.4.1 announcement w.r.t. add-lease behavior in older releases
3031Brian Warner <warner@lothar.com>**20091229230310
3032 Ignore-this: bbbbb9c961f3bbcc6e5dbe0b1594822
3033]
3034[checker: don't let failures in add-lease affect checker results. Closes #875.
3035Brian Warner <warner@lothar.com>**20091229230108
3036 Ignore-this: ef1a367b93e4d01298c2b1e6ca59c492
3037 
3038 Mutable servermap updates and the immutable checker, when run with
3039 add_lease=True, send both the do-you-have-block and add-lease commands in
3040 parallel, to avoid an extra round trip time. Many older servers have problems
3041 with add-lease and raise various exceptions, which don't generally matter.
3042 The client-side code was catching+ignoring some of them, but unrecognized
3043 exceptions were passed through to the DYHB code, concealing the DYHB results
3044 from the checker, making it think the server had no shares.
3045 
3046 The fix is to separate the code paths. Both commands are sent at the same
3047 time, but the errback path from add-lease is handled separately. Known
3048 exceptions are ignored, the others (both unknown-remote and all-local) are
3049 logged (log.WEIRD, which will trigger an Incident), but neither will affect
3050 the DYHB results.
3051 
3052 The add-lease message is sent first, and we know that the server handles them
3053 synchronously. So when the checker is done, we can be sure that all the
3054 add-lease messages have been retired. This makes life easier for unit tests.
3055]
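
A minimal sketch of the separated code paths described in the entry above (remote-method names as in the storage protocol; other details are abbreviated and illustrative):

    from allmydata.util import log

    def _query_server(rref, storage_index, renew_secret, cancel_secret):
        # Fire both messages in parallel, but give add-lease a private
        # errback chain so its failures can never mask the DYHB answer.
        d_lease = rref.callRemote("add_lease", storage_index,
                                  renew_secret, cancel_secret)
        def _swallow(f):
            # Known exceptions would be filtered here; anything else is
            # logged loudly, but still kept away from the results.
            log.msg("error in add_lease", failure=f, level=log.WEIRD)
        d_lease.addErrback(_swallow)
        # Only the do-you-have-block answer feeds the checker results.
        return rref.callRemote("get_buckets", storage_index)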
3056[test_cli: verify fix for "tahoe get" not creating empty file on error (#121)
3057Brian Warner <warner@lothar.com>**20091227235444
3058 Ignore-this: 6444d52413b68eb7c11bc3dfdc69c55f
3059]
3060[addendum to "Fix 'tahoe ls' on files (#771)"
3061Brian Warner <warner@lothar.com>**20091227232149
3062 Ignore-this: 6dd5e25f8072a3153ba200b7fdd49491
3063 
3064 tahoe_ls.py: tolerate missing metadata
3065 web/filenode.py: minor cleanups
3066 test_cli.py: test 'tahoe ls FILECAP'
3067]
3068[Fix 'tahoe ls' on files (#771). Patch adapted from Kevan Carstensen.
3069Brian Warner <warner@lothar.com>**20091227225443
3070 Ignore-this: 8bf8c7b1cd14ea4b0ebd453434f4fe07
3071 
3072 web/filenode.py: also serve edge metadata when using t=json on a
3073                  DIRCAP/childname object.
3074 tahoe_ls.py: list file objects as if we were listing one-entry directories.
3075              Show edge metadata if we have it, which will be true when doing
3076              'tahoe ls DIRCAP/filename' and false when doing 'tahoe ls
3077              FILECAP'
3078]
3079[tahoe_get: don't create the output file on error. Closes #121.
3080Brian Warner <warner@lothar.com>**20091227220404
3081 Ignore-this: 58d5e793a77ec6e87d9394ade074b926
3082]
3083[webapi: don't accept zero-length childnames during traversal. Closes #358, #676.
3084Brian Warner <warner@lothar.com>**20091227201043
3085 Ignore-this: a9119dec89e1c7741f2289b0cad6497b
3086 
3087 This forbids operations that would implicitly create a directory with a
3088 zero-length (empty string) name, like what you'd get if you did "tahoe put
3089 local /oops/blah" (#358) or "POST /uri/CAP//?t=mkdir" (#676). The error
3090 message is fairly friendly too.
3091 
3092 Also added code to "tahoe put" to catch this error beforehand and suggest the
3093 correct syntax (i.e. without the leading slash).
3094]
3095[CLI: send 'Accept:' header to ask for text/plain tracebacks. Closes #646.
3096Brian Warner <warner@lothar.com>**20091227195828
3097 Ignore-this: 44c258d4d4c7dac0ed58adb22f73331
3098 
3099 The webapi has been looking for an Accept header since 1.4.0, but it treats a
3100 missing header as equal to */* (to honor RFC2616). This change finally
3101 modifies our CLI tools to ask for "text/plain, application/octet-stream",
3102 which seems roughly correct (we either want a plain-text traceback or error
3103 message, or an uninterpreted chunk of binary data to save to disk). Some day
3104 we'll figure out how JSON fits into this scheme.
3105]
3106[Makefile: upload-tarballs: switch from xfer-client to flappclient, closes #350
3107Brian Warner <warner@lothar.com>**20091227163703
3108 Ignore-this: 3beeecdf2ad9c2438ab57f0e33dcb357
3109 
3110 I've also set up a new flappserver on source@allmydata.org to receive the
3111 tarballs. We still need to replace the gutsy buildslave (which is where the
3112 tarballs used to be generated+uploaded) and give it the new FURL.
3113]
3114[misc/ringsim.py: make it deterministic, more detail about grid-is-full behavior
3115Brian Warner <warner@lothar.com>**20091227024832
3116 Ignore-this: a691cc763fb2e98a4ce1767c36e8e73f
3117]
3118[misc/ringsim.py: tool to discuss #302
3119Brian Warner <warner@lothar.com>**20091226060339
3120 Ignore-this: fc171369b8f0d97afeeb8213e29d10ed
3121]
3122[docs: fix helper.txt to describe new config style
3123zooko@zooko.com**20091224223522
3124 Ignore-this: 102e7692dc414a4b466307f7d78601fe
3125]
3126[docs/stats.txt: add TOC, notes about controlling gatherer's listening port
3127Brian Warner <warner@lothar.com>**20091224202133
3128 Ignore-this: 8eef63b0e18db5aa8249c2eafde02c05
3129 
3130 Thanks to Jody Harris for the suggestions.
3131]
3132[Add docs/stats.py, explaining Tahoe stats, the gatherer, and the munin plugins.
3133Brian Warner <warner@lothar.com>**20091223052400
3134 Ignore-this: 7c9eeb6e5644eceda98b59a67730ccd5
3135]
3136[more #859: avoid deprecation warning for unit tests too, hush pyflakes
3137Brian Warner <warner@lothar.com>**20091215000147
3138 Ignore-this: 193622e24d31077da825a11ed2325fd3
3139 
3140 * factor maybe-import-sha logic into util.hashutil
3141]
3142[use hashlib module if available, thus avoiding a DeprecationWarning for importing the old sha module; fixes #859
3143zooko@zooko.com**20091214212703
3144 Ignore-this: 8d0f230a4bf8581dbc1b07389d76029c
3145]
3146[docs: reflow architecture.txt to 78-char lines
3147zooko@zooko.com**20091208232943
3148 Ignore-this: 88f55166415f15192e39407815141f77
3149]
3150[docs: update the about.html a little
3151zooko@zooko.com**20091208212737
3152 Ignore-this: 3fe2d9653c6de0727d3e82bd70f2a8ed
3153]
3154[docs: remove obsolete doc file "codemap.txt"
3155zooko@zooko.com**20091113163033
3156 Ignore-this: 16bc21a1835546e71d1b344c06c61ebb
3157 I started to update this to reflect the current codebase, but then I thought (a) nobody seemed to notice that it hasn't been updated since December 2007, and (b) it will just bit-rot again, so I'm removing it.
3158]
3159[mutable/retrieve.py: stop reaching into private MutableFileNode attributes
3160Brian Warner <warner@lothar.com>**20091208172921
3161 Ignore-this: 61e548798c1105aed66a792bf26ceef7
3162]
3163[mutable/servermap.py: stop reaching into private MutableFileNode attributes
3164Brian Warner <warner@lothar.com>**20091208172608
3165 Ignore-this: b40a6b62f623f9285ad96fda139c2ef2
3166]
3167[mutable/servermap.py: oops, query N+e servers in MODE_WRITE, not k+e
3168Brian Warner <warner@lothar.com>**20091208171156
3169 Ignore-this: 3497f4ab70dae906759007c3cfa43bc
3170 
3171 under normal conditions, this wouldn't cause any problems, but if the shares
3172 are really sparse (perhaps because new servers were added), then
 3173 file-modify operations might stop looking too early and leave old shares in place
3174]
3175[control.py: fix speedtest: use download_best_version (not read) on mutable nodes
3176Brian Warner <warner@lothar.com>**20091207060512
3177 Ignore-this: 7125eabfe74837e05f9291dd6414f917
3178]
3179[FTP-and-SFTP.txt: fix ssh-keygen pointer
3180Brian Warner <warner@lothar.com>**20091207052803
3181 Ignore-this: bc2a70ee8c58ec314e79c1262ccb22f7
3182]
3183[setup: ignore _darcs in the "test-clean" test and make the "clean" step remove all .egg's in the root dir
3184zooko@zooko.com**20091206184835
3185 Ignore-this: 6066bd160f0db36d7bf60aba405558d2
3186]
3187[remove MutableFileNode.download(), prefer download_best_version() instead
3188Brian Warner <warner@lothar.com>**20091201225438
3189 Ignore-this: 5733eb373a902063e09fd52cc858dec0
3190]
3191[Simplify immutable download API: use just filenode.read(consumer, offset, size)
3192Brian Warner <warner@lothar.com>**20091201225330
3193 Ignore-this: bdedfb488ac23738bf52ae6d4ab3a3fb
3194 
3195 * remove Downloader.download_to_data/download_to_filename/download_to_filehandle
3196 * remove download.Data/FileName/FileHandle targets
3197 * remove filenode.download/download_to_data/download_to_filename methods
3198 * leave Downloader.download (the whole Downloader will go away eventually)
3199 * add util.consumer.MemoryConsumer/download_to_data, for convenience
3200   (this is mostly used by unit tests, but it gets used by enough non-test
3201    code to warrant putting it in allmydata.util)
3202 * update tests
3203 * removes about 180 lines of code. Yay negative code days!
3204 
3205 Overall plan is to rewrite immutable/download.py and leave filenode.read() as
3206 the sole read-side API.
3207]
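
A short usage sketch of the surviving read-side API named in the entry above (illustrative; assumes an immutable filenode is already in hand):

    from allmydata.util.consumer import MemoryConsumer, download_to_data

    # read() drives any Twisted IConsumer; MemoryConsumer simply
    # gathers the delivered bytes in RAM.
    d = filenode.read(MemoryConsumer(), offset=0, size=None)
    # download_to_data() is the convenience wrapper mentioned above:
    d2 = download_to_data(filenode)
    d2.addCallback(lambda data: len(data))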
3208[server.py: undo my bogus 'correction' of David-Sarah's comment fix
3209Brian Warner <warner@lothar.com>**20091201024607
3210 Ignore-this: ff4bb58f6a9e045b900ac3a89d6f506a
3211 
3212 and move it to a better line
3213]
3214[Implement more coherent behavior when copying with dircaps/filecaps (closes #761). Patch by Kevan Carstensen.
3215"Brian Warner <warner@lothar.com>"**20091130211009]
3216[storage.py: update comment
3217"Brian Warner <warner@lothar.com>"**20091130195913]
3218[storage server: detect disk space usage on Windows too (fixes #637)
3219david-sarah@jacaranda.org**20091121055644
3220 Ignore-this: 20fb30498174ce997befac7701fab056
3221]
3222[make status of finished operations consistently "Finished"
3223david-sarah@jacaranda.org**20091121061543
3224 Ignore-this: 97d483e8536ccfc2934549ceff7055a3
3225]
3226[NEWS: update with all user-visible changes since the last release
3227Brian Warner <warner@lothar.com>**20091127224217
3228 Ignore-this: 741da6cd928e939fb6d21a61ea3daf0b
3229]
3230[update "tahoe backup" docs, and webapi.txt's mkdir-with-children
3231Brian Warner <warner@lothar.com>**20091127055900
3232 Ignore-this: defac1fb9a2335b0af3ef9dbbcc67b7e
3233]
3234[Add dirnodes to backupdb and "tahoe backup", closes #606.
3235Brian Warner <warner@lothar.com>**20091126234257
3236 Ignore-this: fa88796fcad1763c6a2bf81f56103223
3237 
3238 * backups now share dirnodes with any previous backup, in any location,
3239   so renames and moves are handled very efficiently
3240 * "tahoe backup" no longer bothers reading the previous snapshot
3241 * if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite,
3242   to force new uploads of all files and directories
3243]
3244[webapi: fix t=check for DIR2-LIT (i.e. empty immutable directories)
3245Brian Warner <warner@lothar.com>**20091126232731
3246 Ignore-this: 8513c890525c69c1eca0e80d53a231f8
3247]
3248[PipelineError: fix str() on python2.4 . Closes #842.
3249Brian Warner <warner@lothar.com>**20091124212512
3250 Ignore-this: e62c92ea9ede2ab7d11fe63f43b9c942
3251]
3252[test_uri.py: s/NewDirnode/Dirnode/ , now that they aren't "new" anymore
3253Brian Warner <warner@lothar.com>**20091120075553
3254 Ignore-this: 61c8ef5e45a9d966873a610d8349b830
3255]
3256[interface name cleanups: IFileNode, IImmutableFileNode, IMutableFileNode
3257Brian Warner <warner@lothar.com>**20091120075255
3258 Ignore-this: e3d193c229e2463e1d0b0c92306de27f
3259 
3260 The proper hierarchy is:
3261  IFilesystemNode
3262  +IFileNode
3263  ++IMutableFileNode
3264  ++IImmutableFileNode
3265  +IDirectoryNode
3266 
3267 Also expand test_client.py (NodeMaker) to hit all IFilesystemNode types.
3268]
3269[class name cleanups: s/FileNode/ImmutableFileNode/
3270Brian Warner <warner@lothar.com>**20091120072239
3271 Ignore-this: 4b3218f2d0e585c62827e14ad8ed8ac1
3272 
3273 also fix test/bench_dirnode.py for recent dirnode changes
3274]
3275[Use DIR-IMM and t=mkdir-immutable for "tahoe backup", for #828
3276Brian Warner <warner@lothar.com>**20091118192813
3277 Ignore-this: a4720529c9bc6bc8b22a3d3265925491
3278]
3279[web/directory.py: use "DIR-IMM" to describe immutable directories, not DIR-RO
3280Brian Warner <warner@lothar.com>**20091118191832
3281 Ignore-this: aceafd6ab4bf1cc0c2a719ef7319ac03
3282]
3283[web/info.py: hush pyflakes
3284Brian Warner <warner@lothar.com>**20091118191736
3285 Ignore-this: edc5f128a2b8095fb20686a75747c8
3286]
3287[make get_size/get_current_size consistent for all IFilesystemNode classes
3288Brian Warner <warner@lothar.com>**20091118191624
3289 Ignore-this: bd3449cf96e4827abaaf962672c1665a
3290 
3291 * stop caching most_recent_size in dirnode, rely upon backing filenode for it
3292 * start caching most_recent_size in MutableFileNode
3293 * return None when you don't know, not "?"
3294 * only render None as "?" in the web "more info" page
3295 * add get_size/get_current_size to UnknownNode
3296]
3297[ImmutableDirectoryURIVerifier: fix verifycap handling
3298Brian Warner <warner@lothar.com>**20091118164238
3299 Ignore-this: 6bba5c717b54352262eabca6e805d590
3300]
3301[Add t=mkdir-immutable to the webapi. Closes #607.
3302Brian Warner <warner@lothar.com>**20091118070900
3303 Ignore-this: 311e5fab9a5f28b9e8a28d3d08f3c0d
3304 
3305 * change t=mkdir-with-children to not use multipart/form encoding. Instead,
3306   the request body is all JSON. t=mkdir-immutable uses this format too.
3307 * make nodemaker.create_immutable_dirnode() get convergence from SecretHolder,
3308   but let callers override it
3309 * raise NotDeepImmutableError instead of using assert()
3310 * add mutable= argument to DirectoryNode.create_subdirectory(), default True
3311]
3312[move convergence secret into SecretHolder, next to lease secret
3313Brian Warner <warner@lothar.com>**20091118015444
3314 Ignore-this: 312f85978a339f2d04deb5bcb8f511bc
3315]
3316[nodemaker: implement immutable directories (internal interface), for #607
3317Brian Warner <warner@lothar.com>**20091112002233
3318 Ignore-this: d09fccf41813fdf7e0db177ed9e5e130
3319 
3320 * nodemaker.create_from_cap() now handles DIR2-CHK and DIR2-LIT
3321 * client.create_immutable_dirnode() is used to create them
3322 * no webapi yet
3323]
3324[stop using IURI()/etc as an adapter
3325Brian Warner <warner@lothar.com>**20091111224542
3326 Ignore-this: 9611da7ea6a4696de2a3b8c08776e6e0
3327]
3328[clean up uri-vs-cap terminology, emphasize cap instances instead of URI strings
3329Brian Warner <warner@lothar.com>**20091111222619
3330 Ignore-this: 93626385f6e7f039ada71f54feefe267
3331 
3332  * "cap" means a python instance which encapsulates a filecap/dircap (uri.py)
3333  * "uri" means a string with a "URI:" prefix
3334  * FileNode instances are created with (and retain) a cap instance, and
3335    generate uri strings on demand
3336  * .get_cap/get_readcap/get_verifycap/get_repaircap return cap instances
3337  * .get_uri/get_readonly_uri return uri strings
3338 
3339 * add filenode.download_to_filename() for control.py, should find a better way
3340 * use MutableFileNode.init_from_cap, not .init_from_uri
3341 * directory URI instances: use get_filenode_cap, not get_filenode_uri
3342 * update/cleanup bench_dirnode.py to match, add Makefile target to run it
3343]
3344[add parser for immutable directory caps: DIR2-CHK, DIR2-LIT, DIR2-CHK-Verifier
3345Brian Warner <warner@lothar.com>**20091104181351
3346 Ignore-this: 854398cc7a75bada57fa97c367b67518
3347]
3348[wui: s/TahoeLAFS/Tahoe-LAFS/
3349zooko@zooko.com**20091029035050
3350 Ignore-this: 901e64cd862e492ed3132bd298583c26
3351]
3352[tests: bump up the timeout on test_repairer to see if 120 seconds was too short for François's ARM box to do the test even when it was doing it right.
3353zooko@zooko.com**20091027224800
3354 Ignore-this: 95e93dc2e018b9948253c2045d506f56
3355]
3356[dirnode.pack_children(): add deep_immutable= argument
3357Brian Warner <warner@lothar.com>**20091026162809
3358 Ignore-this: d5a2371e47662c4bc6eff273e8181b00
3359 
3360 This will be used by DIR2:CHK to enforce the deep-immutability requirement.
3361]
3362[webapi: use t=mkdir-with-children instead of a children= arg to t=mkdir .
3363Brian Warner <warner@lothar.com>**20091026011321
3364 Ignore-this: 769cab30b6ab50db95000b6c5a524916
3365 
3366 This is safer: in the earlier API, an old webapi server would silently ignore
3367 the initial children, and clients trying to set them would have to fetch the
3368 newly-created directory to discover the incompatibility. In the new API,
3369 clients using t=mkdir-with-children against an old webapi server will get a
3370 clear error.
3371]
3372[nodemaker.create_new_mutable_directory: pack_children() in initial_contents=
3373Brian Warner <warner@lothar.com>**20091020005118
3374 Ignore-this: bd43c4eefe06fd32b7492bcb0a55d07e
3375 instead of creating an empty file and then adding the children later.
3376 
3377 This should speed up mkdir(initial_children) considerably, removing two
3378 roundtrips and an entire read-modify-write cycle, probably bringing it down
3379 to a single roundtrip. A quick test (against the volunteergrid) suggests a
3380 30% speedup.
3381 
3382 test_dirnode: add new tests to enforce the restrictions that interfaces.py
3383 claims for create_new_mutable_directory(): no UnknownNodes, metadata dicts
3384]
3385[test_dirnode.py: add tests of initial_children= args to client.create_dirnode
3386Brian Warner <warner@lothar.com>**20091017194159
3387 Ignore-this: 2e2da28323a4d5d815466387914abc1b
3388 and nodemaker.create_new_mutable_directory
3389]
3390[update many dirnode interfaces to accept dict-of-nodes instead of dict-of-caps
3391Brian Warner <warner@lothar.com>**20091017192829
3392 Ignore-this: b35472285143862a856bf4b361d692f0
3393 
3394 interfaces.py: define INodeMaker, document argument values, change
3395                create_new_mutable_directory() to take dict-of-nodes. Change
3396                dirnode.set_nodes() and dirnode.create_subdirectory() too.
3397 nodemaker.py: use INodeMaker, update create_new_mutable_directory()
3398 client.py: have create_dirnode() delegate initial_children= to nodemaker
3399 dirnode.py (Adder): take dict-of-nodes instead of list-of-nodes, which
3400                     updates set_nodes() and create_subdirectory()
3401 web/common.py (convert_initial_children_json): create dict-of-nodes
3402 web/directory.py: same
3403 web/unlinked.py: same
3404 test_dirnode.py: update tests to match
3405]
3406[dirnode.py: move pack_children() out to a function, for eventual use by others
3407Brian Warner <warner@lothar.com>**20091017180707
3408 Ignore-this: 6a823fb61f2c180fd38d6742d3196a7a
3409]
3410[move dirnode.CachingDict to dictutil.AuxValueDict, generalize method names,
3411Brian Warner <warner@lothar.com>**20091017180005
3412 Ignore-this: b086933cf429df0fcea16a308d2640dd
3413 improve tests. Let dirnode _pack_children accept either dict or AuxValueDict.
3414]
3415[test/common.py: update FakeMutableFileNode to new contents= callable scheme
3416Brian Warner <warner@lothar.com>**20091013052154
3417 Ignore-this: 62f00a76454a2190d1c8641c5993632f
3418]
3419[The initial_children= argument to nodemaker.create_new_mutable_directory is
3420Brian Warner <warner@lothar.com>**20091013031922
3421 Ignore-this: 72e45317c21f9eb9ec3bd79bd4311f48
3422 now enabled.
3423]
3424[client.create_mutable_file(contents=) now accepts a callable, which is
3425Brian Warner <warner@lothar.com>**20091013031232
3426 Ignore-this: 3c89d2f50c1e652b83f20bd3f4f27c4b
3427 invoked with the new MutableFileNode and is supposed to return the initial
3428 contents. This can be used by e.g. a new dirnode which needs the filenode's
3429 writekey to encrypt its initial children.
3430 
3431 create_mutable_file() still accepts a bytestring too, or None for an empty
3432 file.
3433]
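
A sketch of the callable form described in the entry above (pack_children_for is a hypothetical stand-in for whatever derives the initial bytes from the new node):

    def initial_contents(node):
        # Invoked with the freshly created MutableFileNode, so e.g. a
        # dirnode can use the node's writekey to encrypt its children
        # before the first contents are uploaded.
        return pack_children_for(node)   # hypothetical helper

    d = client.create_mutable_file(initial_contents)
    # A plain bytestring, or None for an empty file, still works:
    d2 = client.create_mutable_file("initial data")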
3434[webapi: t=mkdir now accepts initial children, using the same JSON that t=json
3435Brian Warner <warner@lothar.com>**20091013023444
3436 Ignore-this: 574a46ed46af4251abf8c9580fd31ef7
3437 emits.
3438 
3439 client.create_dirnode(initial_children=) now works.
3440]
3441[replace dirnode.create_empty_directory() with create_subdirectory(), which
3442Brian Warner <warner@lothar.com>**20091013021520
3443 Ignore-this: 6b57cb51bcfcc6058d0df569fdc8a9cf
3444 takes an initial_children= argument
3445]
3446[dirnode.set_children: change return value: fire with self instead of None
3447Brian Warner <warner@lothar.com>**20091013015026
3448 Ignore-this: f1d14e67e084e4b2a4e25fa849b0e753
3449]
3450[dirnode.set_nodes: change return value: fire with self instead of None
3451Brian Warner <warner@lothar.com>**20091013014546
3452 Ignore-this: b75b3829fb53f7399693f1c1a39aacae
3453]
3454[dirnode.set_children: take a dict, not a list
3455Brian Warner <warner@lothar.com>**20091013002440
3456 Ignore-this: 540ce72ce2727ee053afaae1ff124e21
3457]
3458[dirnode.set_uri/set_children: change signature to take writecap+readcap
3459Brian Warner <warner@lothar.com>**20091012235126
3460 Ignore-this: 5df617b2d379a51c79148a857e6026b1
3461 instead of a single cap. The webapi t=set_children call benefits too.
3462]
3463[replace Client.create_empty_dirnode() with create_dirnode(), in anticipation
3464Brian Warner <warner@lothar.com>**20091012224506
3465 Ignore-this: cbdaa4266ecb3c6496ffceab4f95709d
3466 of adding initial_children= argument.
3467 
3468 Includes stubbed-out initial_children= support.
3469]
3470[test_web.py: use a less-fake client, making test harness smaller
3471Brian Warner <warner@lothar.com>**20091012222808
3472 Ignore-this: 29e95147f8c94282885c65b411d100bb
3473]
3474[webapi.txt: document t=set_children, other small edits
3475Brian Warner <warner@lothar.com>**20091009200446
3476 Ignore-this: 4d7e76b04a7b8eaa0a981879f778ea5d
3477]
3478[Verifier: check the full cryptext-hash tree on each share. Removed .todos
3479Brian Warner <warner@lothar.com>**20091005221849
3480 Ignore-this: 6fb039c5584812017d91725e687323a5
3481 from the last few test_repairer tests that were waiting on this.
3482]
[Verifier: check the full block-hash-tree on each share
Brian Warner <warner@lothar.com>**20091005214844
 Ignore-this: 3f7ccf6d253f32340f1bf1da27803eee
 
 Removed the .todo from two test_repairer tests that check this. The only
 remaining .todos are on the three crypttext-hash-tree tests.
]
[Verifier: check the full share-hash chain on each share
Brian Warner <warner@lothar.com>**20091005213443
 Ignore-this: 3d30111904158bec06a4eac22fd39d17
 
 Removed the .todo from two test_repairer tests that check this.
]
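
(The three Verifier checks above are all Merkle-tree verifications. A generic
sketch of the idea follows; it is not Tahoe's verifier code, which uses
tagged SHA-256d hashes rather than the plain SHA-256 shown here.)

    import hashlib

    def hash_pair(left, right):
        # parent = H(left || right); Tahoe's real scheme is tagged SHA-256d
        return hashlib.sha256(left + right).digest()

    def verify_leaf(leaf_hash, path, expected_root):
        # path: [(sibling_hash, sibling_is_left), ...] from leaf to root
        h = leaf_hash
        for sibling, sibling_is_left in path:
            if sibling_is_left:
                h = hash_pair(sibling, h)
            else:
                h = hash_pair(h, sibling)
        return h == expected_root
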
[test_repairer: rename Verifier test cases to be more precise and less verbose
Brian Warner <warner@lothar.com>**20091005201115
 Ignore-this: 64be7094e33338c7c2aea9387e138771
]
[immutable/checker.py: rearrange code a little bit, make it easier to follow
Brian Warner <warner@lothar.com>**20091005200252
 Ignore-this: 91cc303fab66faf717433a709f785fb5
]
[test/common.py: wrap docstrings to 80cols so I can read them more easily
Brian Warner <warner@lothar.com>**20091005200143
 Ignore-this: b180a3a0235cbe309c87bd5e873cbbb3
]
[immutable/download.py: wrap to 80cols, no functional changes
Brian Warner <warner@lothar.com>**20091005192542
 Ignore-this: 6b05fe3dc6d78832323e708b9e6a1fe
]
[CHK-hashes.svg: cross out plaintext hashes, since we don't include
Brian Warner <warner@lothar.com>**20091005010803
 Ignore-this: bea2e953b65ec7359363aa20de8cb603
 them (until we finish #453)
]
[docs: a few licensing clarifications requested by Ubuntu
zooko@zooko.com**20090927033226
 Ignore-this: 749fc8c9aeb6dc643669854a3e81baa7
]
[setup: remove binary WinFUSE modules
zooko@zooko.com**20090924211436
 Ignore-this: 8aefc571d2ae22b9405fc650f2c2062
 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
 In this case, the binaries are from the tahoe-w32-client project: http://allmydata.org/trac/tahoe-w32-client , from which you can also get the source.
]
[setup: remove binary _fusemodule.so 's
zooko@zooko.com**20090924211130
 Ignore-this: 74487bbe27d280762ac5dd5f51e24186
 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
 In this case, these modules come from the MacFUSE project: http://code.google.com/p/macfuse/
]
[doc: add a copy of LGPL2 for documentation purposes for ubuntu
zooko@zooko.com**20090924054218
 Ignore-this: 6a073b48678a7c84dc4fbcef9292ab5b
]
[setup: remove a convenience copy of figleaf, to ease inclusion into Ubuntu Karmic Koala
zooko@zooko.com**20090924053215
 Ignore-this: a0b0c990d6e2ee65c53a24391365ac8d
 We need to carefully document the licence of figleaf in order to get Tahoe-LAFS into Ubuntu Karmic Koala.  However, figleaf isn't really a part of Tahoe-LAFS per se -- this is just a "convenience copy" of a development tool.  The quickest way to make Tahoe-LAFS acceptable for Karmic then, is to remove figleaf from the Tahoe-LAFS tarball itself.  People who want to run figleaf on Tahoe-LAFS (as everyone should want) can install figleaf themselves.  I haven't tested this -- there may be incompatibilities between upstream figleaf and the copy that we had here...
]
[setup: shebang for misc/build-deb.py to fail quickly
zooko@zooko.com**20090819135626
 Ignore-this: 5a1b893234d2d0bb7b7346e84b0a6b4d
 Without this patch, when I ran "chmod +x ./misc/build-deb.py && ./misc/build-deb.py", it hung indefinitely.  (I wonder what it was doing.)
]
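
(The actual interpreter line is not shown in this bundle; the usual shape of
such a fix is simply a correct shebang, so the script starts, or fails,
immediately instead of hanging:)

    #!/usr/bin/env python
    # without a proper shebang, the shell may misinterpret the Python source
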
[docs: Shawn Willden grants permission for his contributions under GPL2+|TGPPL1+
zooko@zooko.com**20090921164651
 Ignore-this: ef1912010d07ff2ffd9678e7abfd0d57
]
[docs: Csaba Henk granted permission to license fuse.py under the same terms as Tahoe-LAFS itself
zooko@zooko.com**20090921154659
 Ignore-this: c61ba48dcb7206a89a57ca18a0450c53
]
[setup: mark setup.py as having utf-8 encoding in it
zooko@zooko.com**20090920180343
 Ignore-this: 9d3850733700a44ba7291e9c5e36bb91
]
[doc: licensing cleanups
zooko@zooko.com**20090920171631
 Ignore-this: 7654f2854bf3c13e6f4d4597633a6630
 Use nice utf-8 © instead of "(c)". Remove licensing statements on utility modules that have been assigned to allmydata.com by their original authors. (Nattraverso was not assigned to allmydata.com -- it was LGPL'ed -- but I checked and src/allmydata/util/iputil.py was completely rewritten and doesn't contain any line of code from nattraverso.)  Add notes to misc/debian/copyright about licensing on files that aren't just allmydata.com-licensed.
]
[build-deb.py: run darcsver early, otherwise we get the wrong version later on
Brian Warner <warner@lothar.com>**20090918033620
 Ignore-this: 6635c5b85e84f8aed0d8390490c5392a
]
[new approach for debian packaging, sharing pieces across distributions. Still experimental, still only works for sid.
warner@lothar.com**20090818190527
 Ignore-this: a75eb63db9106b3269badbfcdd7f5ce1
]
[new experimental deb-packaging rules. Only works for sid so far.
Brian Warner <warner@lothar.com>**20090818014052
 Ignore-this: 3a26ad188668098f8f3cc10a7c0c2f27
]
[setup.py: read _version.py and pass to setup(version=), so more commands work
Brian Warner <warner@lothar.com>**20090818010057
 Ignore-this: b290eb50216938e19f72db211f82147e
 like "setup.py --version" and "setup.py --fullname"
]
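
(A sketch of the technique; darcsver writes src/allmydata/_version.py with a
verstr assignment, though the exact parsing shown here is an assumption.)

    # setup.py (sketch)
    verstr = "unknown"
    try:
        for line in open("src/allmydata/_version.py"):
            if line.startswith("verstr"):
                verstr = line.split("=", 1)[1].strip().strip('"')
                break
    except EnvironmentError:
        pass  # no _version.py yet: "setup.py --version" reports "unknown"

    from distutils.core import setup
    setup(name="allmydata-tahoe", version=verstr)
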
[test/check_speed.py: fix shebang line
Brian Warner <warner@lothar.com>**20090818005948
 Ignore-this: 7f3a37caf349c4c4de704d0feb561f8d
]
[setup: remove bundled version of darcsver-1.2.1
zooko@zooko.com**20090816233432
 Ignore-this: 5357f26d2803db2d39159125dddb963a
 That version of darcsver emits a scary error message when the darcs executable or the _darcs subdirectory is not found.
 This error is hidden (unless the --loud option is passed) in darcsver >= 1.3.1.
 Fixes #788.
]
[de-Service-ify Helper, pass in storage_broker and secret_holder directly.
Brian Warner <warner@lothar.com>**20090815201737
 Ignore-this: 86b8ac0f90f77a1036cd604dd1304d8b
 This makes it more obvious that the Helper currently generates leases with
 the Helper's own secrets, rather than getting values from the client, which
 is arguably a bug that will likely be resolved with the Accounting project.
]
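
(Sketch of the dependency-injection shape named in the patch title; the
argument order is an assumption.)

    # before: the Helper reached collaborators through its Service parent.
    # after: they are passed in explicitly at construction time:
    helper = Helper(basedir, storage_broker, secret_holder)
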
[immutable.Downloader: pass StorageBroker to constructor, stop being a Service
Brian Warner <warner@lothar.com>**20090815192543
 Ignore-this: af5ab12dbf75377640a670c689838479
 child of the client, access with client.downloader instead of
 client.getServiceNamed("downloader"). The single "Downloader" instance is
 scheduled for demolition anyways, to be replaced by individual
 filenode.download calls.
]
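
(The access-pattern change, as named in the patch text:)

    # before:
    downloader = client.getServiceNamed("downloader")
    # after: a plain attribute, constructed with a StorageBroker:
    downloader = client.downloader
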
[tests: double the timeout on test_runner.RunNode.test_introducer since feisty hit a timeout
zooko@zooko.com**20090815160512
 Ignore-this: ca7358bce4bdabe8eea75dedc39c0e67
 I'm not sure if this is an actual timing issue (feisty is running on an overloaded VM, if I recall correctly), or if there is a deeper bug.
]
[stop making History be a Service, it wasn't necessary
Brian Warner <warner@lothar.com>**20090815114415
 Ignore-this: b60449231557f1934a751c7effa93cfe
]
[Overhaul IFilesystemNode handling, to simplify tests and use POLA internally.
Brian Warner <warner@lothar.com>**20090815112846
 Ignore-this: 1db1b9c149a60a310228aba04c5c8e5f
 
 * stop using IURI as an adapter
 * pass cap strings around instead of URI instances
 * move filenode/dirnode creation duties from Client to new NodeMaker class
 * move other Client duties to KeyGenerator, SecretHolder, History classes
 * stop passing Client reference to dirnode/filenode constructors
   - pass less-powerful references instead, like StorageBroker or Uploader
 * always create DirectoryNodes by wrapping a filenode (mutable for now)
 * remove some specialized mock classes from unit tests
 
 Detailed list of changes (done one at a time, then merged together)
 
 always pass a string to create_node_from_uri(), not an IURI instance
 always pass a string to IFilesystemNode constructors, not an IURI instance
 stop using IURI() as an adapter, switch on cap prefix in create_node_from_uri()
 client.py: move SecretHolder code out to a separate class
 test_web.py: hush pyflakes
 client.py: move NodeMaker functionality out into a separate object
 LiteralFileNode: stop storing a Client reference
 immutable Checker: remove Client reference, it only needs a SecretHolder
 immutable Upload: remove Client reference, leave SecretHolder and StorageBroker
 immutable Repairer: replace Client reference with StorageBroker and SecretHolder
 immutable FileNode: remove Client reference
 mutable.Publish: stop passing Client
 mutable.ServermapUpdater: get StorageBroker in constructor, not by peeking into Client reference
 MutableChecker: reference StorageBroker and History directly, not through Client
 mutable.FileNode: removed unused indirection to checker classes
 mutable.FileNode: remove Client reference
 client.py: move RSA key generation into a separate class, so it can be passed to the nodemaker
 move create_mutable_file() into NodeMaker
 test_dirnode.py: stop using FakeClient mockups, use NoNetworkGrid instead. This simplifies the code, but takes longer to run (17s instead of 6s). This should come down later when other cleanups make it possible to use simpler (non-RSA) fake mutable files for dirnode tests.
 test_mutable.py: clean up basedir names
 client.py: move create_empty_dirnode() into NodeMaker
 dirnode.py: get rid of DirectoryNode.create
 remove DirectoryNode.init_from_uri, refactor NodeMaker for customization, simplify test_web's mock Client to match
 stop passing Client to DirectoryNode, make DirectoryNode.create_with_mutablefile the normal DirectoryNode constructor, start removing client from NodeMaker
 remove Client from NodeMaker
 move helper status into History, pass History to web.Status instead of Client
 test_mutable.py: fix minor typo
]
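
(A sketch of the cap-prefix dispatch mentioned above; the prefixes are real
Tahoe cap formats, but the constructor names are hypothetical.)

    def create_node_from_uri(cap):
        # cap is always a string now, never an IURI instance
        if cap.startswith("URI:DIR2"):
            return make_dirnode(cap)             # hypothetical
        elif cap.startswith("URI:SSK"):
            return make_mutable_filenode(cap)    # hypothetical
        elif cap.startswith("URI:CHK:") or cap.startswith("URI:LIT:"):
            return make_immutable_filenode(cap)  # hypothetical
        raise ValueError("unknown cap: %r" % (cap,))
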
[docs: edits for docs/running.html from Sam Mason
zooko@zooko.com**20090809201416
 Ignore-this: 2207e80449943ebd4ed50cea57c43143
]
[docs: install.html: instruct Debian users to use this document and not to go find the DownloadDebianPackages page, ignore the warning at the top of it, and try it
zooko@zooko.com**20090804123840
 Ignore-this: 49da654f19d377ffc5a1eff0c820e026
 http://allmydata.org/pipermail/tahoe-dev/2009-August/002507.html
]
[docs: relnotes.txt: reflow to 63 chars wide because google groups and some web forms seem to wrap to that
zooko@zooko.com**20090802135016
 Ignore-this: 53b1493a0491bc30fb2935fad283caeb
]
[docs: about.html: fix English usage noticed by Amber
zooko@zooko.com**20090802050533
 Ignore-this: 89965c4650f9bd100a615c401181a956
]
[docs: fix mis-spelled word in about.html
zooko@zooko.com**20090802050320
 Ignore-this: fdfd0397bc7cef9edfde425dddeb67e5
]
[TAG allmydata-tahoe-1.5.0
zooko@zooko.com**20090802031303
 Ignore-this: 94e5558e7225c39a86aae666ea00f166
]
Patch bundle hash:
f3394b6a4ddd6b67b78458401b5f84474f17f5be