Ticket #778: tests.4.txt

File tests.4.txt, 134.1 KB (added by kevan at 2010-01-18T21:48:51Z)
1Sat Oct 17 18:30:13 PDT 2009  Kevan Carstensen <kevan@isnotajoke.com>
2  * Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
3
4Fri Oct 30 02:19:08 PDT 2009  "Kevan Carstensen" <kevan@isnotajoke.com>
5  * Refactor some behavior into a mixin, and add tests for the behavior described in #778
6
7Tue Nov  3 19:36:02 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
8  * Alter tests to use the new form of set_shareholders
9
10Tue Nov  3 19:42:32 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
11  * Minor tweak to an existing test -- make the first server read-write, instead of read-only
12
13Wed Nov  4 03:13:24 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
14  * Add a test for upload.shares_by_server
15
16Wed Nov  4 03:28:49 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
17  * Add more tests for comment:53 in ticket #778
18
19Sun Nov  8 16:37:35 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
20  * Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
21
22Mon Nov 16 11:23:34 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
23  * Re-work 'test_upload.py' to be more readable; add more tests for #778
24
25Sun Nov 22 17:20:08 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
26  * Add tests for the behavior described in #834.
27
28Fri Dec  4 20:34:53 PST 2009  Kevan Carstensen <kevan@isnotajoke.com>
29  * Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
30
31Thu Jan  7 10:13:25 PST 2010  Kevan Carstensen <kevan@isnotajoke.com>
32  * Alter various unit tests to work with the new happy behavior
33
34Mon Jan 18 13:02:38 PST 2010  Kevan Carstensen <kevan@isnotajoke.com>
35  * Revisions of the #778 tests, per reviewers' comments
36 
37  - Fix comments and confusing naming.
38  - Add tests for the new error messages suggested by David-Sarah
39    and Zooko.
40  - Alter existing tests for new error messages.
41  - Make sure that the tests continue to work with the trunk.
42  - Add a test for a mutual disjointness assertion that I added to
43    upload.servers_of_happiness.
44 
45
46New patches:
47
48[Alter NoNetworkGrid to allow the creation of readonly servers for testing purposes.
49Kevan Carstensen <kevan@isnotajoke.com>**20091018013013
50 Ignore-this: e12cd7c4ddeb65305c5a7e08df57c754
51] {
52hunk ./src/allmydata/test/no_network.py 204
53             c.setServiceParent(self)
54             self.clients.append(c)
55 
56-    def make_server(self, i):
57+    def make_server(self, i, readonly=False):
58         serverid = hashutil.tagged_hash("serverid", str(i))[:20]
59         serverdir = os.path.join(self.basedir, "servers",
60                                  idlib.shortnodeid_b2a(serverid))
61hunk ./src/allmydata/test/no_network.py 209
62         fileutil.make_dirs(serverdir)
63-        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats())
64+        ss = StorageServer(serverdir, serverid, stats_provider=SimpleStats(),
65+                           readonly_storage=readonly)
66         return ss
67 
68     def add_server(self, i, ss):
69}
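
The patch above gives make_server() an optional readonly flag (default False, so existing callers are unchanged) and passes it through to StorageServer as readonly_storage. A minimal sketch of how a NoNetworkGrid-based test can use it -- the helper name is hypothetical, but the make_server/add_server calls are the ones patched above:

    # Hypothetical helper for a GridTestMixin-style test, where self.g
    # is the NoNetworkGrid built by set_up_grid().
    def _add_readonly_server(self, server_number):
        ss = self.g.make_server(server_number, readonly=True)
        self.g.add_server(server_number, ss)
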
70[Refactor some behavior into a mixin, and add tests for the behavior described in #778
71"Kevan Carstensen" <kevan@isnotajoke.com>**20091030091908
72 Ignore-this: a6f9797057ca135579b249af3b2b66ac
73] {
74hunk ./src/allmydata/test/test_upload.py 2
75 
76-import os
77+import os, shutil
78 from cStringIO import StringIO
79 from twisted.trial import unittest
80 from twisted.python.failure import Failure
81hunk ./src/allmydata/test/test_upload.py 12
82 
83 import allmydata # for __full_version__
84 from allmydata import uri, monitor, client
85-from allmydata.immutable import upload
86+from allmydata.immutable import upload, encode
87 from allmydata.interfaces import FileTooLargeError, NoSharesError, \
88      NotEnoughSharesError
89 from allmydata.util.assertutil import precondition
90hunk ./src/allmydata/test/test_upload.py 20
91 from no_network import GridTestMixin
92 from common_util import ShouldFailMixin
93 from allmydata.storage_client import StorageFarmBroker
94+from allmydata.storage.server import storage_index_to_dir
95 
96 MiB = 1024*1024
97 
98hunk ./src/allmydata/test/test_upload.py 91
99 class ServerError(Exception):
100     pass
101 
102+class SetDEPMixin:
103+    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
104+        p = {"k": k,
105+             "happy": happy,
106+             "n": n,
107+             "max_segment_size": max_segsize,
108+             }
109+        self.node.DEFAULT_ENCODING_PARAMETERS = p
110+
111 class FakeStorageServer:
112     def __init__(self, mode):
113         self.mode = mode
114hunk ./src/allmydata/test/test_upload.py 247
115     u = upload.FileHandle(fh, convergence=None)
116     return uploader.upload(u)
117 
118-class GoodServer(unittest.TestCase, ShouldFailMixin):
119+class GoodServer(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
120     def setUp(self):
121         self.node = FakeClient(mode="good")
122         self.u = upload.Uploader()
123hunk ./src/allmydata/test/test_upload.py 254
124         self.u.running = True
125         self.u.parent = self.node
126 
127-    def set_encoding_parameters(self, k, happy, n, max_segsize=1*MiB):
128-        p = {"k": k,
129-             "happy": happy,
130-             "n": n,
131-             "max_segment_size": max_segsize,
132-             }
133-        self.node.DEFAULT_ENCODING_PARAMETERS = p
134-
135     def _check_small(self, newuri, size):
136         u = uri.from_string(newuri)
137         self.failUnless(isinstance(u, uri.LiteralFileURI))
138hunk ./src/allmydata/test/test_upload.py 377
139         d.addCallback(self._check_large, SIZE_LARGE)
140         return d
141 
142-class ServerErrors(unittest.TestCase, ShouldFailMixin):
143+class ServerErrors(unittest.TestCase, ShouldFailMixin, SetDEPMixin):
144     def make_node(self, mode, num_servers=10):
145         self.node = FakeClient(mode, num_servers)
146         self.u = upload.Uploader()
147hunk ./src/allmydata/test/test_upload.py 677
148         d.addCallback(_done)
149         return d
150 
151-class EncodingParameters(GridTestMixin, unittest.TestCase):
152+class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
153+    ShouldFailMixin):
154+    def _do_upload_with_broken_servers(self, servers_to_break):
155+        """
156+        I act like a normal upload, but before I send the results of
157+        Tahoe2PeerSelector to the Encoder, I break the first servers_to_break
158+        PeerTrackers in the used_peers part of the return result.
159+        """
160+        assert self.g, "I tried to find a grid at self.g, but failed"
161+        broker = self.g.clients[0].storage_broker
162+        sh     = self.g.clients[0]._secret_holder
163+        data = upload.Data("data" * 10000, convergence="")
164+        data.encoding_param_k = 3
165+        data.encoding_param_happy = 4
166+        data.encoding_param_n = 10
167+        uploadable = upload.EncryptAnUploadable(data)
168+        encoder = encode.Encoder()
169+        encoder.set_encrypted_uploadable(uploadable)
170+        status = upload.UploadStatus()
171+        selector = upload.Tahoe2PeerSelector("dglev", "test", status)
172+        storage_index = encoder.get_param("storage_index")
173+        share_size = encoder.get_param("share_size")
174+        block_size = encoder.get_param("block_size")
175+        num_segments = encoder.get_param("num_segments")
176+        d = selector.get_shareholders(broker, sh, storage_index,
177+                                      share_size, block_size, num_segments,
178+                                      10, 4)
179+        def _have_shareholders((used_peers, already_peers)):
180+            assert servers_to_break <= len(used_peers)
181+            for index in xrange(servers_to_break):
182+                server = list(used_peers)[index]
183+                for share in server.buckets.keys():
184+                    server.buckets[share].abort()
185+            buckets = {}
186+            for peer in used_peers:
187+                buckets.update(peer.buckets)
188+            encoder.set_shareholders(buckets)
189+            d = encoder.start()
190+            return d
191+        d.addCallback(_have_shareholders)
192+        return d
193+
194+    def _add_server_with_share(self, server_number, share_number=None,
195+                               readonly=False):
196+        assert self.g, "I tried to find a grid at self.g, but failed"
197+        assert self.shares, "I tried to find shares at self.shares, but failed"
198+        ss = self.g.make_server(server_number, readonly)
199+        self.g.add_server(server_number, ss)
200+        if share_number:
201+            # Copy share i from the directory associated with the first
202+            # storage server to the directory associated with this one.
203+            old_share_location = self.shares[share_number][2]
204+            new_share_location = os.path.join(ss.storedir, "shares")
205+            si = uri.from_string(self.uri).get_storage_index()
206+            new_share_location = os.path.join(new_share_location,
207+                                              storage_index_to_dir(si))
208+            if not os.path.exists(new_share_location):
209+                os.makedirs(new_share_location)
210+            new_share_location = os.path.join(new_share_location,
211+                                              str(share_number))
212+            shutil.copy(old_share_location, new_share_location)
213+            shares = self.find_shares(self.uri)
214+            # Make sure that the storage server has the share.
215+            self.failUnless((share_number, ss.my_nodeid, new_share_location)
216+                            in shares)
217+
218+    def _setup_and_upload(self):
219+        """
220+        I set up a NoNetworkGrid with a single server and client,
221+        upload a file to it, store its uri in self.uri, and store its
222+        sharedata in self.shares.
223+        """
224+        self.set_up_grid(num_clients=1, num_servers=1)
225+        client = self.g.clients[0]
226+        client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
227+        data = upload.Data("data" * 10000, convergence="")
228+        self.data = data
229+        d = client.upload(data)
230+        def _store_uri(ur):
231+            self.uri = ur.uri
232+        d.addCallback(_store_uri)
233+        d.addCallback(lambda ign:
234+            self.find_shares(self.uri))
235+        def _store_shares(shares):
236+            self.shares = shares
237+        d.addCallback(_store_shares)
238+        return d
239+
240     def test_configure_parameters(self):
241         self.basedir = self.mktemp()
242         hooks = {0: self._set_up_nodes_extra_config}
243hunk ./src/allmydata/test/test_upload.py 784
244         d.addCallback(_check)
245         return d
246 
247+    def _setUp(self, ns):
248+        # Used by test_happy_semantics and test_prexisting_share_behavior
249+        # to set up the grid.
250+        self.node = FakeClient(mode="good", num_servers=ns)
251+        self.u = upload.Uploader()
252+        self.u.running = True
253+        self.u.parent = self.node
254+
255+    def test_happy_semantics(self):
256+        self._setUp(2)
257+        DATA = upload.Data("kittens" * 10000, convergence="")
258+        # These parameters are unsatisfiable with the client that we've made
258+        # -- we'll use them to test that the semantics work correctly.
260+        self.set_encoding_parameters(k=3, happy=5, n=10)
261+        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
262+                            "shares could only be placed on 2 servers "
263+                            "(5 were requested)",
264+                            self.u.upload, DATA)
265+        # Let's reset the client to have 10 servers
266+        d.addCallback(lambda ign:
267+            self._setUp(10))
268+        # These parameters are satisfiable with the client we've made.
269+        d.addCallback(lambda ign:
270+            self.set_encoding_parameters(k=3, happy=5, n=10))
271+        # this should work
272+        d.addCallback(lambda ign:
273+            self.u.upload(DATA))
274+        # Let's reset the client to have 7 servers
275+        # (this is less than n, but more than h)
276+        d.addCallback(lambda ign:
277+            self._setUp(7))
278+        # These encoding parameters should still be satisfiable with our
279+        # client setup
280+        d.addCallback(lambda ign:
281+            self.set_encoding_parameters(k=3, happy=5, n=10))
282+        # This, then, should work.
283+        d.addCallback(lambda ign:
284+            self.u.upload(DATA))
285+        return d
286+
287+    def test_problem_layouts(self):
288+        self.basedir = self.mktemp()
289+        # This scenario is at
290+        # http://allmydata.org/trac/tahoe/ticket/778#comment:52
291+        #
292+        # The scenario in comment:52 proposes that we have a layout
293+        # like:
294+        # server 1: share 1
295+        # server 2: share 1
296+        # server 3: share 1
297+        # server 4: shares 2 - 10
298+        # To get access to the shares, we will first upload to one
299+        # server, which will then have shares 1 - 10. We'll then
300+        # add three new servers, configure them to not accept any new
301+        # shares, then write share 1 directly into the serverdir of each.
302+        # Then each of servers 1 - 3 will report that they have share 1,
303+        # and will not accept any new share, while server 4 will report that
304+        # it has shares 2 - 10 and will accept new shares.
305+        # We'll then set 'happy' = 4, and see that an upload fails
306+        # (as it should)
307+        d = self._setup_and_upload()
308+        d.addCallback(lambda ign:
309+            self._add_server_with_share(1, 0, True))
310+        d.addCallback(lambda ign:
311+            self._add_server_with_share(2, 0, True))
312+        d.addCallback(lambda ign:
313+            self._add_server_with_share(3, 0, True))
314+        # Remove the first share from server 0.
315+        def _remove_share_0():
316+            share_location = self.shares[0][2]
317+            os.remove(share_location)
318+        d.addCallback(lambda ign:
319+            _remove_share_0())
320+        # Set happy = 4 in the client.
321+        def _prepare():
322+            client = self.g.clients[0]
323+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
324+            return client
325+        d.addCallback(lambda ign:
326+            _prepare())
327+        # Uploading data should fail
328+        d.addCallback(lambda client:
329+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
330+                            "shares could only be placed on 1 servers "
331+                            "(4 were requested)",
332+                            client.upload, upload.Data("data" * 10000,
333+                                                       convergence="")))
334+
335+
336+        # This scenario is at
337+        # http://allmydata.org/trac/tahoe/ticket/778#comment:53
338+        #
339+        # Set up the grid to have one server
340+        def _change_basedir(ign):
341+            self.basedir = self.mktemp()
342+        d.addCallback(_change_basedir)
343+        d.addCallback(lambda ign:
344+            self._setup_and_upload())
345+        # We want to have a layout like this:
346+        # server 1: share 1
347+        # server 2: share 2
348+        # server 3: share 3
349+        # server 4: shares 1 - 10
350+        # (this is an expansion of Zooko's example because it is easier
351+        #  to code, but it will fail in the same way)
352+        # To start, we'll create a server with shares 1-10 of the data
353+        # we're about to upload.
354+        # Next, we'll add three new servers to our NoNetworkGrid. We'll add
355+        # one share from our initial upload to each of these.
356+        # The counterintuitive ordering of the share numbers is to deal with
357+        # the permuting of these servers -- distributing the shares this
358+        # way ensures that the Tahoe2PeerSelector sees them in the order
359+        # described above.
360+        d.addCallback(lambda ign:
361+            self._add_server_with_share(server_number=1, share_number=2))
362+        d.addCallback(lambda ign:
363+            self._add_server_with_share(server_number=2, share_number=0))
364+        d.addCallback(lambda ign:
365+            self._add_server_with_share(server_number=3, share_number=1))
366+        # So, we now have the following layout:
367+        # server 0: shares 1 - 10
368+        # server 1: share 0
369+        # server 2: share 1
370+        # server 3: share 2
371+        # We want to change the 'happy' parameter in the client to 4.
372+        # We then want to feed the upload process a list of peers that
373+        # server 0 is at the front of, so we trigger Zooko's scenario.
374+        # Ideally, a reupload of our original data should work.
375+        def _reset_encoding_parameters(ign):
376+            client = self.g.clients[0]
377+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
378+            return client
379+        d.addCallback(_reset_encoding_parameters)
380+        # We need this to get around the fact that the old Data
381+        # instance already has a happy parameter set.
382+        d.addCallback(lambda client:
383+            client.upload(upload.Data("data" * 10000, convergence="")))
384+        return d
385+
386+
387+    def test_dropped_servers_in_encoder(self):
388+        def _set_basedir(ign=None):
389+            self.basedir = self.mktemp()
390+        _set_basedir()
391+        d = self._setup_and_upload();
392+        # Add 5 servers, with one share each from the original
393+        # Add a readonly server
394+        def _do_server_setup(ign):
395+            self._add_server_with_share(1, 1, True)
396+            self._add_server_with_share(2)
397+            self._add_server_with_share(3)
398+            self._add_server_with_share(4)
399+            self._add_server_with_share(5)
400+        d.addCallback(_do_server_setup)
401+        # remove the original server
402+        # (necessary to ensure that the Tahoe2PeerSelector will distribute
403+        #  all the shares)
404+        def _remove_server(ign):
405+            server = self.g.servers_by_number[0]
406+            self.g.remove_server(server.my_nodeid)
407+        d.addCallback(_remove_server)
408+        # This should succeed.
409+        d.addCallback(lambda ign:
410+            self._do_upload_with_broken_servers(1))
411+        # Now, do the same thing over again, but drop 2 servers instead
412+        # of 1. This should fail.
413+        d.addCallback(_set_basedir)
414+        d.addCallback(lambda ign:
415+            self._setup_and_upload())
416+        d.addCallback(_do_server_setup)
417+        d.addCallback(_remove_server)
418+        d.addCallback(lambda ign:
419+            self.shouldFail(NotEnoughSharesError,
420+                            "test_dropped_server_in_encoder", "",
421+                            self._do_upload_with_broken_servers, 2))
422+        return d
423+
424+
425+    def test_servers_with_unique_shares(self):
426+        # servers_with_unique_shares expects a dict of
427+        # shnum => peerid as a preexisting shares argument.
428+        test1 = {
429+                 1 : "server1",
430+                 2 : "server2",
431+                 3 : "server3",
432+                 4 : "server4"
433+                }
434+        unique_servers = upload.servers_with_unique_shares(test1)
435+        self.failUnlessEqual(4, len(unique_servers))
436+        for server in ["server1", "server2", "server3", "server4"]:
437+            self.failUnlessIn(server, unique_servers)
438+        test1[4] = "server1"
439+        # Now there should only be 3 unique servers.
440+        unique_servers = upload.servers_with_unique_shares(test1)
441+        self.failUnlessEqual(3, len(unique_servers))
442+        for server in ["server1", "server2", "server3"]:
443+            self.failUnlessIn(server, unique_servers)
444+        # servers_with_unique_shares expects a set of PeerTracker
445+        # instances as a used_peers argument, but only uses the peerid
446+        # instance variable to assess uniqueness. So we feed it some fake
447+        # PeerTrackers whose only important characteristic is that they
448+        # have peerid set to something.
449+        class FakePeerTracker:
450+            pass
451+        trackers = []
452+        for server in ["server5", "server6", "server7", "server8"]:
453+            t = FakePeerTracker()
454+            t.peerid = server
455+            trackers.append(t)
456+        # Recall that there are 3 unique servers in test1. Since none of
457+        # those overlap with the ones in trackers, we should get 7 back
458+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
459+        self.failUnlessEqual(7, len(unique_servers))
460+        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
461+        expected_servers.remove("server4")
462+        for server in expected_servers:
463+            self.failUnlessIn(server, unique_servers)
464+        # Now add an overlapping server to trackers.
465+        t = FakePeerTracker()
466+        t.peerid = "server1"
467+        trackers.append(t)
468+        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
469+        self.failUnlessEqual(7, len(unique_servers))
470+        for server in expected_servers:
471+            self.failUnlessIn(server, unique_servers)
472+
473+
474     def _set_up_nodes_extra_config(self, clientdir):
475         cfgfn = os.path.join(clientdir, "tahoe.cfg")
476         oldcfg = open(cfgfn, "r").read()
477}
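
Among other things, the patch above pins down the observable behavior of upload.servers_with_unique_shares: it takes a dict mapping shnum to peerid for preexisting shares, optionally a set of tracker-like objects carrying a peerid attribute, and returns the set of distinct servers. A hedged sketch that satisfies exactly the assertions in test_servers_with_unique_shares (the real implementation in upload.py may be more involved):

    def servers_with_unique_shares(existing_shares, used_peers=None):
        # existing_shares maps shnum -> peerid; used_peers, if given, is
        # a set of trackers whose only relevant attribute is peerid.
        servers = set(existing_shares.values())
        if used_peers:
            servers.update(t.peerid for t in used_peers)
        return servers
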
478[Alter tests to use the new form of set_shareholders
479Kevan Carstensen <kevan@isnotajoke.com>**20091104033602
480 Ignore-this: 3deac11fc831618d11441317463ef830
481] {
482hunk ./src/allmydata/test/test_encode.py 301
483                      (NUM_SEGMENTS-1)*segsize, len(data), NUM_SEGMENTS*segsize)
484 
485             shareholders = {}
486+            servermap = {}
487             for shnum in range(NUM_SHARES):
488                 peer = FakeBucketReaderWriterProxy()
489                 shareholders[shnum] = peer
490hunk ./src/allmydata/test/test_encode.py 305
491+                servermap[shnum] = str(shnum)
492                 all_shareholders.append(peer)
493hunk ./src/allmydata/test/test_encode.py 307
494-            e.set_shareholders(shareholders)
495+            e.set_shareholders(shareholders, servermap)
496             return e.start()
497         d.addCallback(_ready)
498 
499merger 0.0 (
500hunk ./src/allmydata/test/test_encode.py 462
501-            all_peers = []
502hunk ./src/allmydata/test/test_encode.py 463
503+            servermap = {}
504)
505hunk ./src/allmydata/test/test_encode.py 467
506                 mode = bucket_modes.get(shnum, "good")
507                 peer = FakeBucketReaderWriterProxy(mode)
508                 shareholders[shnum] = peer
509-            e.set_shareholders(shareholders)
510+                servermap[shnum] = str(shnum)
511+            e.set_shareholders(shareholders, servermap)
512             return e.start()
513         d.addCallback(_ready)
514         def _sent(res):
515hunk ./src/allmydata/test/test_upload.py 711
516                 for share in server.buckets.keys():
517                     server.buckets[share].abort()
518             buckets = {}
519+            servermap = already_peers.copy()
520             for peer in used_peers:
521                 buckets.update(peer.buckets)
522hunk ./src/allmydata/test/test_upload.py 714
523-            encoder.set_shareholders(buckets)
524+                for bucket in peer.buckets:
525+                    servermap[bucket] = peer.peerid
526+            encoder.set_shareholders(buckets, servermap)
527             d = encoder.start()
528             return d
529         d.addCallback(_have_shareholders)
530hunk ./src/allmydata/test/test_upload.py 933
531         _set_basedir()
532         d = self._setup_and_upload();
533         # Add 5 servers, with one share each from the original
534-        # Add a readonly server
535         def _do_server_setup(ign):
536             self._add_server_with_share(1, 1, True)
537             self._add_server_with_share(2)
538}
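
The shape of the change: the encoder now needs to know which server holds each share, so set_shareholders() takes a shnum-to-peerid servermap alongside the shnum-to-bucket dict. Pulled out of the hunks above as a standalone sketch of the calling convention:

    # Build the two maps the encoder now expects.
    buckets = {}
    servermap = already_peers.copy()      # shares already on the grid
    for peer in used_peers:
        buckets.update(peer.buckets)
        for shnum in peer.buckets:
            servermap[shnum] = peer.peerid
    encoder.set_shareholders(buckets, servermap)
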
539[Minor tweak to an existing test -- make the first server read-write, instead of read-only
540Kevan Carstensen <kevan@isnotajoke.com>**20091104034232
541 Ignore-this: a951a46c93f7f58dd44d93d8623b2aee
542] hunk ./src/allmydata/test/test_upload.py 934
543         d = self._setup_and_upload();
544         # Add 5 servers, with one share each from the original
545         def _do_server_setup(ign):
546-            self._add_server_with_share(1, 1, True)
547+            self._add_server_with_share(1, 1)
548             self._add_server_with_share(2)
549             self._add_server_with_share(3)
550             self._add_server_with_share(4)
551[Add a test for upload.shares_by_server
552Kevan Carstensen <kevan@isnotajoke.com>**20091104111324
553 Ignore-this: f9802e82d6982a93e00f92e0b276f018
554] hunk ./src/allmydata/test/test_upload.py 1013
555             self.failUnlessIn(server, unique_servers)
556 
557 
558+    def test_shares_by_server(self):
559+        test = {
560+                    1 : "server1",
561+                    2 : "server2",
562+                    3 : "server3",
563+                    4 : "server4"
564+               }
565+        shares_by_server = upload.shares_by_server(test)
566+        self.failUnlessEqual(set([1]), shares_by_server["server1"])
567+        self.failUnlessEqual(set([2]), shares_by_server["server2"])
568+        self.failUnlessEqual(set([3]), shares_by_server["server3"])
569+        self.failUnlessEqual(set([4]), shares_by_server["server4"])
570+        test1 = {
571+                    1 : "server1",
572+                    2 : "server1",
573+                    3 : "server1",
574+                    4 : "server2",
575+                    5 : "server2"
576+                }
577+        shares_by_server = upload.shares_by_server(test1)
578+        self.failUnlessEqual(set([1, 2, 3]), shares_by_server["server1"])
579+        self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
580+
581+
582     def _set_up_nodes_extra_config(self, clientdir):
583         cfgfn = os.path.join(clientdir, "tahoe.cfg")
584         oldcfg = open(cfgfn, "r").read()
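
The new test fixes the contract of upload.shares_by_server: invert a shnum-to-peerid dict into a peerid-to-set-of-shnums dict. A sketch consistent with the assertions above (the actual implementation may differ):

    def shares_by_server(existing_shares):
        # Invert {shnum: peerid} into {peerid: set of shnums}.
        servers = {}
        for shnum, peerid in existing_shares.items():
            servers.setdefault(peerid, set()).add(shnum)
        return servers
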
585[Add more tests for comment:53 in ticket #778
586Kevan Carstensen <kevan@isnotajoke.com>**20091104112849
587 Ignore-this: 3bb2edd299a944cc9586e14d5d83ec8c
588] {
589hunk ./src/allmydata/test/test_upload.py 722
590         d.addCallback(_have_shareholders)
591         return d
592 
593-    def _add_server_with_share(self, server_number, share_number=None,
594-                               readonly=False):
595+    def _add_server(self, server_number, readonly=False):
596         assert self.g, "I tried to find a grid at self.g, but failed"
597         assert self.shares, "I tried to find shares at self.shares, but failed"
598         ss = self.g.make_server(server_number, readonly)
599hunk ./src/allmydata/test/test_upload.py 727
600         self.g.add_server(server_number, ss)
601+
602+    def _add_server_with_share(self, server_number, share_number=None,
603+                               readonly=False):
604+        self._add_server(server_number, readonly)
605         if share_number:
606hunk ./src/allmydata/test/test_upload.py 732
607-            # Copy share i from the directory associated with the first
608-            # storage server to the directory associated with this one.
609-            old_share_location = self.shares[share_number][2]
610-            new_share_location = os.path.join(ss.storedir, "shares")
611-            si = uri.from_string(self.uri).get_storage_index()
612-            new_share_location = os.path.join(new_share_location,
613-                                              storage_index_to_dir(si))
614-            if not os.path.exists(new_share_location):
615-                os.makedirs(new_share_location)
616-            new_share_location = os.path.join(new_share_location,
617-                                              str(share_number))
618-            shutil.copy(old_share_location, new_share_location)
619-            shares = self.find_shares(self.uri)
620-            # Make sure that the storage server has the share.
621-            self.failUnless((share_number, ss.my_nodeid, new_share_location)
622-                            in shares)
623+            self._copy_share_to_server(share_number, server_number)
624+
625+    def _copy_share_to_server(self, share_number, server_number):
626+        ss = self.g.servers_by_number[server_number]
627+        # Copy share i from the directory associated with the first
628+        # storage server to the directory associated with this one.
629+        assert self.g, "I tried to find a grid at self.g, but failed"
630+        assert self.shares, "I tried to find shares at self.shares, but failed"
631+        old_share_location = self.shares[share_number][2]
632+        new_share_location = os.path.join(ss.storedir, "shares")
633+        si = uri.from_string(self.uri).get_storage_index()
634+        new_share_location = os.path.join(new_share_location,
635+                                          storage_index_to_dir(si))
636+        if not os.path.exists(new_share_location):
637+            os.makedirs(new_share_location)
638+        new_share_location = os.path.join(new_share_location,
639+                                          str(share_number))
640+        shutil.copy(old_share_location, new_share_location)
641+        shares = self.find_shares(self.uri)
642+        # Make sure that the storage server has the share.
643+        self.failUnless((share_number, ss.my_nodeid, new_share_location)
644+                        in shares)
645+
646 
647     def _setup_and_upload(self):
648         """
649hunk ./src/allmydata/test/test_upload.py 917
650         d.addCallback(lambda ign:
651             self._add_server_with_share(server_number=3, share_number=1))
652         # So, we now have the following layout:
653-        # server 0: shares 1 - 10
654+        # server 0: shares 0 - 9
655         # server 1: share 0
656         # server 2: share 1
657         # server 3: share 2
658hunk ./src/allmydata/test/test_upload.py 934
659         # instance already has a happy parameter set.
660         d.addCallback(lambda client:
661             client.upload(upload.Data("data" * 10000, convergence="")))
662+
663+
664+        # This scenario is basically comment:53, but with the order reversed;
665+        # this means that the Tahoe2PeerSelector sees
666+        # server 0: shares 1-10
667+        # server 1: share 1
668+        # server 2: share 2
669+        # server 3: share 3
670+        d.addCallback(_change_basedir)
671+        d.addCallback(lambda ign:
672+            self._setup_and_upload())
673+        d.addCallback(lambda ign:
674+            self._add_server_with_share(server_number=2, share_number=0))
675+        d.addCallback(lambda ign:
676+            self._add_server_with_share(server_number=3, share_number=1))
677+        d.addCallback(lambda ign:
678+            self._add_server_with_share(server_number=1, share_number=2))
679+        # Copy all of the other shares to server number 2
680+        def _copy_shares(ign):
681+            for i in xrange(1, 10):
682+                self._copy_share_to_server(i, 2)
683+        d.addCallback(_copy_shares)
684+        # Remove the first server, and add a placeholder with share 0
685+        d.addCallback(lambda ign:
686+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
687+        d.addCallback(lambda ign:
688+            self._add_server_with_share(server_number=0, share_number=0))
689+        # Now try uploading.
690+        d.addCallback(_reset_encoding_parameters)
691+        d.addCallback(lambda client:
692+            client.upload(upload.Data("data" * 10000, convergence="")))
693+        # Try the same thing, but with empty servers after the first one
694+        # We want to make sure that Tahoe2PeerSelector will redistribute
695+        # shares as necessary, not simply discover an existing layout.
696+        d.addCallback(_change_basedir)
697+        d.addCallback(lambda ign:
698+            self._setup_and_upload())
699+        d.addCallback(lambda ign:
700+            self._add_server(server_number=2))
701+        d.addCallback(lambda ign:
702+            self._add_server(server_number=3))
703+        d.addCallback(lambda ign:
704+            self._add_server(server_number=1))
705+        d.addCallback(_copy_shares)
706+        d.addCallback(lambda ign:
707+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
708+        d.addCallback(lambda ign:
709+            self._add_server(server_number=0))
710+        d.addCallback(_reset_encoding_parameters)
711+        d.addCallback(lambda client:
712+            client.upload(upload.Data("data" * 10000, convergence="")))
713+        # Try the following layout
714+        # server 0: shares 1-10
715+        # server 1: share 1, read-only
716+        # server 2: share 2, read-only
717+        # server 3: share 3, read-only
718+        d.addCallback(_change_basedir)
719+        d.addCallback(lambda ign:
720+            self._setup_and_upload())
721+        d.addCallback(lambda ign:
722+            self._add_server_with_share(server_number=2, share_number=0))
723+        d.addCallback(lambda ign:
724+            self._add_server_with_share(server_number=3, share_number=1,
725+                                        readonly=True))
726+        d.addCallback(lambda ign:
727+            self._add_server_with_share(server_number=1, share_number=2,
728+                                        readonly=True))
729+        # Copy all of the other shares to server number 2
730+        d.addCallback(_copy_shares)
731+        # Remove server 0, and add another in its place
732+        d.addCallback(lambda ign:
733+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
734+        d.addCallback(lambda ign:
735+            self._add_server_with_share(server_number=0, share_number=0,
736+                                        readonly=True))
737+        d.addCallback(_reset_encoding_parameters)
738+        d.addCallback(lambda client:
739+            client.upload(upload.Data("data" * 10000, convergence="")))
740         return d
741 
742 
743}
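
All of the layout tests in this patch share one knob: they raise the client's servers-of-happiness threshold before re-uploading. The recurring pattern, extracted from the callbacks above:

    # Raise the happiness threshold on the NoNetworkGrid client, then
    # re-upload a fresh Data instance (a fresh one is needed because the
    # old instance already has a happy parameter cached on it).
    client = self.g.clients[0]
    client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
    d.addCallback(lambda ign:
        client.upload(upload.Data("data" * 10000, convergence="")))
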
744[Test Tahoe2PeerSelector to make sure that it recognizes existing shares on readonly servers
745Kevan Carstensen <kevan@isnotajoke.com>**20091109003735
746 Ignore-this: 12f9b4cff5752fca7ed32a6ebcff6446
747] hunk ./src/allmydata/test/test_upload.py 1125
748         self.failUnlessEqual(set([4, 5]), shares_by_server["server2"])
749 
750 
751+    def test_existing_share_detection(self):
752+        self.basedir = self.mktemp()
753+        d = self._setup_and_upload()
754+        # Our final setup should look like this:
755+        # server 1: shares 1 - 10, read-only
756+        # server 2: empty
757+        # server 3: empty
758+        # server 4: empty
759+        # The purpose of this test is to make sure that the peer selector
760+        # knows about the shares on server 1, even though it is read-only.
761+        # It used to simply filter these out, which would cause the test
762+        # to fail when servers_of_happiness = 4.
763+        d.addCallback(lambda ign:
764+            self._add_server_with_share(1, 0, True))
765+        d.addCallback(lambda ign:
766+            self._add_server_with_share(2))
767+        d.addCallback(lambda ign:
768+            self._add_server_with_share(3))
769+        d.addCallback(lambda ign:
770+            self._add_server_with_share(4))
771+        def _copy_shares(ign):
772+            for i in xrange(1, 10):
773+                self._copy_share_to_server(i, 1)
774+        d.addCallback(_copy_shares)
775+        d.addCallback(lambda ign:
776+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
777+        def _prepare_client(ign):
778+            client = self.g.clients[0]
779+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
780+            return client
781+        d.addCallback(_prepare_client)
782+        d.addCallback(lambda client:
783+            client.upload(upload.Data("data" * 10000, convergence="")))
784+        return d
785+
786+
787     def _set_up_nodes_extra_config(self, clientdir):
788         cfgfn = os.path.join(clientdir, "tahoe.cfg")
789         oldcfg = open(cfgfn, "r").read()
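
The point of test_existing_share_detection is that a read-only server can still contribute to happiness: it cannot accept new shares, but shares it already holds must be counted rather than filtered out. In illustrative pseudo-code (these method names are not the actual Tahoe2PeerSelector API):

    # Illustrative only: survey read-only peers for existing shares
    # instead of dropping them from consideration entirely.
    for peer in readonly_peers:
        for shnum in shares_already_on(peer):    # hypothetical helper
            already_peers[shnum] = peer.peerid   # counts toward happiness
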
790[Re-work 'test_upload.py' to be more readable; add more tests for #778
791Kevan Carstensen <kevan@isnotajoke.com>**20091116192334
792 Ignore-this: 7e8565f92fe51dece5ae28daf442d659
793] {
794hunk ./src/allmydata/test/test_upload.py 722
795         d.addCallback(_have_shareholders)
796         return d
797 
798+
799     def _add_server(self, server_number, readonly=False):
800         assert self.g, "I tried to find a grid at self.g, but failed"
801         assert self.shares, "I tried to find shares at self.shares, but failed"
802hunk ./src/allmydata/test/test_upload.py 729
803         ss = self.g.make_server(server_number, readonly)
804         self.g.add_server(server_number, ss)
805 
806+
807     def _add_server_with_share(self, server_number, share_number=None,
808                                readonly=False):
809         self._add_server(server_number, readonly)
810hunk ./src/allmydata/test/test_upload.py 733
811-        if share_number:
812+        if share_number is not None:
813             self._copy_share_to_server(share_number, server_number)
814 
815hunk ./src/allmydata/test/test_upload.py 736
816+
817     def _copy_share_to_server(self, share_number, server_number):
818         ss = self.g.servers_by_number[server_number]
819         # Copy share i from the directory associated with the first
820hunk ./src/allmydata/test/test_upload.py 752
821             os.makedirs(new_share_location)
822         new_share_location = os.path.join(new_share_location,
823                                           str(share_number))
824-        shutil.copy(old_share_location, new_share_location)
825+        if old_share_location != new_share_location:
826+            shutil.copy(old_share_location, new_share_location)
827         shares = self.find_shares(self.uri)
828         # Make sure that the storage server has the share.
829         self.failUnless((share_number, ss.my_nodeid, new_share_location)
830hunk ./src/allmydata/test/test_upload.py 782
831         d.addCallback(_store_shares)
832         return d
833 
834+
835     def test_configure_parameters(self):
836         self.basedir = self.mktemp()
837         hooks = {0: self._set_up_nodes_extra_config}
838hunk ./src/allmydata/test/test_upload.py 802
839         d.addCallback(_check)
840         return d
841 
842+
843     def _setUp(self, ns):
844         # Used by test_happy_semantics and test_prexisting_share_behavior
845         # to set up the grid.
846hunk ./src/allmydata/test/test_upload.py 811
847         self.u.running = True
848         self.u.parent = self.node
849 
850+
851     def test_happy_semantics(self):
852         self._setUp(2)
853         DATA = upload.Data("kittens" * 10000, convergence="")
854hunk ./src/allmydata/test/test_upload.py 844
855             self.u.upload(DATA))
856         return d
857 
858-    def test_problem_layouts(self):
859-        self.basedir = self.mktemp()
860+
861+    def test_problem_layout_comment_52(self):
862+        def _basedir():
863+            self.basedir = self.mktemp()
864+        _basedir()
865         # This scenario is at
866         # http://allmydata.org/trac/tahoe/ticket/778#comment:52
867         #
868hunk ./src/allmydata/test/test_upload.py 890
869         # Uploading data should fail
870         d.addCallback(lambda client:
871             self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
872-                            "shares could only be placed on 1 servers "
873+                            "shares could only be placed on 2 servers "
874                             "(4 were requested)",
875                             client.upload, upload.Data("data" * 10000,
876                                                        convergence="")))
877hunk ./src/allmydata/test/test_upload.py 895
878 
879+        # Do comment:52, but like this:
880+        # server 2: empty
881+        # server 3: share 0, read-only
882+        # server 1: share 0, read-only
883+        # server 0: shares 0-9
884+        d.addCallback(lambda ign:
885+            _basedir())
886+        d.addCallback(lambda ign:
887+            self._setup_and_upload())
888+        d.addCallback(lambda ign:
889+            self._add_server_with_share(server_number=2))
890+        d.addCallback(lambda ign:
891+            self._add_server_with_share(server_number=3, share_number=0,
892+                                        readonly=True))
893+        d.addCallback(lambda ign:
894+            self._add_server_with_share(server_number=1, share_number=0,
895+                                        readonly=True))
896+        def _prepare2():
897+            client = self.g.clients[0]
898+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
899+            return client
900+        d.addCallback(lambda ign:
901+            _prepare2())
902+        d.addCallback(lambda client:
903+            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
904+                            "shares could only be placed on 2 servers "
905+                            "(3 were requested)",
906+                            client.upload, upload.Data("data" * 10000,
907+                                                       convergence="")))
908+        return d
909+
910 
911hunk ./src/allmydata/test/test_upload.py 927
912+    def test_problem_layout_comment_53(self):
913         # This scenario is at
914         # http://allmydata.org/trac/tahoe/ticket/778#comment:53
915         #
916hunk ./src/allmydata/test/test_upload.py 934
917         # Set up the grid to have one server
918         def _change_basedir(ign):
919             self.basedir = self.mktemp()
920-        d.addCallback(_change_basedir)
921-        d.addCallback(lambda ign:
922-            self._setup_and_upload())
923-        # We want to have a layout like this:
924-        # server 1: share 1
925-        # server 2: share 2
926-        # server 3: share 3
927-        # server 4: shares 1 - 10
928-        # (this is an expansion of Zooko's example because it is easier
929-        #  to code, but it will fail in the same way)
930-        # To start, we'll create a server with shares 1-10 of the data
931-        # we're about to upload.
932+        _change_basedir(None)
933+        d = self._setup_and_upload()
934+        # We start by uploading all of the shares to one server (which has
935+        # already been done above).
936         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
937         # one share from our initial upload to each of these.
938         # The counterintuitive ordering of the share numbers is to deal with
939hunk ./src/allmydata/test/test_upload.py 952
940             self._add_server_with_share(server_number=3, share_number=1))
941         # So, we now have the following layout:
942         # server 0: shares 0 - 9
943-        # server 1: share 0
944-        # server 2: share 1
945-        # server 3: share 2
946+        # server 1: share 2
947+        # server 2: share 0
948+        # server 3: share 1
949         # We want to change the 'happy' parameter in the client to 4.
950hunk ./src/allmydata/test/test_upload.py 956
951-        # We then want to feed the upload process a list of peers that
952-        # server 0 is at the front of, so we trigger Zooko's scenario.
953+        # The Tahoe2PeerSelector will see the peers permuted as:
954+        # 2, 3, 1, 0
955         # Ideally, a reupload of our original data should work.
956hunk ./src/allmydata/test/test_upload.py 959
957-        def _reset_encoding_parameters(ign):
958+        def _reset_encoding_parameters(ign, happy=4):
959             client = self.g.clients[0]
960hunk ./src/allmydata/test/test_upload.py 961
961-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
962+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
963             return client
964         d.addCallback(_reset_encoding_parameters)
965hunk ./src/allmydata/test/test_upload.py 964
966-        # We need this to get around the fact that the old Data
967-        # instance already has a happy parameter set.
968         d.addCallback(lambda client:
969             client.upload(upload.Data("data" * 10000, convergence="")))
970 
971hunk ./src/allmydata/test/test_upload.py 970
972 
973         # This scenario is basically comment:53, but with the order reversed;
974         # this means that the Tahoe2PeerSelector sees
975-        # server 0: shares 1-10
976-        # server 1: share 1
977-        # server 2: share 2
978-        # server 3: share 3
979+        # server 2: shares 1-10
980+        # server 3: share 1
981+        # server 1: share 2
982+        # server 4: share 3
983         d.addCallback(_change_basedir)
984         d.addCallback(lambda ign:
985             self._setup_and_upload())
986hunk ./src/allmydata/test/test_upload.py 992
987         d.addCallback(lambda ign:
988             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
989         d.addCallback(lambda ign:
990-            self._add_server_with_share(server_number=0, share_number=0))
991+            self._add_server_with_share(server_number=4, share_number=0))
992         # Now try uploading.
993         d.addCallback(_reset_encoding_parameters)
994         d.addCallback(lambda client:
995hunk ./src/allmydata/test/test_upload.py 1013
996         d.addCallback(lambda ign:
997             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
998         d.addCallback(lambda ign:
999-            self._add_server(server_number=0))
1000+            self._add_server(server_number=4))
1001         d.addCallback(_reset_encoding_parameters)
1002         d.addCallback(lambda client:
1003             client.upload(upload.Data("data" * 10000, convergence="")))
1004hunk ./src/allmydata/test/test_upload.py 1017
1005+        return d
1006+
1007+
1008+    def test_happiness_with_some_readonly_peers(self):
1009         # Try the following layout
1010hunk ./src/allmydata/test/test_upload.py 1022
1011-        # server 0: shares 1-10
1012-        # server 1: share 1, read-only
1013-        # server 2: share 2, read-only
1014-        # server 3: share 3, read-only
1015-        d.addCallback(_change_basedir)
1016-        d.addCallback(lambda ign:
1017-            self._setup_and_upload())
1018+        # server 2: shares 0-9
1019+        # server 4: share 0, read-only
1020+        # server 3: share 1, read-only
1021+        # server 1: share 2, read-only
1022+        self.basedir = self.mktemp()
1023+        d = self._setup_and_upload()
1024         d.addCallback(lambda ign:
1025             self._add_server_with_share(server_number=2, share_number=0))
1026         d.addCallback(lambda ign:
1027hunk ./src/allmydata/test/test_upload.py 1037
1028             self._add_server_with_share(server_number=1, share_number=2,
1029                                         readonly=True))
1030         # Copy all of the other shares to server number 2
1031+        def _copy_shares(ign):
1032+            for i in xrange(1, 10):
1033+                self._copy_share_to_server(i, 2)
1034         d.addCallback(_copy_shares)
1035         # Remove server 0, and add another in its place
1036         d.addCallback(lambda ign:
1037hunk ./src/allmydata/test/test_upload.py 1045
1038             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1039         d.addCallback(lambda ign:
1040-            self._add_server_with_share(server_number=0, share_number=0,
1041+            self._add_server_with_share(server_number=4, share_number=0,
1042                                         readonly=True))
1043hunk ./src/allmydata/test/test_upload.py 1047
1044+        def _reset_encoding_parameters(ign, happy=4):
1045+            client = self.g.clients[0]
1046+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
1047+            return client
1048+        d.addCallback(_reset_encoding_parameters)
1049+        d.addCallback(lambda client:
1050+            client.upload(upload.Data("data" * 10000, convergence="")))
1051+        return d
1052+
1053+
1054+    def test_happiness_with_all_readonly_peers(self):
1055+        # server 3: share 1, read-only
1056+        # server 1: share 2, read-only
1057+        # server 2: shares 0-9, read-only
1058+        # server 4: share 0, read-only
1059+        # The idea with this test is to make sure that the survey of
1060+        # read-only peers doesn't undercount servers of happiness
1061+        self.basedir = self.mktemp()
1062+        d = self._setup_and_upload()
1063+        d.addCallback(lambda ign:
1064+            self._add_server_with_share(server_number=4, share_number=0,
1065+                                        readonly=True))
1066+        d.addCallback(lambda ign:
1067+            self._add_server_with_share(server_number=3, share_number=1,
1068+                                        readonly=True))
1069+        d.addCallback(lambda ign:
1070+            self._add_server_with_share(server_number=1, share_number=2,
1071+                                        readonly=True))
1072+        d.addCallback(lambda ign:
1073+            self._add_server_with_share(server_number=2, share_number=0,
1074+                                        readonly=True))
1075+        def _copy_shares(ign):
1076+            for i in xrange(1, 10):
1077+                self._copy_share_to_server(i, 2)
1078+        d.addCallback(_copy_shares)
1079+        d.addCallback(lambda ign:
1080+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1081+        def _reset_encoding_parameters(ign, happy=4):
1082+            client = self.g.clients[0]
1083+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
1084+            return client
1085         d.addCallback(_reset_encoding_parameters)
1086         d.addCallback(lambda client:
1087             client.upload(upload.Data("data" * 10000, convergence="")))
1088hunk ./src/allmydata/test/test_upload.py 1099
1089             self.basedir = self.mktemp()
1090         _set_basedir()
1091         d = self._setup_and_upload();
1092-        # Add 5 servers, with one share each from the original
1093+        # Add 5 servers
1094         def _do_server_setup(ign):
1095hunk ./src/allmydata/test/test_upload.py 1101
1096-            self._add_server_with_share(1, 1)
1097+            self._add_server_with_share(1)
1098             self._add_server_with_share(2)
1099             self._add_server_with_share(3)
1100             self._add_server_with_share(4)
1101hunk ./src/allmydata/test/test_upload.py 1126
1102         d.addCallback(_remove_server)
1103         d.addCallback(lambda ign:
1104             self.shouldFail(NotEnoughSharesError,
1105-                            "test_dropped_server_in_encoder", "",
1106+                            "test_dropped_servers_in_encoder",
1107+                            "lost too many servers during upload "
1108+                            "(still have 3, want 4)",
1109+                            self._do_upload_with_broken_servers, 2))
1110+        # Now do the same thing over again, but make some of the servers
1111+        # readonly, break some of the ones that aren't, and make sure that
1112+        # happiness accounting is preserved.
1113+        d.addCallback(_set_basedir)
1114+        d.addCallback(lambda ign:
1115+            self._setup_and_upload())
1116+        def _do_server_setup_2(ign):
1117+            self._add_server_with_share(1)
1118+            self._add_server_with_share(2)
1119+            self._add_server_with_share(3)
1120+            self._add_server_with_share(4, 7, readonly=True)
1121+            self._add_server_with_share(5, 8, readonly=True)
1122+        d.addCallback(_do_server_setup_2)
1123+        d.addCallback(_remove_server)
1124+        d.addCallback(lambda ign:
1125+            self._do_upload_with_broken_servers(1))
1126+        d.addCallback(_set_basedir)
1127+        d.addCallback(lambda ign:
1128+            self._setup_and_upload())
1129+        d.addCallback(_do_server_setup_2)
1130+        d.addCallback(_remove_server)
1131+        d.addCallback(lambda ign:
1132+            self.shouldFail(NotEnoughSharesError,
1133+                            "test_dropped_servers_in_encoder",
1134+                            "lost too many servers during upload "
1135+                            "(still have 3, want 4)",
1136                             self._do_upload_with_broken_servers, 2))
1137         return d
1138 
1139hunk ./src/allmydata/test/test_upload.py 1179
1140         self.failUnlessEqual(3, len(unique_servers))
1141         for server in ["server1", "server2", "server3"]:
1142             self.failUnlessIn(server, unique_servers)
1143-        # servers_with_unique_shares expects a set of PeerTracker
1144-        # instances as a used_peers argument, but only uses the peerid
1145-        # instance variable to assess uniqueness. So we feed it some fake
1146-        # PeerTrackers whose only important characteristic is that they
1147-        # have peerid set to something.
1148+        # servers_with_unique_shares expects to receive some object with
1149+        # a peerid attribute. So we make a FakePeerTracker whose only
1150+        # job is to have a peerid attribute.
1151         class FakePeerTracker:
1152             pass
1153         trackers = []
1154hunk ./src/allmydata/test/test_upload.py 1185
1155-        for server in ["server5", "server6", "server7", "server8"]:
1156+        for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
1157             t = FakePeerTracker()
1158             t.peerid = server
1159hunk ./src/allmydata/test/test_upload.py 1188
1160+            t.buckets = [i]
1161             trackers.append(t)
1162         # Recall that there are 3 unique servers in test1. Since none of
1163         # those overlap with the ones in trackers, we should get 7 back
1164hunk ./src/allmydata/test/test_upload.py 1201
1165         # Now add an overlapping server to trackers.
1166         t = FakePeerTracker()
1167         t.peerid = "server1"
1168+        t.buckets = [1]
1169         trackers.append(t)
1170         unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
1171         self.failUnlessEqual(7, len(unique_servers))
1172hunk ./src/allmydata/test/test_upload.py 1207
1173         for server in expected_servers:
1174             self.failUnlessIn(server, unique_servers)
1175+        test = {}
1176+        unique_servers = upload.servers_with_unique_shares(test)
1177+        self.failUnlessEqual(0, len(unique_servers))
1178 
1179 
1180     def test_shares_by_server(self):
1181hunk ./src/allmydata/test/test_upload.py 1213
1182-        test = {
1183-                    1 : "server1",
1184-                    2 : "server2",
1185-                    3 : "server3",
1186-                    4 : "server4"
1187-               }
1188+        test = dict([(i, "server%d" % i) for i in xrange(1, 5)])
1189         shares_by_server = upload.shares_by_server(test)
1190         self.failUnlessEqual(set([1]), shares_by_server["server1"])
1191         self.failUnlessEqual(set([2]), shares_by_server["server2"])
1192hunk ./src/allmydata/test/test_upload.py 1267
1193         return d
1194 
1195 
1196+    def test_should_add_server(self):
1197+        shares = dict([(i, "server%d" % i) for i in xrange(10)])
1198+        self.failIf(upload.should_add_server(shares, "server1", 4))
1199+        shares[4] = "server1"
1200+        self.failUnless(upload.should_add_server(shares, "server4", 4))
1201+        shares = {}
1202+        self.failUnless(upload.should_add_server(shares, "server1", 1))
1203+
1204+
1205     def _set_up_nodes_extra_config(self, clientdir):
1206         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1207         oldcfg = open(cfgfn, "r").read()
1208}
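
For review convenience: the shares_by_server assertions added above pin down
a simple inversion of the shnum => peerid mapping. A minimal sketch that
satisfies them (the real upload.shares_by_server may differ in detail):

    def shares_by_server(existing_shares):
        # Invert a {shnum: peerid} dict into {peerid: set(shnums)}.
        servers = {}
        for shnum, peerid in existing_shares.iteritems():
            servers.setdefault(peerid, set()).add(shnum)
        return servers

    # e.g. shares_by_server({1: "server1", 2: "server2"})
    #      == {"server1": set([1]), "server2": set([2])}
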
1209[Add tests for the behavior described in #834.
1210Kevan Carstensen <kevan@isnotajoke.com>**20091123012008
1211 Ignore-this: d8e0aa0f3f7965ce9b5cea843c6d6f9f
1212] {
1213hunk ./src/allmydata/test/test_encode.py 12
1214 from allmydata.util.assertutil import _assert
1215 from allmydata.util.consumer import MemoryConsumer
1216 from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
1217-     NotEnoughSharesError, IStorageBroker
1218+     NotEnoughSharesError, IStorageBroker, UploadHappinessError
1219 from allmydata.monitor import Monitor
1220 import common_util as testutil
1221 
1222hunk ./src/allmydata/test/test_encode.py 794
1223         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1224         def _done(res):
1225             self.failUnless(isinstance(res, Failure))
1226-            self.failUnless(res.check(NotEnoughSharesError), res)
1227+            self.failUnless(res.check(UploadHappinessError), res)
1228         d.addBoth(_done)
1229         return d
1230 
1231hunk ./src/allmydata/test/test_encode.py 805
1232         d = self.send_and_recover((4,8,10), bucket_modes=modemap)
1233         def _done(res):
1234             self.failUnless(isinstance(res, Failure))
1235-            self.failUnless(res.check(NotEnoughSharesError))
1236+            self.failUnless(res.check(UploadHappinessError))
1237         d.addBoth(_done)
1238         return d
1239hunk ./src/allmydata/test/test_upload.py 13
1240 import allmydata # for __full_version__
1241 from allmydata import uri, monitor, client
1242 from allmydata.immutable import upload, encode
1243-from allmydata.interfaces import FileTooLargeError, NoSharesError, \
1244-     NotEnoughSharesError
1245+from allmydata.interfaces import FileTooLargeError, UploadHappinessError
1246 from allmydata.util.assertutil import precondition
1247 from allmydata.util.deferredutil import DeferredListShouldSucceed
1248 from no_network import GridTestMixin
1249hunk ./src/allmydata/test/test_upload.py 402
1250 
1251     def test_first_error_all(self):
1252         self.make_node("first-fail")
1253-        d = self.shouldFail(NoSharesError, "first_error_all",
1254+        d = self.shouldFail(UploadHappinessError, "first_error_all",
1255                             "peer selection failed",
1256                             upload_data, self.u, DATA)
1257         def _check((f,)):
1258hunk ./src/allmydata/test/test_upload.py 434
1259 
1260     def test_second_error_all(self):
1261         self.make_node("second-fail")
1262-        d = self.shouldFail(NotEnoughSharesError, "second_error_all",
1263+        d = self.shouldFail(UploadHappinessError, "second_error_all",
1264                             "peer selection failed",
1265                             upload_data, self.u, DATA)
1266         def _check((f,)):
1267hunk ./src/allmydata/test/test_upload.py 452
1268         self.u.parent = self.node
1269 
1270     def _should_fail(self, f):
1271-        self.failUnless(isinstance(f, Failure) and f.check(NoSharesError), f)
1272+        self.failUnless(isinstance(f, Failure) and f.check(UploadHappinessError), f)
1273 
1274     def test_data_large(self):
1275         data = DATA
1276hunk ./src/allmydata/test/test_upload.py 817
1277         # These parameters are unsatisfiable with the client that we've made
1278         # -- we'll use them to test that the semantics work correctly.
1279         self.set_encoding_parameters(k=3, happy=5, n=10)
1280-        d = self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1281+        d = self.shouldFail(UploadHappinessError, "test_happy_semantics",
1282                             "shares could only be placed on 2 servers "
1283                             "(5 were requested)",
1284                             self.u.upload, DATA)
1285hunk ./src/allmydata/test/test_upload.py 888
1286             _prepare())
1287         # Uploading data should fail
1288         d.addCallback(lambda client:
1289-            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1290+            self.shouldFail(UploadHappinessError, "test_happy_semantics",
1291                             "shares could only be placed on 2 servers "
1292                             "(4 were requested)",
1293                             client.upload, upload.Data("data" * 10000,
1294hunk ./src/allmydata/test/test_upload.py 918
1295         d.addCallback(lambda ign:
1296             _prepare2())
1297         d.addCallback(lambda client:
1298-            self.shouldFail(NotEnoughSharesError, "test_happy_semantics",
1299+            self.shouldFail(UploadHappinessError, "test_happy_semantics",
1300                             "shares could only be placed on 2 servers "
1301                             "(3 were requested)",
1302                             client.upload, upload.Data("data" * 10000,
1303hunk ./src/allmydata/test/test_upload.py 1124
1304         d.addCallback(_do_server_setup)
1305         d.addCallback(_remove_server)
1306         d.addCallback(lambda ign:
1307-            self.shouldFail(NotEnoughSharesError,
1308+            self.shouldFail(UploadHappinessError,
1309                             "test_dropped_servers_in_encoder",
1310                             "lost too many servers during upload "
1311                             "(still have 3, want 4)",
1312hunk ./src/allmydata/test/test_upload.py 1151
1313         d.addCallback(_do_server_setup_2)
1314         d.addCallback(_remove_server)
1315         d.addCallback(lambda ign:
1316-            self.shouldFail(NotEnoughSharesError,
1317+            self.shouldFail(UploadHappinessError,
1318                             "test_dropped_servers_in_encoder",
1319                             "lost too many servers during upload "
1320                             "(still have 3, want 4)",
1321hunk ./src/allmydata/test/test_upload.py 1275
1322         self.failUnless(upload.should_add_server(shares, "server1", 1))
1323 
1324 
1325+    def test_exception_messages_during_peer_selection(self):
1326+        # server 1: readonly, no shares
1327+        # server 2: readonly, no shares
1328+        # server 3: readonly, no shares
1329+        # server 4: readonly, no shares
1330+        # server 5: readonly, no shares
1331+        # This will fail, but we want to make sure that the log messages
1332+        # are informative about why it has failed.
1333+        self.basedir = self.mktemp()
1334+        d = self._setup_and_upload()
1335+        d.addCallback(lambda ign:
1336+            self._add_server_with_share(server_number=1, readonly=True))
1337+        d.addCallback(lambda ign:
1338+            self._add_server_with_share(server_number=2, readonly=True))
1339+        d.addCallback(lambda ign:
1340+            self._add_server_with_share(server_number=3, readonly=True))
1341+        d.addCallback(lambda ign:
1342+            self._add_server_with_share(server_number=4, readonly=True))
1343+        d.addCallback(lambda ign:
1344+            self._add_server_with_share(server_number=5, readonly=True))
1345+        d.addCallback(lambda ign:
1346+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1347+        def _reset_encoding_parameters(ign):
1348+            client = self.g.clients[0]
1349+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1350+            return client
1351+        d.addCallback(_reset_encoding_parameters)
1352+        d.addCallback(lambda client:
1353+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1354+                            "peer selection failed for <Tahoe2PeerSelector "
1355+                            "for upload dglev>: placed 0 shares out of 10 "
1356+                            "total (10 homeless), want to place on 4 servers,"
1357+                            " sent 5 queries to 5 peers, 0 queries placed "
1358+                            "some shares, 5 placed none "
1359+                            "(of which 5 placed none due to the server being "
1360+                            "full and 0 placed none due to an error)",
1361+                            client.upload,
1362+                            upload.Data("data" * 10000, convergence="")))
1363+
1364+
1365+        # server 1: readonly, no shares
1366+        # server 2: broken, no shares
1367+        # server 3: readonly, no shares
1368+        # server 4: readonly, no shares
1369+        # server 5: readonly, no shares
1370+        def _reset(ign):
1371+            self.basedir = self.mktemp()
1372+        d.addCallback(_reset)
1373+        d.addCallback(lambda ign:
1374+            self._setup_and_upload())
1375+        d.addCallback(lambda ign:
1376+            self._add_server_with_share(server_number=1, readonly=True))
1377+        d.addCallback(lambda ign:
1378+            self._add_server_with_share(server_number=2))
1379+        def _break_server_2(ign):
1380+            server = self.g.servers_by_number[2].my_nodeid
1381+            # We have to break the server in servers_by_id,
1382+            # because the one in servers_by_number isn't wrapped,
1383+            # and doesn't look at its broken attribute
1384+            self.g.servers_by_id[server].broken = True
1385+        d.addCallback(_break_server_2)
1386+        d.addCallback(lambda ign:
1387+            self._add_server_with_share(server_number=3, readonly=True))
1388+        d.addCallback(lambda ign:
1389+            self._add_server_with_share(server_number=4, readonly=True))
1390+        d.addCallback(lambda ign:
1391+            self._add_server_with_share(server_number=5, readonly=True))
1392+        d.addCallback(lambda ign:
1393+            self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
1394+        def _reset_encoding_parameters(ign):
1395+            client = self.g.clients[0]
1396+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1397+            return client
1398+        d.addCallback(_reset_encoding_parameters)
1399+        d.addCallback(lambda client:
1400+            self.shouldFail(UploadHappinessError, "test_selection_exceptions",
1401+                            "peer selection failed for <Tahoe2PeerSelector "
1402+                            "for upload dglev>: placed 0 shares out of 10 "
1403+                            "total (10 homeless), want to place on 4 servers,"
1404+                            " sent 5 queries to 5 peers, 0 queries placed "
1405+                            "some shares, 5 placed none "
1406+                            "(of which 4 placed none due to the server being "
1407+                            "full and 1 placed none due to an error)",
1408+                            client.upload,
1409+                            upload.Data("data" * 10000, convergence="")))
1410+        return d
1411+
1412+
1413     def _set_up_nodes_extra_config(self, clientdir):
1414         cfgfn = os.path.join(clientdir, "tahoe.cfg")
1415         oldcfg = open(cfgfn, "r").read()
1416}
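
The long strings asserted in test_exception_messages_during_peer_selection
come from Tahoe2PeerSelector's failure report. A hypothetical helper (name
and signature are illustrative, not the actual upload.py code) that would
produce messages of the shape the tests expect:

    def peer_selection_failure_message(placed, total, homeless, happy,
                                       queries, peers, placed_some,
                                       placed_none, full, errors):
        # Illustrative only: mirrors the wording asserted above.
        return ("placed %d shares out of %d total (%d homeless), "
                "want to place on %d servers, sent %d queries to %d "
                "peers, %d queries placed some shares, %d placed none "
                "(of which %d placed none due to the server being full "
                "and %d placed none due to an error)"
                % (placed, total, homeless, happy, queries, peers,
                   placed_some, placed_none, full, errors))

Note that the revisions patch below rewords the "want to place on %d
servers" clause, and these tests are updated there to match.
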
1417[Replace "UploadHappinessError" with "UploadUnhappinessError" in tests.
1418Kevan Carstensen <kevan@isnotajoke.com>**20091205043453
1419 Ignore-this: 83f4bc50c697d21b5f4e2a4cd91862ca
1420] {
1421replace ./src/allmydata/test/test_encode.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1422replace ./src/allmydata/test/test_upload.py [A-Za-z_0-9] UploadHappinessError UploadUnhappinessError
1423}
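
The two 'replace' primitives above are darcs token replaces: every
occurrence of UploadHappinessError that is delimited by characters outside
the token class [A-Za-z_0-9] is renamed, in context lines as well as in
added and removed lines. Roughly the same effect, sketched in Python:

    import re

    def darcs_token_replace(text, old, new, token_chars="A-Za-z_0-9"):
        # Rename 'old' wherever it is not embedded in a longer token.
        pattern = r"(?<![%s])%s(?![%s])" % (token_chars,
                                            re.escape(old), token_chars)
        return re.sub(pattern, new, text)
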
1424[Alter various unit tests to work with the new happy behavior
1425Kevan Carstensen <kevan@isnotajoke.com>**20100107181325
1426 Ignore-this: 132032bbf865e63a079f869b663be34a
1427] {
1428hunk ./src/allmydata/test/common.py 915
1429             # We need multiple segments to test crypttext hash trees that are
1430             # non-trivial (i.e. they have more than just one hash in them).
1431             cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
1432+            # Tests that exercise servers of happiness with this helper
1433+            # should set their own value for happy -- the default (7) breaks them.
1434+            cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1435             d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence=""))
1436             def _after_upload(u):
1437                 filecap = u.uri
1438hunk ./src/allmydata/test/test_checker.py 283
1439         self.basedir = "checker/AddLease/875"
1440         self.set_up_grid(num_servers=1)
1441         c0 = self.g.clients[0]
1442+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1443         self.uris = {}
1444         DATA = "data" * 100
1445         d = c0.upload(Data(DATA, convergence=""))
1446hunk ./src/allmydata/test/test_system.py 93
1447         d = self.set_up_nodes()
1448         def _check_connections(res):
1449             for c in self.clients:
1450+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1451                 all_peerids = c.get_storage_broker().get_all_serverids()
1452                 self.failUnlessEqual(len(all_peerids), self.numclients)
1453                 sb = c.storage_broker
1454hunk ./src/allmydata/test/test_system.py 205
1455                                                       add_to_sparent=True))
1456         def _added(extra_node):
1457             self.extra_node = extra_node
1458+            self.extra_node.DEFAULT_ENCODING_PARAMETERS['happy'] = 5
1459         d.addCallback(_added)
1460 
1461         HELPER_DATA = "Data that needs help to upload" * 1000
1462hunk ./src/allmydata/test/test_system.py 705
1463         self.basedir = "system/SystemTest/test_filesystem"
1464         self.data = LARGE_DATA
1465         d = self.set_up_nodes(use_stats_gatherer=True)
1466+        def _new_happy_semantics(ign):
1467+            for c in self.clients:
1468+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1469+        d.addCallback(_new_happy_semantics)
1470         d.addCallback(self._test_introweb)
1471         d.addCallback(self.log, "starting publish")
1472         d.addCallback(self._do_publish1)
1473hunk ./src/allmydata/test/test_system.py 1129
1474         d.addCallback(self.failUnlessEqual, "new.txt contents")
1475         # and again with something large enough to use multiple segments,
1476         # and hopefully trigger pauseProducing too
1477+        def _new_happy_semantics(ign):
1478+            for c in self.clients:
1479+                # these seem to get reset somewhere, so set happy again here.
1480+                c.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1481+        d.addCallback(_new_happy_semantics)
1482         d.addCallback(lambda res: self.PUT(public + "/subdir3/big.txt",
1483                                            "big" * 500000)) # 1.5MB
1484         d.addCallback(lambda res: self.GET(public + "/subdir3/big.txt"))
1485hunk ./src/allmydata/test/test_upload.py 178
1486 
1487 class FakeClient:
1488     DEFAULT_ENCODING_PARAMETERS = {"k":25,
1489-                                   "happy": 75,
1490+                                   "happy": 25,
1491                                    "n": 100,
1492                                    "max_segment_size": 1*MiB,
1493                                    }
1494hunk ./src/allmydata/test/test_upload.py 316
1495         data = self.get_data(SIZE_LARGE)
1496         segsize = int(SIZE_LARGE / 2.5)
1497         # we want 3 segments, since that's not a power of two
1498-        self.set_encoding_parameters(25, 75, 100, segsize)
1499+        self.set_encoding_parameters(25, 25, 100, segsize)
1500         d = upload_data(self.u, data)
1501         d.addCallback(extract_uri)
1502         d.addCallback(self._check_large, SIZE_LARGE)
1503hunk ./src/allmydata/test/test_upload.py 395
1504     def test_first_error(self):
1505         mode = dict([(0,"good")] + [(i,"first-fail") for i in range(1,10)])
1506         self.make_node(mode)
1507+        self.set_encoding_parameters(k=25, happy=1, n=50)
1508         d = upload_data(self.u, DATA)
1509         d.addCallback(extract_uri)
1510         d.addCallback(self._check_large, SIZE_LARGE)
1511hunk ./src/allmydata/test/test_upload.py 513
1512 
1513         self.make_client()
1514         data = self.get_data(SIZE_LARGE)
1515-        self.set_encoding_parameters(50, 75, 100)
1516+        # if there are 50 peers, then happy needs to be <= 50
1517+        self.set_encoding_parameters(50, 50, 100)
1518         d = upload_data(self.u, data)
1519         d.addCallback(extract_uri)
1520         d.addCallback(self._check_large, SIZE_LARGE)
1521hunk ./src/allmydata/test/test_upload.py 560
1522 
1523         self.make_client()
1524         data = self.get_data(SIZE_LARGE)
1525-        self.set_encoding_parameters(100, 150, 200)
1526+        # if there are 50 peers, then happy should be no more than 50 if
1527+        # we want this to work.
1528+        self.set_encoding_parameters(100, 50, 200)
1529         d = upload_data(self.u, data)
1530         d.addCallback(extract_uri)
1531         d.addCallback(self._check_large, SIZE_LARGE)
1532hunk ./src/allmydata/test/test_upload.py 580
1533 
1534         self.make_client(3)
1535         data = self.get_data(SIZE_LARGE)
1536-        self.set_encoding_parameters(3, 5, 10)
1537+        self.set_encoding_parameters(3, 3, 10)
1538         d = upload_data(self.u, data)
1539         d.addCallback(extract_uri)
1540         d.addCallback(self._check_large, SIZE_LARGE)
1541hunk ./src/allmydata/test/test_web.py 3581
1542         self.basedir = "web/Grid/exceptions"
1543         self.set_up_grid(num_clients=1, num_servers=2)
1544         c0 = self.g.clients[0]
1545+        c0.DEFAULT_ENCODING_PARAMETERS['happy'] = 2
1546         self.fileurls = {}
1547         DATA = "data" * 100
1548         d = c0.create_dirnode()
1549}
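
A recurring theme in the hunks above: under the new semantics 'happy'
counts servers rather than shares, so each test has to keep happy no
larger than the number of servers it actually spins up. As a rule-of-thumb
check (a sketch, not code from upload.py):

    def happy_is_satisfiable(happy, n, num_servers):
        # A happiness of 'happy' needs at least that many distinct
        # servers holding shares, so the tests keep
        # happy <= num_servers; and since n shares cannot land on more
        # than n distinct servers, happy <= n as well.
        return happy <= num_servers and happy <= n
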
1550[Revisions of the #778 tests, per reviewers' comments
1551Kevan Carstensen <kevan@isnotajoke.com>**20100118210238
1552 Ignore-this: 6c0d9de3b5d6d7965540df8d7c79a5df
1553 
1554 - Fix comments and confusing naming.
1555 - Add tests for the new error messages suggested by David-Sarah
1556   and Zooko.
1557 - Alter existing tests for new error messages.
1558 - Make sure that the tests continue to work with the trunk.
1559 - Add a test for a mutual disjointedness assertion that I added to
1560   upload.servers_of_happiness.
1561 
1562] {
1563hunk ./src/allmydata/test/test_encode.py 462
1564         def _ready(res):
1565             k,happy,n = e.get_param("share_counts")
1566             assert n == NUM_SHARES # else we'll be completely confused
1567-            all_peers = []
1568+            servermap = {}
1569             for shnum in range(NUM_SHARES):
1570                 mode = bucket_modes.get(shnum, "good")
1571                 peer = FakeBucketReaderWriterProxy(mode)
1572hunk ./src/allmydata/test/test_upload.py 706
1573         num_segments = encoder.get_param("num_segments")
1574         d = selector.get_shareholders(broker, sh, storage_index,
1575                                       share_size, block_size, num_segments,
1576-                                      10, 4)
1577+                                      10, 3, 4)
1578         def _have_shareholders((used_peers, already_peers)):
1579             assert servers_to_break <= len(used_peers)
1580             for index in xrange(servers_to_break):
1581hunk ./src/allmydata/test/test_upload.py 762
1582         self.failUnless((share_number, ss.my_nodeid, new_share_location)
1583                         in shares)
1584 
1585+    def _setup_grid(self):
1586+        """
1587+        I set up a NoNetworkGrid with a single server and client.
1588+        """
1589+        self.set_up_grid(num_clients=1, num_servers=1)
1590 
1591     def _setup_and_upload(self):
1592         """
1593hunk ./src/allmydata/test/test_upload.py 774
1594         upload a file to it, store its uri in self.uri, and store its
1595         sharedata in self.shares.
1596         """
1597-        self.set_up_grid(num_clients=1, num_servers=1)
1598+        self._setup_grid()
1599         client = self.g.clients[0]
1600         client.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
1601         data = upload.Data("data" * 10000, convergence="")
1602hunk ./src/allmydata/test/test_upload.py 812
1603 
1604 
1605     def _setUp(self, ns):
1606-        # Used by test_happy_semantics and test_prexisting_share_behavior
1607+        # Used by test_happy_semantics and test_preexisting_share_behavior
1608         # to set up the grid.
1609         self.node = FakeClient(mode="good", num_servers=ns)
1610         self.u = upload.Uploader()
1611hunk ./src/allmydata/test/test_upload.py 823
1612     def test_happy_semantics(self):
1613         self._setUp(2)
1614         DATA = upload.Data("kittens" * 10000, convergence="")
1615-        # These parameters are unsatisfiable with the client that we've made
1616-        # -- we'll use them to test that the semantics work correctly.
1617+        # These parameters are unsatisfiable with only 2 servers.
1618         self.set_encoding_parameters(k=3, happy=5, n=10)
1619         d = self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1620hunk ./src/allmydata/test/test_upload.py 826
1621-                            "shares could only be placed on 2 servers "
1622-                            "(5 were requested)",
1623+                            "shares could only be placed or found on 2 "
1624+                            "server(s). We were asked to place shares on "
1625+                            "at least 5 server(s) such that any 3 of them "
1626+                            "have enough shares to recover the file",
1627                             self.u.upload, DATA)
1628         # Let's reset the client to have 10 servers
1629         d.addCallback(lambda ign:
1630hunk ./src/allmydata/test/test_upload.py 834
1631             self._setUp(10))
1632-        # These parameters are satisfiable with the client we've made.
1633+        # These parameters are satisfiable with 10 servers.
1634         d.addCallback(lambda ign:
1635             self.set_encoding_parameters(k=3, happy=5, n=10))
1636         # this should work
1637hunk ./src/allmydata/test/test_upload.py 844
1638         # (this is less than n, but more than h)
1639         d.addCallback(lambda ign:
1640             self._setUp(7))
1641-        # These encoding parameters should still be satisfiable with our
1642-        # client setup
1643+        # These parameters are satisfiable with 7 servers.
1644         d.addCallback(lambda ign:
1645             self.set_encoding_parameters(k=3, happy=5, n=10))
1646         # This, then, should work.
1647hunk ./src/allmydata/test/test_upload.py 862
1648         #
1649         # The scenario in comment:52 proposes that we have a layout
1650         # like:
1651-        # server 1: share 1
1652-        # server 2: share 1
1653-        # server 3: share 1
1654-        # server 4: shares 2 - 10
1655+        # server 0: shares 1 - 9
1656+        # server 1: share 0
1657+        # server 2: share 0
1658+        # server 3: share 0
1659         # To get access to the shares, we will first upload to one
1660hunk ./src/allmydata/test/test_upload.py 867
1661-        # server, which will then have shares 1 - 10. We'll then
1662+        # server, which will then have shares 0 - 9. We'll then
1663         # add three new servers, configure them to not accept any new
1664hunk ./src/allmydata/test/test_upload.py 869
1665-        # shares, then write share 1 directly into the serverdir of each.
1666-        # Then each of servers 1 - 3 will report that they have share 1,
1667-        # and will not accept any new share, while server 4 will report that
1668-        # it has shares 2 - 10 and will accept new shares.
1669+        # shares, then write share 0 directly into the serverdir of each,
1670+        # and then remove share 0 from server 0 in the same way.
1671+        # Then each of servers 1 - 3 will report that they have share 0,
1672+        # and will not accept any new share, while server 0 will report that
1673+        # it has shares 1 - 9 and will accept new shares.
1674         # We'll then set 'happy' = 4, and see that an upload fails
1675         # (as it should)
1676         d = self._setup_and_upload()
1677hunk ./src/allmydata/test/test_upload.py 878
1678         d.addCallback(lambda ign:
1679-            self._add_server_with_share(1, 0, True))
1680+            self._add_server_with_share(server_number=1, share_number=0,
1681+                                        readonly=True))
1682         d.addCallback(lambda ign:
1683hunk ./src/allmydata/test/test_upload.py 881
1684-            self._add_server_with_share(2, 0, True))
1685+            self._add_server_with_share(server_number=2, share_number=0,
1686+                                        readonly=True))
1687         d.addCallback(lambda ign:
1688hunk ./src/allmydata/test/test_upload.py 884
1689-            self._add_server_with_share(3, 0, True))
1690+            self._add_server_with_share(server_number=3, share_number=0,
1691+                                        readonly=True))
1692         # Remove the first share from server 0.
1693hunk ./src/allmydata/test/test_upload.py 887
1694-        def _remove_share_0():
1695+        def _remove_share_0_from_server_0():
1696             share_location = self.shares[0][2]
1697             os.remove(share_location)
1698         d.addCallback(lambda ign:
1699hunk ./src/allmydata/test/test_upload.py 891
1700-            _remove_share_0())
1701+            _remove_share_0_from_server_0())
1702         # Set happy = 4 in the client.
1703         def _prepare():
1704             client = self.g.clients[0]
1705hunk ./src/allmydata/test/test_upload.py 902
1706         # Uploading data should fail
1707         d.addCallback(lambda client:
1708             self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1709-                            "shares could only be placed on 2 servers "
1710-                            "(4 were requested)",
1711+                            "shares could be placed or found on 4 server(s), "
1712+                            "but they are not spread out evenly enough to "
1713+                            "ensure that any 3 of these servers would have "
1714+                            "enough shares to recover the file. "
1715+                            "We were asked to place shares on at "
1716+                            "least 4 servers such that any 3 of them have "
1717+                            "enough shares to recover the file",
1718                             client.upload, upload.Data("data" * 10000,
1719                                                        convergence="")))
1720 
1721hunk ./src/allmydata/test/test_upload.py 931
1722                                         readonly=True))
1723         def _prepare2():
1724             client = self.g.clients[0]
1725-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 3
1726+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
1727             return client
1728         d.addCallback(lambda ign:
1729             _prepare2())
1730hunk ./src/allmydata/test/test_upload.py 937
1731         d.addCallback(lambda client:
1732             self.shouldFail(UploadUnhappinessError, "test_happy_semantics",
1733-                            "shares could only be placed on 2 servers "
1734-                            "(3 were requested)",
1735+                            "shares could only be placed on 3 server(s) such "
1736+                            "that any 3 of them have enough shares to recover "
1737+                            "the file, but we were asked to use at least 4 "
1738+                            "such servers.",
1739                             client.upload, upload.Data("data" * 10000,
1740                                                        convergence="")))
1741         return d
1742hunk ./src/allmydata/test/test_upload.py 954
1743         def _change_basedir(ign):
1744             self.basedir = self.mktemp()
1745         _change_basedir(None)
1746-        d = self._setup_and_upload()
1747-        # We start by uploading all of the shares to one server (which has
1748-        # already been done above).
1749+        # We start by uploading all of the shares to one server.
1750         # Next, we'll add three new servers to our NoNetworkGrid. We'll add
1751         # one share from our initial upload to each of these.
1752         # The counterintuitive ordering of the share numbers is to deal with
1753hunk ./src/allmydata/test/test_upload.py 960
1754         # the permuting of these servers -- distributing the shares this
1755         # way ensures that the Tahoe2PeerSelector sees them in the order
1756-        # described above.
1757+        # described below.
1758+        d = self._setup_and_upload()
1759         d.addCallback(lambda ign:
1760             self._add_server_with_share(server_number=1, share_number=2))
1761         d.addCallback(lambda ign:
1762hunk ./src/allmydata/test/test_upload.py 973
1763         # server 1: share 2
1764         # server 2: share 0
1765         # server 3: share 1
1766-        # We want to change the 'happy' parameter in the client to 4.
1767+        # We change the 'happy' parameter in the client to 4.
1768         # The Tahoe2PeerSelector will see the peers permuted as:
1769         # 2, 3, 1, 0
1770         # Ideally, a reupload of our original data should work.
1771hunk ./src/allmydata/test/test_upload.py 986
1772             client.upload(upload.Data("data" * 10000, convergence="")))
1773 
1774 
1775-        # This scenario is basically comment:53, but with the order reversed;
1776-        # this means that the Tahoe2PeerSelector sees
1777-        # server 2: shares 1-10
1778-        # server 3: share 1
1779-        # server 1: share 2
1780-        # server 4: share 3
1781+        # This scenario is basically comment:53, but changed so that the
1782+        # Tahoe2PeerSelector sees the server with all of the shares before
1783+        # any of the other servers.
1784+        # The layout is:
1785+        # server 2: shares 0 - 9
1786+        # server 3: share 0
1787+        # server 1: share 1
1788+        # server 4: share 2
1789+        # The Tahoe2PeerSelector sees the peers permuted as:
1790+        # 2, 3, 1, 4
1791+        # Note that server 0 has been replaced by server 4; this makes it
1792+        # easier to ensure that the last server seen by Tahoe2PeerSelector
1793+        # has only one share.
1794         d.addCallback(_change_basedir)
1795         d.addCallback(lambda ign:
1796             self._setup_and_upload())
1797hunk ./src/allmydata/test/test_upload.py 1022
1798         d.addCallback(_reset_encoding_parameters)
1799         d.addCallback(lambda client:
1800             client.upload(upload.Data("data" * 10000, convergence="")))
1801+
1802+
1803         # Try the same thing, but with empty servers after the first one
1804         # We want to make sure that Tahoe2PeerSelector will redistribute
1805         # shares as necessary, not simply discover an existing layout.
1806hunk ./src/allmydata/test/test_upload.py 1027
1807+        # The layout is:
1808+        # server 2: shares 0 - 9
1809+        # server 3: empty
1810+        # server 1: empty
1811+        # server 4: empty
1812         d.addCallback(_change_basedir)
1813         d.addCallback(lambda ign:
1814             self._setup_and_upload())
1815hunk ./src/allmydata/test/test_upload.py 1127
1816 
1817 
1818     def test_dropped_servers_in_encoder(self):
1819+        # The Encoder does its own "servers_of_happiness" check if it
1820+        # happens to lose a bucket during an upload (it assumes that
1821+        # the layout presented to it satisfies "servers_of_happiness"
1822+        # until a failure occurs)
1823+        #
1824+        # This test simulates an upload where servers break after peer
1825+        # selection, but before they are written to.
1826         def _set_basedir(ign=None):
1827             self.basedir = self.mktemp()
1828         _set_basedir()
1829hunk ./src/allmydata/test/test_upload.py 1140
1830         d = self._setup_and_upload()
1831         # Add 5 servers
1832         def _do_server_setup(ign):
1833-            self._add_server_with_share(1)
1834-            self._add_server_with_share(2)
1835-            self._add_server_with_share(3)
1836-            self._add_server_with_share(4)
1837-            self._add_server_with_share(5)
1838+            self._add_server_with_share(server_number=1)
1839+            self._add_server_with_share(server_number=2)
1840+            self._add_server_with_share(server_number=3)
1841+            self._add_server_with_share(server_number=4)
1842+            self._add_server_with_share(server_number=5)
1843         d.addCallback(_do_server_setup)
1844         # remove the original server
1845         # (necessary to ensure that the Tahoe2PeerSelector will distribute
1846hunk ./src/allmydata/test/test_upload.py 1153
1847             server = self.g.servers_by_number[0]
1848             self.g.remove_server(server.my_nodeid)
1849         d.addCallback(_remove_server)
1850-        # This should succeed.
1851+        # This should succeed; we still have 4 servers, and the
1852+        # happiness of the upload is 4.
1853         d.addCallback(lambda ign:
1854             self._do_upload_with_broken_servers(1))
1855         # Now, do the same thing over again, but drop 2 servers instead
1856hunk ./src/allmydata/test/test_upload.py 1158
1857-        # of 1. This should fail.
1858+        # of 1. This should fail, because servers_of_happiness is 4 and
1859+        # we can't satisfy that.
1860         d.addCallback(_set_basedir)
1861         d.addCallback(lambda ign:
1862             self._setup_and_upload())
1863hunk ./src/allmydata/test/test_upload.py 1201
1864         return d
1865 
1866 
1867-    def test_servers_with_unique_shares(self):
1868-        # servers_with_unique_shares expects a dict of
1869+    def test_servers_of_happiness_utility_function(self):
1870+        # This test is concerned with the injective (one-to-one)
1871+        # mapping that upload.servers_of_happiness() builds between
1872+        # peerids and share numbers; aspects of the servers-of-happiness
1873+        # behavior that merely use this function are tested elsewhere.
1874+        # These tests exist to ensure that upload.servers_of_happiness
1875+        # doesn't under- or overcount the happiness value for given inputs.
1876+
1877+        # servers_of_happiness expects a dict of
1878         # shnum => peerid as a preexisting shares argument.
1879         test1 = {
1880                  1 : "server1",
1881hunk ./src/allmydata/test/test_upload.py 1217
1882                  3 : "server3",
1883                  4 : "server4"
1884                 }
1885-        unique_servers = upload.servers_with_unique_shares(test1)
1886-        self.failUnlessEqual(4, len(unique_servers))
1887-        for server in ["server1", "server2", "server3", "server4"]:
1888-            self.failUnlessIn(server, unique_servers)
1889+        happy = upload.servers_of_happiness(test1)
1890+        self.failUnlessEqual(4, happy)
1891         test1[4] = "server1"
1892hunk ./src/allmydata/test/test_upload.py 1220
1893-        # Now there should only be 3 unique servers.
1894-        unique_servers = upload.servers_with_unique_shares(test1)
1895-        self.failUnlessEqual(3, len(unique_servers))
1896-        for server in ["server1", "server2", "server3"]:
1897-            self.failUnlessIn(server, unique_servers)
1898-        # servers_with_unique_shares expects to receive some object with
1899-        # a peerid attribute. So we make a FakePeerTracker whose only
1900-        # job is to have a peerid attribute.
1901+        # We've added a duplicate server, so now servers_of_happiness
1902+        # should be 3 instead of 4.
1903+        happy = upload.servers_of_happiness(test1)
1904+        self.failUnlessEqual(3, happy)
1905+        # The second argument of servers_of_happiness should be a
1906+        # collection of objects with peerid and buckets attributes. In
1907+        # actual use these will be PeerTracker instances, but for testing
1908+        # it is fine to make a FakePeerTracker whose only job is to hold
1909+        # those two attributes.
1910         class FakePeerTracker:
1911             pass
1912         trackers = []
1913hunk ./src/allmydata/test/test_upload.py 1237
1914             t.peerid = server
1915             t.buckets = [i]
1916             trackers.append(t)
1917-        # Recall that there are 3 unique servers in test1. Since none of
1918-        # those overlap with the ones in trackers, we should get 7 back
1919-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
1920-        self.failUnlessEqual(7, len(unique_servers))
1921-        expected_servers = ["server" + str(i) for i in xrange(1, 9)]
1922-        expected_servers.remove("server4")
1923-        for server in expected_servers:
1924-            self.failUnlessIn(server, unique_servers)
1925-        # Now add an overlapping server to trackers.
1926+        # Recall that test1 is a server layout with servers_of_happiness = 3.
1927+        # Since there isn't any overlap between the server => share
1928+        # correspondences in test1 and those in trackers, the result here
1929+        # should be 7.
1930+        happy = upload.servers_of_happiness(test1, set(trackers))
1931+        self.failUnlessEqual(7, happy)
1932+        # Now add an overlapping server to trackers. This is redundant, so it
1933+        # should not cause the previously reported happiness value to change.
1934         t = FakePeerTracker()
1935         t.peerid = "server1"
1936         t.buckets = [1]
1937hunk ./src/allmydata/test/test_upload.py 1249
1938         trackers.append(t)
1939-        unique_servers = upload.servers_with_unique_shares(test1, set(trackers))
1940-        self.failUnlessEqual(7, len(unique_servers))
1941-        for server in expected_servers:
1942-            self.failUnlessIn(server, unique_servers)
1943+        happy = upload.servers_of_happiness(test1, set(trackers))
1944+        self.failUnlessEqual(7, happy)
1945         test = {}
1946hunk ./src/allmydata/test/test_upload.py 1252
1947-        unique_servers = upload.servers_with_unique_shares(test)
1948-        self.failUnlessEqual(0, len(test))
1949+        happy = upload.servers_of_happiness(test)
1950+        self.failUnlessEqual(0, happy)
1951+        # Test a more substantial overlap between the trackers and the
1952+        # existing assignments.
1953+        test = {
1954+            1 : 'server1',
1955+            2 : 'server2',
1956+            3 : 'server3',
1957+            4 : 'server4',
1958+        }
1959+        trackers = []
1960+        t = FakePeerTracker()
1961+        t.peerid = 'server5'
1962+        t.buckets = [4]
1963+        trackers.append(t)
1964+        t = FakePeerTracker()
1965+        t.peerid = 'server6'
1966+        t.buckets = [3, 5]
1967+        trackers.append(t)
1968+        # The value returned by upload.servers_of_happiness is the size
1969+        # of the domain of the one-to-one function that
1970+        # upload.servers_of_happiness makes between peerids and share numbers.
1971+        # It should make something like this:
1972+        # server 1: share 1
1973+        # server 2: share 2
1974+        # server 3: share 3
1975+        # server 5: share 4
1976+        # server 6: share 5
1977+        #
1978+        # and, since there are 5 servers in the domain of this function, it
1979+        # should return 5.
1980+        happy = upload.servers_of_happiness(test, set(trackers))
1981+        self.failUnlessEqual(5, happy)
1982+        # upload.servers_of_happiness assumes that the buckets attributes
1983+        # of the PeerTrackers in its used_peers argument are mutually
1984+        # disjoint; that is, they contain no shares in common. It should
1985+        # refuse to work at all if this is not the case.
1986+        t = FakePeerTracker()
1987+        t.peerid = "server7"
1988+        t.buckets = [3, 4, 5]
1989+        trackers.append(t)
1990+        self.shouldFail(AssertionError,
1991+                        "test_servers_of_happiness_utility_function",
1992+                        "",
1993+                        upload.servers_of_happiness, test, trackers)
1994 
1995 
1996     def test_shares_by_server(self):
1997hunk ./src/allmydata/test/test_upload.py 1355
1998 
1999 
2000     def test_should_add_server(self):
2001+        # upload.should_add_server tests whether or not the addition of a
2002+        # shnum => peerid mapping to the existing_shares dictionary
2003+        # would make the dictionary happier.
2004         shares = dict([(i, "server%d" % i) for i in xrange(10)])
2005hunk ./src/allmydata/test/test_upload.py 1359
2006-        self.failIf(upload.should_add_server(shares, "server1", 4))
2007+
2008+
2009+        # Attempting to add a shnum => peerid mapping for a shnum that
2010+        # isn't already in the dictionary should return true.
2011+        self.failUnless(upload.should_add_server(shares, "server11", 11))
2012+
2013+
2014+        # Attempting to add an identical entry should return false
2015+        # (since the dictionary would be no happier than it already is)
2016+        self.failIf(upload.should_add_server(shares, "server1", 1))
2017+
2018+
2019+        # shnum 1 maps to "server1", which occurs nowhere else
2020+        # in shares. Attempting to map shnum 1 to "server4", which occurs
2021+        # elsewhere in shares, should return false because the resulting
2022+        # dictionary will be less happy than it is now.
2023+        self.failIf(upload.should_add_server(shares, "server4", 1))
2024+
2025+
2026+        # Now map shnum 4 to "server1", then check to see that
2027+        # upload.should_add_server thinks that remapping shnum 4 to "server4"
2028+        # makes shares happier.
2029         shares[4] = "server1"
2030         self.failUnless(upload.should_add_server(shares, "server4", 4))
2031hunk ./src/allmydata/test/test_upload.py 1383
2032+
2033+
2034+        # attempting to add a mapping to an empty dictionary should always
2035+        # make the dictionary happier, so this should return true.
2036         shares = {}
2037         self.failUnless(upload.should_add_server(shares, "server1", 1))
2038 
2039hunk ./src/allmydata/test/test_upload.py 1422
2040             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2041                             "peer selection failed for <Tahoe2PeerSelector "
2042                             "for upload dglev>: placed 0 shares out of 10 "
2043-                            "total (10 homeless), want to place on 4 servers,"
2044-                            " sent 5 queries to 5 peers, 0 queries placed "
2045+                            "total (10 homeless), want to place shares on at "
2046+                            "least 4 servers such that any 3 of them have "
2047+                            "enough shares to recover the file, "
2048+                            "sent 5 queries to 5 peers, 0 queries placed "
2049                             "some shares, 5 placed none "
2050                             "(of which 5 placed none due to the server being "
2051                             "full and 0 placed none due to an error)",
2052hunk ./src/allmydata/test/test_upload.py 1462
2053             self._add_server_with_share(server_number=5, readonly=True))
2054         d.addCallback(lambda ign:
2055             self.g.remove_server(self.g.servers_by_number[0].my_nodeid))
2056-        def _reset_encoding_parameters(ign):
2057+        def _reset_encoding_parameters(ign, happy=4):
2058             client = self.g.clients[0]
2059hunk ./src/allmydata/test/test_upload.py 1464
2060-            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
2061+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = happy
2062             return client
2063         d.addCallback(_reset_encoding_parameters)
2064         d.addCallback(lambda client:
2065hunk ./src/allmydata/test/test_upload.py 1471
2066             self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2067                             "peer selection failed for <Tahoe2PeerSelector "
2068                             "for upload dglev>: placed 0 shares out of 10 "
2069-                            "total (10 homeless), want to place on 4 servers,"
2070-                            " sent 5 queries to 5 peers, 0 queries placed "
2071+                            "total (10 homeless), want to place shares on at "
2072+                            "least 4 servers such that any 3 of them have "
2073+                            "enough shares to recover the file, "
2074+                            "sent 5 queries to 5 peers, 0 queries placed "
2075                             "some shares, 5 placed none "
2076                             "(of which 4 placed none due to the server being "
2077                             "full and 1 placed none due to an error)",
2078hunk ./src/allmydata/test/test_upload.py 1480
2079                             client.upload,
2080                             upload.Data("data" * 10000, convergence="")))
2081+        # server 0, server 1 = empty, accepting shares
2082+        # This should place all of the shares, but still fail with happy=4.
2083+        # We want to make sure that the exception message is worded correctly.
2084+        d.addCallback(_reset)
2085+        d.addCallback(lambda ign:
2086+            self._setup_grid())
2087+        d.addCallback(lambda ign:
2088+            self._add_server_with_share(server_number=1))
2089+        d.addCallback(_reset_encoding_parameters)
2090+        d.addCallback(lambda client:
2091+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2092+                            "shares could only be placed or found on 2 "
2093+                            "server(s). We were asked to place shares on at "
2094+                            "least 4 server(s) such that any 3 of them have "
2095+                            "enough shares to recover the file.",
2096+                            client.upload, upload.Data("data" * 10000,
2097+                                                       convergence="")))
2098+        # servers 0 - 4 = empty, accepting shares
2099+        # This too should place all the shares, and this too should fail,
2100+        # but since the effective happiness is more than the k encoding
2101+        # parameter, it should trigger a different error message than the one
2102+        # above.
2103+        d.addCallback(_reset)
2104+        d.addCallback(lambda ign:
2105+            self._setup_grid())
2106+        d.addCallback(lambda ign:
2107+            self._add_server_with_share(server_number=1))
2108+        d.addCallback(lambda ign:
2109+            self._add_server_with_share(server_number=2))
2110+        d.addCallback(lambda ign:
2111+            self._add_server_with_share(server_number=3))
2112+        d.addCallback(lambda ign:
2113+            self._add_server_with_share(server_number=4))
2114+        d.addCallback(_reset_encoding_parameters, happy=7)
2115+        d.addCallback(lambda client:
2116+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2117+                            "shares could only be placed on 5 server(s) such "
2118+                            "that any 3 of them have enough shares to recover "
2119+                            "the file, but we were asked to use at least 7 "
2120+                            "such servers.",
2121+                            client.upload, upload.Data("data" * 10000,
2122+                                                       convergence="")))
2123+        # server 0 :        shares 0 - 9
2124+        # servers 1 - 3:    share 0, readonly
2125+        # This should place all of the shares, but fail with happy=4.
2126+        # Since the number of servers with shares is more than the number
2127+        # necessary to reconstitute the file, this will trigger a different
2128+        # error message than either of those above.
2129+        d.addCallback(_reset)
2130+        d.addCallback(lambda ign:
2131+            self._setup_and_upload())
2132+        d.addCallback(lambda ign:
2133+            self._add_server_with_share(server_number=1, share_number=0,
2134+                                        readonly=True))
2135+        d.addCallback(lambda ign:
2136+            self._add_server_with_share(server_number=2, share_number=0,
2137+                                        readonly=True))
2138+        d.addCallback(lambda ign:
2139+            self._add_server_with_share(server_number=3, share_number=0,
2140+                                        readonly=True))
2141+        d.addCallback(_reset_encoding_parameters, happy=7)
2142+        d.addCallback(lambda client:
2143+            self.shouldFail(UploadUnhappinessError, "test_selection_exceptions",
2144+                            "shares could be placed or found on 4 server(s), "
2145+                            "but they are not spread out evenly enough to "
2146+                            "ensure that any 3 of these servers would have "
2147+                            "enough shares to recover the file. We were asked "
2148+                            "to place shares on at least 7 servers such that "
2149+                            "any 3 of them have enough shares to recover the "
2150+                            "file",
2151+                            client.upload, upload.Data("data" * 10000,
2152+                                                       convergence="")))
2153         return d
2154 
2155 
2156}
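
Taken together, the assertions in test_servers_of_happiness_utility_function
and test_should_add_server pin the utility functions down well enough to
sketch them. The following is a sketch that reproduces the expected values
above, not the actual upload.py implementation; a greedy
most-constrained-server-first assignment happens to suffice for the layouts
these tests use (a maximum bipartite matching would be the fully general
way to avoid undercounting):

    def servers_of_happiness(existing_shares, used_peers=None):
        # Build peerid -> set(shnums) from the preexisting shares plus
        # the buckets held by the given (Fake)PeerTrackers.
        servers = {}
        for shnum, peerid in existing_shares.iteritems():
            servers.setdefault(peerid, set()).add(shnum)
        if used_peers:
            seen = set()
            for tracker in used_peers:
                buckets = set(tracker.buckets)
                # buckets of distinct trackers must be mutually
                # disjoint, per the assertion tested above
                assert not (buckets & seen)
                seen |= buckets
                servers.setdefault(tracker.peerid, set()).update(buckets)
        # Count a one-to-one peerid -> shnum assignment, giving the
        # most constrained servers first pick.
        assigned = set()
        happy = 0
        for peerid, shnums in sorted(servers.items(),
                                     key=lambda item: len(item[1])):
            free = shnums - assigned
            if free:
                assigned.add(min(free))
                happy += 1
        return happy

    def should_add_server(existing_shares, peerid, shnum):
        # True iff mapping shnum => peerid would make the dict happier.
        proposed = dict(existing_shares)
        proposed[shnum] = peerid
        return (servers_of_happiness(proposed) >
                servers_of_happiness(existing_shares))
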
2157
2158Context:
2159
2160[tahoe_add_alias.py: minor refactoring
2161Brian Warner <warner@lothar.com>**20100115064220
2162 Ignore-this: 29910e81ad11209c9e493d65fd2dab9b
2163]
2164[test_dirnode.py: reduce scope of a Client instance, suggested by Kevan.
2165Brian Warner <warner@lothar.com>**20100115062713
2166 Ignore-this: b35efd9e6027e43de6c6f509bfb4ccaa
2167]
2168[test_provisioning: STAN is not always a list. Fix by David-Sarah Hopwood.
2169Brian Warner <warner@lothar.com>**20100115014632
2170 Ignore-this: 9989de7f1e00907706d2b63153138219
2171]
2172[web/directory.py mkdir-immutable: hush pyflakes, add TODO for #903 behavior
2173Brian Warner <warner@lothar.com>**20100114222804
2174 Ignore-this: 717cd3b9a1c8aeee76938c9641db7356
2175]
2176[hush pyflakes-0.4.0 warnings: slightly less-trivial fixes. Closes #900.
2177Brian Warner <warner@lothar.com>**20100114221719
2178 Ignore-this: f774f4637e256ad55502659413a811a8
2179 
2180 This includes one fix (in test_web) which was testing the wrong thing.
2181]
2182[hush pyflakes-0.4.0 warnings: remove trivial unused variables. For #900.
2183Brian Warner <warner@lothar.com>**20100114221529
2184 Ignore-this: e96106c8f1a99fbf93306fbfe9a294cf
2185]
2186[tahoe add-alias/create-alias: don't corrupt non-newline-terminated alias
2187Brian Warner <warner@lothar.com>**20100114210246
2188 Ignore-this: 9c994792e53a85159d708760a9b1b000
2189 file. Closes #741.
2190]
2191[change docs and --help to use "grid" instead of "virtual drive": closes #892.
2192Brian Warner <warner@lothar.com>**20100114201119
2193 Ignore-this: a20d4a4dcc4de4e3b404ff72d40fc29b
2194 
2195 Thanks to David-Sarah Hopwood for the patch.
2196]
2197[backupdb.txt: fix ST_CTIME reference
2198Brian Warner <warner@lothar.com>**20100114194052
2199 Ignore-this: 5a189c7a1181b07dd87f0a08ea31b6d3
2200]
2201[client.py: fix/update comments on KeyGenerator
2202Brian Warner <warner@lothar.com>**20100113004226
2203 Ignore-this: 2208adbb3fd6a911c9f44e814583cabd
2204]
2205[Clean up log.err calls, for one of the issues in #889.
2206Brian Warner <warner@lothar.com>**20100112013343
2207 Ignore-this: f58455ce15f1fda647c5fb25d234d2db
2208 
2209 allmydata.util.log.err() either takes a Failure as the first positional
2210 argument, or takes no positional arguments and must be invoked in an
2211 exception handler. Fixed its signature to match both foolscap.logging.log.err
2212 and twisted.python.log.err . Included a brief unit test.
2213]
2214[tidy up DeadReferenceError handling, ignore them in add_lease calls
2215Brian Warner <warner@lothar.com>**20100112000723
2216 Ignore-this: 72f1444e826fd0b9db6d318f89603c38
2217 
2218 Stop checking separately for ConnectionDone/ConnectionLost, since those have
2219 been folded into DeadReferenceError since foolscap-0.3.1 . Write
2220 rrefutil.trap_deadref() in terms of rrefutil.trap_and_discard() to improve
2221 code coverage.
2222]
2223[NEWS: improve "tahoe backup" notes, mention first-backup-after-upgrade duration
2224Brian Warner <warner@lothar.com>**20100111190132
2225 Ignore-this: 10347c590b3375964579ba6c2b0edb4f
2226 
2227 Thanks to Francois Deppierraz for the suggestion.
2228]
2229[test_repairer: add (commented-out) test_each_byte, to see exactly what the
2230Brian Warner <warner@lothar.com>**20100110203552
2231 Ignore-this: 8e84277d5304752edeff052b97821815
2232 Verifier misses
2233 
2234 The results (described in #819) match our expectations: it misses corruption
2235 in unused share fields and in most container fields (which are only visible
2236 to the storage server, not the client). 1265 bytes of a 2753 byte
2237 share (hosting a 56-byte file with an artificially small segment size) are
2238 unused, mostly in the unused tail of the overallocated UEB space (765 bytes),
2239 and the allocated-but-unwritten plaintext_hash_tree (480 bytes).
2240]
2241[repairer: fix some wrong offsets in the randomized verifier tests, debugged by Brian
2242zooko@zooko.com**20100110203721
2243 Ignore-this: 20604a609db8706555578612c1c12feb
2244 fixes #819
2245]
2246[test_repairer: fix colliding basedir names, which caused test inconsistencies
2247Brian Warner <warner@lothar.com>**20100110084619
2248 Ignore-this: b1d56dd27e6ab99a7730f74ba10abd23
2249]
2250[repairer: add deterministic test for #819, mark as TODO
2251zooko@zooko.com**20100110013619
2252 Ignore-this: 4cb8bb30b25246de58ed2b96fa447d68
2253]
2254[contrib/fuse/runtests.py: Tolerate the tahoe CLI returning deprecation warnings
2255francois@ctrlaltdel.ch**20100109175946
2256 Ignore-this: 419c354d9f2f6eaec03deb9b83752aee
2257 
2258 Depending on the versions of external libraries such as Twisted or Foolscap,
2259 the tahoe CLI can display deprecation warnings on stdout.  The tests should
2260 not interpret those warnings as a failure if the node is in fact correctly
2261 started.
2262   
2263 See http://allmydata.org/trac/tahoe/ticket/859 for an example of deprecation
2264 warnings.
2265 
2266 fixes #876
2267]
2268[contrib: fix fuse_impl_c to use new Python API
2269zooko@zooko.com**20100109174956
2270 Ignore-this: 51ca1ec7c2a92a0862e9b99e52542179
2271 original patch by Thomas Delaet, fixed by François, reviewed by Brian, committed by me
2272]
2273[docs: CREDITS: add David-Sarah to the CREDITS file
2274zooko@zooko.com**20100109060435
2275 Ignore-this: 896062396ad85f9d2d4806762632f25a
2276]
2277[mutable/publish: don't loop() right away upon DeadReferenceError. Closes #877
2278Brian Warner <warner@lothar.com>**20100102220841
2279 Ignore-this: b200e707b3f13aa8251981362b8a3e61
2280 
2281 The bug was that a disconnected server could cause us to re-enter the initial
2282 loop() call, sending multiple queries to a single server, provoking an
2283 incorrect UCWE. To fix it, stall the loop() with an eventual.fireEventually()
2284]
2285[immutable/checker.py: oops, forgot some imports. Also hush pyflakes.
2286Brian Warner <warner@lothar.com>**20091229233909
2287 Ignore-this: 4d61bd3f8113015a4773fd4768176e51
2288]
2289[mutable repair: return successful=False when numshares<k (thus repair fails),
2290Brian Warner <warner@lothar.com>**20091229233746
2291 Ignore-this: d881c3275ff8c8bee42f6a80ca48441e
2292 instead of weird errors. Closes #874 and #786.
2293 
2294 Previously, if the file had 0 shares, this would raise TypeError as it tried
2295 to call download_version(None). If the file had some shares but fewer than
2296 'k', it would incorrectly raise MustForceRepairError.
2297 
2298 Added get_successful() to the IRepairResults API, to give repair() a place to
2299 report non-code-bug problems like this.
2300]
2301[node.py/interfaces.py: minor docs fixes
2302Brian Warner <warner@lothar.com>**20091229230409
2303 Ignore-this: c86ad6342ef0f95d50639b4f99cd4ddf
2304]
2305[NEWS: fix 1.4.1 announcement w.r.t. add-lease behavior in older releases
2306Brian Warner <warner@lothar.com>**20091229230310
2307 Ignore-this: bbbbb9c961f3bbcc6e5dbe0b1594822
2308]
2309[checker: don't let failures in add-lease affect checker results. Closes #875.
2310Brian Warner <warner@lothar.com>**20091229230108
2311 Ignore-this: ef1a367b93e4d01298c2b1e6ca59c492
2312 
2313 Mutable servermap updates and the immutable checker, when run with
2314 add_lease=True, send both the do-you-have-block and add-lease commands in
2315 parallel, to avoid an extra round trip time. Many older servers have problems
2316 with add-lease and raise various exceptions, which don't generally matter.
2317 The client-side code was catching+ignoring some of them, but unrecognized
2318 exceptions were passed through to the DYHB code, concealing the DYHB results
2319 from the checker, making it think the server had no shares.
2320 
2321 The fix is to separate the code paths. Both commands are sent at the same
2322 time, but the errback path from add-lease is handled separately. Known
2323 exceptions are ignored, the others (both unknown-remote and all-local) are
2324 logged (log.WEIRD, which will trigger an Incident), but neither will affect
2325 the DYHB results.
2326 
2327 The add-lease message is sent first, and we know that the server handles them
2328 synchronously. So when the checker is done, we can be sure that all the
2329 add-lease messages have been retired. This makes life easier for unit tests.
2330]
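
A sketch of the separated code paths, assuming the storage protocol's add_lease/get_buckets methods are used as shown; the exception classification is illustrative (older servers are known to raise IndexError here):

    from allmydata.util import log

    def query_server(rref, storage_index, renew_secret, cancel_secret):
        # add-lease is sent first; the server handles it synchronously,
        # so it is retired by the time the checker finishes
        d_lease = rref.callRemote("add_lease", storage_index,
                                  renew_secret, cancel_secret)
        def _add_lease_failed(f):
            if f.check(IndexError):
                return          # known-harmless error from older servers
            # unknown remote or local failure: log loudly (log.WEIRD
            # triggers an Incident) but never let it leak into the
            # checker's view of the server's shares
            log.msg("add_lease failed", failure=f, level=log.WEIRD)
        d_lease.addErrback(_add_lease_failed)

        # checker results are derived from the DYHB answer alone
        return rref.callRemote("get_buckets", storage_index)
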
2331[test_cli: verify fix for "tahoe get" not creating empty file on error (#121)
2332Brian Warner <warner@lothar.com>**20091227235444
2333 Ignore-this: 6444d52413b68eb7c11bc3dfdc69c55f
2334]
2335[addendum to "Fix 'tahoe ls' on files (#771)"
2336Brian Warner <warner@lothar.com>**20091227232149
2337 Ignore-this: 6dd5e25f8072a3153ba200b7fdd49491
2338 
2339 tahoe_ls.py: tolerate missing metadata
2340 web/filenode.py: minor cleanups
2341 test_cli.py: test 'tahoe ls FILECAP'
2342]
2343[Fix 'tahoe ls' on files (#771). Patch adapted from Kevan Carstensen.
2344Brian Warner <warner@lothar.com>**20091227225443
2345 Ignore-this: 8bf8c7b1cd14ea4b0ebd453434f4fe07
2346 
2347 web/filenode.py: also serve edge metadata when using t=json on a
2348                  DIRCAP/childname object.
2349 tahoe_ls.py: list file objects as if we were listing one-entry directories.
2350              Show edge metadata if we have it, which will be true when doing
2351              'tahoe ls DIRCAP/filename' and false when doing 'tahoe ls
2352              FILECAP'
2353]
2354[tahoe_get: don't create the output file on error. Closes #121.
2355Brian Warner <warner@lothar.com>**20091227220404
2356 Ignore-this: 58d5e793a77ec6e87d9394ade074b926
2357]
2358[webapi: don't accept zero-length childnames during traversal. Closes #358, #676.
2359Brian Warner <warner@lothar.com>**20091227201043
2360 Ignore-this: a9119dec89e1c7741f2289b0cad6497b
2361 
2362 This forbids operations that would implicitly create a directory with a
2363 zero-length (empty string) name, like what you'd get if you did "tahoe put
2364 local /oops/blah" (#358) or "POST /uri/CAP//?t=mkdir" (#676). The error
2365 message is fairly friendly too.
2366 
2367 Also added code to "tahoe put" to catch this error beforehand and suggest the
2368 correct syntax (i.e. without the leading slash).
2369]
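
A minimal sketch of the traversal guard, assuming a hypothetical helper name; WebError and http.BAD_REQUEST are the webapi's usual error machinery:

    from twisted.web import http
    from allmydata.web.common import WebError

    def check_segments(path_segments):      # hypothetical helper
        # a zero-length segment means the URL contained "//", or a
        # client asked to create a child named ""; reject it before any
        # directory gets implicitly created
        for name in path_segments:
            if name == "":
                raise WebError("The webapi does not allow empty pathname "
                               "components", http.BAD_REQUEST)
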
2370[CLI: send 'Accept:' header to ask for text/plain tracebacks. Closes #646.
2371Brian Warner <warner@lothar.com>**20091227195828
2372 Ignore-this: 44c258d4d4c7dac0ed58adb22f73331
2373 
2374 The webapi has been looking for an Accept header since 1.4.0, but it treats a
2375 missing header as equal to */* (to honor RFC2616). This change finally
2376 modifies our CLI tools to ask for "text/plain, application/octet-stream",
2377 which seems roughly correct (we either want a plain-text traceback or error
2378 message, or an uninterpreted chunk of binary data to save to disk). Some day
2379 we'll figure out how JSON fits into this scheme.
2380]
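
A sketch of the header the CLI now sends, using stdlib httplib for illustration (the CLI's real helper lives in allmydata.scripts.common_http):

    import httplib

    def webapi_get(host, port, path):
        # ask for a plain-text traceback or raw bytes, never an HTML
        # error page
        conn = httplib.HTTPConnection(host, port)
        conn.request("GET", path,
                     headers={"Accept": "text/plain, application/octet-stream"})
        resp = conn.getresponse()
        return resp.status, resp.read()
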
2381[Makefile: upload-tarballs: switch from xfer-client to flappclient, closes #350
2382Brian Warner <warner@lothar.com>**20091227163703
2383 Ignore-this: 3beeecdf2ad9c2438ab57f0e33dcb357
2384 
2385 I've also set up a new flappserver on source@allmydata.org to receive the
2386 tarballs. We still need to replace the gutsy buildslave (which is where the
2387 tarballs used to be generated+uploaded) and give it the new FURL.
2388]
2389[misc/ringsim.py: make it deterministic, more detail about grid-is-full behavior
2390Brian Warner <warner@lothar.com>**20091227024832
2391 Ignore-this: a691cc763fb2e98a4ce1767c36e8e73f
2392]
2393[misc/ringsim.py: tool to discuss #302
2394Brian Warner <warner@lothar.com>**20091226060339
2395 Ignore-this: fc171369b8f0d97afeeb8213e29d10ed
2396]
2397[docs: fix helper.txt to describe new config style
2398zooko@zooko.com**20091224223522
2399 Ignore-this: 102e7692dc414a4b466307f7d78601fe
2400]
2401[docs/stats.txt: add TOC, notes about controlling gatherer's listening port
2402Brian Warner <warner@lothar.com>**20091224202133
2403 Ignore-this: 8eef63b0e18db5aa8249c2eafde02c05
2404 
2405 Thanks to Jody Harris for the suggestions.
2406]
2407[Add docs/stats.py, explaining Tahoe stats, the gatherer, and the munin plugins.
2408Brian Warner <warner@lothar.com>**20091223052400
2409 Ignore-this: 7c9eeb6e5644eceda98b59a67730ccd5
2410]
2411[more #859: avoid deprecation warning for unit tests too, hush pyflakes
2412Brian Warner <warner@lothar.com>**20091215000147
2413 Ignore-this: 193622e24d31077da825a11ed2325fd3
2414 
2415 * factor maybe-import-sha logic into util.hashutil
2416]
2417[use hashlib module if available, thus avoiding a DeprecationWarning for importing the old sha module; fixes #859
2418zooko@zooko.com**20091214212703
2419 Ignore-this: 8d0f230a4bf8581dbc1b07389d76029c
2420]
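
The maybe-import logic amounts to a try/except fallback; a minimal sketch:

    try:
        import hashlib
        sha1 = hashlib.sha1
    except ImportError:
        # Python 2.4 has no hashlib, and importing 'sha' there does not
        # emit a DeprecationWarning
        import sha
        sha1 = sha.new

    digest = sha1("some bytes").hexdigest()
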
2421[docs: reflow architecture.txt to 78-char lines
2422zooko@zooko.com**20091208232943
2423 Ignore-this: 88f55166415f15192e39407815141f77
2424]
2425[docs: update the about.html a little
2426zooko@zooko.com**20091208212737
2427 Ignore-this: 3fe2d9653c6de0727d3e82bd70f2a8ed
2428]
2429[docs: remove obsolete doc file "codemap.txt"
2430zooko@zooko.com**20091113163033
2431 Ignore-this: 16bc21a1835546e71d1b344c06c61ebb
2432 I started to update this to reflect the current codebase, but then I thought (a) nobody seemed to notice that it hasn't been updated since December 2007, and (b) it will just bit-rot again, so I'm removing it.
2433]
2434[mutable/retrieve.py: stop reaching into private MutableFileNode attributes
2435Brian Warner <warner@lothar.com>**20091208172921
2436 Ignore-this: 61e548798c1105aed66a792bf26ceef7
2437]
2438[mutable/servermap.py: stop reaching into private MutableFileNode attributes
2439Brian Warner <warner@lothar.com>**20091208172608
2440 Ignore-this: b40a6b62f623f9285ad96fda139c2ef2
2441]
2442[mutable/servermap.py: oops, query N+e servers in MODE_WRITE, not k+e
2443Brian Warner <warner@lothar.com>**20091208171156
2444 Ignore-this: 3497f4ab70dae906759007c3cfa43bc
2445 
2446 Under normal conditions this wouldn't cause any problems, but if the shares
2447 are really sparse (perhaps because new servers were added), then
2448 file-modify operations might stop looking too early and leave old shares in place.
2449]
2450[control.py: fix speedtest: use download_best_version (not read) on mutable nodes
2451Brian Warner <warner@lothar.com>**20091207060512
2452 Ignore-this: 7125eabfe74837e05f9291dd6414f917
2453]
2454[FTP-and-SFTP.txt: fix ssh-keygen pointer
2455Brian Warner <warner@lothar.com>**20091207052803
2456 Ignore-this: bc2a70ee8c58ec314e79c1262ccb22f7
2457]
2458[setup: ignore _darcs in the "test-clean" test and make the "clean" step remove all .egg files in the root dir
2459zooko@zooko.com**20091206184835
2460 Ignore-this: 6066bd160f0db36d7bf60aba405558d2
2461]
2462[remove MutableFileNode.download(), prefer download_best_version() instead
2463Brian Warner <warner@lothar.com>**20091201225438
2464 Ignore-this: 5733eb373a902063e09fd52cc858dec0
2465]
2466[Simplify immutable download API: use just filenode.read(consumer, offset, size)
2467Brian Warner <warner@lothar.com>**20091201225330
2468 Ignore-this: bdedfb488ac23738bf52ae6d4ab3a3fb
2469 
2470 * remove Downloader.download_to_data/download_to_filename/download_to_filehandle
2471 * remove download.Data/FileName/FileHandle targets
2472 * remove filenode.download/download_to_data/download_to_filename methods
2473 * leave Downloader.download (the whole Downloader will go away eventually)
2474 * add util.consumer.MemoryConsumer/download_to_data, for convenience
2475   (this is mostly used by unit tests, but it gets used by enough non-test
2476    code to warrant putting it in allmydata.util)
2477 * update tests
2478 * removes about 180 lines of code. Yay negative code days!
2479 
2480 Overall plan is to rewrite immutable/download.py and leave filenode.read() as
2481 the sole read-side API.
2482]
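
A sketch of the surviving read-side API, assuming 'filenode' is an immutable filenode:

    from allmydata.util.consumer import MemoryConsumer, download_to_data

    d = download_to_data(filenode)   # whole file, via a MemoryConsumer

    # for a byte range, drive filenode.read() directly with any IConsumer:
    mc = MemoryConsumer()
    d2 = filenode.read(mc, offset=0, size=1000)
    d2.addCallback(lambda ign: "".join(mc.chunks))
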
2483[server.py: undo my bogus 'correction' of David-Sarah's comment fix
2484Brian Warner <warner@lothar.com>**20091201024607
2485 Ignore-this: ff4bb58f6a9e045b900ac3a89d6f506a
2486 
2487 and move it to a better line
2488]
2489[Implement more coherent behavior when copying with dircaps/filecaps (closes #761). Patch by Kevan Carstensen.
2490"Brian Warner <warner@lothar.com>"**20091130211009]
2491[storage.py: update comment
2492"Brian Warner <warner@lothar.com>"**20091130195913]
2493[storage server: detect disk space usage on Windows too (fixes #637)
2494david-sarah@jacaranda.org**20091121055644
2495 Ignore-this: 20fb30498174ce997befac7701fab056
2496]
2497[make status of finished operations consistently "Finished"
2498david-sarah@jacaranda.org**20091121061543
2499 Ignore-this: 97d483e8536ccfc2934549ceff7055a3
2500]
2501[NEWS: update with all user-visible changes since the last release
2502Brian Warner <warner@lothar.com>**20091127224217
2503 Ignore-this: 741da6cd928e939fb6d21a61ea3daf0b
2504]
2505[update "tahoe backup" docs, and webapi.txt's mkdir-with-children
2506Brian Warner <warner@lothar.com>**20091127055900
2507 Ignore-this: defac1fb9a2335b0af3ef9dbbcc67b7e
2508]
2509[Add dirnodes to backupdb and "tahoe backup", closes #606.
2510Brian Warner <warner@lothar.com>**20091126234257
2511 Ignore-this: fa88796fcad1763c6a2bf81f56103223
2512 
2513 * backups now share dirnodes with any previous backup, in any location,
2514   so renames and moves are handled very efficiently
2515 * "tahoe backup" no longer bothers reading the previous snapshot
2516 * if you switch grids, you should delete ~/.tahoe/private/backupdb.sqlite,
2517   to force new uploads of all files and directories
2518]
2519[webapi: fix t=check for DIR2-LIT (i.e. empty immutable directories)
2520Brian Warner <warner@lothar.com>**20091126232731
2521 Ignore-this: 8513c890525c69c1eca0e80d53a231f8
2522]
2523[PipelineError: fix str() on python2.4 . Closes #842.
2524Brian Warner <warner@lothar.com>**20091124212512
2525 Ignore-this: e62c92ea9ede2ab7d11fe63f43b9c942
2526]
2527[test_uri.py: s/NewDirnode/Dirnode/ , now that they aren't "new" anymore
2528Brian Warner <warner@lothar.com>**20091120075553
2529 Ignore-this: 61c8ef5e45a9d966873a610d8349b830
2530]
2531[interface name cleanups: IFileNode, IImmutableFileNode, IMutableFileNode
2532Brian Warner <warner@lothar.com>**20091120075255
2533 Ignore-this: e3d193c229e2463e1d0b0c92306de27f
2534 
2535 The proper hierarchy is:
2536  IFilesystemNode
2537  +IFileNode
2538  ++IMutableFileNode
2539  ++IImmutableFileNode
2540  +IDirectoryNode
2541 
2542 Also expand test_client.py (NodeMaker) to hit all IFilesystemNode types.
2543]
2544[class name cleanups: s/FileNode/ImmutableFileNode/
2545Brian Warner <warner@lothar.com>**20091120072239
2546 Ignore-this: 4b3218f2d0e585c62827e14ad8ed8ac1
2547 
2548 also fix test/bench_dirnode.py for recent dirnode changes
2549]
2550[Use DIR-IMM and t=mkdir-immutable for "tahoe backup", for #828
2551Brian Warner <warner@lothar.com>**20091118192813
2552 Ignore-this: a4720529c9bc6bc8b22a3d3265925491
2553]
2554[web/directory.py: use "DIR-IMM" to describe immutable directories, not DIR-RO
2555Brian Warner <warner@lothar.com>**20091118191832
2556 Ignore-this: aceafd6ab4bf1cc0c2a719ef7319ac03
2557]
2558[web/info.py: hush pyflakes
2559Brian Warner <warner@lothar.com>**20091118191736
2560 Ignore-this: edc5f128a2b8095fb20686a75747c8
2561]
2562[make get_size/get_current_size consistent for all IFilesystemNode classes
2563Brian Warner <warner@lothar.com>**20091118191624
2564 Ignore-this: bd3449cf96e4827abaaf962672c1665a
2565 
2566 * stop caching most_recent_size in dirnode, rely upon backing filenode for it
2567 * start caching most_recent_size in MutableFileNode
2568 * return None when you don't know, not "?"
2569 * only render None as "?" in the web "more info" page
2570 * add get_size/get_current_size to UnknownNode
2571]
2572[ImmutableDirectoryURIVerifier: fix verifycap handling
2573Brian Warner <warner@lothar.com>**20091118164238
2574 Ignore-this: 6bba5c717b54352262eabca6e805d590
2575]
2576[Add t=mkdir-immutable to the webapi. Closes #607.
2577Brian Warner <warner@lothar.com>**20091118070900
2578 Ignore-this: 311e5fab9a5f28b9e8a28d3d08f3c0d
2579 
2580 * change t=mkdir-with-children to not use multipart/form encoding. Instead,
2581   the request body is all JSON. t=mkdir-immutable uses this format too.
2582 * make nodemaker.create_immutable_dirnode() get convergence from SecretHolder,
2583   but let callers override it
2584 * raise NotDeepImmutableError instead of using assert()
2585 * add mutable= argument to DirectoryNode.create_subdirectory(), default True
2586]
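
A hedged sketch of the new webapi call; the cap is a placeholder and the node is assumed to listen on the default webport:

    import urllib2, simplejson

    children = {
        u"file1.txt": ["filenode", {"ro_uri": "URI:CHK:...",  # placeholder
                                    "metadata": {}}],
    }
    req = urllib2.Request("http://127.0.0.1:3456/uri?t=mkdir-immutable",
                          data=simplejson.dumps(children))  # body is pure JSON
    newdircap = urllib2.urlopen(req).read()   # response: the new dircap
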
2587[move convergence secret into SecretHolder, next to lease secret
2588Brian Warner <warner@lothar.com>**20091118015444
2589 Ignore-this: 312f85978a339f2d04deb5bcb8f511bc
2590]
2591[nodemaker: implement immutable directories (internal interface), for #607
2592Brian Warner <warner@lothar.com>**20091112002233
2593 Ignore-this: d09fccf41813fdf7e0db177ed9e5e130
2594 
2595 * nodemaker.create_from_cap() now handles DIR2-CHK and DIR2-LIT
2596 * client.create_immutable_dirnode() is used to create them
2597 * no webapi yet
2598]
2599[stop using IURI()/etc as an adapter
2600Brian Warner <warner@lothar.com>**20091111224542
2601 Ignore-this: 9611da7ea6a4696de2a3b8c08776e6e0
2602]
2603[clean up uri-vs-cap terminology, emphasize cap instances instead of URI strings
2604Brian Warner <warner@lothar.com>**20091111222619
2605 Ignore-this: 93626385f6e7f039ada71f54feefe267
2606 
2607  * "cap" means a python instance which encapsulates a filecap/dircap (uri.py)
2608  * "uri" means a string with a "URI:" prefix
2609  * FileNode instances are created with (and retain) a cap instance, and
2610    generate uri strings on demand
2611  * .get_cap/get_readcap/get_verifycap/get_repaircap return cap instances
2612  * .get_uri/get_readonly_uri return uri strings
2613 
2614 * add filenode.download_to_filename() for control.py, should find a better way
2615 * use MutableFileNode.init_from_cap, not .init_from_uri
2616 * directory URI instances: use get_filenode_cap, not get_filenode_uri
2617 * update/cleanup bench_dirnode.py to match, add Makefile target to run it
2618]
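
A small sketch of the convention, assuming the allmydata.uri parser:

    from allmydata import uri

    cap = uri.from_string("URI:CHK:...")      # placeholder uri string
    # 'cap' is a python instance (a uri.py class); filenodes retain the
    # instance and render strings only on demand:
    #   node.get_cap() -> cap instance
    #   node.get_uri() -> "URI:..." string
    assert cap.to_string().startswith("URI:")
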
2619[add parser for immutable directory caps: DIR2-CHK, DIR2-LIT, DIR2-CHK-Verifier
2620Brian Warner <warner@lothar.com>**20091104181351
2621 Ignore-this: 854398cc7a75bada57fa97c367b67518
2622]
2623[wui: s/TahoeLAFS/Tahoe-LAFS/
2624zooko@zooko.com**20091029035050
2625 Ignore-this: 901e64cd862e492ed3132bd298583c26
2626]
2627[tests: bump up the timeout on test_repairer to see if 120 seconds was too short for François's ARM box to do the test even when it was doing it right.
2628zooko@zooko.com**20091027224800
2629 Ignore-this: 95e93dc2e018b9948253c2045d506f56
2630]
2631[dirnode.pack_children(): add deep_immutable= argument
2632Brian Warner <warner@lothar.com>**20091026162809
2633 Ignore-this: d5a2371e47662c4bc6eff273e8181b00
2634 
2635 This will be used by DIR2:CHK to enforce the deep-immutability requirement.
2636]
2637[webapi: use t=mkdir-with-children instead of a children= arg to t=mkdir .
2638Brian Warner <warner@lothar.com>**20091026011321
2639 Ignore-this: 769cab30b6ab50db95000b6c5a524916
2640 
2641 This is safer: in the earlier API, an old webapi server would silently ignore
2642 the initial children, and clients trying to set them would have to fetch the
2643 newly-created directory to discover the incompatibility. In the new API,
2644 clients using t=mkdir-with-children against an old webapi server will get a
2645 clear error.
2646]
2647[nodemaker.create_new_mutable_directory: pack_children() in initial_contents=
2648Brian Warner <warner@lothar.com>**20091020005118
2649 Ignore-this: bd43c4eefe06fd32b7492bcb0a55d07e
2650 instead of creating an empty file and then adding the children later.
2651 
2652 This should speed up mkdir(initial_children) considerably, removing two
2653 roundtrips and an entire read-modify-write cycle, probably bringing it down
2654 to a single roundtrip. A quick test (against the volunteergrid) suggests a
2655 30% speedup.
2656 
2657 test_dirnode: add new tests to enforce the restrictions that interfaces.py
2658 claims for create_new_mutable_directory(): no UnknownNodes, metadata dicts
2659]
2660[test_dirnode.py: add tests of initial_children= args to client.create_dirnode
2661Brian Warner <warner@lothar.com>**20091017194159
2662 Ignore-this: 2e2da28323a4d5d815466387914abc1b
2663 and nodemaker.create_new_mutable_directory
2664]
2665[update many dirnode interfaces to accept dict-of-nodes instead of dict-of-caps
2666Brian Warner <warner@lothar.com>**20091017192829
2667 Ignore-this: b35472285143862a856bf4b361d692f0
2668 
2669 interfaces.py: define INodeMaker, document argument values, change
2670                create_new_mutable_directory() to take dict-of-nodes. Change
2671                dirnode.set_nodes() and dirnode.create_subdirectory() too.
2672 nodemaker.py: use INodeMaker, update create_new_mutable_directory()
2673 client.py: have create_dirnode() delegate initial_children= to nodemaker
2674 dirnode.py (Adder): take dict-of-nodes instead of list-of-nodes, which
2675                     updates set_nodes() and create_subdirectory()
2676 web/common.py (convert_initial_children_json): create dict-of-nodes
2677 web/directory.py: same
2678 web/unlinked.py: same
2679 test_dirnode.py: update tests to match
2680]
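
A sketch of the dict-of-nodes calling convention, with illustrative child names:

    # entries map unicode childname -> (IFilesystemNode, metadata dict)
    entries = {u"report.txt": (childnode, {})}
    d = dirnode.set_nodes(entries)
    # set_nodes() fires with the dirnode itself, so calls chain neatly:
    d.addCallback(lambda the_dirnode:
                  the_dirnode.create_subdirectory(u"archive"))
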
2681[dirnode.py: move pack_children() out to a function, for eventual use by others
2682Brian Warner <warner@lothar.com>**20091017180707
2683 Ignore-this: 6a823fb61f2c180fd38d6742d3196a7a
2684]
2685[move dirnode.CachingDict to dictutil.AuxValueDict, generalize method names,
2686Brian Warner <warner@lothar.com>**20091017180005
2687 Ignore-this: b086933cf429df0fcea16a308d2640dd
2688 improve tests. Let dirnode _pack_children accept either dict or AuxValueDict.
2689]
2690[test/common.py: update FakeMutableFileNode to new contents= callable scheme
2691Brian Warner <warner@lothar.com>**20091013052154
2692 Ignore-this: 62f00a76454a2190d1c8641c5993632f
2693]
2694[The initial_children= argument to nodemaker.create_new_mutable_directory is
2695Brian Warner <warner@lothar.com>**20091013031922
2696 Ignore-this: 72e45317c21f9eb9ec3bd79bd4311f48
2697 now enabled.
2698]
2699[client.create_mutable_file(contents=) now accepts a callable, which is
2700Brian Warner <warner@lothar.com>**20091013031232
2701 Ignore-this: 3c89d2f50c1e652b83f20bd3f4f27c4b
2702 invoked with the new MutableFileNode and is supposed to return the initial
2703 contents. This can be used by e.g. a new dirnode which needs the filenode's
2704 writekey to encrypt its initial children.
2705 
2706 create_mutable_file() still accepts a bytestring too, or None for an empty
2707 file.
2708]
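
A sketch of the callable-contents scheme; pack_children() here stands in for whatever the caller uses to build the initial bytes:

    def _make_contents(node):
        # invoked with the brand-new MutableFileNode before it has any
        # contents; a new dirnode can encrypt its initial children with
        # the filenode's writekey
        return pack_children(node.get_writekey())   # hypothetical helper

    d = client.create_mutable_file(_make_contents)
    d2 = client.create_mutable_file("a plain bytestring still works")
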
2709[webapi: t=mkdir now accepts initial children, using the same JSON that t=json
2710Brian Warner <warner@lothar.com>**20091013023444
2711 Ignore-this: 574a46ed46af4251abf8c9580fd31ef7
2712 emits.
2713 
2714 client.create_dirnode(initial_children=) now works.
2715]
2716[replace dirnode.create_empty_directory() with create_subdirectory(), which
2717Brian Warner <warner@lothar.com>**20091013021520
2718 Ignore-this: 6b57cb51bcfcc6058d0df569fdc8a9cf
2719 takes an initial_children= argument
2720]
2721[dirnode.set_children: change return value: fire with self instead of None
2722Brian Warner <warner@lothar.com>**20091013015026
2723 Ignore-this: f1d14e67e084e4b2a4e25fa849b0e753
2724]
2725[dirnode.set_nodes: change return value: fire with self instead of None
2726Brian Warner <warner@lothar.com>**20091013014546
2727 Ignore-this: b75b3829fb53f7399693f1c1a39aacae
2728]
2729[dirnode.set_children: take a dict, not a list
2730Brian Warner <warner@lothar.com>**20091013002440
2731 Ignore-this: 540ce72ce2727ee053afaae1ff124e21
2732]
2733[dirnode.set_uri/set_children: change signature to take writecap+readcap
2734Brian Warner <warner@lothar.com>**20091012235126
2735 Ignore-this: 5df617b2d379a51c79148a857e6026b1
2736 instead of a single cap. The webapi t=set_children call benefits too.
2737]
2738[replace Client.create_empty_dirnode() with create_dirnode(), in anticipation
2739Brian Warner <warner@lothar.com>**20091012224506
2740 Ignore-this: cbdaa4266ecb3c6496ffceab4f95709d
2741 of adding initial_children= argument.
2742 
2743 Includes stubbed-out initial_children= support.
2744]
2745[test_web.py: use a less-fake client, making test harness smaller
2746Brian Warner <warner@lothar.com>**20091012222808
2747 Ignore-this: 29e95147f8c94282885c65b411d100bb
2748]
2749[webapi.txt: document t=set_children, other small edits
2750Brian Warner <warner@lothar.com>**20091009200446
2751 Ignore-this: 4d7e76b04a7b8eaa0a981879f778ea5d
2752]
2753[Verifier: check the full cryptext-hash tree on each share. Removed .todos
2754Brian Warner <warner@lothar.com>**20091005221849
2755 Ignore-this: 6fb039c5584812017d91725e687323a5
2756 from the last few test_repairer tests that were waiting on this.
2757]
2758[Verifier: check the full block-hash-tree on each share
2759Brian Warner <warner@lothar.com>**20091005214844
2760 Ignore-this: 3f7ccf6d253f32340f1bf1da27803eee
2761 
2762 Removed the .todo from two test_repairer tests that check this. The only
2763 remaining .todos are on the three crypttext-hash-tree tests.
2764]
2765[Verifier: check the full share-hash chain on each share
2766Brian Warner <warner@lothar.com>**20091005213443
2767 Ignore-this: 3d30111904158bec06a4eac22fd39d17
2768 
2769 Removed the .todo from two test_repairer tests that check this.
2770]
2771[test_repairer: rename Verifier test cases to be more precise and less verbose
2772Brian Warner <warner@lothar.com>**20091005201115
2773 Ignore-this: 64be7094e33338c7c2aea9387e138771
2774]
2775[immutable/checker.py: rearrange code a little bit, make it easier to follow
2776Brian Warner <warner@lothar.com>**20091005200252
2777 Ignore-this: 91cc303fab66faf717433a709f785fb5
2778]
2779[test/common.py: wrap docstrings to 80cols so I can read them more easily
2780Brian Warner <warner@lothar.com>**20091005200143
2781 Ignore-this: b180a3a0235cbe309c87bd5e873cbbb3
2782]
2783[immutable/download.py: wrap to 80cols, no functional changes
2784Brian Warner <warner@lothar.com>**20091005192542
2785 Ignore-this: 6b05fe3dc6d78832323e708b9e6a1fe
2786]
2787[CHK-hashes.svg: cross out plaintext hashes, since we don't include
2788Brian Warner <warner@lothar.com>**20091005010803
2789 Ignore-this: bea2e953b65ec7359363aa20de8cb603
2790 them (until we finish #453)
2791]
2792[docs: a few licensing clarifications requested by Ubuntu
2793zooko@zooko.com**20090927033226
2794 Ignore-this: 749fc8c9aeb6dc643669854a3e81baa7
2795]
2796[setup: remove binary WinFUSE modules
2797zooko@zooko.com**20090924211436
2798 Ignore-this: 8aefc571d2ae22b9405fc650f2c2062
2799 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process
2800 generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu
2801 Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations
2802 under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
2803 In this case, the binaries are from the tahoe-w32-client project: http://allmydata.org/trac/tahoe-w32-client , from which you can also get the source.
2804]
2805[setup: remove binary _fusemodule.so 's
2806zooko@zooko.com**20090924211130
2807 Ignore-this: 74487bbe27d280762ac5dd5f51e24186
2808 I would prefer to have just source code, or indications of what 3rd-party packages are required, under revision control, and have the build process generate or acquire the binaries as needed.  Also, having these in our release tarballs is interfering with getting Tahoe-LAFS uploaded into Ubuntu Karmic.  (Technically, they would accept binary modules as long as they came with the accompanying source so that they could satisfy their obligations under GPL2+ and TGPPL1+, but it is easier for now to remove the binaries from the source tree.)
2809 In this case, these modules come from the MacFUSE project: http://code.google.com/p/macfuse/
2810]
2811[doc: add a copy of LGPL2 for documentation purposes for ubuntu
2812zooko@zooko.com**20090924054218
2813 Ignore-this: 6a073b48678a7c84dc4fbcef9292ab5b
2814]
2815[setup: remove a convenience copy of figleaf, to ease inclusion into Ubuntu Karmic Koala
2816zooko@zooko.com**20090924053215
2817 Ignore-this: a0b0c990d6e2ee65c53a24391365ac8d
2818 We need to carefully document the licence of figleaf in order to get Tahoe-LAFS into Ubuntu Karmic Koala.  However, figleaf isn't really a part of Tahoe-LAFS per se -- this is just a "convenience copy" of a development tool.  The quickest way to make Tahoe-LAFS acceptable for Karmic then, is to remove figleaf from the Tahoe-LAFS tarball itself.  People who want to run figleaf on Tahoe-LAFS (as everyone should want) can install figleaf themselves.  I haven't tested this -- there may be incompatibilities between upstream figleaf and the copy that we had here...
2819]
2820[setup: shebang for misc/build-deb.py to fail quickly
2821zooko@zooko.com**20090819135626
2822 Ignore-this: 5a1b893234d2d0bb7b7346e84b0a6b4d
2823 Without this patch, when I ran "chmod +x ./misc/build-deb.py && ./misc/build-deb.py", it hung indefinitely.  (I wonder what it was doing.)
2824]
2825[docs: Shawn Willden grants permission for his contributions under GPL2+|TGPPL1+
2826zooko@zooko.com**20090921164651
2827 Ignore-this: ef1912010d07ff2ffd9678e7abfd0d57
2828]
2829[docs: Csaba Henk granted permission to license fuse.py under the same terms as Tahoe-LAFS itself
2830zooko@zooko.com**20090921154659
2831 Ignore-this: c61ba48dcb7206a89a57ca18a0450c53
2832]
2833[setup: mark setup.py as having utf-8 encoding in it
2834zooko@zooko.com**20090920180343
2835 Ignore-this: 9d3850733700a44ba7291e9c5e36bb91
2836]
2837[doc: licensing cleanups
2838zooko@zooko.com**20090920171631
2839 Ignore-this: 7654f2854bf3c13e6f4d4597633a6630
2840 Use nice utf-8 © instead of "(c)". Remove licensing statements on utility modules that have been assigned to allmydata.com by their original authors. (Nattraverso was not assigned to allmydata.com -- it was LGPL'ed -- but I checked and src/allmydata/util/iputil.py was completely rewritten and doesn't contain any line of code from nattraverso.)  Add notes to misc/debian/copyright about licensing on files that aren't just allmydata.com-licensed.
2841]
2842[build-deb.py: run darcsver early, otherwise we get the wrong version later on
2843Brian Warner <warner@lothar.com>**20090918033620
2844 Ignore-this: 6635c5b85e84f8aed0d8390490c5392a
2845]
2846[new approach for debian packaging, sharing pieces across distributions. Still experimental, still only works for sid.
2847warner@lothar.com**20090818190527
2848 Ignore-this: a75eb63db9106b3269badbfcdd7f5ce1
2849]
2850[new experimental deb-packaging rules. Only works for sid so far.
2851Brian Warner <warner@lothar.com>**20090818014052
2852 Ignore-this: 3a26ad188668098f8f3cc10a7c0c2f27
2853]
2854[setup.py: read _version.py and pass to setup(version=), so more commands work
2855Brian Warner <warner@lothar.com>**20090818010057
2856 Ignore-this: b290eb50216938e19f72db211f82147e
2857 like "setup.py --version" and "setup.py --fullname"
2858]
2859[test/check_speed.py: fix shebang line
2860Brian Warner <warner@lothar.com>**20090818005948
2861 Ignore-this: 7f3a37caf349c4c4de704d0feb561f8d
2862]
2863[setup: remove bundled version of darcsver-1.2.1
2864zooko@zooko.com**20090816233432
2865 Ignore-this: 5357f26d2803db2d39159125dddb963a
2866 That version of darcsver emits a scary error message when the darcs executable or the _darcs subdirectory is not found.
2867 This error is hidden (unless the --loud option is passed) in darcsver >= 1.3.1.
2868 Fixes #788.
2869]
2870[de-Service-ify Helper, pass in storage_broker and secret_holder directly.
2871Brian Warner <warner@lothar.com>**20090815201737
2872 Ignore-this: 86b8ac0f90f77a1036cd604dd1304d8b
2873 This makes it more obvious that the Helper currently generates leases with
2874 the Helper's own secrets, rather than getting values from the client, which
2875 is arguably a bug that will likely be resolved with the Accounting project.
2876]
2877[immutable.Downloader: pass StorageBroker to constructor, stop being a Service
2878Brian Warner <warner@lothar.com>**20090815192543
2879 Ignore-this: af5ab12dbf75377640a670c689838479
2880 child of the client, access with client.downloader instead of
2881 client.getServiceNamed("downloader"). The single "Downloader" instance is
2882 scheduled for demolition anyways, to be replaced by individual
2883 filenode.download calls.
2884]
2885[tests: double the timeout on test_runner.RunNode.test_introducer since feisty hit a timeout
2886zooko@zooko.com**20090815160512
2887 Ignore-this: ca7358bce4bdabe8eea75dedc39c0e67
2888 I'm not sure if this is an actual timing issue (feisty is running on an overloaded VM if I recall correctly), or if there is a deeper bug.
2889]
2890[stop making History be a Service, it wasn't necessary
2891Brian Warner <warner@lothar.com>**20090815114415
2892 Ignore-this: b60449231557f1934a751c7effa93cfe
2893]
2894[Overhaul IFilesystemNode handling, to simplify tests and use POLA internally.
2895Brian Warner <warner@lothar.com>**20090815112846
2896 Ignore-this: 1db1b9c149a60a310228aba04c5c8e5f
2897 
2898 * stop using IURI as an adapter
2899 * pass cap strings around instead of URI instances
2900 * move filenode/dirnode creation duties from Client to new NodeMaker class
2901 * move other Client duties to KeyGenerator, SecretHolder, History classes
2902 * stop passing Client reference to dirnode/filenode constructors
2903   - pass less-powerful references instead, like StorageBroker or Uploader
2904 * always create DirectoryNodes by wrapping a filenode (mutable for now)
2905 * remove some specialized mock classes from unit tests
2906 
2907 Detailed list of changes (done one at a time, then merged together)
2908 
2909 always pass a string to create_node_from_uri(), not an IURI instance
2910 always pass a string to IFilesystemNode constructors, not an IURI instance
2911 stop using IURI() as an adapter, switch on cap prefix in create_node_from_uri()
2912 client.py: move SecretHolder code out to a separate class
2913 test_web.py: hush pyflakes
2914 client.py: move NodeMaker functionality out into a separate object
2915 LiteralFileNode: stop storing a Client reference
2916 immutable Checker: remove Client reference, it only needs a SecretHolder
2917 immutable Upload: remove Client reference, leave SecretHolder and StorageBroker
2918 immutable Repairer: replace Client reference with StorageBroker and SecretHolder
2919 immutable FileNode: remove Client reference
2920 mutable.Publish: stop passing Client
2921 mutable.ServermapUpdater: get StorageBroker in constructor, not by peeking into Client reference
2922 MutableChecker: reference StorageBroker and History directly, not through Client
2923 mutable.FileNode: removed unused indirection to checker classes
2924 mutable.FileNode: remove Client reference
2925 client.py: move RSA key generation into a separate class, so it can be passed to the nodemaker
2926 move create_mutable_file() into NodeMaker
2927 test_dirnode.py: stop using FakeClient mockups, use NoNetworkGrid instead. This simplifies the code, but takes longer to run (17s instead of 6s). This should come down later when other cleanups make it possible to use simpler (non-RSA) fake mutable files for dirnode tests.
2928 test_mutable.py: clean up basedir names
2929 client.py: move create_empty_dirnode() into NodeMaker
2930 dirnode.py: get rid of DirectoryNode.create
2931 remove DirectoryNode.init_from_uri, refactor NodeMaker for customization, simplify test_web's mock Client to match
2932 stop passing Client to DirectoryNode, make DirectoryNode.create_with_mutablefile the normal DirectoryNode constructor, start removing client from NodeMaker
2933 remove Client from NodeMaker
2934 move helper status into History, pass History to web.Status instead of Client
2935 test_mutable.py: fix minor typo
2936]
2937[docs: edits for docs/running.html from Sam Mason
2938zooko@zooko.com**20090809201416
2939 Ignore-this: 2207e80449943ebd4ed50cea57c43143
2940]
2941[docs: install.html: instruct Debian users to use this document and not to go find the DownloadDebianPackages page, ignore the warning at the top of it, and try it
2942zooko@zooko.com**20090804123840
2943 Ignore-this: 49da654f19d377ffc5a1eff0c820e026
2944 http://allmydata.org/pipermail/tahoe-dev/2009-August/002507.html
2945]
2946[docs: relnotes.txt: reflow to 63 chars wide because google groups and some web forms seem to wrap to that
2947zooko@zooko.com**20090802135016
2948 Ignore-this: 53b1493a0491bc30fb2935fad283caeb
2949]
2950[docs: about.html: fix English usage noticed by Amber
2951zooko@zooko.com**20090802050533
2952 Ignore-this: 89965c4650f9bd100a615c401181a956
2953]
2954[docs: fix mis-spelled word in about.html
2955zooko@zooko.com**20090802050320
2956 Ignore-this: fdfd0397bc7cef9edfde425dddeb67e5
2957]
2958[TAG allmydata-tahoe-1.5.0
2959zooko@zooko.com**20090802031303
2960 Ignore-this: 94e5558e7225c39a86aae666ea00f166
2961]
2962Patch bundle hash:
296397102a87b48059967bad705fa0fe9c405e6d292a