Ticket #1363: 1363-p3.dpatch

File 1363-p3.dpatch, 88.1 KB (added by warner, at 2011-06-15T17:54:40Z)

next batch of refactoring patches

11 patches for repository /Users/warner2/stuff/tahoe/trunk:

Wed Jun 15 10:48:38 PDT 2011  warner@lothar.com
  * apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts

Wed Jun 15 10:49:19 PDT 2011  warner@lothar.com
  * upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass

Wed Jun 15 10:49:38 PDT 2011  warner@lothar.com
  * replace IServer.name() with get_name(), and get_longname()

Wed Jun 15 10:50:11 PDT 2011  warner@lothar.com
  * test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s

Wed Jun 15 10:50:45 PDT 2011  warner@lothar.com
  * remove now-unused ShareManglingMixin

Wed Jun 15 10:51:04 PDT 2011  warner@lothar.com
  * remove get_serverid from DownloadStatus.add_dyhb_sent and customers

Wed Jun 15 10:51:27 PDT 2011  warner@lothar.com
  * remove get_serverid from DownloadStatus.add_request_sent and customers

Wed Jun 15 10:51:57 PDT 2011  warner@lothar.com
  * web/status.py: remove spurious whitespace, no code changes

Wed Jun 15 10:52:22 PDT 2011  warner@lothar.com
  * DownloadStatus.add_known_share wants to be used by Finder, web.status

Wed Jun 15 10:52:45 PDT 2011  warner@lothar.com
  * remove nodeid from WriteBucketProxy classes and customers

Wed Jun 15 10:53:03 PDT 2011  warner@lothar.com
  * remove get_serverid() from ReadBucketProxy and customers, including Checker
  and debug.py dump-share commands

New patches:

[apply zooko's advice: storage_client get_known_servers() returns a frozenset, caller sorts
warner@lothar.com**20110615174838
 Ignore-this: 859d5f8acdeb4b4bb555986fe5ea1301
] {
hunk ./src/allmydata/storage_client.py 127
         return sorted(self.get_connected_servers(), key=_permuted)
 
     def get_all_serverids(self):
-        serverids = set()
-        serverids.update(self.servers.keys())
-        return frozenset(serverids)
+        return frozenset(self.servers.keys())
 
     def get_connected_servers(self):
hunk ./src/allmydata/storage_client.py 130
-        return frozenset([s for s in self.get_known_servers()
-                          if s.get_rref()])
+        return frozenset([s for s in self.servers.values() if s.get_rref()])
 
     def get_known_servers(self):
hunk ./src/allmydata/storage_client.py 133
-        return sorted(self.servers.values(), key=lambda s: s.get_serverid())
+        return frozenset(self.servers.values())
 
     def get_nickname_for_serverid(self, serverid):
         if serverid in self.servers:
hunk ./src/allmydata/web/root.py 254
 
     def data_services(self, ctx, data):
         sb = self.client.get_storage_broker()
-        return sb.get_known_servers()
+        return sorted(sb.get_known_servers(), key=lambda s: s.get_serverid())
 
     def render_service_row(self, ctx, server):
         nodeid = server.get_serverid()
}
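The first patch drops the sorted-list return from get_known_servers() in favor of an unsorted frozenset, and pushes ordering decisions out to callers such as web/root.py. A minimal standalone sketch of that division of labor (hypothetical, simplified classes; not the actual Tahoe-LAFS Server/StorageFarmBroker implementations):

```python
# Sketch of the API shape after the first patch: the broker hands back an
# unsorted frozenset, and each caller that needs an ordering sorts for itself.
# Server and StorageBroker here are illustrative stand-ins only.

class Server:
    def __init__(self, serverid):
        self._serverid = serverid
    def get_serverid(self):
        return self._serverid

class StorageBroker:
    def __init__(self, servers):
        self.servers = {s.get_serverid(): s for s in servers}
    def get_known_servers(self):
        # after the patch: a frozenset, with no ordering promise
        return frozenset(self.servers.values())

broker = StorageBroker([Server(b"v0-bbb"), Server(b"v0-aaa")])
# the caller sorts, as data_services() in web/root.py now does
ordered = sorted(broker.get_known_servers(), key=lambda s: s.get_serverid())
ids = [s.get_serverid() for s in ordered]  # [b"v0-aaa", b"v0-bbb"]
```

Returning an immutable, unordered set makes the broker's contract honest (it has no natural order) and keeps sorting cost at the few call sites that actually display lists.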
[upload.py: apply David-Sarah's advice rename (un)contacted(2) trackers to first_pass/second_pass/next_pass
warner@lothar.com**20110615174919
 Ignore-this: 30ae7ee20c0afc1c73ef43aa861a7be7
] {
hunk ./src/allmydata/immutable/upload.py 176
                          num_segments, total_shares, needed_shares,
                          servers_of_happiness):
         """
-        @return: (upload_trackers, already_servers), where upload_trackers is
-                 a set of ServerTracker instances that have agreed to hold
+        @return: (upload_trackers, already_serverids), where upload_trackers
+                 is a set of ServerTracker instances that have agreed to hold
                  some shares for us (the shareids are stashed inside the
hunk ./src/allmydata/immutable/upload.py 179
-                 ServerTracker), and already_servers is a dict mapping shnum
-                 to a set of serverids which claim to already have the share.
+                 ServerTracker), and already_serverids is a dict mapping
+                 shnum to a set of serverids for servers which claim to
+                 already have the share.
         """
 
         if self._status:
hunk ./src/allmydata/immutable/upload.py 192
         self.needed_shares = needed_shares
 
         self.homeless_shares = set(range(total_shares))
-        self.contacted_trackers = [] # servers worth asking again
-        self.contacted_trackers2 = [] # servers that we have asked again
-        self._started_second_pass = False
         self.use_trackers = set() # ServerTrackers that have shares assigned
                                   # to them
         self.preexisting_shares = {} # shareid => set(serverids) holding shareid
hunk ./src/allmydata/immutable/upload.py 250
                                    renew, cancel)
                 trackers.append(st)
             return trackers
-        self.uncontacted_trackers = _make_trackers(writable_servers)
+
+        # We assign each servers/trackers into one three lists. They all
+        # start in the "first pass" list. During the first pass, as we ask
+        # each one to hold a share, we move their tracker to the "second
+        # pass" list, until the first-pass list is empty. Then during the
+        # second pass, as we ask each to hold more shares, we move their
+        # tracker to the "next pass" list, until the second-pass list is
+        # empty. Then we move everybody from the next-pass list back to the
+        # second-pass list and repeat the "second" pass (really the third,
+        # fourth, etc pass), until all shares are assigned, or we've run out
+        # of potential servers.
+        self.first_pass_trackers = _make_trackers(writable_servers)
+        self.second_pass_trackers = [] # servers worth asking again
+        self.next_pass_trackers = [] # servers that we have asked again
+        self._started_second_pass = False
 
         # We don't try to allocate shares to these servers, since they've
         # said that they're incapable of storing shares of the size that we'd
hunk ./src/allmydata/immutable/upload.py 371
                 shares_to_spread = sum([len(list(sharelist)) - 1
                                         for (server, sharelist)
                                         in shares.items()])
-                if delta <= len(self.uncontacted_trackers) and \
+                if delta <= len(self.first_pass_trackers) and \
                    shares_to_spread >= delta:
                     items = shares.items()
                     while len(self.homeless_shares) < delta:
hunk ./src/allmydata/immutable/upload.py 407
                     self.log(servmsg, level=log.INFREQUENT)
                     return self._failed("%s (%s)" % (failmsg, self._get_progress_message()))
 
-        if self.uncontacted_trackers:
-            tracker = self.uncontacted_trackers.pop(0)
+        if self.first_pass_trackers:
+            tracker = self.first_pass_trackers.pop(0)
             # TODO: don't pre-convert all serverids to ServerTrackers
             assert isinstance(tracker, ServerTracker)
 
hunk ./src/allmydata/immutable/upload.py 423
                                            len(self.homeless_shares)))
             d = tracker.query(shares_to_ask)
             d.addBoth(self._got_response, tracker, shares_to_ask,
-                      self.contacted_trackers)
+                      self.second_pass_trackers)
             return d
hunk ./src/allmydata/immutable/upload.py 425
-        elif self.contacted_trackers:
+        elif self.second_pass_trackers:
             # ask a server that we've already asked.
             if not self._started_second_pass:
                 self.log("starting second pass",
hunk ./src/allmydata/immutable/upload.py 432
                         level=log.NOISY)
                 self._started_second_pass = True
             num_shares = mathutil.div_ceil(len(self.homeless_shares),
-                                           len(self.contacted_trackers))
-            tracker = self.contacted_trackers.pop(0)
+                                           len(self.second_pass_trackers))
+            tracker = self.second_pass_trackers.pop(0)
             shares_to_ask = set(sorted(self.homeless_shares)[:num_shares])
             self.homeless_shares -= shares_to_ask
             self.query_count += 1
hunk ./src/allmydata/immutable/upload.py 444
                                            len(self.homeless_shares)))
             d = tracker.query(shares_to_ask)
             d.addBoth(self._got_response, tracker, shares_to_ask,
-                      self.contacted_trackers2)
+                      self.next_pass_trackers)
             return d
hunk ./src/allmydata/immutable/upload.py 446
-        elif self.contacted_trackers2:
+        elif self.next_pass_trackers:
             # we've finished the second-or-later pass. Move all the remaining
hunk ./src/allmydata/immutable/upload.py 448
-            # servers back into self.contacted_trackers for the next pass.
-            self.contacted_trackers.extend(self.contacted_trackers2)
-            self.contacted_trackers2[:] = []
+            # servers back into self.second_pass_trackers for the next pass.
+            self.second_pass_trackers.extend(self.next_pass_trackers)
+            self.next_pass_trackers[:] = []
             return self._loop()
         else:
             # no more servers. If we haven't placed enough shares, we fail.
hunk ./src/allmydata/immutable/upload.py 485
             self.error_count += 1
             self.bad_query_count += 1
             self.homeless_shares |= shares_to_ask
-            if (self.uncontacted_trackers
-                or self.contacted_trackers
-                or self.contacted_trackers2):
+            if (self.first_pass_trackers
+                or self.second_pass_trackers
+                or self.next_pass_trackers):
                 # there is still hope, so just loop
                 pass
             else:
hunk ./src/allmydata/immutable/upload.py 938
         d.addCallback(_done)
         return d
 
-    def set_shareholders(self, (upload_trackers, already_servers), encoder):
+    def set_shareholders(self, holders, encoder):
         """
         @param upload_trackers: a sequence of ServerTracker objects that
                                 have agreed to hold some shares for us (the
hunk ./src/allmydata/immutable/upload.py 943
                                 shareids are stashed inside the ServerTracker)
-        @paran already_servers: a dict mapping sharenum to a set of serverids
-                                that claim to already have this share
+
+        @paran already_serverids: a dict mapping sharenum to a set of
+                                  serverids for servers that claim to already
+                                  have this share
         """
hunk ./src/allmydata/immutable/upload.py 948
-        msgtempl = "set_shareholders; upload_trackers is %s, already_servers is %s"
+        (upload_trackers, already_serverids) = holders
+        msgtempl = "set_shareholders; upload_trackers is %s, already_serverids is %s"
         values = ([', '.join([str_shareloc(k,v)
                               for k,v in st.buckets.iteritems()])
hunk ./src/allmydata/immutable/upload.py 952
-                   for st in upload_trackers], already_servers)
+                   for st in upload_trackers], already_serverids)
         self.log(msgtempl % values, level=log.OPERATIONAL)
         # record already-present shares in self._results
hunk ./src/allmydata/immutable/upload.py 955
-        self._results.preexisting_shares = len(already_servers)
+        self._results.preexisting_shares = len(already_serverids)
 
         self._server_trackers = {} # k: shnum, v: instance of ServerTracker
         for tracker in upload_trackers:
hunk ./src/allmydata/immutable/upload.py 961
             assert isinstance(tracker, ServerTracker)
         buckets = {}
-        servermap = already_servers.copy()
+        servermap = already_serverids.copy()
         for tracker in upload_trackers:
             buckets.update(tracker.buckets)
             for shnum in tracker.buckets:
}
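The comment added by the upload.py patch describes a multi-pass rotation over the three renamed tracker lists. That rotation can be sketched in isolation like this (a hypothetical, synchronous simplification; the real ServerTrackers are queried asynchronously via Deferreds and are asked for sets of shares, not one share at a time):

```python
# Simplified sketch of the first_pass/second_pass/next_pass rotation described
# in the patch: every tracker is asked once in the first pass, then trackers
# are cycled between the second-pass and next-pass lists until all shares are
# placed or no willing trackers remain. All names here are illustrative.

def assign_shares(trackers, shares, capacity):
    """capacity: dict mapping tracker -> total shares it will accept."""
    first_pass = list(trackers)   # everyone starts here
    second_pass = []              # servers worth asking again
    next_pass = []                # servers that we have asked again
    placed = {t: 0 for t in trackers}
    homeless = list(shares)
    while homeless:
        if first_pass:
            tracker = first_pass.pop(0)
            dest = second_pass
        elif second_pass:
            tracker = second_pass.pop(0)
            dest = next_pass
        elif next_pass:
            # start another "second" pass (really the third, fourth, ...)
            second_pass.extend(next_pass)
            next_pass[:] = []
            continue
        else:
            break  # no more servers; remaining shares stay homeless
        if placed[tracker] < capacity[tracker]:
            placed[tracker] += 1
            homeless.pop(0)
            dest.append(tracker)  # still has room, worth asking again
    return placed, homeless

placed, homeless = assign_shares(["A", "B"], list(range(4)), {"A": 3, "B": 1})
# all four shares placed: A holds 3, B holds 1
```

A tracker that declines (here: one at capacity) is simply not moved to the next list, so the loop terminates once every tracker has either refused or accepted all it can.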
[replace IServer.name() with get_name(), and get_longname()
warner@lothar.com**20110615174938
 Ignore-this: 4c1ddc68e63a7e3fba96c0f52e2edba5
] {
hunk ./src/allmydata/control.py 103
         if not everyone_left:
             return results
         server = everyone_left.pop(0)
-        server_name = server.longname()
+        server_name = server.get_longname()
         connection = server.get_rref()
         start = time.time()
         d = connection.callRemote("get_buckets", "\x00"*16)
hunk ./src/allmydata/immutable/checker.py 506
             cancel_secret = self._get_cancel_secret(lease_seed)
             d2 = rref.callRemote("add_lease", storageindex,
                                  renew_secret, cancel_secret)
-            d2.addErrback(self._add_lease_failed, s.name(), storageindex)
+            d2.addErrback(self._add_lease_failed, s.get_name(), storageindex)
 
         d = rref.callRemote("get_buckets", storageindex)
         def _wrap_results(res):
hunk ./src/allmydata/immutable/downloader/finder.py 90
 
     # internal methods
     def loop(self):
-        pending_s = ",".join([rt.server.name()
+        pending_s = ",".join([rt.server.get_name()
                               for rt in self.pending_requests]) # sort?
         self.log(format="ShareFinder loop: running=%(running)s"
                  " hungry=%(hungry)s, pending=%(pending)s",
hunk ./src/allmydata/immutable/downloader/finder.py 135
     def send_request(self, server):
         req = RequestToken(server)
         self.pending_requests.add(req)
-        lp = self.log(format="sending DYHB to [%(name)s]", name=server.name(),
+        lp = self.log(format="sending DYHB to [%(name)s]", name=server.get_name(),
                       level=log.NOISY, umid="Io7pyg")
         time_sent = now()
         d_ev = self._download_status.add_dyhb_sent(server.get_serverid(),
hunk ./src/allmydata/immutable/downloader/finder.py 171
         d_ev.finished(shnums, time_received)
         dyhb_rtt = time_received - time_sent
         if not buckets:
-            self.log(format="no shares from [%(name)s]", name=server.name(),
+            self.log(format="no shares from [%(name)s]", name=server.get_name(),
                      level=log.NOISY, parent=lp, umid="U7d4JA")
             return
         shnums_s = ",".join([str(shnum) for shnum in shnums])
hunk ./src/allmydata/immutable/downloader/finder.py 176
         self.log(format="got shnums [%(shnums)s] from [%(name)s]",
-                 shnums=shnums_s, name=server.name(),
+                 shnums=shnums_s, name=server.get_name(),
                  level=log.NOISY, parent=lp, umid="0fcEZw")
         shares = []
         for shnum, bucket in buckets.iteritems():
hunk ./src/allmydata/immutable/downloader/finder.py 223
     def _got_error(self, f, server, req, d_ev, lp):
         d_ev.finished("error", now())
         self.log(format="got error from [%(name)s]",
-                 name=server.name(), failure=f,
+                 name=server.get_name(), failure=f,
                  level=log.UNUSUAL, parent=lp, umid="zUKdCw")
 
 
hunk ./src/allmydata/immutable/downloader/share.py 96
         self.had_corruption = False # for unit tests
 
     def __repr__(self):
-        return "Share(sh%d-on-%s)" % (self._shnum, self._server.name())
+        return "Share(sh%d-on-%s)" % (self._shnum, self._server.get_name())
 
     def is_alive(self):
         # XXX: reconsider. If the share sees a single error, should it remain
hunk ./src/allmydata/immutable/downloader/share.py 792
         log.msg(format="error requesting %(start)d+%(length)d"
                 " from %(server)s for si %(si)s",
                 start=start, length=length,
-                server=self._server.name(), si=self._si_prefix,
+                server=self._server.get_name(), si=self._si_prefix,
                 failure=f, parent=lp, level=log.UNUSUAL, umid="BZgAJw")
         # retire our observers, assuming we won't be able to make any
         # further progress
hunk ./src/allmydata/immutable/offloaded.py 67
         # buckets is a dict: maps shum to an rref of the server who holds it
         shnums_s = ",".join([str(shnum) for shnum in buckets])
         self.log("got_response: [%s] has %d shares (%s)" %
-                 (server.name(), len(buckets), shnums_s),
+                 (server.get_name(), len(buckets), shnums_s),
                  level=log.NOISY)
         self._found_shares.update(buckets.keys())
         for k in buckets:
hunk ./src/allmydata/immutable/upload.py 96
 
     def __repr__(self):
         return ("<ServerTracker for server %s and SI %s>"
-                % (self._server.name(), si_b2a(self.storage_index)[:5]))
+                % (self._server.get_name(), si_b2a(self.storage_index)[:5]))
 
     def get_serverid(self):
         return self._server.get_serverid()
hunk ./src/allmydata/immutable/upload.py 100
-    def name(self):
-        return self._server.name()
+    def get_name(self):
+        return self._server.get_name()
 
     def query(self, sharenums):
         rref = self._server.get_rref()
hunk ./src/allmydata/immutable/upload.py 289
             self.num_servers_contacted += 1
             self.query_count += 1
             self.log("asking server %s for any existing shares" %
-                     (tracker.name(),), level=log.NOISY)
+                     (tracker.get_name(),), level=log.NOISY)
         dl = defer.DeferredList(ds)
         dl.addCallback(lambda ign: self._loop())
         return dl
hunk ./src/allmydata/immutable/upload.py 303
         serverid = tracker.get_serverid()
         if isinstance(res, failure.Failure):
             self.log("%s got error during existing shares check: %s"
-                    % (tracker.name(), res), level=log.UNUSUAL)
+                    % (tracker.get_name(), res), level=log.UNUSUAL)
             self.error_count += 1
             self.bad_query_count += 1
         else:
hunk ./src/allmydata/immutable/upload.py 311
             if buckets:
                 self.serverids_with_shares.add(serverid)
             self.log("response to get_buckets() from server %s: alreadygot=%s"
-                    % (tracker.name(), tuple(sorted(buckets))),
+                    % (tracker.get_name(), tuple(sorted(buckets))),
                     level=log.NOISY)
             for bucket in buckets:
                 self.preexisting_shares.setdefault(bucket, set()).add(serverid)
hunk ./src/allmydata/immutable/upload.py 419
             if self._status:
                 self._status.set_status("Contacting Servers [%s] (first query),"
                                         " %d shares left.."
-                                        % (tracker.name(),
+                                        % (tracker.get_name(),
                                            len(self.homeless_shares)))
             d = tracker.query(shares_to_ask)
             d.addBoth(self._got_response, tracker, shares_to_ask,
hunk ./src/allmydata/immutable/upload.py 440
             if self._status:
                 self._status.set_status("Contacting Servers [%s] (second query),"
                                         " %d shares left.."
-                                        % (tracker.name(),
+                                        % (tracker.get_name(),
                                            len(self.homeless_shares)))
             d = tracker.query(shares_to_ask)
             d.addBoth(self._got_response, tracker, shares_to_ask,
hunk ./src/allmydata/immutable/upload.py 501
         else:
             (alreadygot, allocated) = res
             self.log("response to allocate_buckets() from server %s: alreadygot=%s, allocated=%s"
-                    % (tracker.name(),
+                    % (tracker.get_name(),
                        tuple(sorted(alreadygot)), tuple(sorted(allocated))),
                     level=log.NOISY)
             progress = False
hunk ./src/allmydata/storage_client.py 193
         self._trigger_cb = None
 
     def __repr__(self):
-        return "<NativeStorageServer for %s>" % self.name()
+        return "<NativeStorageServer for %s>" % self.get_name()
     def get_serverid(self):
         return self._tubid
     def get_permutation_seed(self):
hunk ./src/allmydata/storage_client.py 202
         if self.rref:
             return self.rref.version
         return None
-    def name(self): # keep methodname short
+    def get_name(self): # keep methodname short
         return self.serverid_s
hunk ./src/allmydata/storage_client.py 204
-    def longname(self):
+    def get_longname(self):
         return idlib.nodeid_b2a(self._tubid)
     def get_lease_seed(self):
         return self._tubid
hunk ./src/allmydata/storage_client.py 231
 
     def _got_connection(self, rref):
         lp = log.msg(format="got connection to %(name)s, getting versions",
-                     name=self.name(),
+                     name=self.get_name(),
                      facility="tahoe.storage_broker", umid="coUECQ")
         if self._trigger_cb:
             eventually(self._trigger_cb)
hunk ./src/allmydata/storage_client.py 239
         d = add_version_to_remote_reference(rref, default)
         d.addCallback(self._got_versioned_service, lp)
         d.addErrback(log.err, format="storageclient._got_connection",
-                     name=self.name(), umid="Sdq3pg")
+                     name=self.get_name(), umid="Sdq3pg")
 
     def _got_versioned_service(self, rref, lp):
         log.msg(format="%(name)s provided version info %(version)s",
hunk ./src/allmydata/storage_client.py 243
-                name=self.name(), version=rref.version,
+                name=self.get_name(), version=rref.version,
                 facility="tahoe.storage_broker", umid="SWmJYg",
                 level=log.NOISY, parent=lp)
 
hunk ./src/allmydata/storage_client.py 256
         return self.rref
 
     def _lost(self):
-        log.msg(format="lost connection to %(name)s", name=self.name(),
+        log.msg(format="lost connection to %(name)s", name=self.get_name(),
                 facility="tahoe.storage_broker", umid="zbRllw")
         self.last_loss_time = time.time()
         self.rref = None
hunk ./src/allmydata/test/no_network.py 125
         self.serverid = serverid
         self.rref = rref
     def __repr__(self):
-        return "<NoNetworkServer for %s>" % self.name()
+        return "<NoNetworkServer for %s>" % self.get_name()
     def get_serverid(self):
         return self.serverid
     def get_permutation_seed(self):
hunk ./src/allmydata/test/no_network.py 132
         return self.serverid
     def get_lease_seed(self):
         return self.serverid
-    def name(self):
+    def get_name(self):
         return idlib.shortnodeid_b2a(self.serverid)
hunk ./src/allmydata/test/no_network.py 134
-    def longname(self):
+    def get_longname(self):
         return idlib.nodeid_b2a(self.serverid)
     def get_nickname(self):
         return "nickname"
hunk ./src/allmydata/test/test_download.py 1285
         self._server = server
         self._dyhb_rtt = rtt
     def __repr__(self):
-        return "sh%d-on-%s" % (self._shnum, self._server.name())
+        return "sh%d-on-%s" % (self._shnum, self._server.get_name())
 
 class MySegmentFetcher(SegmentFetcher):
     def __init__(self, *args, **kwargs):
hunk ./src/allmydata/test/test_download.py 1343
         def _check2(ign):
             self.failUnless(node.failed)
             self.failUnless(node.failed.check(NotEnoughSharesError))
-            sname = serverA.name()
+            sname = serverA.get_name()
             self.failUnlessIn("complete= pending=sh0-on-%s overdue= unused="  % sname,
                               str(node.failed))
         d.addCallback(_check2)
hunk ./src/allmydata/test/test_download.py 1565
         def _check4(ign):
             self.failUnless(node.failed)
             self.failUnless(node.failed.check(NotEnoughSharesError))
-            sname = servers["peer-2"].name()
+            sname = servers["peer-2"].get_name()
             self.failUnlessIn("complete=sh0 pending= overdue=sh2-on-%s unused=" % sname,
                               str(node.failed))
         d.addCallback(_check4)
hunk ./src/allmydata/test/test_immutable.py 106
                 return self.serverid
             def get_rref(self):
                 return self.rref
-            def name(self):
+            def get_name(self):
                 return "name-%s" % self.serverid
             def get_version(self):
                 return self.rref.version
hunk ./src/allmydata/web/check_results.py 154
             shareids.reverse()
             shareids_s = [ T.tt[shareid, " "] for shareid in sorted(shareids) ]
             servermap.append(T.tr[T.td[T.div(class_="nickname")[nickname],
-                                       T.div(class_="nodeid")[T.tt[s.name()]]],
+                                       T.div(class_="nodeid")[T.tt[s.get_name()]]],
                                   T.td[shareids_s],
                                   ])
             num_shares_left -= len(shareids)
hunk ./src/allmydata/web/root.py 259
     def render_service_row(self, ctx, server):
         nodeid = server.get_serverid()
 
-        ctx.fillSlots("peerid", server.longname())
+        ctx.fillSlots("peerid", server.get_longname())
         ctx.fillSlots("nickname", server.get_nickname())
         rhost = server.get_remote_host()
         if rhost:
}
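The third patch is a mechanical rename: IServer.name()/longname() become get_name()/get_longname(). A hypothetical stub showing the resulting accessor surface (short id for log lines, full id for display; the base32 encoding here is a simplified stand-in for allmydata.util.idlib, not the real implementation):

```python
import base64

# Illustrative stand-in for an object implementing the renamed accessors.
class StubServer:
    def __init__(self, tubid):
        self._tubid = tubid
    def get_serverid(self):
        return self._tubid
    def get_longname(self):
        # full printable form of the node id (simplified encoding)
        return base64.b32encode(self._tubid).decode("ascii").lower().rstrip("=")
    def get_name(self):
        # short form, used in log messages like "sending DYHB to [%(name)s]"
        return self.get_longname()[:8]

s = StubServer(b"\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a")
short, full = s.get_name(), s.get_longname()
```

The get_ prefix makes these read as accessors alongside get_serverid() and get_rref(), which is the point of the rename.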
[test_immutable.Test: rewrite to use NoNetworkGrid, now takes 2.7s not 97s
warner@lothar.com**20110615175011
 Ignore-this: 99644e8a104d413d1eaa5ba1cbad1965
] {
hunk ./src/allmydata/test/test_immutable.py 1
-from allmydata.test import common
-from allmydata.interfaces import NotEnoughSharesError
-from allmydata.util.consumer import download_to_data
-from allmydata import uri
-from twisted.internet import defer
-from twisted.trial import unittest
 import random
 
hunk ./src/allmydata/test/test_immutable.py 3
+from twisted.trial import unittest
+from twisted.internet import defer
+import mock
 from foolscap.api import eventually
hunk ./src/allmydata/test/test_immutable.py 7
+
+from allmydata.test import common
+from allmydata.test.no_network import GridTestMixin
+from allmydata.test.common import TEST_DATA
+from allmydata import uri
 from allmydata.util import log
hunk ./src/allmydata/test/test_immutable.py 13
+from allmydata.util.consumer import download_to_data
 
hunk ./src/allmydata/test/test_immutable.py 15
+from allmydata.interfaces import NotEnoughSharesError
+from allmydata.immutable.upload import Data
 from allmydata.immutable.downloader import finder
 
hunk ./src/allmydata/test/test_immutable.py 19
-import mock
-
 class MockNode(object):
     def __init__(self, check_reneging, check_fetch_failed):
         self.got = 0
hunk ./src/allmydata/test/test_immutable.py 135
 
         return mocknode.when_finished()
 
-class Test(common.ShareManglingMixin, common.ShouldFailMixin, unittest.TestCase):
+
+class Test(GridTestMixin, unittest.TestCase, common.ShouldFailMixin):
+    def startup(self, basedir):
+        self.basedir = basedir
+        self.set_up_grid(num_clients=2, num_servers=5)
+        c1 = self.g.clients[1]
+        # We need multiple segments to test crypttext hash trees that are
+        # non-trivial (i.e. they have more than just one hash in them).
+        c1.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
+        # Tests that need to test servers of happiness using this should
+        # set their own value for happy -- the default (7) breaks stuff.
+        c1.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
+        d = c1.upload(Data(TEST_DATA, convergence=""))
+        def _after_upload(ur):
+            self.uri = ur.uri
+            self.filenode = self.g.clients[0].create_node_from_uri(ur.uri)
+            return self.uri
+        d.addCallback(_after_upload)
+        return d
+
+    def _stash_shares(self, shares):
+        self.shares = shares
+
+    def _download_and_check_plaintext(self, ign=None):
+        num_reads = self._count_reads()
+        d = download_to_data(self.filenode)
+        def _after_download(result):
+            self.failUnlessEqual(result, TEST_DATA)
+            return self._count_reads() - num_reads
+        d.addCallback(_after_download)
+        return d
+
+    def _shuffled(self, num_shnums):
+        shnums = range(10)
+        random.shuffle(shnums)
+        return shnums[:num_shnums]
+
+    def _count_reads(self):
+        return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.read', 0)
+                    for s in self.g.servers_by_number.values()])
+
+
+    def _count_allocates(self):
+        return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.allocate', 0)
+                    for s in self.g.servers_by_number.values()])
+
+    def _count_writes(self):
+        return sum([s.stats_provider.get_stats() ['counters'].get('storage_server.write', 0)
+                    for s in self.g.servers_by_number.values()])
+
     def test_test_code(self):
         # The following process of stashing the shares, running
         # replace_shares, and asserting that the new set of shares equals the
hunk ./src/allmydata/test/test_immutable.py 189
         # old is more to test this test code than to test the Tahoe code...
-        d = defer.succeed(None)
-        d.addCallback(self.find_all_shares)
-        stash = [None]
-        def _stash_it(res):
-            stash[0] = res
-            return res
-        d.addCallback(_stash_it)
+        d = self.startup("immutable/Test/code")
+        d.addCallback(self.copy_shares)
+        d.addCallback(self._stash_shares)
+        d.addCallback(self._download_and_check_plaintext)
 
         # The following process of deleting 8 of the shares and asserting
         # that you can't download it is more to test this test code than to
hunk ./src/allmydata/test/test_immutable.py 197
         # test the Tahoe code...
-        def _then_delete_8(unused=None):
-            self.replace_shares(stash[0], storage_index=self.uri.get_storage_index())
-            for i in range(8):
-                self._delete_a_share()
+        def _then_delete_8(ign):
+            self.restore_all_shares(self.shares)
+            self.delete_shares_numbered(self.uri, range(8))
         d.addCallback(_then_delete_8)
hunk ./src/allmydata/test/test_immutable.py 201
-
-        def _then_download(unused=None):
-            d2 = download_to_data(self.n)
-
-            def _after_download_callb(result):
-                self.fail() # should have gotten an errback instead
-                return result
-            def _after_download_errb(failure):
-                failure.trap(NotEnoughSharesError)
-                return None # success!
-            d2.addCallbacks(_after_download_callb, _after_download_errb)
-            return d2
-        d.addCallback(_then_download)
-
+        d.addCallback(lambda ign:
+                      self.shouldFail(NotEnoughSharesError, "download-2",
+                                      "ran out of shares",
+                                      download_to_data, self.filenode))
         return d
 
     def test_download(self):
hunk ./src/allmydata/test/test_immutable.py 212
         tested by test code in other modules, but this module is also going
         to test some more specific things about immutable download.)
         """
-        d = defer.succeed(None)
-        before_download_reads = self._count_reads()
-        def _after_download(unused=None):
-            after_download_reads = self._count_reads()
-            #print before_download_reads, after_download_reads
-            self.failIf(after_download_reads-before_download_reads > 41,
-                        (after_download_reads, before_download_reads))
704+        d = self.startup("immutable/Test/download")
705         d.addCallback(self._download_and_check_plaintext)
706hunk ./src/allmydata/test/test_immutable.py 214
707+        def _after_download(ign):
708+            num_reads = self._count_reads()
709+            #print num_reads
710+            self.failIf(num_reads > 41, num_reads)
711         d.addCallback(_after_download)
712         return d
713 
714hunk ./src/allmydata/test/test_immutable.py 224
715     def test_download_from_only_3_remaining_shares(self):
716         """ Test download after 7 random shares (of the 10) have been
717         removed."""
718-        d = defer.succeed(None)
719-        def _then_delete_7(unused=None):
720-            for i in range(7):
721-                self._delete_a_share()
722-        before_download_reads = self._count_reads()
723-        d.addCallback(_then_delete_7)
724-        def _after_download(unused=None):
725-            after_download_reads = self._count_reads()
726-            #print before_download_reads, after_download_reads
727-            self.failIf(after_download_reads-before_download_reads > 41, (after_download_reads, before_download_reads))
728+        d = self.startup("immutable/Test/download_from_only_3_remaining_shares")
729+        d.addCallback(lambda ign:
730+                      self.delete_shares_numbered(self.uri, range(7)))
731         d.addCallback(self._download_and_check_plaintext)
732hunk ./src/allmydata/test/test_immutable.py 228
733+        def _after_download(num_reads):
734+            #print num_reads
735+            self.failIf(num_reads > 41, num_reads)
736         d.addCallback(_after_download)
737         return d
738 
739hunk ./src/allmydata/test/test_immutable.py 237
740     def test_download_from_only_3_shares_with_good_crypttext_hash(self):
741         """ Test download after 7 random shares (of the 10) have had their
742         crypttext hash tree corrupted."""
743-        d = defer.succeed(None)
744-        def _then_corrupt_7(unused=None):
745-            shnums = range(10)
746-            random.shuffle(shnums)
747-            for i in shnums[:7]:
748-                self._corrupt_a_share(None, common._corrupt_offset_of_block_hashes_to_truncate_crypttext_hashes, i)
749-        #before_download_reads = self._count_reads()
750-        d.addCallback(_then_corrupt_7)
751+        d = self.startup("download_from_only_3_shares_with_good_crypttext_hash")
752+        def _corrupt_7(ign):
753+            c = common._corrupt_offset_of_block_hashes_to_truncate_crypttext_hashes
754+            self.corrupt_shares_numbered(self.uri, self._shuffled(7), c)
755+        d.addCallback(_corrupt_7)
756         d.addCallback(self._download_and_check_plaintext)
757         return d
758 
759hunk ./src/allmydata/test/test_immutable.py 248
760     def test_download_abort_if_too_many_missing_shares(self):
761         """ Test that download gives up quickly when it realizes there aren't
762         enough shares out there."""
763-        for i in range(8):
764-            self._delete_a_share()
765-        d = self.shouldFail(NotEnoughSharesError, "delete 8", None,
766-                            download_to_data, self.n)
767+        d = self.startup("download_abort_if_too_many_missing_shares")
768+        d.addCallback(lambda ign:
769+                      self.delete_shares_numbered(self.uri, range(8)))
770+        d.addCallback(lambda ign:
771+                      self.shouldFail(NotEnoughSharesError, "delete 8",
772+                                      "Last failure: None",
773+                                      download_to_data, self.filenode))
774         # the new downloader pipelines a bunch of read requests in parallel,
775         # so don't bother asserting anything about the number of reads
776         return d
777hunk ./src/allmydata/test/test_immutable.py 264
778         enough uncorrupted shares out there. It should be able to tell
779         because the corruption occurs in the sharedata version number, which
780         it checks first."""
781-        d = defer.succeed(None)
782-        def _then_corrupt_8(unused=None):
783-            shnums = range(10)
784-            random.shuffle(shnums)
785-            for shnum in shnums[:8]:
786-                self._corrupt_a_share(None, common._corrupt_sharedata_version_number, shnum)
787-        d.addCallback(_then_corrupt_8)
788-
789-        before_download_reads = self._count_reads()
790-        def _attempt_to_download(unused=None):
791-            d2 = download_to_data(self.n)
792+        d = self.startup("download_abort_if_too_many_corrupted_shares")
793+        def _corrupt_8(ign):
794+            c = common._corrupt_sharedata_version_number
795+            self.corrupt_shares_numbered(self.uri, self._shuffled(8), c)
796+        d.addCallback(_corrupt_8)
797+        def _try_download(ign):
798+            start_reads = self._count_reads()
799+            d2 = self.shouldFail(NotEnoughSharesError, "corrupt 8",
800+                                 "LayoutInvalid",
801+                                 download_to_data, self.filenode)
802+            def _check_numreads(ign):
803+                num_reads = self._count_reads() - start_reads
804+                #print num_reads
805 
806hunk ./src/allmydata/test/test_immutable.py 278
807-            def _callb(res):
808-                self.fail("Should have gotten an error from attempt to download, not %r" % (res,))
809-            def _errb(f):
810-                self.failUnless(f.check(NotEnoughSharesError))
811-            d2.addCallbacks(_callb, _errb)
812+                # To pass this test, you are required to give up before
813+                # reading all of the share data. Actually, we could give up
814+                # sooner than 45 reads, but currently our download code does
815+                # 45 reads. This test then serves as a "performance
816+                # regression detector" -- if you change download code so that
817+                # it takes *more* reads, then this test will fail.
818+                self.failIf(num_reads > 45, num_reads)
819+            d2.addCallback(_check_numreads)
820             return d2
821hunk ./src/allmydata/test/test_immutable.py 287
822-
823-        d.addCallback(_attempt_to_download)
824-
825-        def _after_attempt(unused=None):
826-            after_download_reads = self._count_reads()
827-            #print before_download_reads, after_download_reads
828-            # To pass this test, you are required to give up before reading
829-            # all of the share data. Actually, we could give up sooner than
830-            # 45 reads, but currently our download code does 45 reads. This
831-            # test then serves as a "performance regression detector" -- if
832-            # you change download code so that it takes *more* reads, then
833-            # this test will fail.
834-            self.failIf(after_download_reads-before_download_reads > 45,
835-                        (after_download_reads, before_download_reads))
836-        d.addCallback(_after_attempt)
837+        d.addCallback(_try_download)
838         return d
839 
840 
841}
842[remove now-unused ShareManglingMixin
843warner@lothar.com**20110615175045
844 Ignore-this: abf3f361b6789eca18522b72b0d53eb3
845] {
846hunk ./src/allmydata/test/common.py 17
847      DeepCheckResults, DeepCheckAndRepairResults
848 from allmydata.mutable.common import CorruptShareError
849 from allmydata.mutable.layout import unpack_header
850-from allmydata.storage.server import storage_index_to_dir
851 from allmydata.storage.mutable import MutableShareFile
852 from allmydata.util import hashutil, log, fileutil, pollmixin
853 from allmydata.util.assertutil import precondition
854hunk ./src/allmydata/test/common.py 20
855-from allmydata.util.consumer import download_to_data
856 from allmydata.stats import StatsGathererService
857 from allmydata.key_generator import KeyGeneratorService
858 import allmydata.test.common_util as testutil
859hunk ./src/allmydata/test/common.py 918
860 
861 TEST_DATA="\x02"*(immutable.upload.Uploader.URI_LIT_SIZE_THRESHOLD+1)
862 
863-class ShareManglingMixin(SystemTestMixin):
864-
865-    def setUp(self):
866-        # Set self.basedir to a temp dir which has the name of the current
867-        # test method in its name.
868-        self.basedir = self.mktemp()
869-
870-        d = defer.maybeDeferred(SystemTestMixin.setUp, self)
871-        d.addCallback(lambda x: self.set_up_nodes())
872-
873-        def _upload_a_file(ignored):
874-            cl0 = self.clients[0]
875-            # We need multiple segments to test crypttext hash trees that are
876-            # non-trivial (i.e. they have more than just one hash in them).
877-            cl0.DEFAULT_ENCODING_PARAMETERS['max_segment_size'] = 12
878-            # Tests that need to test servers of happiness using this should
879-            # set their own value for happy -- the default (7) breaks stuff.
880-            cl0.DEFAULT_ENCODING_PARAMETERS['happy'] = 1
881-            d2 = cl0.upload(immutable.upload.Data(TEST_DATA, convergence=""))
882-            def _after_upload(u):
883-                filecap = u.uri
884-                self.n = self.clients[1].create_node_from_uri(filecap)
885-                self.uri = uri.CHKFileURI.init_from_string(filecap)
886-                return cl0.create_node_from_uri(filecap)
887-            d2.addCallback(_after_upload)
888-            return d2
889-        d.addCallback(_upload_a_file)
890-
891-        def _stash_it(filenode):
892-            self.filenode = filenode
893-        d.addCallback(_stash_it)
894-        return d
895-
896-    def find_all_shares(self, unused=None):
897-        """Locate shares on disk. Returns a dict that maps
898-        (clientnum,sharenum) to a string that contains the share container
899-        (copied directly from the disk, containing leases etc). You can
900-        modify this dict and then call replace_shares() to modify the shares.
901-        """
902-        shares = {} # k: (i, sharenum), v: data
903-
904-        for i, c in enumerate(self.clients):
905-            sharedir = c.getServiceNamed("storage").sharedir
906-            for (dirp, dirns, fns) in os.walk(sharedir):
907-                for fn in fns:
908-                    try:
909-                        sharenum = int(fn)
910-                    except TypeError:
911-                        # Whoops, I guess that's not a share file then.
912-                        pass
913-                    else:
914-                        data = open(os.path.join(sharedir, dirp, fn), "rb").read()
915-                        shares[(i, sharenum)] = data
916-
917-        return shares
918-
919-    def replace_shares(self, newshares, storage_index):
920-        """Replace shares on disk. Takes a dictionary in the same form
921-        as find_all_shares() returns."""
922-
923-        for i, c in enumerate(self.clients):
924-            sharedir = c.getServiceNamed("storage").sharedir
925-            for (dirp, dirns, fns) in os.walk(sharedir):
926-                for fn in fns:
927-                    try:
928-                        sharenum = int(fn)
929-                    except TypeError:
930-                        # Whoops, I guess that's not a share file then.
931-                        pass
932-                    else:
933-                        pathtosharefile = os.path.join(sharedir, dirp, fn)
934-                        os.unlink(pathtosharefile)
935-            for ((clientnum, sharenum), newdata) in newshares.iteritems():
936-                if clientnum == i:
937-                    fullsharedirp=os.path.join(sharedir, storage_index_to_dir(storage_index))
938-                    fileutil.make_dirs(fullsharedirp)
939-                    wf = open(os.path.join(fullsharedirp, str(sharenum)), "wb")
940-                    wf.write(newdata)
941-                    wf.close()
942-
943-    def _delete_a_share(self, unused=None, sharenum=None):
944-        """ Delete one share. """
945-
946-        shares = self.find_all_shares()
947-        ks = shares.keys()
948-        if sharenum is not None:
949-            k = [ key for key in shares.keys() if key[1] == sharenum ][0]
950-        else:
951-            k = random.choice(ks)
952-        del shares[k]
953-        self.replace_shares(shares, storage_index=self.uri.get_storage_index())
954-
955-        return unused
956-
957-    def _corrupt_a_share(self, unused, corruptor_func, sharenum):
958-        shares = self.find_all_shares()
959-        ks = [ key for key in shares.keys() if key[1] == sharenum ]
960-        assert ks, (shares.keys(), sharenum)
961-        k = ks[0]
962-        shares[k] = corruptor_func(shares[k])
963-        self.replace_shares(shares, storage_index=self.uri.get_storage_index())
964-        return corruptor_func
965-
966-    def _corrupt_all_shares(self, unused, corruptor_func):
967-        """ All shares on disk will be corrupted by corruptor_func. """
968-        shares = self.find_all_shares()
969-        for k in shares.keys():
970-            self._corrupt_a_share(unused, corruptor_func, k[1])
971-        return corruptor_func
972-
973-    def _corrupt_a_random_share(self, unused, corruptor_func):
974-        """ Exactly one share on disk will be corrupted by corruptor_func. """
975-        shares = self.find_all_shares()
976-        ks = shares.keys()
977-        k = random.choice(ks)
978-        self._corrupt_a_share(unused, corruptor_func, k[1])
979-        return k[1]
980-
981-    def _count_reads(self):
982-        sum_of_read_counts = 0
983-        for thisclient in self.clients:
984-            counters = thisclient.stats_provider.get_stats()['counters']
985-            sum_of_read_counts += counters.get('storage_server.read', 0)
986-        return sum_of_read_counts
987-
988-    def _count_allocates(self):
989-        sum_of_allocate_counts = 0
990-        for thisclient in self.clients:
991-            counters = thisclient.stats_provider.get_stats()['counters']
992-            sum_of_allocate_counts += counters.get('storage_server.allocate', 0)
993-        return sum_of_allocate_counts
994-
995-    def _count_writes(self):
996-        sum_of_write_counts = 0
997-        for thisclient in self.clients:
998-            counters = thisclient.stats_provider.get_stats()['counters']
999-            sum_of_write_counts += counters.get('storage_server.write', 0)
1000-        return sum_of_write_counts
1001-
1002-    def _download_and_check_plaintext(self, unused=None):
1003-        d = download_to_data(self.n)
1004-        def _after_download(result):
1005-            self.failUnlessEqual(result, TEST_DATA)
1006-        d.addCallback(_after_download)
1007-        return d
1008-
1009 class ShouldFailMixin:
1010     def shouldFail(self, expected_failure, which, substring,
1011                    callable, *args, **kwargs):
1012}
1013[remove get_serverid from DownloadStatus.add_dyhb_sent and customers
1014warner@lothar.com**20110615175104
1015 Ignore-this: 6f4776aec7152bd46c93d21a2b1e81a
1016] {
1017hunk ./src/allmydata/immutable/downloader/finder.py 138
1018         lp = self.log(format="sending DYHB to [%(name)s]", name=server.get_name(),
1019                       level=log.NOISY, umid="Io7pyg")
1020         time_sent = now()
1021-        d_ev = self._download_status.add_dyhb_sent(server.get_serverid(),
1022-                                                   time_sent)
1023+        d_ev = self._download_status.add_dyhb_sent(server, time_sent)
1024         # TODO: get the timer from a Server object, it knows best
1025         self.overdue_timers[req] = reactor.callLater(self.OVERDUE_TIMEOUT,
1026                                                      self.overdue, req)
1027hunk ./src/allmydata/immutable/downloader/status.py 43
1028         self.helper = False
1029         self.started = None
1030         # self.dyhb_requests tracks "do you have a share" requests and
1031-        # responses. It maps serverid to a tuple of:
1032+        # responses. It maps an IServer instance to a tuple of:
1033         #  send time
1034         #  tuple of response shnums (None if response hasn't arrived, "error")
1035         #  response time (None if response hasn't arrived yet)
1036hunk ./src/allmydata/immutable/downloader/status.py 81
1037         self.problems = []
1038 
1039 
1040-    def add_dyhb_sent(self, serverid, when):
1041+    def add_dyhb_sent(self, server, when):
1042         r = (when, None, None)
1043hunk ./src/allmydata/immutable/downloader/status.py 83
1044-        if serverid not in self.dyhb_requests:
1045-            self.dyhb_requests[serverid] = []
1046-        self.dyhb_requests[serverid].append(r)
1047-        tag = (serverid, len(self.dyhb_requests[serverid])-1)
1048+        if server not in self.dyhb_requests:
1049+            self.dyhb_requests[server] = []
1050+        self.dyhb_requests[server].append(r)
1051+        tag = (server, len(self.dyhb_requests[server])-1)
1052         return DYHBEvent(self, tag)
1053 
1054     def add_dyhb_finished(self, tag, shnums, when):
1055hunk ./src/allmydata/immutable/downloader/status.py 91
1056         # received="error" on error, else tuple(shnums)
1057-        (serverid, index) = tag
1058-        r = self.dyhb_requests[serverid][index]
1059+        (server, index) = tag
1060+        r = self.dyhb_requests[server][index]
1061         (sent, _, _) = r
1062         r = (sent, shnums, when)
1063hunk ./src/allmydata/immutable/downloader/status.py 95
1064-        self.dyhb_requests[serverid][index] = r
1065+        self.dyhb_requests[server][index] = r
1066 
1067     def add_request_sent(self, serverid, shnum, start, length, when):
1068         r = (shnum, start, length, when, None, None)
1069hunk ./src/allmydata/test/test_web.py 77
1070     def get_helper_info(self):
1071         return (None, False)
1072 
1073+class FakeIServer:
1074+    def get_name(self): return "short"
1075+    def get_longname(self): return "long"
1076+    def get_serverid(self): return "binary-serverid"
1077+
1078 def build_one_ds():
1079     ds = DownloadStatus("storage_index", 1234)
1080     now = time.time()
1081hunk ./src/allmydata/test/test_web.py 86
1082 
1083+    serverA = FakeIServer()
1084+    serverB = FakeIServer()
1085     ds.add_segment_request(0, now)
1086     # segnum, when, start,len, decodetime
1087     ds.add_segment_delivery(0, now+1, 0, 100, 0.5)
1088hunk ./src/allmydata/test/test_web.py 101
1089     ds.add_segment_request(4, now)
1090     ds.add_segment_delivery(4, now, 0, 140, 0.5)
1091 
1092-    e = ds.add_dyhb_sent("serverid_a", now)
1093+    e = ds.add_dyhb_sent(serverA, now)
1094     e.finished([1,2], now+1)
1095hunk ./src/allmydata/test/test_web.py 103
1096-    e = ds.add_dyhb_sent("serverid_b", now+2) # left unfinished
1097+    e = ds.add_dyhb_sent(serverB, now+2) # left unfinished
1098 
1099     e = ds.add_read_event(0, 120, now)
1100     e.update(60, 0.5, 0.1) # bytes, decrypttime, pausetime
1101hunk ./src/allmydata/web/status.py 367
1102         req.setHeader("content-type", "text/plain")
1103         data = {}
1104         dyhb_events = []
1105-        for serverid,requests in self.download_status.dyhb_requests.iteritems():
1106+        for server,requests in self.download_status.dyhb_requests.iteritems():
1107             for req in requests:
1108hunk ./src/allmydata/web/status.py 369
1109-                dyhb_events.append( (base32.b2a(serverid),) + req )
1110+                dyhb_events.append( (server.get_longname(),) + req )
1111         dyhb_events.sort(key=lambda req: req[1])
1112         data["dyhb"] = dyhb_events
1113         request_events = []
1114hunk ./src/allmydata/web/status.py 392
1115         t[T.tr[T.th["serverid"], T.th["sent"], T.th["received"],
1116                T.th["shnums"], T.th["RTT"]]]
1117         dyhb_events = []
1118-        for serverid,requests in self.download_status.dyhb_requests.iteritems():
1119+        for server,requests in self.download_status.dyhb_requests.iteritems():
1120             for req in requests:
1121hunk ./src/allmydata/web/status.py 394
1122-                dyhb_events.append( (serverid,) + req )
1123+                dyhb_events.append( (server,) + req )
1124         dyhb_events.sort(key=lambda req: req[1])
1125         for d_ev in dyhb_events:
1126hunk ./src/allmydata/web/status.py 397
1127-            (serverid, sent, shnums, received) = d_ev
1128-            serverid_s = idlib.shortnodeid_b2a(serverid)
1129+            (server, sent, shnums, received) = d_ev
1130             rtt = None
1131             if received is not None:
1132                 rtt = received - sent
1133hunk ./src/allmydata/web/status.py 403
1134             if not shnums:
1135                 shnums = ["-"]
1136-            t[T.tr(style="background: %s" % self.color(serverid))[
1137-                [T.td[serverid_s], T.td[srt(sent)], T.td[srt(received)],
1138+            color = self.color(server.get_serverid())
1139+            t[T.tr(style="background: %s" % color)[
1140+                [T.td[server.get_name()], T.td[srt(sent)], T.td[srt(received)],
1141                  T.td[",".join([str(shnum) for shnum in shnums])],
1142                  T.td[self.render_time(None, rtt)],
1143                  ]]]
1144}
1145[remove get_serverid from DownloadStatus.add_request_sent and customers
1146warner@lothar.com**20110615175127
1147 Ignore-this: 92a77e724f17bc450aca7944276f4fc9
1148] {
1149hunk ./src/allmydata/immutable/downloader/share.py 729
1150                          share=repr(self),
1151                          start=start, length=length,
1152                          level=log.NOISY, parent=self._lp, umid="sgVAyA")
1153-            req_ev = ds.add_request_sent(self._server.get_serverid(),
1154-                                         self._shnum,
1155+            req_ev = ds.add_request_sent(self._server, self._shnum,
1156                                          start, length, now())
1157             d = self._send_request(start, length)
1158             d.addCallback(self._got_data, start, length, req_ev, lp)
1159hunk ./src/allmydata/immutable/downloader/status.py 50
1160         self.dyhb_requests = {}
1161 
1162         # self.requests tracks share-data requests and responses. It maps
1163-        # serverid to a tuple of:
1164+        # IServer instance to a tuple of:
1165         #  shnum,
1166         #  start,length,  (of data requested)
1167         #  send time
1168hunk ./src/allmydata/immutable/downloader/status.py 97
1169         r = (sent, shnums, when)
1170         self.dyhb_requests[server][index] = r
1171 
1172-    def add_request_sent(self, serverid, shnum, start, length, when):
1173+    def add_request_sent(self, server, shnum, start, length, when):
1174         r = (shnum, start, length, when, None, None)
1175hunk ./src/allmydata/immutable/downloader/status.py 99
1176-        if serverid not in self.requests:
1177-            self.requests[serverid] = []
1178-        self.requests[serverid].append(r)
1179-        tag = (serverid, len(self.requests[serverid])-1)
1180+        if server not in self.requests:
1181+            self.requests[server] = []
1182+        self.requests[server].append(r)
1183+        tag = (server, len(self.requests[server])-1)
1184         return RequestEvent(self, tag)
1185 
1186     def add_request_finished(self, tag, received, when):
1187hunk ./src/allmydata/immutable/downloader/status.py 107
1188         # received="error" on error, else len(data)
1189-        (serverid, index) = tag
1190-        r = self.requests[serverid][index]
1191+        (server, index) = tag
1192+        r = self.requests[server][index]
1193         (shnum, start, length, sent, _, _) = r
1194         r = (shnum, start, length, sent, received, when)
1195hunk ./src/allmydata/immutable/downloader/status.py 111
1196-        self.requests[serverid][index] = r
1197+        self.requests[server][index] = r
1198 
1199     def add_segment_request(self, segnum, when):
1200         if self.started is None:
1201hunk ./src/allmydata/test/test_web.py 110
1202     e.finished(now+1)
1203     e = ds.add_read_event(120, 30, now+2) # left unfinished
1204 
1205-    e = ds.add_request_sent("serverid_a", 1, 100, 20, now)
1206+    e = ds.add_request_sent(serverA, 1, 100, 20, now)
1207     e.finished(20, now+1)
1208hunk ./src/allmydata/test/test_web.py 112
1209-    e = ds.add_request_sent("serverid_a", 1, 120, 30, now+1) # left unfinished
1210+    e = ds.add_request_sent(serverA, 1, 120, 30, now+1) # left unfinished
1211 
1212     # make sure that add_read_event() can come first too
1213     ds1 = DownloadStatus("storage_index", 1234)
1214hunk ./src/allmydata/web/status.py 373
1215         dyhb_events.sort(key=lambda req: req[1])
1216         data["dyhb"] = dyhb_events
1217         request_events = []
1218-        for serverid,requests in self.download_status.requests.iteritems():
1219+        for server,requests in self.download_status.requests.iteritems():
1220             for req in requests:
1221hunk ./src/allmydata/web/status.py 375
1222-                request_events.append( (base32.b2a(serverid),) + req )
1223+                request_events.append( (server.get_longname(),) + req )
1224         request_events.sort(key=lambda req: (req[4],req[1]))
1225         data["requests"] = request_events
1226         data["segment"] = self.download_status.segment_events
1227hunk ./src/allmydata/web/status.py 466
1228                        T.td[segtime], T.td[speed]]]
1229             elif etype == "error":
1230                 t[T.tr[T.td["error"], T.td["seg%d" % segnum]]]
1231-               
1232+
1233         l[T.h2["Segment Events:"], t]
1234         l[T.br(clear="all")]
1235 
1236hunk ./src/allmydata/web/status.py 475
1237                T.th["txtime"], T.th["rxtime"], T.th["received"], T.th["RTT"]]]
1238         reqtime = (None, None)
1239         request_events = []
1240-        for serverid,requests in self.download_status.requests.iteritems():
1241+        for server,requests in self.download_status.requests.iteritems():
1242             for req in requests:
1243hunk ./src/allmydata/web/status.py 477
1244-                request_events.append( (serverid,) + req )
1245+                request_events.append( (server,) + req )
1246         request_events.sort(key=lambda req: (req[4],req[1]))
1247         for r_ev in request_events:
1248hunk ./src/allmydata/web/status.py 480
1249-            (peerid, shnum, start, length, sent, receivedlen, received) = r_ev
1250+            (server, shnum, start, length, sent, receivedlen, received) = r_ev
1251             rtt = None
1252             if received is not None:
1253                 rtt = received - sent
1254hunk ./src/allmydata/web/status.py 484
1255-            peerid_s = idlib.shortnodeid_b2a(peerid)
1256-            t[T.tr(style="background: %s" % self.color(peerid))[
1257-                T.td[peerid_s], T.td[shnum],
1258+            color = self.color(server.get_serverid())
1259+            t[T.tr(style="background: %s" % color)[
1260+                T.td[server.get_name()], T.td[shnum],
1261                 T.td["[%d:+%d]" % (start, length)],
1262                 T.td[srt(sent)], T.td[srt(received)], T.td[receivedlen],
1263                 T.td[self.render_time(None, rtt)],
1264hunk ./src/allmydata/web/status.py 491
1265                 ]]
1266-               
1267+
1268         l[T.h2["Requests:"], t]
1269         l[T.br(clear="all")]
1270 
1271}
1272[web/status.py: remove spurious whitespace, no code changes
1273warner@lothar.com**20110615175157
1274 Ignore-this: b59b929dea09be4a5e48b8e15c2d44a
1275] {
1276hunk ./src/allmydata/web/status.py 387
1277             return
1278         srt = self.short_relative_time
1279         l = T.div()
1280-       
1281+
1282         t = T.table(align="left", class_="status-download-events")
1283         t[T.tr[T.th["serverid"], T.th["sent"], T.th["received"],
                T.th["shnums"], T.th["RTT"]]]
hunk ./src/allmydata/web/status.py 409
                  T.td[",".join([str(shnum) for shnum in shnums])],
                  T.td[self.render_time(None, rtt)],
                  ]]]
-       
         l[T.h2["DYHB Requests:"], t]
         l[T.br(clear="all")]
hunk ./src/allmydata/web/status.py 411
-       
+
         t = T.table(align="left",class_="status-download-events")
         t[T.tr[T.th["range"], T.th["start"], T.th["finish"], T.th["got"],
                T.th["time"], T.th["decrypttime"], T.th["pausedtime"],
hunk ./src/allmydata/web/status.py 431
                    T.td[bytes], T.td[rtt], T.td[decrypt], T.td[paused],
                    T.td[speed],
                    ]]
-       
         l[T.h2["Read Events:"], t]
         l[T.br(clear="all")]
hunk ./src/allmydata/web/status.py 433
-       
+
         t = T.table(align="left",class_="status-download-events")
         t[T.tr[T.th["type"], T.th["segnum"], T.th["when"], T.th["range"],
                T.th["decodetime"], T.th["segtime"], T.th["speed"]]]
hunk ./src/allmydata/web/status.py 448
                    T.td["-"],
                    T.td["-"],
                    T.td["-"]]]
-                   
                reqtime = (segnum, when)
            elif etype == "delivery":
                if reqtime[0] == segnum:
}
[DownloadStatus.add_known_share wants to be used by Finder, web.status
warner@lothar.com**20110615175222
 Ignore-this: 10bc40413e7a4980a96e16a55a84800f
] hunk ./src/allmydata/immutable/downloader/status.py 146
         r = (start, length, requesttime, finishtime, bytes, decrypt, paused)
         self.read_events[tag] = r
 
-    def add_known_share(self, serverid, shnum):
-        self.known_shares.append( (serverid, shnum) )
+    def add_known_share(self, server, shnum): # XXX use me
+        self.known_shares.append( (server, shnum) )
 
     def add_problem(self, p):
         self.problems.append(p)
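The shape of the change in the patch above can be sketched outside the patch context. This is a hedged toy model, not Tahoe's real `DownloadStatus`: `FakeServer` and `DownloadStatusSketch` are illustrative stand-ins. The point is that `add_known_share` now records the IServer object itself rather than a raw serverid, so later consumers (Finder, web.status) can ask it directly for a display name.

```python
# Toy sketch (assumed names, not the real Tahoe classes): after this patch,
# DownloadStatus.add_known_share() stores the server object itself, so
# display code can call get_name() instead of re-mapping serverids.

class FakeServer:
    """Stand-in for an IServer; only get_name() matters here."""
    def __init__(self, name):
        self._name = name
    def get_name(self):
        return self._name

class DownloadStatusSketch:
    def __init__(self):
        self.known_shares = []
    def add_known_share(self, server, shnum):
        # record the server object, not server.get_serverid()
        self.known_shares.append((server, shnum))

status = DownloadStatusSketch()
status.add_known_share(FakeServer("xgru5adv"), 0)
server, shnum = status.known_shares[0]
label = "sh%d on %s" % (shnum, server.get_name())
```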
[remove nodeid from WriteBucketProxy classes and customers
warner@lothar.com**20110615175245
 Ignore-this: f239f65c4d8b99d555de16c10db71fec
] {
hunk ./src/allmydata/immutable/downloader/share.py 125
         # use the upload-side code to get this as accurate as possible
         ht = IncompleteHashTree(N)
         num_share_hashes = len(ht.needed_hashes(0, include_leaf=True))
-        wbp = make_write_bucket_proxy(None, share_size, r["block_size"],
-                                      r["num_segments"], num_share_hashes, 0,
-                                      None)
+        wbp = make_write_bucket_proxy(None, None, share_size, r["block_size"],
+                                      r["num_segments"], num_share_hashes, 0)
         self._fieldsize = wbp.fieldsize
         self._fieldstruct = wbp.fieldstruct
         self.guessed_offsets = wbp._offsets
hunk ./src/allmydata/immutable/layout.py 79
 
 FORCE_V2 = False # set briefly by unit tests to make small-sized V2 shares
 
-def make_write_bucket_proxy(rref, data_size, block_size, num_segments,
-                            num_share_hashes, uri_extension_size_max, nodeid):
+def make_write_bucket_proxy(rref, server,
+                            data_size, block_size, num_segments,
+                            num_share_hashes, uri_extension_size_max):
     # Use layout v1 for small files, so they'll be readable by older versions
     # (<tahoe-1.3.0). Use layout v2 for large files; they'll only be readable
     # by tahoe-1.3.0 or later.
hunk ./src/allmydata/immutable/layout.py 88
     try:
         if FORCE_V2:
             raise FileTooLargeError
-        wbp = WriteBucketProxy(rref, data_size, block_size, num_segments,
-                               num_share_hashes, uri_extension_size_max, nodeid)
+        wbp = WriteBucketProxy(rref, server,
+                               data_size, block_size, num_segments,
+                               num_share_hashes, uri_extension_size_max)
     except FileTooLargeError:
hunk ./src/allmydata/immutable/layout.py 92
-        wbp = WriteBucketProxy_v2(rref, data_size, block_size, num_segments,
-                                  num_share_hashes, uri_extension_size_max, nodeid)
+        wbp = WriteBucketProxy_v2(rref, server,
+                                  data_size, block_size, num_segments,
+                                  num_share_hashes, uri_extension_size_max)
     return wbp
 
 class WriteBucketProxy:
hunk ./src/allmydata/immutable/layout.py 102
     fieldsize = 4
     fieldstruct = ">L"
 
-    def __init__(self, rref, data_size, block_size, num_segments,
-                 num_share_hashes, uri_extension_size_max, nodeid,
-                 pipeline_size=50000):
+    def __init__(self, rref, server, data_size, block_size, num_segments,
+                 num_share_hashes, uri_extension_size_max, pipeline_size=50000):
         self._rref = rref
hunk ./src/allmydata/immutable/layout.py 105
+        self._server = server
         self._data_size = data_size
         self._block_size = block_size
         self._num_segments = num_segments
hunk ./src/allmydata/immutable/layout.py 109
-        self._nodeid = nodeid
 
         effective_segments = mathutil.next_power_of_k(num_segments,2)
         self._segment_hash_size = (2*effective_segments - 1) * HASH_SIZE
hunk ./src/allmydata/immutable/layout.py 166
         self._offset_data = offset_data
 
     def __repr__(self):
-        if self._nodeid:
-            nodeid_s = idlib.nodeid_b2a(self._nodeid)
-        else:
-            nodeid_s = "[None]"
-        return "<WriteBucketProxy for node %s>" % nodeid_s
+        return "<WriteBucketProxy for node %s>" % self._server.get_name()
 
     def put_header(self):
         return self._write(0, self._offset_data)
hunk ./src/allmydata/immutable/layout.py 248
         return self._rref.callRemoteOnly("abort")
 
 
+    def get_servername(self):
+        return self._server.get_name()
     def get_peerid(self):
hunk ./src/allmydata/immutable/layout.py 251
-        if self._nodeid:
-            return self._nodeid
-        return None
+        return self._server.get_serverid()
 
 class WriteBucketProxy_v2(WriteBucketProxy):
     fieldsize = 8
hunk ./src/allmydata/immutable/upload.py 80
         self.buckets = {} # k: shareid, v: IRemoteBucketWriter
         self.sharesize = sharesize
 
-        wbp = layout.make_write_bucket_proxy(None, sharesize,
+        wbp = layout.make_write_bucket_proxy(None, None, sharesize,
                                              blocksize, num_segments,
                                              num_share_hashes,
hunk ./src/allmydata/immutable/upload.py 83
-                                             EXTENSION_SIZE, server.get_serverid())
+                                             EXTENSION_SIZE)
         self.wbp_class = wbp.__class__ # to create more of them
         self.allocated_size = wbp.get_allocated_size()
         self.blocksize = blocksize
hunk ./src/allmydata/immutable/upload.py 123
         #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets)))
         b = {}
         for sharenum, rref in buckets.iteritems():
-            bp = self.wbp_class(rref, self.sharesize,
+            bp = self.wbp_class(rref, self._server, self.sharesize,
                                 self.blocksize,
                                 self.num_segments,
                                 self.num_share_hashes,
hunk ./src/allmydata/immutable/upload.py 127
-                                EXTENSION_SIZE,
-                                self._server.get_serverid())
+                                EXTENSION_SIZE)
             b[sharenum] = bp
         self.buckets.update(b)
         return (alreadygot, set(b.keys()))
hunk ./src/allmydata/immutable/upload.py 151
 
 
 def str_shareloc(shnum, bucketwriter):
-    return "%s: %s" % (shnum, idlib.shortnodeid_b2a(bucketwriter._nodeid),)
+    return "%s: %s" % (shnum, bucketwriter.get_servername(),)
 
 class Tahoe2ServerSelector(log.PrefixingLogMixin):
 
hunk ./src/allmydata/immutable/upload.py 207
         num_share_hashes = len(ht.needed_hashes(0, include_leaf=True))
 
         # figure out how much space to ask for
-        wbp = layout.make_write_bucket_proxy(None, share_size, 0, num_segments,
-                                             num_share_hashes, EXTENSION_SIZE,
-                                             None)
+        wbp = layout.make_write_bucket_proxy(None, None,
+                                             share_size, 0, num_segments,
+                                             num_share_hashes, EXTENSION_SIZE)
        allocated_size = wbp.get_allocated_size()
         all_servers = storage_broker.get_servers_for_psi(storage_index)
         if not all_servers:
hunk ./src/allmydata/test/test_storage.py 138
 
     def test_create(self):
         bw, rb, sharefname = self.make_bucket("test_create", 500)
-        bp = WriteBucketProxy(rb,
+        bp = WriteBucketProxy(rb, None,
                               data_size=300,
                               block_size=10,
                               num_segments=5,
hunk ./src/allmydata/test/test_storage.py 143
                               num_share_hashes=3,
-                              uri_extension_size_max=500, nodeid=None)
+                              uri_extension_size_max=500)
         self.failUnless(interfaces.IStorageBucketWriter.providedBy(bp), bp)
 
     def _do_test_readwrite(self, name, header_size, wbp_class, rbp_class):
hunk ./src/allmydata/test/test_storage.py 169
         uri_extension = "s" + "E"*498 + "e"
 
         bw, rb, sharefname = self.make_bucket(name, sharesize)
-        bp = wbp_class(rb,
+        bp = wbp_class(rb, None,
                        data_size=95,
                        block_size=25,
                        num_segments=4,
hunk ./src/allmydata/test/test_storage.py 174
                        num_share_hashes=3,
-                       uri_extension_size_max=len(uri_extension),
-                       nodeid=None)
+                       uri_extension_size_max=len(uri_extension))
 
         d = bp.put_header()
         d.addCallback(lambda res: bp.put_block(0, "a"*25))
}
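The calling convention introduced by the patch above can be summarized with a small sketch. This is an illustrative reconstruction under assumed names, not the real layout.py: the server object becomes the second positional argument, the trailing `nodeid` parameter is gone, and `__repr__`/`get_servername()` derive their label from `server.get_name()`.

```python
# Illustrative sketch (assumed simplification of allmydata.immutable.layout):
# server replaces nodeid as the identity argument of the write proxy.

class StubServer:
    """Stand-in for an IServer with the two accessors used here."""
    def __init__(self, name, serverid):
        self._name = name
        self._serverid = serverid
    def get_name(self):
        return self._name
    def get_serverid(self):
        return self._serverid

class WriteBucketProxySketch:
    def __init__(self, rref, server, data_size, block_size, num_segments,
                 num_share_hashes, uri_extension_size_max):
        self._rref = rref
        self._server = server        # may be None for size-only calculations
        self._data_size = data_size
    def __repr__(self):
        return "<WriteBucketProxy for node %s>" % self._server.get_name()
    def get_servername(self):
        return self._server.get_name()
    def get_peerid(self):
        return self._server.get_serverid()

wbp = WriteBucketProxySketch(None, StubServer("xgru5adv", b"\x00" * 20),
                             data_size=300, block_size=10, num_segments=5,
                             num_share_hashes=3, uri_extension_size_max=500)
```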
[remove get_serverid() from ReadBucketProxy and customers, including Checker
warner@lothar.com**20110615175303
 Ignore-this: b90a83b7fd88c897c1f9920ff2f65246
 and debug.py dump-share commands
] {
hunk ./src/allmydata/immutable/checker.py 500
 
         rref = s.get_rref()
         lease_seed = s.get_lease_seed()
-        serverid = s.get_serverid()
         if self._add_lease:
             renew_secret = self._get_renewal_secret(lease_seed)
             cancel_secret = self._get_cancel_secret(lease_seed)
hunk ./src/allmydata/immutable/checker.py 509
 
         d = rref.callRemote("get_buckets", storageindex)
         def _wrap_results(res):
-            return (res, serverid, True)
+            return (res, True)
 
         def _trap_errs(f):
             level = log.WEIRD
hunk ./src/allmydata/immutable/checker.py 518
             self.log("failure from server on 'get_buckets' the REMOTE failure was:",
                      facility="tahoe.immutable.checker",
                      failure=f, level=level, umid="AX7wZQ")
-            return ({}, serverid, False)
+            return ({}, False)
 
         d.addCallbacks(_wrap_results, _trap_errs)
         return d
hunk ./src/allmydata/immutable/checker.py 557
                 level=log.WEIRD, umid="hEGuQg")
 
 
-    def _download_and_verify(self, serverid, sharenum, bucket):
+    def _download_and_verify(self, server, sharenum, bucket):
         """Start an attempt to download and verify every block in this bucket
         and return a deferred that will eventually fire once the attempt
         completes.
hunk ./src/allmydata/immutable/checker.py 577
         results."""
 
         vcap = self._verifycap
-        b = layout.ReadBucketProxy(bucket, serverid, vcap.get_storage_index())
+        b = layout.ReadBucketProxy(bucket, server, vcap.get_storage_index())
         veup = ValidatedExtendedURIProxy(b, vcap)
         d = veup.start()
 
hunk ./src/allmydata/immutable/checker.py 660
 
     def _verify_server_shares(self, s):
         """ Return a deferred which eventually fires with a tuple of
-        (set(sharenum), serverid, set(corruptsharenum),
+        (set(sharenum), server, set(corruptsharenum),
         set(incompatiblesharenum), success) showing all the shares verified
         to be served by this server, and all the corrupt shares served by the
         server, and all the incompatible shares served by the server. In case
hunk ./src/allmydata/immutable/checker.py 684
         d = self._get_buckets(s, self._verifycap.get_storage_index())
 
         def _got_buckets(result):
-            bucketdict, serverid, success = result
+            bucketdict, success = result
 
             shareverds = []
             for (sharenum, bucket) in bucketdict.items():
hunk ./src/allmydata/immutable/checker.py 688
-                d = self._download_and_verify(serverid, sharenum, bucket)
+                d = self._download_and_verify(s, sharenum, bucket)
                 shareverds.append(d)
 
             dl = deferredutil.gatherResults(shareverds)
hunk ./src/allmydata/immutable/checker.py 705
                             corrupt.add(sharenum)
                         elif whynot == 'incompatible':
                             incompatible.add(sharenum)
-                return (verified, serverid, corrupt, incompatible, success)
+                return (verified, s, corrupt, incompatible, success)
 
             dl.addCallback(collect)
             return dl
hunk ./src/allmydata/immutable/checker.py 712
 
         def _err(f):
             f.trap(RemoteException, DeadReferenceError)
-            return (set(), s.get_serverid(), set(), set(), False)
+            return (set(), s, set(), set(), False)
 
         d.addCallbacks(_got_buckets, _err)
         return d
hunk ./src/allmydata/immutable/checker.py 719
 
     def _check_server_shares(self, s):
         """Return a deferred which eventually fires with a tuple of
-        (set(sharenum), serverid, set(), set(), responded) showing all the
+        (set(sharenum), server, set(), set(), responded) showing all the
         shares claimed to be served by this server. In case the server is
hunk ./src/allmydata/immutable/checker.py 721
-        disconnected then it fires with (set() serverid, set(), set(), False)
+        disconnected then it fires with (set(), server, set(), set(), False)
         (a server disconnecting when we ask it for buckets is the same, for
         our purposes, as a server that says it has none, except that we want
         to track and report whether or not each server responded.)"""
hunk ./src/allmydata/immutable/checker.py 726
         def _curry_empty_corrupted(res):
-            buckets, serverid, responded = res
-            return (set(buckets), serverid, set(), set(), responded)
+            buckets, responded = res
+            return (set(buckets), s, set(), set(), responded)
         d = self._get_buckets(s, self._verifycap.get_storage_index())
         d.addCallback(_curry_empty_corrupted)
         return d
hunk ./src/allmydata/immutable/checker.py 743
         corruptsharelocators = [] # (serverid, storageindex, sharenum)
         incompatiblesharelocators = [] # (serverid, storageindex, sharenum)
 
-        for theseverifiedshares, thisserverid, thesecorruptshares, theseincompatibleshares, thisresponded in results:
+        for theseverifiedshares, thisserver, thesecorruptshares, theseincompatibleshares, thisresponded in results:
+            thisserverid = thisserver.get_serverid()
             servers.setdefault(thisserverid, set()).update(theseverifiedshares)
             for sharenum in theseverifiedshares:
                 verifiedshares.setdefault(sharenum, set()).add(thisserverid)
hunk ./src/allmydata/immutable/layout.py 6
 from twisted.internet import defer
 from allmydata.interfaces import IStorageBucketWriter, IStorageBucketReader, \
      FileTooLargeError, HASH_SIZE
-from allmydata.util import mathutil, idlib, observer, pipeline
+from allmydata.util import mathutil, observer, pipeline
 from allmydata.util.assertutil import precondition
 from allmydata.storage.server import si_b2a
 
hunk ./src/allmydata/immutable/layout.py 297
 
     MAX_UEB_SIZE = 2000 # actual size is closer to 419, but varies by a few bytes
 
-    def __init__(self, rref, peerid, storage_index):
+    def __init__(self, rref, server, storage_index):
         self._rref = rref
hunk ./src/allmydata/immutable/layout.py 299
-        self._peerid = peerid
-        peer_id_s = idlib.shortnodeid_b2a(peerid)
-        storage_index_s = si_b2a(storage_index)
-        self._reprstr = "<ReadBucketProxy %s to peer [%s] SI %s>" % (id(self), peer_id_s, storage_index_s)
+        self._server = server
+        self._storage_index = storage_index
         self._started = False # sent request to server
         self._ready = observer.OneShotObserverList() # got response from server
 
hunk ./src/allmydata/immutable/layout.py 305
     def get_peerid(self):
-        return self._peerid
+        return self._server.get_serverid()
 
     def __repr__(self):
hunk ./src/allmydata/immutable/layout.py 308
-        return self._reprstr
+        return "<ReadBucketProxy %s to peer [%s] SI %s>" % \
+               (id(self), self._server.get_name(), si_b2a(self._storage_index))
 
     def _start_if_needed(self):
         """ Returns a deferred that will be fired when I'm ready to return
hunk ./src/allmydata/immutable/offloaded.py 88
             self.log("no readers, so no UEB", level=log.NOISY)
             return
         b,server = self._readers.pop()
-        rbp = ReadBucketProxy(b, server.get_serverid(), si_b2a(self._storage_index))
+        rbp = ReadBucketProxy(b, server, si_b2a(self._storage_index))
         d = rbp.get_uri_extension()
         d.addCallback(self._got_uri_extension)
         d.addErrback(self._ueb_error)
hunk ./src/allmydata/scripts/debug.py 71
     from allmydata.util.encodingutil import quote_output, to_str
 
     # use a ReadBucketProxy to parse the bucket and find the uri extension
-    bp = ReadBucketProxy(None, '', '')
+    bp = ReadBucketProxy(None, None, '')
     offsets = bp._parse_offsets(f.read_share_data(0, 0x44))
     print >>out, "%20s: %d" % ("version", bp._version)
     seek = offsets['uri_extension']
hunk ./src/allmydata/scripts/debug.py 613
         class ImmediateReadBucketProxy(ReadBucketProxy):
             def __init__(self, sf):
                 self.sf = sf
-                ReadBucketProxy.__init__(self, "", "", "")
+                ReadBucketProxy.__init__(self, None, None, "")
             def __repr__(self):
                 return "<ImmediateReadBucketProxy>"
             def _read(self, offset, size):
hunk ./src/allmydata/scripts/debug.py 771
     else:
         # otherwise assume it's immutable
         f = ShareFile(fn)
-        bp = ReadBucketProxy(None, '', '')
+        bp = ReadBucketProxy(None, None, '')
         offsets = bp._parse_offsets(f.read_share_data(0, 0x24))
         start = f._data_offset + offsets["data"]
         end = f._data_offset + offsets["plaintext_hash_tree"]
hunk ./src/allmydata/test/test_storage.py 26
 from allmydata.interfaces import BadWriteEnablerError
 from allmydata.test.common import LoggingServiceParent
 from allmydata.test.common_web import WebRenderingMixin
+from allmydata.test.no_network import NoNetworkServer
 from allmydata.web.storage import StorageStatus, remove_prefix
 
 class Marker:
hunk ./src/allmydata/test/test_storage.py 193
             br = BucketReader(self, sharefname)
             rb = RemoteBucket()
             rb.target = br
-            rbp = rbp_class(rb, peerid="abc", storage_index="")
+            server = NoNetworkServer("abc", None)
+            rbp = rbp_class(rb, server, storage_index="")
             self.failUnlessIn("to peer", repr(rbp))
             self.failUnless(interfaces.IStorageBucketReader.providedBy(rbp), rbp)
 
}
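The checker-side pattern in the patch above, carrying the server object through the per-server result tuple and extracting a serverid only when the summary is assembled, can be sketched as follows. This is a hedged stand-in, not the real Checker API: `StubServer` and `summarize` are illustrative names.

```python
# Hedged sketch of the Checker change: each result tuple's second slot is now
# the server object itself, and get_serverid() is deferred to summary time.

class StubServer:
    """Stand-in for an IServer; only get_serverid() matters here."""
    def __init__(self, serverid):
        self._serverid = serverid
    def get_serverid(self):
        return self._serverid

def summarize(results):
    """results: iterable of (verified, server, corrupt, incompatible, responded)
    tuples; build a serverid -> set(sharenum) map at the last moment."""
    servers = {}
    for verified, server, corrupt, incompatible, responded in results:
        serverid = server.get_serverid()  # extracted only here
        servers.setdefault(serverid, set()).update(verified)
    return servers

s1 = StubServer("v0-aaaa")
s2 = StubServer("v0-bbbb")
summary = summarize([({0, 1}, s1, set(), set(), True),
                     ({2}, s2, set(), set(), True)])
```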

Context:

[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
 
 The former name was making my 'ls' listings hard to read, by forcing them
 down to just two columns.
] 
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
 fixes #1412
] 
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
 fixes #1412
] 
[docs: three minor fixes
zooko@zooko.com**20110610121656
 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
 CREDITS for arc for stats tweak
 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
 English usage tweak
] 
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
] 
[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
 NEWS.rst, stats.py: documentation of change to get_latencies
 stats.rst: now documents percentile modification in get_latencies
 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
 fixes #1392
] 
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
 Ignore-this: 233129505d6c70860087f22541805eac
] 
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
] 
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
] 
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
 Ignore-this: 784548fc5367fac5450df1c46890876d
] 
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
] 
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
] 
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
 Ignore-this: dea02f831298c0f65ad096960e7df5c7
] 
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
 Ignore-this: ca06de166a46abbc61140513918e79e8
] 
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
] 
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
] 
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
 Ignore-this: d557d960a986d4ac8216d1677d236399
 Remove install.html (long since deprecated).
 Also replace some obsolete references to install.html with references to quickstart.rst.
 Fix some broken internal references within docs/historical/historical_known_issues.txt.
 Thanks to Ravi Pinjala and Patrick McDonald.
 refs #1227
] 
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
] 
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
 fixes #1391
] 
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
 Ignore-this: 7847d26bc117c328c679f08a7baee519
] 
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
] 
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
] 
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
 Ignore-this: 7344652d5e0720af822070d91f03daf9
] 
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
] 
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
 Ignore-this: d5307faa6900f143193bfbe14e0f01a
] 
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
 Ignore-this: f80a787953bd7fa3d40e828bde00e855
] 
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
] 
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
] 
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
 
 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
 _shares_from_server dict was being popped incorrectly (using shnum as the
 index instead of serverid). I'm still thinking through the consequences of
 this bug. It was probably benign and really hard to detect. I think it would
 cause us to incorrectly believe that we're pulling too many shares from a
 server, and thus prefer a different server rather than asking for a second
 share from the first server. The diversity code is intended to spread out the
 number of shares simultaneously being requested from each server, but with
 this bug, it might be spreading out the total number of shares requested at
 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
 segment, so the effect doesn't last very long).
] 
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
 Ignore-this: d8d56dd8e7b280792b40105e13664554
 
 test_download.py: create+check MyShare instances better, make sure they share
 Server objects, now that finder.py cares
] 
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
 Ignore-this: 5785be173b491ae8a78faf5142892020
] 
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
] 
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
] 
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
 Ignore-this: e480a37efa9e94e8016d826c492f626e
] 
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
 Ignore-this: 6078279ddf42b179996a4b53bee8c421
 MockIServer stubs
] 
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
 Ignore-this: 296d4819e2af452b107177aef6ebb40f
] 
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
] 
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
 Ignore-this: 6c182cba90e908221099472cc159325b
] 
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
 Ignore-this: 2915133be1a3ba456e8603885437e03
] 
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
 Ignore-this: b856c84033562d7d718cae7cb01085a9
] 
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
 Ignore-this: bb75ed2afef55e47c085b35def2de315
] 
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
] 
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
 Ignore-this: 7ea858755cbe5896ac212a925840fe68
 
 No behavioral changes, just updating variable/method names and log messages.
 The effects outside these three files should be minimal: some exception
 messages changed (to say "server" instead of "peer"), and some internal class
 names were changed. A few things still use "peer" to minimize external
 changes, like UploadResults.timings["peer_selection"] and
 happinessutil.merge_peers, which can be changed later.
] 
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
] 
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
] 
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
] 
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
 Ignore-this: 657018aa501fe4f0efef9851628444ca
 
 this points to docs/frontends/*.rst, which were previously underlinked
] 
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
] 
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
] 
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
 Ignore-this: b0744ed58f161bf188e037bad077fc48
] 
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
 Ignore-this: 842144ed92f5717699b8f580eab32a51
 
 Pass around IServer instance instead of (peerid, rref) tuple. Replace
 "descriptor" with "server". Other replacements:
 
  get_all_servers -> get_connected_servers/get_known_servers
  get_servers_for_index -> get_servers_for_psi (now returns IServers)
 
 This change still needs to be pushed further down: lots of code is now
 getting the IServer and then distributing (peerid, rref) internally.
 Instead, it ought to distribute the IServer internally and delay
 extracting a serverid or rref until the last moment.
 
 no_network.py was updated to retain parallelism.
] 
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101] 
Patch bundle hash:
2494631bc26ff8c52b1d2a57bfb49f0079da50e7