Copied from http://tahoebs1.allmydata.com:8123/file/URI%3ACHK%3Ah2laisecpigc7uqg6uyctzmeq4%3Aaoef77jjwsryjbz6cav5lhid3lisqhxm5ym7qdrn5vhm24oa4oyq%3A3%3A10%3A4173/@@named=/peer-selection-tahoe3.txt

= THIS PAGE DESCRIBES HISTORICAL ARCHITECTURE CHOICES: THE CURRENT CODE DOES NOT WORK AS DESCRIBED HERE. =

When a file is uploaded, the encoded shares are sent to other peers. But to
which ones? The PeerSelection algorithm is used to make this choice.

In the old (May 2007) version, the verifierid is used to consistently permute
the set of all peers (by sorting the peers by HASH(verifierid+peerid)). Each
file gets a different permutation, which (on average) will evenly distribute
shares across the grid and avoid hotspots.

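A minimal sketch of that permutation, using SHA-256 as a stand-in for HASH()
(the function and argument names here are illustrative, not the actual Tahoe
code):

{{{
#!python
from hashlib import sha256

def permute_peers(verifierid, peerids):
    # Each file's verifierid yields a different, but repeatable, ordering:
    # sort every known peer by HASH(verifierid + peerid).
    return sorted(peerids,
                  key=lambda peerid: sha256(verifierid + peerid).digest())
}}}

Because the verifierid is mixed into every hash, two different files see the
peers in unrelated orders, which is what spreads load across the grid.
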
This permutation places the peers around a 2^256^-sized ring, like the rim of
a big clock. The 100-or-so shares are then placed around the same ring (at 0,
1/100*2^256^, 2/100*2^256^, ... 99/100*2^256^). Imagine that we start at 0 with
an empty basket in hand and proceed clockwise. When we come to a share, we
pick it up and put it in the basket. When we come to a peer, we ask that peer
if they will give us a lease for every share in our basket.

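The ring geometry can be written down the same way (again an illustrative
sketch; 2^256^ is simply the size of the hash output space):

{{{
#!python
from hashlib import sha256

RING = 2**256

def share_positions(num_shares=100):
    # Shares sit at evenly spaced points around the ring:
    # 0, 1/100*2^256, 2/100*2^256, ... 99/100*2^256.
    return [(i * RING) // num_shares for i in range(num_shares)]

def peer_position(verifierid, peerid):
    # A peer's position for this file is its permutation hash,
    # interpreted as an integer on the same ring.
    return int.from_bytes(sha256(verifierid + peerid).digest(), "big")
}}}
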
The peer will grant us leases for some of those shares and reject others (if
they are full or almost full). If they reject all our requests, we remove
them from the ring, because they are full and thus unhelpful. Each share they
accept is removed from the basket. The remainder stay in the basket as we
continue walking clockwise.

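Putting those pieces together, the clockwise walk might look roughly like the
sketch below. The ask_for_leases() call is a hypothetical stand-in for the
real lease-request RPC; it is assumed to return the share numbers the peer
accepted.

{{{
#!python
def place_shares(verifierid, peerids, num_shares=100):
    # Merge share points and peer points into one clockwise ordering.
    ring = sorted(
        [(pos, "share", num) for num, pos in enumerate(share_positions(num_shares))] +
        [(peer_position(verifierid, p), "peer", p) for p in peerids],
        key=lambda entry: entry[0])

    basket = []       # shares picked up but not yet placed
    placements = {}   # sharenum -> peerid that granted a lease
    while True:
        placed_this_pass = False
        for pos, kind, item in list(ring):
            if kind == "share":
                if item not in placements and item not in basket:
                    basket.append(item)                  # pick the share up
            elif basket:                                 # kind == "peer"
                accepted = ask_for_leases(item, basket)  # hypothetical RPC
                if not accepted:
                    ring.remove((pos, kind, item))       # rejected everything: full
                for sharenum in accepted:
                    placements[sharenum] = item
                    basket.remove(sharenum)
                    placed_this_pass = True
        no_peers_left = not any(kind == "peer" for _, kind, _ in ring)
        if not basket or no_peers_left or not placed_this_pass:
            break     # all placed, or everyone is full, or we are stuck
    return placements
}}}
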
We keep walking, accumulating shares and distributing them to peers, until
either we find a home for all shares, or there are no peers left in the ring
(because they are all full). If we run out of peers before we run out of
shares, the upload may be considered a failure, depending upon how many
shares we were able to place. The current parameters try to place 100 shares,
of which 25 must be retrievable to recover the file, and the peer selection
algorithm is happy if it was able to place at least 75 shares. These numbers
are adjustable: 25-out-of-100 means an expansion factor of 4x (every file in
the grid consumes four times as much space when totalled across all
StorageServers), but is highly reliable (the actual reliability is a binomial
distribution function of the expected availability of the individual peers,
but in general it goes up very quickly with the expansion factor).

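To make the binomial claim concrete: if, as a rough simplification, each of
the N shares is assumed to land on a distinct peer and each peer is
independently available with probability p, the file is recoverable whenever
at least k shares survive. This is a simplified model, not the project's
actual reliability analysis:

{{{
#!python
from math import comb

def file_availability(p, k=25, n=100):
    # P(at least k of the n independently-held shares are retrievable)
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Even with quite unreliable peers, 25-out-of-100 holds up well:
# file_availability(0.5) is still essentially 1.0.
}}}
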
If the file has been uploaded before (or if two uploads are happening at the
same time), a peer might already have shares for the same file we are
proposing to send to them. In this case, those shares are removed from the
list and assumed to be available (or will be soon). This reduces the number
of uploads that must be performed.

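In terms of the earlier place_shares() sketch, a peer's report of already-held
shares just shrinks the basket (the helper below is hypothetical; in practice
this information would come back in the lease-request response):

{{{
#!python
def note_existing_shares(placements, basket, peerid, already_have):
    # Shares this peer already holds do not need to be re-sent: record them
    # as placed and drop them from the basket of homeless shares.
    for sharenum in already_have:
        placements.setdefault(sharenum, peerid)
        if sharenum in basket:
            basket.remove(sharenum)
}}}
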
When downloading a file, the current release just asks all known peers for
any shares they might have, chooses the minimal necessary subset, then starts
downloading and processing those shares. A later release will use the full
algorithm to reduce the number of queries that must be sent out. This
algorithm uses the same consistent-hashing permutation as on upload, but
instead of one walker with one basket, we have 100 walkers (one per share).
They each proceed clockwise in parallel until they find a peer, and put that
one on the "A" list: out of all peers, this one is the most likely to be the
same one to which the share was originally uploaded. The next peer that each
walker encounters is put on the "B" list, etc.

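A sketch of how those lists might be built (names are illustrative; each
"walker" simply takes the peers in clockwise order starting from its share's
ring position, reusing the ring helpers from the upload sketches):

{{{
#!python
def build_tiers(verifierid, peerids, num_shares=100, num_tiers=2):
    # For each share, order the peers clockwise starting from the share's
    # ring position: the first peer each walker meets goes on the "A" list
    # (tier 0), the second on the "B" list (tier 1), and so on.
    positions = sorted((peer_position(verifierid, p), p) for p in peerids)
    tiers = [set() for _ in range(num_tiers)]
    for share_pos in share_positions(num_shares):
        clockwise = ([p for pos, p in positions if pos >= share_pos] +
                     [p for pos, p in positions if pos < share_pos])
        for tier, peer in zip(tiers, clockwise):
            tier.add(peer)
    return tiers     # tiers[0] is the "A" list, tiers[1] the "B" list, ...
}}}
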
| | 56 | All the "A" list peers are asked for any shares they might have. If enough of |
| | 57 | them can provide a share, the download phase begins and those shares are |
| | 58 | retrieved and decoded. If not, the "B" list peers are contacted, etc. This |
| | 59 | routine will eventually find all the peers that have shares, and will find |
| | 60 | them quickly if there is significant overlap between the set of peers that |
| | 61 | were present when the file was uploaded and the set of peers that are present |
| | 62 | as it is downloaded (i.e. if the "peerlist stability" is high). Some limits |
| | 63 | may be imposed in large grids to avoid querying a million peers; this |
| | 64 | provides a tradeoff between the work spent to discover that a file is |
| | 65 | unrecoverable and the probability that a retrieval will fail when it could |
| | 66 | have succeeded if we had just tried a little bit harder. The appropriate |
| | 67 | value of this tradeoff will depend upon the size of the grid, and will change |
| | 68 | over time. |
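
The retrieval side then works down the tiers until enough distinct shares
have been located; get_share_numbers() is a hypothetical "which shares of
this file do you hold?" query, and k is the number of shares needed to
decode:

{{{
#!python
def locate_shares(tiers, k=25):
    found = {}                    # sharenum -> a peer that holds it
    for tier in tiers:            # "A" list first, then "B" list, ...
        for peer in tier:
            for sharenum in get_share_numbers(peer):   # hypothetical query
                found.setdefault(sharenum, peer)
        if len(found) >= k:
            break                 # enough shares to decode; stop querying
    return found
}}}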