[tahoe-dev] One Grid to Rule Them All

Avi Freedman freedman at freedman.net
Sat Jun 29 04:56:21 UTC 2013


Dear Comrade Nathan,

My first thought was that one could publish hashes of readcaps into DNS,
where the DNS response would be the introducer furl for the cluster.  But...
with the introducer furl I think the recipient could upload as well as retrieve.
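
Roughly what I had in mind, as a Python sketch (the zone name is made up,
dnspython is assumed, and the last comment is the problem):

import hashlib

import dns.resolver  # third-party: dnspython


def introducer_for_cap(readcap, zone="grid-lookup.example"):
    # Hash the readcap so the cap itself never shows up in DNS queries or logs.
    label = hashlib.sha256(readcap.encode("ascii")).hexdigest()[:32]
    answer = dns.resolver.resolve("%s.%s" % (label, zone), "TXT")
    # The TXT payload would be the introducer furl for the cluster.
    furl = b"".join(answer[0].strings).decode("ascii")
    # ...but anyone holding this furl can upload, not just retrieve.
    return furl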

We've been looking at something related for Havenco, which is getting
ready to launch LAFS and S3 bucketed storage using private nodes per
customer (to solve the lack of accounting).

One question that's come up is how users could share LAFS-stored
data without giving away the keys to their cluster (for uploads).

We haven't implemented it yet, but it seems pretty simple to have an
nginx proxy that sits on a public port and accepts caps using the same URL
format as the tahoe-lafs web server:

http://x.y.z.q:3456/file/URI%3ACHK%3Acbb4d3bb6dgiqwiygidqolabve%3Ag6jf2rutbf3pzeltxytm5tbf3f3xu2hhj2yrbnn4vcw2nvrrs4va%3A3%3A10%3A4720/@@named=/tahoe-test

That just proxies to a local tahoe-lafs web server bound to localhost.

Then you wind up sharing a URL instead of a cap.
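
We haven't written the nginx config yet, but the behavior amounts to
something like this (a stdlib Python sketch; the port numbers and the
path whitelist are just illustrative):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

TAHOE = "http://127.0.0.1:3456"  # local tahoe-lafs web server, not public


class ReadOnlyGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only pass through download-style paths; everything else is refused,
        # so visitors can fetch shared files but never upload or modify.
        if not (self.path.startswith("/file/") or self.path.startswith("/uri/")):
            self.send_error(403, "read-only gateway")
            return
        with urlopen(TAHOE + self.path) as upstream:
            body = upstream.read()
            ctype = upstream.headers.get("Content-Type",
                                         "application/octet-stream")
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    # No do_PUT/do_POST defined, so uploads get an automatic 501.


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReadOnlyGateway).serve_forever()

nginx can do the same thing in production with a location block limited
to GET on those paths.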

Adding in basic auth would be pretty simple as well if desired, though
in the LAFS religion that would be heresy (sorry, not sure if you believe
in religion, Comrade).

We have been thinking about setting this up anyway, so if there's interest
we can do a quick test.

Another argument has been for adding basic or more advanced (cert?) auth
to the tahoe web server itself and having it restrict functionality to
read-only, or to something less than full use of the cluster with the
ability to upload new objects.  But since we're still poking at LAFS we'd
rather not start hacking on the core, so the proxy solution seems less
intrusive and like it should work for publishing content to 3rd parties
you don't want crossing the inner bit streams of your friendnets.

Complexity could be added by having a DNS db of cap <-> cluster
public-facing web server mappings.  If there were interest we could
build and run something like that, at least to the level of millions
of caps.  Doing so for billions+ would need some of the economic
incentives to which you were referring.
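
The lookup side would be along the lines of the earlier sketch (again a
hypothetical zone and dnspython), except the TXT record names the cluster's
public-facing gateway rather than an introducer furl:

import hashlib
from urllib.parse import quote

import dns.resolver  # third-party: dnspython


def public_url_for_cap(cap, zone="cap-directory.example"):
    label = hashlib.sha256(cap.encode("ascii")).hexdigest()[:32]
    answer = dns.resolver.resolve("%s.%s" % (label, zone), "TXT")
    # TXT payload is something like "http://x.y.z.q:3456" -- the public-facing
    # web server of whichever cluster holds the cap; the cap never hits DNS.
    gateway = b"".join(answer[0].strings).decode("ascii")
    return "%s/file/%s" % (gateway, quote(cap, safe=""))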

Although...

If latency/QoS weren't an issue, one could perhaps have a group of
cooperating sites run an LAFS cluster with a few thousand well-known
caps storing pieces of a DB of just the cap <-> url links, and a local
DNS server doing the lookups and responses with some caching.
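
Shard selection could be as dumb as hashing the cap; something like this,
assuming the shards are just JSON tables reachable through any local
gateway (the well-known caps below are placeholders):

import hashlib
import json
from functools import lru_cache
from urllib.parse import quote
from urllib.request import urlopen

LOCAL_GATEWAY = "http://127.0.0.1:3456"  # any LAFS web gateway works

# Placeholder values; the real list would be the few thousand well-known
# readcaps, published once and mirrored by the cooperating sites.
WELL_KNOWN_SHARD_CAPS = ["URI:CHK:placeholder-shard-0",
                         "URI:CHK:placeholder-shard-1"]


def shard_cap_for(cap):
    # Deterministic shard choice: everyone hashes the cap the same way, so
    # every site's local DNS server consults the same well-known cap.
    digest = hashlib.sha256(cap.encode("ascii")).digest()
    index = int.from_bytes(digest[:4], "big") % len(WELL_KNOWN_SHARD_CAPS)
    return WELL_KNOWN_SHARD_CAPS[index]


@lru_cache(maxsize=100000)
def fetch_shard(shard_cap):
    # Download one shard of the directory through the local gateway and parse
    # it as a cap -> url table (the JSON format here is hypothetical).
    with urlopen("%s/uri/%s" % (LOCAL_GATEWAY, quote(shard_cap, safe=""))) as resp:
        return json.loads(resp.read())


def lookup(cap):
    # The local DNS server would call this and serve the answer, with
    # lru_cache providing the "some caching" mentioned above.
    return fetch_shard(shard_cap_for(cap))[cap]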

Avi
(a part-time Havenco bit janitor)

> The time has come to shed our conspiratorial pretense of being nothing but
> small disparate bands of neighborly do gooders sharing storage with their
> friends.  It is time to reveal to the world our true conquest of world
> domination and announce our intent to create The One Grid to Rule Them All!

> I personally want to be able to email or tweet or inscribe on papyrus a URL
> containing a read cap, and anyone who sees that and has Tahoe-LAFS version
> Glorious Future installed should have a reasonable chance to retrieve the
> content.
> 

> Regards,
> Comrade Nathan
> Grid Universalist


