IPIP-342: Ambient Discovery of Content Routers #342
This follows the previously circulated proposal outline at https://hackmd.io/bh4-SCWfTBG2vfClG0NUFg. A basic motivation is included in the PR, but essentially this is the best path I've heard for reducing our dependence on hydras as a centrally operated choke point, and for moving the bulk of the IPFS network beyond sole reliance on the current KAD DHT.
Co-authored-by: Max Inden <mail@max-inden.de>
I'm missing a way to link content with the provider, because I seriously doubt that all parties will be eager to provide all the CIDs in the universe; they will focus on providing the content they care about.
We could link root CID(s) with the provider to let nodes know where they can find the DAG for specific content.
> properties:
> * reliability - how many good vs bad responses has this router responded
>   with. This statistic should be windowed, such that the client can calculate
>   it in terms of the last week or month.
Shall we be more specific here?
> * reliability - how many good vs bad responses has this router responded
>   with. This statistic should be windowed, such that the client can calculate
>   it in terms of the last week or month.
> * performance - how quickly does this router respond.
Is this metric also windowed?
Yes, this would be windowed. I was imagining a window of ~ "last week" by default, but this seems like a good candidate to evaluate through simulation.
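The windowed statistic discussed here can be sketched roughly as follows. This is a hypothetical illustration, not part of the spec: the class name and the one-week default window are assumptions (the window length is, as noted above, a candidate for simulation).

```python
import time
from collections import deque

WINDOW_SECONDS = 7 * 24 * 3600  # assumed default: roughly "last week"

class RouterStats:
    """Tracks good/bad responses for one content router in a sliding window."""

    def __init__(self):
        self._events = deque()  # (timestamp, succeeded) pairs, oldest first

    def record(self, succeeded, now=None):
        now = time.time() if now is None else now
        self._events.append((now, succeeded))

    def reliability(self, now=None):
        """Fraction of good responses within the window, or None if no data."""
        now = time.time() if now is None else now
        # evict events that have fallen out of the window
        while self._events and self._events[0][0] < now - WINDOW_SECONDS:
            self._events.popleft()
        if not self._events:
            return None
        good = sum(1 for _, ok in self._events if ok)
        return good / len(self._events)
```

A client would keep one such record per known router and recompute the score lazily at query time, so stale observations age out without a background timer.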
> The protocol will follow a request-response model.
> A node will open a stream on the protocol when it wants to discover new
> content routers it does not already know.
> It will send a bloom filter as its query.
Could we first specify the data that we want to share between nodes, and then define the way to do it?
> list of known content routers, hashing them against the bloom filter and
> selecting the top routers that are not already known to the client. It will
> return this list, along with its reliability score for each. This response
> is structured as an IPLD list of lists, conceptually:
Can we simplify here? Do we really need an IPLD list? Reducing the number of new concepts needed to make this work will speed up the development of different implementations.
A JSON or CBOR array are both examples that would fulfill this. I'll leave it more generic, but I have a hard time imagining we'd encode this in a way that wouldn't conform to being considered an IPLD list.
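For instance, a minimal JSON encoding of the "list of lists" shape could look like this. The addresses and scores below are invented for illustration only:

```python
import json

# Illustrative only: each entry pairs a router's multiaddr with the
# responder's reliability score for it. Values here are made up.
response = [
    ["/ip4/192.0.2.10/tcp/4001/p2p/12D3KooWExample1", 0.97],
    ["/ip4/198.51.100.7/tcp/4001/p2p/12D3KooWExample2", 0.83],
]

wire = json.dumps(response)   # what goes on the wire (or cbor instead)
decoded = json.loads(wire)    # what the client reads back
```

Either JSON or CBOR round-trips this shape without any IPLD-specific machinery, which is the point of the comment above.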
> The protocol will follow a request-response model.
> A node will open a stream on the protocol when it wants to discover new
> content routers it does not already know.
> It will send a bloom filter as its query.
Maybe interesting for this use case: IBLTs: https://arxiv.org/abs/1101.2245
Proposal for sharing bitcoin transactions between nodes faster using IBLTs: https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2
I think we don't want invertibility here - the use of the bloom filter is not only for performance but also to lose some data, so as not to directly reveal what the client knows. We can consider cuckoo filters or vacuum filters as more space-efficient alternatives to a classic bloom filter.
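As a concrete sketch of the bloom-filter query: the client inserts the router identities it already knows and sends the filter bits; the responder only returns candidate routers that do not match. The parameters, hashing scheme, and helper names below are invented for illustration and are not normative:

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: false positives possible, false negatives not."""

    def __init__(self, size_bits=1024, num_hashes=4):
        self.size_bits = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # derive k bit positions from salted sha-256 digests
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size_bits

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# client side: hash known routers into the filter, then send `bf.bits`
bf = BloomFilter()
bf.add("/dns4/cid.contact/tcp/443/p2p/12D3KooWExample")

# responder side: skip any candidate router that matches the filter
def novel_routers(candidates, bf):
    return [r for r in candidates if r not in bf]
```

Note the privacy property discussed above: the responder learns only that some of its candidates *probably* match, never the client's actual list, and false positives mean the client leaks even less than its true membership set.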
> This design is self-contained - it does not require standing up additional
> infrastructure or making additional connections for discovery, but rather
> gossips routers over existing peer connections.
This is the first time gossip is mentioned. Should we be more specific in the Detailed design section about the protocol and how nodes will be interconnected?
> not be directly discovered. Instead, the gossip discovery protocol is
> ambiently discovered in much the same way as circuit relays.
>
> #### Advertisement in the DHT
The good things about advertising using the DHT are:
- The network is already there; no need to create a new protocol to "provide" new providers instead of CIDs.
- You can provide by associating your provider with a specific root CID's content. I seriously doubt that all providers will be eager to provide all the CIDs in the universe.
This is a philosophical disagreement about what a 'content router' is. We currently have a couple of examples of content routers that do have all the CIDs in the universe, and we do not have convincing examples of, or a definition for, sub-content-routers as you're proposing here. Why are we compromising toward much-harder-to-make-work complexity without first trying the thing that makes sense and is the direction we're heading?
@willscott I think it is more a physical disagreement than a philosophical one. Right now we are able to keep all the CIDs on the network in one provider for two reasons:
When we start to have different ways of providing CIDs, it will be nearly impossible to have everything replicated by everyone. Also, when the network scales, having all the information centralized in several places will be quite challenging and costly. On the other hand, allowing both approaches (providing everything vs. providing a subset of the CIDs) will surely have use cases for people without a huge amount of money to maintain big providers.
We currently have providers stepping up to provide full replicas of a content routing database. That is what network indexers have been doing over the last year. In designing delegated routing so far, the eye has been towards a design where delegated routers need to fall back and do the additional work of querying other routers in order to collect a full replica if they don't possess it themselves, rather than making that the end kubo node's responsibility, as that leads to an untenable performance and decision process for end-user nodes that are not equipped to handle it. I'm not entirely sure of your counter-proposal here: I think there are very strong counter-arguments against both #322 - which compromises trying to be a content-addressed network - and limited DHT providing (e.g. to roots), which still couldn't handle the current indexer database scale.
> IPFS nodes will advertise and coordinate discovery of content routers using a
> new libp2p protocol advertised as "/ipfs/content-router-discovery/1.0.0".
As things currently are, the name, purpose, or formatting of this protocol seems off. This relates to some of the middle ground in this discussion between @willscott and @ajnavarro with #342 (comment) and #342 (comment) around content routing.
High level:
- This currently seems to be specifically for IPNI routers, so at best this is `/ipfs/ipni-discovery/1.0.0`.
- I can guarantee with 100% certainty that there will be people who want additional content routing systems beyond `/ipfs/kad/1.0.0` and the IPNI protocol. However, the model of this discovery system works for any system that has a set of endpoints which are supposed to be able to locate all data within the system (e.g. delegated routing endpoints for `/ipfs/kad/1.0.0`, IPNI endpoints, delegated routing endpoints for BitTorrent's mainline DHT, etc.). If you want it to be generic enough to cover that, then there needs to be some name/identifier for the system you want (e.g. asking for bloom filters or peers specific to a given routing system).

If we leave this as IPNI only, ok 🤷. However, almost the same logic is going to be needed for browser nodes trying to leverage multiple delegated routing endpoints, so they'll either end up defaulting back to one of the "rejected options" here (e.g. hard-coding them or DHT discovery) or reimplementing this.
> This currently seems to be specifically for IPNI routers so at best this is `/ipfs/ipni-discovery/1.0.0`

This is for discovery of content routers per the delegated content router API - what about this is IPNI specific?
The requirement (implied by the proposed reputation scoring) of keeping track of all CIDs in existence makes this sound like an IPNI-specific proposal. Who else would keep the whole index if not the "InterPlanetary Network Indexer" (even if it is a composite/reverse proxy one)?
Due to this, renaming it to `/ipni-discovery/` calls it what it is and avoids undesired feature creep.
Alternatives:
- Make this more generic, `/router-discovery/`: extend the lookup spec to include an explicit type of router (for now all lookups will be IPNIs, but this allows us to expand in the future, as suggested). I see this being useful for gossiping/discovering things like IPNS, peer routers, or even DoH/DoT DNS resolvers.
- Having different router types allows us to have different reputation systems, which may be a way to later support routers which have only a partial view of the entire CID space.
I'm confused as to how the current draft addresses the issues here:

> will have knowledge of the entire CID space

This line (from the spec) seems problematic even in the IPNI case. IPNI != the entire CID space, which starts to make implementations complicated. The idea that one routing system is going to cover every use case is IMO not the way to go (and is also the reason why there's even discussion of a delegated content routing API rather than just an IPNI API).

An example issue: say we have 4 routers:
- cid.contact (IPNI)
- FilSwan (IPNI)
- routing.delegate.ipfs.io (proxies DHT + IPNI)
- ipfsdht.delegate.ipfs.io (proxies DHT; this is the only one that doesn't exist today, but certainly could)

While router 3 provides strictly more information than routers 1 or 2, it's also likely to be slower than them. It seems optimal to either contact 3, or contact 1/2 + 4 in parallel. A node running its own DHT client would contact routers 1 or 2 and never contact 4. However, a naïve implementation may just result in all requests going through router 3, as it has the most CIDs covered, which is not good. Perhaps a classification algorithm would be able to tease out the optimizations here without further protocol adjustments, but that seems like a lot of complexity that could be alleviated by a small protocol adjustment.

This case also seems more problematic than the one that's been resolved by flagging router "type" like `content-routing`, `peer-routing`, etc., since support for a given API (content/peer routing) can be discovered with a single query to the endpoint, whereas this requires a bunch of code complexity.

This seems like it'd be largely resolvable by allowing users to query and return a set of named routing systems, or by just calling this the "IPNI discovery system" so that routers like 3 + 4 know not to participate. I'd rather the former, but understand the latter.

While I wouldn't be surprised if down the road we also ended up requiring some of that ML-style classification code anyway, I suspect walking down that path now is premature and likely to cause us problems.
- It seems like there's a lot of complexity in expressing this 'composition complexity' either directly or through classification. We don't have this problem today, so I would prefer to defer this sort of grammar to a subsequent IPIP. You worry that 3 would do better than the others, but I would argue that would be an incentive for the IPNI team to build what I think you have previously called 'radar' to incorporate DHT results into IPNI, such that 1, 2, 3 are all equal :)
- As you say, "IPNI != the entire CID space". I think it's a mistake to limit our framing of this to an "IPNI discovery system" when it is simply discovering 'the most complete' available content routers. We're trying to be inclusive/general here, and I don't see huge harm in calling these content routers.
> we don't have this problem today

I guess it depends what you mean by "today". I would like libraries like https://github.com/libp2p/js-libp2p-delegated-content-routing to switch to the latest content routing API (or have alternatives which have switched), at which point js-libp2p in browsers should be able to leverage both the DHT and IPNI to get data from any peer that speaks wss/webtransport/webrtc.
Ideally they could use this protocol for discovery rather than hardcoding a DHT resolution endpoint.
@lidel could probably speak more about desired timings here.
> but I would argue that would be incentive for the IPNI team to build what I think you have previously called 'radar' to incorporate DHT results into IPNI such that 1,2,3 are all equal :)
That's cool and would certainly resolve at least this use case 😄.
> We're trying to be inclusive/general here
❤️
> I think it's a mistake to limit our framing of this to an "IPNI discovery system" when it is simply discovering 'the most complete' available content routers.
That's an interesting framing. By pushing for the "most complete set", it seems like you're essentially trying to get routers to compete for attention and content, and make it so that only a single request to a single system needs to happen for clients to get what they need. If the system evolves this way, that is very nice for client machines.
However, if routers try to cut costs or code complexity by serving more specific data (e.g. only data advertised over a specific pubsub channel, only data put in the IPFS Public DHT, only data from the BitTorrent network, ...), then the client code could start becoming problematic as it tries to figure out whom to ask without spamming all the routers.
I'm more cautious in advocating for the latter, but could see this go either way 😅. As long as the more immediate case around DHT + IPNI data being available to browser nodes is covered, I'm happy 😄.
+1 that delegated DHT is the core use case for IPFS on the Web Platform (JS in HTML in a web browser), at least mid-term, because self-hosted user data is (at least for now) on the DHT, and rarely on IPNI (which becomes the way for big paid providers to handle announcement of huge numbers of CIDs).
- The gist of the https://github.com/libp2p/js-libp2p-delegated-content-routing story is that it uses `/api/v0/dht` from Kubo RPC, and we want to move away from that model.
- Switching to the HTTP API at routing.delegate.ipfs.io (proxies DHT + IPNI) is an easy win, and we would want to do this ASAP.
- Having ambient discovery via bootstrap nodes talking `/wss` and `/webtransport` will allow for basic resiliency / redundancy.
> IPFS nodes, and the other side are the bootstrap and core-infrastructural
> nodes with high connectivity in the network.
>
> ### 1. content-routing as a libp2p protocol
What's the expected plan for this to work with browser-based nodes? Are they supposed to fall back to one of your rejected alternatives (e.g. hardcoded nodes, hardcoded bootstrap nodes, advertising in the DHT, advertising in the indexers, ...)?
I suspect the idea is for `/dnsaddr/bootstrap.libp2p.io` (or any other bootstrapper set by a JS user, as long as it's `/webtransport` or `/wss`) to speak this new protocol, avoiding hardcoding anything new.
What prevents them from participating in this protocol as described?
Browser nodes will need to contact other existing nodes, as they do today. They would learn about the existence of content routers through those same channels via the new protocol, and could then make use of them.
> what prevents them from participating in this protocol as described?

CORS. If the only type of router this protocol returns is an HTTP URL, then by default JS-IPFS running on a website won't be able to read data via cross-origin requests to the discovered router, due to CORS limitations.
We have two ways of solving the problem:
- (easy spec fix) Add a paragraph that requires `https://` servers returned by this discovery protocol to ALWAYS have `Access-Control-Allow-Origin: *` etc. set up. cc @guseggert for visibility, as we should include a note about CORS headers in IPIP-337: Delegated Content Routing HTTP API #337 too.
- (more involved) Create a libp2p version of IPIP-337: Delegated Content Routing HTTP API #337 that browser peers could use over existing `/wss` or `/webtransport` listeners. Another argument: Why IPFS needs Delegated Routing over libp2p.
> Cons:
> * Nodes cannot drop use of the DHT / other content routing options always are 'second tier'.
This isn't really the case. Even with this proposal you still need a bootstrap node somewhere to get going (e.g. an `/ipfs/kad/1.0.0` bootstrapper, or someone supporting this libp2p protocol). For IPNI you could advertise to IPNI as well and you'd be fine. Perhaps a more accurate con is that this gives less of that subjective information that may/may not come in handy.
The alternative of other content routers being found in the DHT, which is what this alternative is trying to describe, does mean that no IPFS node could be run without KAD DHT code for DHT lookups. Being a DHT participant is more complexity than just having libp2p code to connect to other peers, and at least my understanding of what's being proposed in this alternative is that it is intertwined with the DHT and not equivalent to hardcoded bootstrap nodes through which content routers can be learned.
> #### Static list of known routers distributed with IPFS clients
>
> This has worked for the current IPFS bootstrap node, but leads to the need for
Agreed that IPNI (and delegated routers in general) are different from `/ipfs/kad/1.0.0` in that the DHT has a discovery mechanism built in once there is bootstrapping, and currently IPNI does not. However, any implementation is still going to need some level of hard-coding to get going here, and additional discovery on top of that is needed.
> support this prioritization without leaking the exact list of known content
> routers that the client already knows.
>
> * The size of the bloom filter is chosen by the client. It is sized such
Unfortunately GitHub doesn't allow threads not tied to a line, but I wanted to add some thoughts to this discussion #342 (comment) in a way that responses would be easy to trace.
- Per @ajnavarro's comment IPIP-342: Ambient Discovery of Content Routers #342 (comment), I too find it a little hard to believe that there will be so many routers each providing full replicas of all data tracked by IPNI that a bloom filter would be required, given that running these servers is expensive and incentivization is IIUC mostly TBD (I think @guseggert had some napkin math here showing the large costs around storing 10^15 CIDs even if we exclude bandwidth costs). That being said, this is `#not-this-ipips-problem`. If the IPNI team thinks thousands of nodes all over the world will spring up hosting PBs of data, and that lack of consistency between replicas isn't going to cause problems with the evaluation criteria that clients use, that problem seems to live elsewhere.
- Whether or not IPIP-322: Content Routing Hints #322 is a good/bad idea is also `#not-this-ipips-problem`, since this describes how to find routers for a given content routing system (i.e. IPNI), not whether it should be passable as a hint.
  - As an aside, my 2c is that you've got to be careful here not to break IPLD properties if you go this route, as I've flagged in IPIP-322: Content Routing Hints #322; however, it's potentially useful to add hints as long as they're not mandatory.
> # IPIP 0342: Content Router Ambient Discovery
Since I can't expose a partial index without being punished, would this be more correct?

Suggested change: `# IPIP 0342: Content Router Ambient Discovery` → `# IPIP 0342: IPNI Content Router Ambient Discovery`
> API. These routers currently are considered to directly support queries using
> the protocols specified by
> [IPIP-337](https://github.com/ipfs/specs/pulls)
> and/or
> [IPIP-327](https://github.com/ipfs/specs/pull/327).
We need to either decide which one is the future, or document how the client decides which one to use for sending requests.
If we do the latter, including the content type along with the router URL is the way to go: the Reframe endpoint is `application/vnd.ipfs.rpc[..]; version=n`.
Clarify generality potential of protocol
I updated to hopefully address your review, @lidel
> Protocol messages are encoded using *cbor*. The following protocol examples demonstrate
> the schemas of requests and responses if they were to be encoded with JSON.
>
> A query on the "/ipfs/router-discovery/1.0.0" protocol will look like:
>
> ```json
> {
>   "router": "string",
>   "filter": "bytes of the bloom filter"
> }
> ```
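A client-side sketch of building such a query, using JSON as the stand-in encoding (as the draft's own examples do; the real wire format would be cbor). The `"router"` value and the hex encoding of the filter bytes are assumptions for illustration:

```python
import json

# placeholder: an empty 1024-bit bloom filter (128 zero bytes); a real
# client would fill this with its known routers first
bloom_bytes = bytes(128)

# hypothetical query construction; "filter" carries the raw filter bytes,
# hex-encoded here only because JSON has no native bytes type (cbor does)
query = {
    "router": "content",
    "filter": bloom_bytes.hex(),
}
wire = json.dumps(query)
```

With cbor the `filter` field would be a native byte string rather than hex text, which is one reason the draft prefers cbor on the wire.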
I wonder if this protocol for discovering routers could be useful for some libp2p users (we could have a separate type for discovering routers that support peer routing). Indexers already have the peer data (a mapping from peer ID to multiaddrs), which could be useful for reducing peer routing on light clients (using the DHT as a fallback / only when necessary).
@mxinden @marten-seemann thoughts on the use case and the wire format here?
> (we could have a separate type for discovering routers that support peer routing).

Thus they would serve the same use-case as a rendezvous server?

> thoughts on the use case and the wire format here?

I cannot think of a project outside of the IPFS realm that is in need of this.
> Thus they would serve the same use-case as a rendezvous server?

It extends the rendezvous protocol in two ways:
- not requiring a single hard-coded rendezvous point
- adding reputational gossip in addition to just the directory listing in the rendezvous protocol
> nodes do not have geographic locality. As a result, performance is
> separated in the tracking of content routers because it will not be
> effective as a ranking factor in the non-geographically-aware
> gossip system described here. As an optimization, nodes may choose to
Gossiping should be geographically aware and happen only between peers that are geographically close to each other. Otherwise, I may share a content router that is geographically close to me, but it will be too slow for you, and you won't use it at all.
So sharing content routers with geographically distant peers becomes irrelevant, as long as we have enough content routers and they are distributed around the globe.
Do we believe IPFS nodes will generally have enough knowledge to ambiently identify which peers are geographically close to them?
A peer knows the RTT between itself and all of its directly connected peers. I would argue that a node cannot learn useful information about new content routers from a node that is 150+ ms away from itself (except if it is in a desert). Hence nodes could gossip about content routers only with their closest nodes (in ping distance).
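The RTT gate suggested here is trivial to express; the threshold value comes from the comment above, while the function and parameter names are hypothetical:

```python
RTT_THRESHOLD_MS = 150  # the cutoff suggested above

def gossip_candidates(peer_rtts):
    """peer_rtts: mapping of peer id -> measured round-trip time in ms.
    Returns the peers considered close enough to gossip routers with."""
    return [peer for peer, rtt in peer_rtts.items() if rtt < RTT_THRESHOLD_MS]
```

Since libp2p hosts already measure ping RTTs to connected peers, this needs no extra geolocation data; distance is inferred purely from latency.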
libp2p/specs#413 (GossipSub v1.2) would probably solve this, since it's all about minimising latency
We could use the same modeling/structure as GossipSub, but this is meant to be pull-based rather than push-based. I have concerns about dropping in GossipSub directly.
+1, routers identified with DNS names (cid.contact) could use things like anycast to ensure the client is routed to the closest instance (I believe we already do that for the ipfs.io gateway). Meaning, reports about "the same router" may actually be about entirely different instances. At the very least, the spec should note that the distance between peers should impact a router's score evaluation.
Co-authored-by: Guillaume Michel - guissou <guillaumemichel@users.noreply.github.com>
> In addition, this protocol expects that content routers that may be considered
> for auto-configuration/discovery by IPFS nodes will have knowledge of the
> entire CID space - in other words a delegation to such a router may be
> considered 'exhaustive'.
How does this happen? What kind of consistency SLAs should routers have, and how can they achieve it?
I'd like to say 'that's outside of this direct IPIP', in that if routers fail to be consistent they would risk losing priority.
In practice:
- indexers follow the list of providers from other indexers, so that the constituents they follow are consistent
- they gossip announcements they see to each other, so new updates are propagated between them
- [in progress] they can come to snapshot consensus periodically over a vector of providers & latest advertisements
Co-authored-by: Gus Eggert <gus@gus.dev>
This came up in https://pl-strflt.notion.site/2023-05-30-Content-Routing-WG-12-b2ed74834fe44e359bbcdd02740e2084. There is going to be implementation need for this in the next quarter or two. As a result, we want to get ahead before code gets written and decisions ossify. Some next steps:
Implementation notes:
> has served useful content in the past.
> * Latency / ping time of the peer.
>
> ### 3. selection of routers
Note to self: this section of the spec should be more specific about the "bare minimum reputation system", and provide enough for an implementer to do the right thing, not just say clients do "as they wish".
Expected probing behavior (or lack of it) on non-client services like bootstrappers should also be specified.
A response is a list of entries, which looks like:

```json
[
  {
    "peer": "multiaddr.MultiAddr",
    "score": float
  }
]
```
A single `score` may be too vague. Based on Rhea/Saturn alone, we may want to track a "lookup" score and a "retrieval" score separately. If we want this API to not be limited to IPNI, then we could have a `type` per result.
IPFS nodes, and the other side are the bootstrap and core-infrastructural
nodes with high connectivity in the network.

### 1. content-routing as a libp2p protocol
I feel that sooner or later we will need an HTTP version of this, so it is better to address the HTTP plan for this protocol from the start.

If this is a generic discovery protocol, the HTTP story could be as basic as a section stating that the dag-json/dag-cbor wire format described here can be exposed as /routing/v1/routers or /routing/v1/discovery (making it part of the existing routing story for HTTP).

Since this is a request-response protocol, perhaps leverage the libp2p+HTTP work from libp2p/specs#508? It would enable us to describe the protocol in terms of HTTP semantics, and expose the same socket over libp2p and HTTP (like we expose the trustless gateway over libp2p in the Kubo experiment at ipfs/kubo#10049). This collapses complexity related to testing, and maximizes utility: an HTTP client is enough to query a public endpoint for useful routers.

Also needed: an explicit "router" field in the response expressing the content router type, and an explicit example of the "ipni" use case.
[IPIP-337](https://github.com/ipfs/specs/pulls/337)
and/or
[IPIP-327](https://github.com/ipfs/specs/pull/327).
Would it be worth calling out these names as well, so users unfamiliar with the IPIP-### numbers don't have to click through?
In addition, this protocol expects that content routers that may be considered
for auto-configuration/discovery by IPFS nodes will have knowledge of the
entire CID space - in other words a delegation to such a router may be
considered 'exhaustive'.

Is it possible to have an exhaustive list of the entire CID space?
Also, the wording here is hard to follow. Maybe something like:
```diff
-In addition, this protocol expects that content routers that may be considered
-for auto-configuration/discovery by IPFS nodes will have knowledge of the
-entire CID space - in other words a delegation to such a router may be
-considered 'exhaustive'.
+In addition, implementers of this protocol may specify default content routers, but this protocol comes with a few expectations for default content-routers:
+
+Default content routers configured by those implementing this protocol:
+
+* MUST know of the entire CID space - in other words, a delegation to such a router may be
+  considered 'exhaustive'
```
I also doubt that any single/federated entity will know the entire CID space. It may be possible now, but this certainly doesn't scale.
If we shard the CIDs across different instances, we basically get a DHT (with O(1) lookup if all shards fit in memory). However, it inherits the same weaknesses the current DHT has: when providing a fair amount of CIDs, you need to open a connection to every single shard. To mitigate this, we might consider introducing intermediary services tasked with CID-to-shard allocation, streamlining access and reducing the number of direct connections needed.
I am going to classify this as out of scope for this IPIP.
We have examples of content routers being used today that are exhaustive. This IPIP is aimed at solving the immediate federation problem there. We don't have existing routers federating shards of the space, so we cannot yet solve the general problem concretely.
Given nobody has implemented this for a year, I don't feel comfortable trying to increase scope at this point.
Nodes will conceptually track a registry about known content routers.
This registry will be able to understand for a given content router two
properties:
```diff
-Nodes will conceptually track a registry about known content routers.
-This registry will be able to understand for a given content router two
-properties:
+Nodes will conceptually track a registry of known content routers.
+This registry will maintain two properties for a given content router:
```
* reliability - how many good vs bad responses has this router responded
with. This statistic should be windowed, such that the client can calculate
it in terms of the last week or month. This will in practice be stored as
daily buckets of successful and unsuccessful queries against a router, where
success indicates that the router was queried, and the data was subsequently
retrieved from a node returned as a provider by that router.
Is it possible for a content router that returns a peer that should have content to know whether it successfully provided that content?

1. A asks B for providers of bafy1
2. B finds C and tells A about it
3. A asks C for bafy1

Is this spec requiring A to tell B the result of step 3 above? How is this tracking accomplished?
2. When its AutoNAT status indicates it is eligible to be a DHT server, and
it has not successfully performed a sync in over a day.
What is the reasoning here?