
A previously valid key that is compromised can rewrite the history of the DID Log by removing newer entries. #59

Closed
TimoGlastra opened this issue Jun 28, 2024 · 4 comments


@TimoGlastra

This was mentioned by someone at DICE (I don't remember who, but I thought it would be good to open an issue describing the problem as I understood it).

The updateKeys property defines which keys can create an update to the DID document, limiting updates to entities in possession of an updateKey. So gaining access to the web server alone is not enough to hijack the document and add keys.

However, because of how web servers work, an attacker can simply replace the contents of the jsonl file and strip out newer entries (like rewriting the history of a blockchain). So if a malicious actor gets hold of any key that was ever used in the DID document, they can remove all entries created after that key was valid and rewrite the history from that point, adding their own key.
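A minimal sketch of the attack, assuming a hash-chained did.jsonl log. The hashing here is plain SHA-256 over sorted-key JSON purely for illustration; real did:tdw entry hashing is different, and the field names are hypothetical:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Illustrative only: plain SHA-256 over sorted-key JSON, not the
    # actual did:tdw entry-hash construction.
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

def truncate_and_rewrite(log: list[dict], compromised_index: int,
                         attacker_entry: dict) -> list[dict]:
    # The attack: keep only the entries up to the one whose updateKey
    # the attacker holds, then append a new entry chained to it. The
    # web server just serves the file, so nothing stops the swap.
    rewritten = log[: compromised_index + 1]
    attacker_entry["prevHash"] = entry_hash(rewritten[-1])
    return rewritten + [attacker_entry]

log = [
    {"versionId": 1, "prevHash": None, "updateKeys": ["key-1"]},
    {"versionId": 2, "prevHash": "h1", "updateKeys": ["key-2"]},
    {"versionId": 3, "prevHash": "h2", "updateKeys": ["key-3"]},
]

# Attacker compromised key-1, valid at version 1: versions 2-3 vanish
# and a fresh version 2 chained to version 1 takes their place.
forged = truncate_and_rewrite(log, 0, {"versionId": 2,
                                       "updateKeys": ["attacker-key"]})
```

The forged log is internally consistent from the truncation point forward, which is why a stateless verifier cannot tell it apart from an honest log.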

So, in terms of security, an updateKeys list containing only the currently valid keys is equivalent to one containing every key ever used: any previously valid key can still be used for this attack.

Resolvers could store the latest version they have resolved and thereby recognize that history has been rewritten, but this only works if you fetched the DID before the rewrite took place; a first-time resolver has no earlier state to compare against.
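The caching idea above amounts to a prefix check: a legitimate update only appends entries, so a newly fetched log must start with whatever the resolver saw before. A sketch, with hypothetical names:

```python
def detect_rewrite(cached_hashes: list[str],
                   fetched_hashes: list[str]) -> bool:
    # cached_hashes: ordered entry hashes from a previous resolution;
    # fetched_hashes: hashes from the current one. A valid update only
    # appends, so the fetched log must begin with the cached prefix.
    # Returns True when history appears to have been rewritten.
    # (Sketch only; a real resolver would persist state per DID.)
    return fetched_hashes[: len(cached_hashes)] != cached_hashes
```

This catches a rewrite that happens between two resolutions, but says nothing about one that happened before the resolver's first fetch.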

I'm not sure how this could easily be solved without using something like witnesses and essentially building a blockchain again, but it does seem like a critical thing to address.

@TimoGlastra
Author

Something about adding an extra verification step via a DNS record was also mentioned; I'm not sure if this is described somewhere already. But I guess publishing the hash of the latest log entry in a DNS record would mean only entries created after that hash could be rewritten, assuming the DNS server hasn't been compromised as well. It also wouldn't require updating the DNS entry and the did:tdw document at the same time, though you should probably do so in a timely manner.

@swcurran
Contributor

Thanks for the valuable feedback. This is the second time we’ve received this comment (both from the Netherlands — clearly "Dutch minds think alike"! :-) ), and happily, the answers are fairly clear. Here's our perspective on the attack you described:

Let's first clarify the conditions of the attack. In your second paragraph you note that gaining access to the web server alone is insufficient to hijack the document, but the subsequent text seems to suggest that the attack could be executed with web access alone. To remove any ambiguity: the attack you describe requires both a compromised key and a compromised website.

So what to do about the described attack?

For those currently using did:web, a move to did:tdw makes the attack much more complex. Not perfect, as you mention, but much more difficult, since both compromises (key and website) are required. With did:web, simply getting web access and replacing the did.json file is all that is needed. With did:tdw, a website-only compromise and change to the did.jsonl file is detected during verification. Annoying of course, but not to the level of loss of control of the DID. Thus with did:tdw that (significant) extra mitigation is achieved with relatively little extra effort on the part of the Controller — and they get all the other did:tdw features (history, pre-rotation, portability, authorization evidence, and so on).

As you mention, the application of the High Assurance DID with DNS specification that applies to did:web applies equally to did:tdw, so that same mitigation applies. With that, the attacker needs to compromise DNS, the web server, and an updateKeys key to achieve the attack that you describe.

As mentioned in my presentation at DICE, we have defined and are adding to the specification what KERI calls “witnesses” — a capability we plan to call “approvers.” Approvers (like witnesses) are collaborators with the Controller that approve DID version updates before they are published, providing evidence of their approval. A verifier checks the approvers’ evidence as part of their verification of the DID. Like the rest of did:tdw, we’ve tried to specify this in a very lightweight manner (to quote Mike Jones at EIC Berlin — “make the difficult possible”): the DID Controller simply includes a list of approvers, and the approvers verify version updates and provide a verifiable credential indicating their approval of the change before the Controller publishes the update. While the spec definition and mechanics are simple, it is left to the implementations (and ecosystems) to decide how complex their deployment needs to be (number of approvers, approval threshold, approval protocol flow, etc.). The spec will be updated in the next couple of days with the definition of how approvers work. I’ll update this issue when the draft is available. Implementations to follow.
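As a rough illustration of the verifier-side threshold check described above — the data model, field names, and credential verification are all hypothetical here, not the spec's:

```python
from dataclasses import dataclass

@dataclass
class Approval:
    approver_id: str   # who issued the approval
    version_hash: str  # which log entry they approved
    valid: bool        # stands in for verifying their credential

def version_approved(approvals: list[Approval], version_hash: str,
                     declared_approvers: set[str],
                     threshold: int) -> bool:
    # Count distinct, valid approvals for this version that come from
    # the Controller's declared approver list, and require a threshold.
    ok = {a.approver_id for a in approvals
          if a.valid
          and a.version_hash == version_hash
          and a.approver_id in declared_approvers}
    return len(ok) >= threshold
```

The threshold, the approver list, and the approval protocol flow would be ecosystem decisions, per the comment above.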

The concepts that KERI calls “watchers” (and “judges”, “juries”, and so on) are all components independent of the DID Controller, and thus their use is outside the specification. Anyone can set up monitoring (“watching”) of a did:tdw DID however they want, and verifiers aware of such monitors can use them as they deem appropriate. That becomes an implementation question for any verifier and ecosystem. Perhaps they belong in the implementer’s guide, but we don’t currently see a need for them in the spec.

An additional comment, independent of any particular DID Method. We think the more substantive threat in the event of compromised keys (especially in a post-quantum world) is not the loss of control of the DID (e.g. the attack you describe), but rather the attacker simply using the key to sign “stuff”. When that happens, the value of the attestations from the Controller is diminished / destroyed, because verifiers lose trust in who actually signed the payload. We have ideas on how that can be mitigated, but will try to pursue those independent of did:tdw — doing so at the DID or key publication level (e.g., JWK, etc.), so that the mitigation applies to all signing events, regardless of how the key is published. We might experiment with the mitigation in did:tdw as an example, but the right place for such a capability is as close to the use of any signing key as possible.

@andrewwhitehead
Member

I do think that High Assurance DID with DNS will be useful here, but we will need to specify a minimum/earliest entry hash to accept when resolving the associated did:tdw. I've added a related issue for supporting resolution parameters, although perhaps another mechanism would be used: CIRALabs/high-assurance-dids-with-dns#37

Essentially you would want the DID record(s) to specify a recent entry hash, ideally the latest, while allowing for a delay between publishing a new version of the DID and getting the new DNS record propagated. If the log is then truncated prior to that point, DID resolution would fail.
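A sketch of that check, with hypothetical names: resolution fails when the DNS-anchored hash is absent from the fetched log, i.e. the log was truncated before the anchor. Entries published after the anchor are still accepted, which tolerates DNS propagation lag:

```python
def resolve_with_dns_anchor(log_hashes: list[str],
                            dns_anchor_hash: str) -> list[str]:
    # log_hashes: ordered entry hashes from the fetched did.jsonl;
    # dns_anchor_hash: the recent entry hash published in DNS.
    # The anchor may lag the newest entry, so the log is allowed to
    # extend past it — it just must not have been cut before it.
    if dns_anchor_hash not in log_hashes:
        raise ValueError("DID log truncated before DNS-anchored entry")
    return log_hashes
```

With this in place, an attacker would have to compromise DNS as well to truncate the log earlier than the published anchor.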

@swcurran
Contributor

Closing this issue as resolved. There could be other issues created from this. I'll add at least one, about using the High Assurance DIDs specification with did:tdw.
