
Define our threat model #12

Open
jyasskin opened this issue Sep 24, 2021 · 4 comments

Comments

@jyasskin
Collaborator

We need to document a threat model for this work. In particular, we should describe what capabilities actors are assumed to have and which of their goals we plan to either block or allow. Like in https://w3cping.github.io/privacy-threat-model/#model-cross-site-recognition, there may be multiple kinds of actors with different capabilities, and some kinds of actors might be able to achieve goals that we could frustrate for other kinds of actors.

Some possible capabilities:

  • Can run JavaScript on a web page.
  • Can modify server-side request handling (as is needed to decorate the path of a URL).
  • Is willing to add redirections to the critical path (as is needed for bounce tracking).

Some possible goals:

  • Associate a user ID on one site with a user ID on another site that represents the same person.
  • Probabilistically associate a user ID on one site with a user ID on another site that represents the same person.
  • Tell one site about some of a user's actions on another site.

I think we should start by brainstorming more capabilities and goals, and after that discuss what goals we can and should frustrate or allow.
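To make the "decorate the path of a URL" capability concrete, here's a minimal sketch of what a tracker with server-side control of a site could do to outbound links. The function and parameter names (`decorateOutboundLink`, `src_uid`) are invented for illustration, not taken from any real tracker:

```javascript
// Sketch: a server-side handler rewrites an outbound link so the
// destination site receives an identifier from the source site.
// 'src_uid' is an invented query parameter name.
function decorateOutboundLink(href, userId) {
  const url = new URL(href);
  url.searchParams.set('src_uid', userId);
  return url.toString();
}

console.log(decorateOutboundLink('https://destination.example/page', 'abc123'));
// → https://destination.example/page?src_uid=abc123
```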

@BrianLefler

BrianLefler commented Oct 6, 2021

Other capabilities:

  • Receives a resource request on a significant fraction of a site's page loads with a user ID attached via decoration or cookies.
  • Has a 3P presence on a significant number of sites (such that they will frequently be a 3P on both sides of a navigation).
  • Has a 1P interaction with a significant number of users.
  • Has a significant 1P presence (such that they will frequently be a 1P on one side of a navigation).
  • Can run a server on the same eTLD+1 (through CNAME cloaking or other means).
  • Is willing to cause redirections or popups outside of the critical navigation path.

Other goals:

  • Track a single person's actions on and across all sites where user IDs have been associated, in perpetuity.

It might be useful to group fine-grained capabilities into a smaller set of actor personas, since it is sometimes the combination of capabilities that determines an attacker's options. For example, if an attacker has a significant 3P presence and can run JavaScript, they could frequently transfer identities without adding a navigational redirect or a bounce (by decorating via JavaScript on the source and reading via JavaScript on the destination, with no extra hops needed). And I'm not sure that associating opaque user IDs is very useful for tracking purposes if an actor can't receive ID-annotated resource requests on page visits.
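A minimal sketch of that no-extra-hop transfer, assuming the same third party has script access on both sides of the navigation (all names here are invented; this shows only the mechanism, not any particular vendor's code):

```javascript
// On the source page, the embedded 3P script rewrites a link target
// to carry the 3P's identifier. 'tp_id' is an invented parameter name.
function tagLink(href, thirdPartyId) {
  const url = new URL(href);
  url.searchParams.set('tp_id', thirdPartyId);
  return url.toString();
}

// On the destination page, the same 3P's script reads the identifier
// back out of the landing URL — no redirect hop was needed.
function readTag(landingUrl) {
  return new URL(landingUrl).searchParams.get('tp_id');
}

const landing = tagLink('https://news.example/article', 'id-42');
console.log(readTag(landing)); // the IDs are now joined across sites
```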

@jyasskin jyasskin added the agenda+ Request to add this issue to the agenda of our next telcon or F2F label Oct 13, 2021
@jyasskin jyasskin removed the agenda+ Request to add this issue to the agenda of our next telcon or F2F label Oct 20, 2021
@bvandersloot-mozilla

I'm hoping to help write a threat model with a few different actors and capabilities, to at least get the ball rolling in the spec. I think I have my head around list-based approaches; however, I'm a little confused while going through the mitigations in place in Safari. In particular, the last two paragraphs as written have me scratching my head a bit.

If the registrable domain that the user is being automatically redirected from has been classified as having cross-site tracking capabilities, Safari will delete all non-cookie storage on the site the user is being redirected to, if the user does not interact (i.e., register a user activation) on the destination site within seven days of browser use.

Additionally, if the URL the user is navigating to has either query parameters or a URL fragment, the lifetime of client-side set cookies on the destination page is capped at 24 hours.

The goal seems to be to restrict storage on the destination side of a navigational tracking flow, which makes sense and is in line with other defenses. But why is there a carve-out for cookies in the first protection? Web compat? Additionally, is there a reason that cookies for a URL without query parameters or a fragment don't get a lifetime cap too?
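For my own understanding, here's the URL-shape part of that second trigger as I read the quoted text — the 24-hour cap on client-side cookies applies when the destination URL carries query parameters or a fragment. This is purely illustrative (invented function name, not Safari's actual implementation, and it ignores the tracker-classification precondition):

```javascript
// Sketch of the trigger condition as quoted: does the destination URL
// carry either query parameters or a fragment?
function capApplies(destinationUrl) {
  const url = new URL(destinationUrl);
  return url.search !== '' || url.hash !== '';
}

console.log(capApplies('https://site.example/?click_id=7')); // true
console.log(capApplies('https://site.example/#frag'));       // true
console.log(capApplies('https://site.example/path'));        // false
```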

@jyasskin
Collaborator Author

jyasskin commented May 19, 2022

https://webkit.org/tracking-prevention/#7-day-cap-on-all-script-writeable-storage says all storage that's written by the client has the 7-day cap, with a bit of variation in that cookies are (max) 7 calendar days while other storage is 7 days of browser use. So client-written cookies are deleted after 7 days, but server-written cookies aren't. That indicates that Safari's threat model is that actors with server cooperation are allowed to do more tracking than actors that are constrained to the client. (I have PR #21 to better incorporate this into the existing-mitigations section. Comments and improvements are welcome.)
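A sketch of that distinction as I understand it (illustrative names, not WebKit's code; it also glosses over the nuance that for cookies the cap is 7 calendar days while for other script-writable storage it is 7 days of browser use):

```javascript
// Sketch: client-written cookies get a 7-calendar-day cap on their
// declared lifetime; server-set cookies keep whatever expiry the
// Set-Cookie header requested.
const DAY_MS = 24 * 60 * 60 * 1000;

function effectiveExpiry(setAtMs, requestedExpiryMs, setByScript) {
  if (!setByScript) return requestedExpiryMs;   // server Set-Cookie: untouched
  const cap = setAtMs + 7 * DAY_MS;             // script-written: 7-day cap
  return Math.min(requestedExpiryMs, cap);
}

const oneYear = 365 * DAY_MS;
console.log(effectiveExpiry(0, oneYear, true));  // capped at 7 days
console.log(effectiveExpiry(0, oneYear, false)); // full year survives
```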

@bvandersloot-mozilla

Thanks, those changes help make things clearer, and they seem to include a substantive update as well!

That is a good insight about server cooperation vs. included JavaScript. I'm still curious about the particular choices in the navigational tracking protections, though. There are enough specifics that it seems like there is some well-thought-out reasoning behind the curtain!
