Replies: 1 comment
Hey thanks @cqueern. I should add something like this to the docs (WIP), but the way I've been thinking about it is that these are general best practices and YMMV in terms of actual performance benefit. For example, while I was looking into the issue of meta CSP disabling the preload scanner, one site appeared to improve by 31% in local testing. But testing the same fix on a different site resulted in no change, due to other factors like how its scripts were loaded.

So what I'd say for now is to A/B test the recommendations in a development environment to get an idea of their impact first. If the improvement is significant, developers can use that data to convince their team that the issue is worth fixing in production. And if possible, collect even more A/B data from real users to validate the change.

In my CSP tests, I did use LCP as the metric for comparison. First Paint or some application-specific user timing metric could work as well. FWIW, I didn't have much luck testing the changes in WPT experiments, as Stoyan recommended, due to TTFB variability. Testing locally with DevTools overrides worked for me since I didn't have development access to the sites I was testing, but that shouldn't be an issue for actual site owners.
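To make that concrete, here's a minimal sketch of capturing LCP in the page with the standard PerformanceObserver API (the web-vitals library works too). Log the value from a few runs with and without the fix applied and compare:

```ts
// Minimal sketch: log the page's LCP so runs with and without a Capo fix
// (e.g. applied via DevTools local overrides) can be compared.
// `buffered: true` replays LCP candidates that fired before this code ran.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  // The most recent entry is the current LCP candidate; it stops changing
  // once the user interacts with the page.
  const lcp = entries[entries.length - 1];
  console.log('LCP (ms):', lcp.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```

Comparing the median of several runs rather than a single load helps smooth out the kind of variability I ran into.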
Again, great job with Capo, Rick.
Sorry if I missed this somewhere already.
Dev teams are going to want to know what they get if they make these changes so they can prioritize the corresponding ticket in the backlog. What's the recommended way to communicate the impact of making Capo's suggested changes?
Perhaps there's something to be taken from the LCP metric that suggests how many seconds might be saved if the recommendations are implemented.