A database of information from AWS, GitHub, Snyk, and other sources, Service Catalogue aims to provide a picture of the Guardian's estate, broken down by Product & Engineering (P&E) team.
In contrast with Prism, which collects data from only a subset of AWS resources, Service Catalogue offers a more complete picture of production services: we may provision a resource that Prism doesn't know about.
The Guardian has hundreds of EC2 instances, lambdas, and other services in AWS, each built from one of thousands of GitHub repositories, by one of many P&E teams.
Some of the questions Service Catalogue aims to answer include:
- For P&E teams:
  - Which services do I own? (see the example query after this list)
  - Which services follow DevX best practice/use tooling?
  - Which repo do services come from?
  - What is my service reliability? (time since last incident)
- For the Developer Experience stream:
  - What proportion of all services follow best practice/use tooling?
  - What kinds of technologies are different streams using?
  - Which teams are struggling with reliability and need more support?
  - Which services belong to specific P&E product teams?
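To make the ownership question concrete, many of these questions reduce to a single query against the CloudQuery tables in Postgres. The sketch below is illustrative only: it assumes the CloudQuery AWS plugin's `aws_ec2_instances` table (whose `tags` column is JSONB) and a hypothetical `Team` tag naming the owning team; the real tables and tagging conventions in our estate may differ.

```typescript
import { Client } from 'pg';

// Illustrative sketch: list the apps a team owns, based on EC2 instance tags.
// Assumes the CloudQuery AWS plugin's aws_ec2_instances table, and a
// hypothetical `Team` tag identifying the owning P&E team.
async function servicesOwnedBy(team: string): Promise<string[]> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query(
      `SELECT DISTINCT tags ->> 'App' AS app
         FROM aws_ec2_instances
        WHERE tags ->> 'Team' = $1`,
      [team],
    );
    return rows.map((row: { app: string }) => row.app);
  } finally {
    await client.end();
  }
}
```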
Pricing information is not yet available in Service Catalogue, so we're unable to answer questions such as:
- What does each service cost?
- What services are costing us the most money?
Service Catalogue has two parts:
- Data collection
- Data analysis
We use CloudQuery to collect data from AWS, GitHub, Snyk, and other sources.
We've implemented CloudQuery as a set of ECS tasks, writing to a Postgres database. For more details, see CloudQuery implementation.
> **Tip:** To update CloudQuery, see Updating CloudQuery.
The data in Service Catalogue is analysed in two ways:
- Grafana, at https://metrics.gutools.co.uk
- AWS Lambda functions, for example RepoCop or data-audit (a simplified example follows)
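To give a flavour of what such a lambda does, the sketch below evaluates a single RepoCop-style rule. It is an assumption-laden simplification, not RepoCop's actual code: it assumes the CloudQuery GitHub plugin's `github_repositories` table, with `full_name` and `default_branch` columns.

```typescript
import { Client } from 'pg';

interface RepoAssessment {
  fullName: string;
  defaultBranchIsMain: boolean;
}

// Simplified sketch of a RepoCop-style rule: flag repositories whose default
// branch is not 'main'. Assumes the CloudQuery GitHub plugin's
// github_repositories table, with full_name and default_branch columns.
export async function handler(): Promise<RepoAssessment[]> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query(
      'SELECT full_name, default_branch FROM github_repositories',
    );
    return rows.map((row: { full_name: string; default_branch: string }) => ({
      fullName: row.full_name,
      defaultBranchIsMain: row.default_branch === 'main',
    }));
  } finally {
    await client.end();
  }
}
```

The real RepoCop evaluates many such rules and writes the results back to a table in the CloudQuery database, as shown in the architecture diagram below.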
Service Catalogue has a runbook, which you can access here, that explains how to deal with common problems, respond to alerts, and perform useful operations such as triggering tasks manually (sketched below).
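For example, triggering one of the CloudQuery ECS tasks manually might look roughly like this with the AWS SDK for JavaScript v3. The cluster, task definition, and subnet values here are placeholders; the runbook has the authoritative steps.

```typescript
import { ECSClient, RunTaskCommand } from '@aws-sdk/client-ecs';

// Illustrative only: manually trigger a CloudQuery ECS task. The cluster,
// task definition, and subnet values are placeholders, not real names.
async function triggerCloudQueryTask(): Promise<void> {
  const ecs = new ECSClient({ region: 'eu-west-1' });
  await ecs.send(
    new RunTaskCommand({
      cluster: 'service-catalogue-cluster', // placeholder
      taskDefinition: 'cloudquery-aws-task', // placeholder
      launchType: 'FARGATE',
      networkConfiguration: {
        awsvpcConfiguration: {
          subnets: ['subnet-00000000'], // placeholder
        },
      },
    }),
  );
}
```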
Follow the instructions in the dev-environment README to run CloudQuery locally, then follow the instructions in the repocop README to run RepoCop locally.
The diagram below outlines the architecture of the major components of Service Catalogue.
```mermaid
flowchart TB
    DB[(CloudQuery database)]
    snyk[Snyk REST API]
    github[GitHub REST API]
    cq[CloudQuery batch jobs]
    devxDev[Developer on the DevX team]
    dev[P&E developer]
    repocop[RepoCop lambdas]
    aws[AWS APIs]
    snyk --> |Data from Snyk populates CloudQuery tables|cq
    github --> |Data from Dependabot populates CloudQuery tables|cq
    aws --> |Data from AWS populates CloudQuery tables|cq
    cq --> |CloudQuery writes data to the DB|DB
    DB --> |1 - CloudQuery data is used to calculate departmental compliance with obligations|repocop
    repocop --> |2 - RepoCop stores compliance information about repos as a table in the CloudQuery DB|DB
    repocop --> |RepoCop raises PRs to fix issues, which are reviewed by developers|dev
    Grafana --> |Compliance dashboards are used by DevX developers to track departmental progress towards obligations|devxDev
    repocop --> |RepoCop sends notifications of events or warnings to teams via Anghammarad|Anghammarad
    Anghammarad --> |Anghammarad delivers messages to developers about changes to their systems|dev
    DB --> |CloudQuery data powers \n Grafana dashboards|Grafana
    Grafana --> |Compliance dashboards are used by developers to track their team's progress towards obligations. They also have read access to raw CloudQuery tables.|dev
```
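The notification edge in the diagram (RepoCop to Anghammarad to developer) is driven by the @guardian/anghammarad client library. The sketch below shows roughly what sending such a notification could look like; the topic ARN and target are placeholders, and the exact field names are an assumption to be checked against the library's README.

```typescript
import { Anghammarad, RequestedChannel } from '@guardian/anghammarad';

// Rough sketch of a RepoCop-style notification via Anghammarad. The topic
// ARN and target are placeholders; check the @guardian/anghammarad README
// for the authoritative API.
async function notifyTeam(): Promise<void> {
  const client = new Anghammarad();
  await client.notify({
    subject: 'RepoCop: default branch is not main',
    message: 'Your repository does not follow the default-branch rule.',
    actions: [{ cta: 'View the rule', url: 'https://example.com/rule' }],
    target: { Stack: 'my-stack' }, // placeholder
    channel: RequestedChannel.Email,
    sourceSystem: 'repocop',
    topicArn: 'arn:aws:sns:eu-west-1:000000000000:Anghammarad', // placeholder
  });
}
```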