Note: This is a project used by the Ministry of Justice UK and agencies. https://intranet.justice.gov.uk/

Nb. `README.md` is located in `.github/`
The application uses Docker. This repository provides two separate local test environments, namely:
- Docker Compose
- Kubernetes
Where Docker Compose provides a pre-production environment to apply upgrades and develop features, Kubernetes allows us to test and debug our deployments to the Cloud Platform.
In a terminal, move to the directory where you want to install the application. You may then run:

```sh
git clone https://github.com/ministryofjustice/intranet.git
```

Change directories:

```sh
cd intranet
```
Next, depending on the environment you would like to launch, choose one of the following:
This environment has been set up to develop and improve the application.

The following `make` command will get you up and running. It creates the environment and starts all services; the main service is called `php-fpm`:

```sh
make
```

During the `make` process, the Dory proxy will attempt to install. You will be guided through an installation, if needed.
You will have ten services running in total, all with different access points. They are:
**Nginx**

http://intranet.docker/

**PHP-FPM**

Access with `make bash`

**Node**

This service watches and compiles our assets; there is no need to access it. The output of this service is available on STDOUT.
When working with JS files in the `src` directory it can be useful to develop from inside the Node container. Using a devcontainer will allow the editor to have access to the `node_modules` directory, which is good for IntelliSense and type safety.

When using a devcontainer, first start the required services with `make` and then open the project in the devcontainer. Be sure to keep an eye on the Node container's terminal output for any Laravel Mix errors.

The folder `src/components` is used when it makes sense to keep a group of SCSS/JS/PHP files together. The folder `src/components/post-meta` is an example where PHP is required to register fields in the backend, and JS is used to register fields in the frontend.
**MariaDB**

Internally accessed by PHP-FPM on port 3306

**PHPMyAdmin**

http://intranet.docker:9191/

Login information can be found in `.env`
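If you need the credentials quickly, a grep over the env file works; the exact variable names are an assumption, so adjust the pattern to match your `.env`:

```sh
# Show database-related settings from the local env file.
grep -iE '(db|mysql|maria)' .env
```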
**Opensearch**

We use this to index and search intranet content.

**Opensearch Dashboard**

Dashboards that allow us to query indexed data.
**Minio**

Minio acts just like an AWS S3 bucket.

**CDN**

This service acts like a distributed CloudFront service, allowing us to imitate a CDN.
**CRON**

In production we have a scalable cron container. Its only job right now is to make a HEAD request to `wp-cron.php`. There is no need to access this container. However, as with every running container, you can reach the OS:

```sh
docker compose exec -it wp-cron ash
```
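As a rough illustration, the container's job amounts to a loop like the one below; the hostname and interval are assumptions, not the production configuration:

```sh
# Hypothetical sketch: trigger the WordPress cron runner by sending a
# HEAD request to wp-cron.php once a minute.
while true; do
  curl --head --silent "http://nginx/wp-cron.php" > /dev/null
  sleep 60
done
```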
There is no need to install application software on your computer.
All required software is built within the services - all services are ephemeral.
**Composer**

We match the production CI process locally to ensure we test against the same criteria. As such, during development it will be necessary to rebuild directories when updating Composer.
After making changes to `composer.json`, run:

```sh
make composer-update
```
This will fire off a set of checks, ensuring composer updates and all static assets are distributed correctly. For more information, review Dockerfile and local assets files.
There are multiple volume mounts created in this project and shared across the services. The approach has been taken to speed up and optimise the development experience.
This environment is useful to test Kubernetes deployment scripts.
Local setup attempts to get as close to development on Cloud Platform as possible, with a production-first approach.
- Docker
- kubectl
- Kind
- Hosts file update, e.g. run `sudo nano /etc/hosts` and, on a new line, add: `127.0.0.1 intranet.local`
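Alternatively, append the entry in a single command:

```sh
echo "127.0.0.1 intranet.local" | sudo tee -a /etc/hosts
```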
Once the above requirements have been met, we are able to launch our application by executing the following make command:

```sh
make kube
```
The following will take place:
- If running, the Dory proxy is stopped
- A Kind cluster is created with configuration from `deploy/config/local/cluster.yml`
- The cluster Ingress is configured
- Nginx and PHP-FPM images are built
- Images are transferred to the Kind Control Plane
- Local deployment is applied using `kubectl apply -f deploy/local`
- Pods are verified using `kubectl get pods -w`
Access the running application here: http://intranet.local/
In the MariaDB YAML file you will notice a persistent volume claim. This will assist you in keeping application data, preventing you from having to reinstall WordPress every time you stop and start the service.
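Once the deployment has settled, a couple of quick checks can confirm the environment is healthy; the `default` namespace matches the local setup described below, and the claim name will be whatever the MariaDB YAML defines:

```sh
# Confirm the app responds through the Kind ingress.
curl -I http://intranet.local/

# Confirm the MariaDB persistent volume claim is bound.
kubectl get pvc -n default
```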
Most secrets are managed via GitHub settings.

It was the intention that WordPress keys and salts be auto-generated before the initial GHA build stage. Lots of testing occurred, yet the result wasn't as desired; dynamic secrets could not be hidden in the log outputs. Because of this, secrets are managed in settings.
```sh
# Make interaction a little easier; we can create repeatable
# variables. Our namespace is the same name as the app, defined
# in ./deploy/development/deployment.tpl
#
# If interacting with a different stack, change the NSP var.
# For example:
# - production, change to 'intranet-prod'

# Set some vars; get the first available pod
NSP="intranet-dev"; \
POD=$(kubectl -n $NSP get pod -l app=$NSP -o jsonpath="{.items[0].metadata.name}");
```

```sh
# Local interaction is a little different:
# - local, change NSP to `default` and app to `intranet-local`
NSP="default"; \
POD=$(kubectl -n $NSP get pod -l app=intranet-local -o jsonpath="{.items[0].metadata.name}");
```
After setting the above variables (via copy -> paste -> execute), the following blocks of commands will work using copy -> paste -> execute too.
```sh
# List available pods and their status for the namespace
kubectl get pods -n $NSP

# Watch for updates; add the -w flag
kubectl get pods -w -n $NSP

# Describe the first available pod
kubectl describe pods -n $NSP

# Monitor the system log of the first pod container
kubectl logs -f $POD -n $NSP

# Monitor the system log of the fpm container
kubectl logs -f $POD -n $NSP fpm

# Open an interactive shell on an active pod
kubectl exec -it $POD -n $NSP -- ash

# Open an interactive shell on the FPM container
kubectl exec -it $POD -n $NSP -c fpm -- ash
```
To access the OpenSearch dashboards, use port forwarding on the OpenSearch Proxy pod.
```sh
OS_POD=$(kubectl -n $NSP get pods --no-headers -o custom-columns=":metadata.name" | awk '{if ($1 ~ "opensearch-proxy-cloud-platform-") print $0}');
kubectl -n $NSP port-forward $OS_POD 8181:8080
```
And visit the dashboards in your browser at http://localhost:8181/_dashboards
Create a bucket with the following settings:
- Region: `eu-west-2`
- Object Ownership:
  - ACLs enabled
  - Bucket owner preferred
- Block all public access:
  - Block public access to buckets and objects granted through new access control lists (ACLs): NO
  - Block public access to buckets and objects granted through any access control lists (ACLs): YES
  - Block public access to buckets and objects granted through new public bucket or access point policies: YES
  - Block public and cross-account access to buckets and objects through any public bucket or access point policies: YES
Create a deployment with the following settings:
- Cache key and origin requests: Legacy cache settings
  - Query strings: All
To restrict access to the Amazon S3 bucket, follow the guide to implement origin access control (OAC): https://repost.aws/knowledge-center/cloudfront-access-to-amazon-s3
To use a user's keys, create a user with a policy similar to the following:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3-bucket-access",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
```
An access key can then be used for testing actions related to the S3 bucket. Use the following environment variables:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

When deployed, server roles should be used instead of access keys.
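For a quick smoke test with the AWS CLI, export the key pair and list the bucket; the credentials and bucket name below are placeholders:

```sh
# Placeholder credentials; use the access key created for the user above.
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."

# List the bucket's contents to confirm the keys grant access.
aws s3 ls s3://bucket-name
```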
To verify that S3 & CloudFront are working correctly:

- Go to the WP Offload Media Lite settings page. There should be green checks for the Storage & Delivery settings.
- Upload an image via the Media Library.
- The image should be shown correctly in the Media Library.
- The img source domain should be CloudFront.
- Directly trying to access an image via the S3 bucket URL should return an access denied message.
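The last point can be checked from a terminal too; the object path below is a placeholder:

```sh
# Request an object directly from the bucket; expect a 403 Forbidden.
curl -sI https://bucket-name.s3.eu-west-2.amazonaws.com/path/to/image.jpg
```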
- Ministry of Justice | Overview
- App justicedigital-centraldigital-intranet
- App justicedigital-centraldigital-intranet-staging-placeholder
- Go to the Azure portal and sign in with your account.
- Click on the `Microsoft Entra ID` service.
- Click on `App registrations`.
- Click on `New registration`.
- Fill in the form (adjust to the environment):
  - Name: `justicedigital-centraldigital-intranet-staging`
  - Supported account types: `Accounts in this organizational directory only`
  - Redirect URI: `Web` and `https://staging.intranet.justice.gov.uk/oauth2/callback` or `https://intranet.justice.gov.uk/oauth2/callback` etc.
- Copy the `Application (client) ID` and `Directory (tenant) ID` values and make them available as the environment variables `OAUTH_CLIENT_ID` and `OAUTH_TENANT_ID`.
- Click on `Certificates & secrets` > `New client secret`.
- Fill in the form:
  - Description: `Staging-Intranet`
  - Expires: `18 months`
- Set a reminder to update the client secret before it expires.
- Copy the `Value` value and make it available as the environment variable `OAUTH_CLIENT_SECRET`.
- Make a request to the Identity Team that `User.Read` API permissions be added to the app.
The OAuth2 flow should now work with the Azure AD/Entra ID application. You can get an Access Token, a Refresh Token and an expiry time for the token.
The App for localhost and dev.intranet.justice.gov.uk is managed by the Identity Team.
The App is on the development tenant, and you'll need to use a development email address for access.
The `OAUTH_*` values are stored in a shared note in 1Password. They can be copied to `.env` to use OAuth locally.
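For local OAuth, the entries in `.env` look something like this; the values are placeholders, so use the real values from the 1Password note:

```sh
OAUTH_CLIENT_ID="00000000-0000-0000-0000-000000000000"
OAUTH_TENANT_ID="00000000-0000-0000-0000-000000000000"
OAUTH_CLIENT_SECRET="replace-with-client-secret"
```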
To view the intranet content, visitors must meet one of the following criteria.
- Be in an Allow List of IP ranges.
- Or, have a Microsoft Azure account, within the organisation.
The visitor's IP is checked first, then if that check fails, they are redirected to the project's Entra application.
This project uses nginx to serve content, either by sending requests to fpm (php) or serving static assets.
We use nginx to control access to the site's content by using the `ngx_http_auth_request_module`.
> [It] implements client authorization based on the result of a subrequest. If the subrequest returns a 2xx response code, the access is allowed. If it returns 401 or 403, the access is denied with the corresponding error code.
Documentation is found at https://nginx.org/en/docs/http/ngx_http_auth_request_module.html
The internals of the `/auth/verify` endpoint will be explained next; for now, the following diagrams show how the `ngx_http_auth_request_module` works.
For an authorized user, the internal subrequest to `/auth/verify` will return a 2xx status code, and the user will see the requested content.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Authorized user
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    Note right of nginx (/auth/verify): Request is authorized
    nginx (/auth/verify)->>nginx: 200 response code
    nginx->>Client: Content response
```
For an unauthorized user, the internal subrequest to `/auth/verify` will return a 401 or 403 status code, and the user will see an error page.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Unauthorized user
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    Note right of nginx (/auth/verify): Request is not authorized
    nginx (/auth/verify)->>nginx: 401 response code
    nginx->>Client: Error page response
```
The auth request rules can be found in the auth-request.conf. `auth_request` sets the endpoint. `auth_request_set` is used to set a variable that's available after the subrequest. This file is then `include`d in all protected locations in nginx server.conf.
The first step in handling a subrequest to `/auth/verify` is comparing the client's IP address to a list of known allowed IP ranges. To achieve this efficiently, the `ngx_http_geo_module` module is used.
> The `ngx_http_geo_module` module creates variables with values depending on the client IP address.

Documentation is found at https://nginx.org/en/docs/http/ngx_http_geo_module.html
Our implementation is in nginx server.conf.
The `geo` block towards the start of the file contains some module config, along with an include: `include /etc/nginx/geo.conf;`

`geo.conf` is a list of IPs with group values. The file is not checked into source control; instead:
- an environment variable `IPS_FORMATTED` is generated during deployment, from the private ministryofjustice/moj-ip-addresses repository.
- `geo.conf` is generated when `nginx` containers start up, based on the value of `IPS_FORMATTED`.
See .github/workflows/ip-ranges-configure.yml for the script that downloads and transforms the IP ranges. See deploy/config/init/nginx-geo.sh for the nginx init script.
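As a sketch of what such an init script might do (the real logic lives in deploy/config/init/nginx-geo.sh; the output path matches the `include` above, everything else is an assumption):

```sh
#!/bin/sh
# Sketch only: write the deployment-provided IP ranges into the file
# that nginx includes, failing fast if the variable is missing.
set -eu
: "${IPS_FORMATTED:?IPS_FORMATTED must be set}"
printf '%s\n' "$IPS_FORMATTED" > /etc/nginx/geo.conf
```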
A flow diagram of `ngx_http_auth_request_module` & `ngx_http_geo_module` responding to a client (who has a valid IP address).
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Has allowed IP
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    Note right of nginx (/auth/verify): IP is verified via geo module
    nginx (/auth/verify)->>nginx: 200 response code
    nginx->>Client: Content response
```
This diagram shows how a user without a privileged IP will be redirected to `/auth/login` when they first try to visit the intranet.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Unprivileged IP
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    Note right of nginx (/auth/verify): geo module ⛔️
    nginx (/auth/verify)->>nginx: 401 response code
    nginx->>fpm: Load dynamic 401 page
    Note right of fpm: Generate JWT with success_url,<br/>serve document with meta/js<br/>redirect to /auth/login
    fpm->>nginx: 401, JWT & doc.
    nginx->>Client: Forward 401, JWT & doc.
    Note left of Client: User is redirected
    Client->>nginx: Request /auth/login
```
This diagram shows how a user with an organisation email address will be logged in via Entra.
Nginx is transparent for these requests, so it's omitted from the diagram.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Unprivileged IP
    Client->>fpm: Request /auth/login
    Note right of fpm: Start OAuth flow,<br/>hash state and send cookie,<br/> save pkce<br/>redirect to Entra.
    fpm->>Client: 302 & state cookie.
    Client->>Entra: Authorization URL.
    Note right of Entra: Prompt for login<br/>or use existing session.
    Entra->>Client: Redirect to callback URL
    Client->>fpm: Request /auth/callback?state=...
    Note right of fpm: Callback state is validated,<br/>refresh tokens stored<br/>JWT generated with role and expiry<br/>cleanup state cookie & pkce
    fpm->>Client: 302 to success_url or / & JWT.
```
Here, the user has a JWT with an expiry time in the future and the necessary role of `reader`. The following diagram shows how this user will access content. The requests/responses have been omitted*, as this step is the same with or without auth.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Has valid JWT
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    nginx (/auth/verify)->>fpm (moj-auth/verify.php): Handle auth subrequest
    Note right of fpm (moj-auth/verify.php): JWT is validated
    fpm (moj-auth/verify.php)->>nginx (/auth/verify): 200 response code
    nginx (/auth/verify)->>nginx: 200 response code
    Note right of nginx: ...<br/>serve content from WP<br/>or static asset*<br/>....
    nginx->>Client: Content response
```
This diagram shows how a user will not be redirected to `/auth/login` after too many failed login attempts.
Nginx is transparent for these requests, so it's omitted from the diagram.
```mermaid
sequenceDiagram
    actor Client
    Note left of Client: Unprivileged IP &<br/>3 failed callback attempts
    Client->>nginx: Content request
    nginx->>nginx (/auth/verify): Auth subrequest
    Note right of nginx (/auth/verify): geo module ⛔️
    nginx (/auth/verify)->>nginx: 401 response code
    nginx->>fpm: Load dynamic 401 page
    Note right of fpm: JWT indicates too many<br/>failed login attempts ⛔️
    fpm->>nginx: 401, JWT & doc.
    nginx->>Client: Forward 401, JWT & doc.
    Note left of Client: Static 401 error page<br/>without redirect to /auth/login ⛔️
```
In the background, as a visitor is browsing, JavaScript requests the `auth/heartbeat` endpoint. This is for two reasons:

- It keeps the OAuth session fresh: the endpoint handler refreshes OAuth tokens and updates JWTs before they expire.
- If a visitor's state has changed, e.g. they have moved from an office with an allowed IP, their browser content is blurred and they are prompted to refresh the page.
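As a hypothetical manual check of the endpoint (the host is the local Docker Compose URL from this README; the cookie name and JWT handling are assumptions):

```sh
# Expect a 2xx response while the session is valid; an error status
# suggests the visitor's auth state has changed.
curl -i --cookie "jwt=<your-jwt>" http://intranet.docker/auth/heartbeat
```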