*** DEPRECATED, please see https://github.com/ministryofjustice/cloud-platform-infrastructure
This project is a Kubernetes testbed, intended to allow experimentation and evaluation with minimal setup required - essentially a prototype hosting platform, or Rails scaffold for Kubernetes.
Included here are resources to provision Kubernetes clusters with Kops, as a stand-in for AWS Elastic Kubernetes Service (EKS) until that becomes available.
Also included are sample cluster components for common services (authentication, HTTP routing, DNS, etc.), and test applications for deployment; the expectation is that all cluster components will be able to be deployed to an EKS cluster as-is.
This readme covers cluster operations (provisioning and modifying clusters), authentication, and cluster components.
As this repo uses git-crypt, your GPG key must be added to this repo for you to be able to access sensitive information such as SSH keys and authentication secrets.

A GPG key is not required to obtain Kubernetes credentials and interact with clusters, however - cluster admin credentials can be obtained by logging into Kuberos with your Github account. Additionally, as long as you have valid AWS credentials and access to the Kops S3 bucket, you can download static admin credentials for kubeconfig using `kops`.
Info on how to modify infrastructure and configuration of existing clusters, and how to provision new clusters.
```
brew install kops
brew install kubernetes-cli
brew install git-crypt
```
- Make sure you have credentials for the correct AWS account and that they are active:
  ```
  $ export AWS_PROFILE=mojds-platforms-integration
  ```
- Kops stores resource specifications and credentials in an S3 bucket, which must be specified before using `kops`:
  ```
  $ export KOPS_STATE_STORE=s3://moj-cloud-platforms-kops-state-store
  ```
- Download cluster credentials from S3 and configure `kubectl` for use:
  ```
  $ kops export kubecfg cluster1.kops.integration.dsd.io
  ```
- Verify that you can interact with the cluster:
  ```
  $ kubectl cluster-info
  ```
- Check what's running:
  ```
  $ kubectl get pods --all-namespaces
  ```
Edit the `cluster.yml` with your desired changes (e.g. adding an additional IAM policy for external-dns), then run:

```
$ kops replace -f $YOUR_CLUSTER.yml
```

This will replace the existing cluster specification YML in S3 with your new version. Then:

```
$ kops update cluster
```

to preview the changes kops will apply to AWS. Then:

```
$ kops update cluster --yes
```

to actually apply the changes. If these changes require that EC2 instances be replaced, this will be noted in the output; a rolling update can then be performed with:

```
$ kops rolling-update cluster
```

to preview changes, and:

```
$ kops rolling-update cluster --yes
```

to perform the rolling update.
Set an environment variable for your cluster name (must be DNS-compliant - no underscores or similar):

```
export CLUSTER_NAME=cluster2
```
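The DNS-compliance requirement can be checked up front with a small shell helper - this helper is illustrative, not part of the repo:

```shell
# Hypothetical helper: check that a proposed cluster name is a valid DNS
# label (lowercase alphanumerics and hyphens, no leading/trailing hyphen,
# at most 63 characters).
is_dns_label() {
  case "$1" in
    *[!a-z0-9-]*|''|-*|*-) return 1 ;;
  esac
  [ "${#1}" -le 63 ]
}

is_dns_label cluster2   && echo valid    # prints: valid
is_dns_label my_cluster || echo invalid  # prints: invalid (underscore)
```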
Both `kops` and the external-dns component will create DNS records for the cluster, so a cluster-specific DNS zone should be created to avoid interfering with other clusters or services in AWS.

- Using either the console or the AWS CLI, create a Route53 hosted zone called `$CLUSTER_NAME.kops.integration.dsd.io.`
- Copy the values of the `NS` record for that zone (the 4 lines like `ns-1722.awsdns-23.co.uk.`)
- In the parent hosted zone, `kops.integration.dsd.io.`, create an `NS` record containing those 4 nameservers
Generate a new SSH key and use `kops` to generate a new cluster specification - example for `cluster1.yaml`:

```
export KOPS_STATE_STORE=s3://moj-cloud-platforms-kops-state-store
ssh-keygen -t rsa -f ssh/${CLUSTER_NAME}_kops_id_rsa -N '' -C ${CLUSTER_NAME}
kops create cluster \
  --name ${CLUSTER_NAME}.kops.integration.dsd.io \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --node-size=t2.medium \
  --node-count=3 \
  --master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
  --master-size=t2.medium \
  --topology=private \
  --dns-zone=${CLUSTER_NAME}.kops.integration.dsd.io \
  --ssh-public-key=ssh/${CLUSTER_NAME}_kops_id_rsa.pub \
  --authorization=RBAC \
  --networking=calico \
  --output=yaml \
  --bastion \
  --dry-run \
  > ${CLUSTER_NAME}.yaml
```
For external-dns to be able to manage records in Route53, the EC2 instances in the cluster must have an IAM policy allowing Route53 changes. Because this functionality is not part of core Kubernetes, an additional policy must be added to the instances. Kops allows you to add additional policies in the cluster specification and have an IAM policy created and attached to the role. To do so, add this block to the `spec` section of your `cluster.yml` file (see `cluster1.yml` for a working example):

```yaml
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": [
          "route53:ChangeResourceRecordSets"
        ],
        "Resource": [
          "arn:aws:route53:::hostedzone/$YOUR_ZONE_ID"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "route53:ListHostedZones",
          "route53:ListResourceRecordSets"
        ],
        "Resource": [
          "*"
        ]
      }
    ]
```
Where `$YOUR_ZONE_ID` is the ID of the Route53 zone for your cluster.
The above command will create a highly-available cluster across all three AZs in eu-west-1, using a private network topology - 3 public subnets containing 3 NAT gateways and a single SSH bastion host, and 3 private subnets containing all masters and worker nodes. To run a smaller, non-HA cluster for testing, specify one AZ only for the `--zones` and `--master-zones` flags (e.g. `--zones=eu-west-1a`). To reduce the number of EC2 instances provisioned, you can also specify `--topology=public` to deploy all instances to public subnets without NAT gateways, and remove the `--bastion` flag to skip provisioning of the SSH bastion host.
```
$ kops create -f ${CLUSTER_NAME}.yaml
$ kops create secret --name ${CLUSTER_NAME}.kops.integration.dsd.io sshpublickey admin -i ssh/${CLUSTER_NAME}_kops_id_rsa.pub
```

Then update the cluster in AWS according to the yaml specification:

```
$ kops update cluster ${CLUSTER_NAME}.kops.integration.dsd.io --yes
```

It takes a few minutes for the cluster to deploy and come up - you can check progress with:

```
$ kops validate cluster
```

Once it reports `Your cluster ${CLUSTER_NAME}.kops.integration.dsd.io is ready` you can proceed to use `kubectl` to interact with the cluster.
Refer to Editing cluster config / changing config
You must create an AWS ACM certificate separately, with a wildcard domain for apps to use. Refer to SSL Termination for details.
For our use case, we want authentication and identity to be handled by Github, and to derive all cluster access control from Github teams - projects will be deployed into namespaces (e.g. `pvb-production`, `cla-staging`), and access to resources in those namespaces should be available to the appropriate teams only (e.g. the PVB and CLA teams).
Kubernetes supports authentication from external identity providers, including group definition, via OIDC. Github, however, only supports OAuth2 as an authentication method, so an identity broker is required to translate from OAuth2 to OIDC.
As work on MOJ's identity service is ongoing, a development Auth0 account has been created to act as a stand-in in the meantime. That Auth0 account has been configured with a rule to add Github team membership to the OIDC `id_token`, which Kubernetes can then map to a Role or ClusterRole resource - this rule can be viewed in `authentication/auth0-rules/whitelist-github-org-and-add-teams-to-group-claim.js`.
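As a sketch of what that group-to-role mapping looks like on the Kubernetes side (the binding name and group value here are illustrative, not taken from this repo), a ClusterRoleBinding can grant a Github team read-only access cluster-wide:

```yaml
# Hypothetical example: bind a Github team, as surfaced in the OIDC groups
# claim, to the built-in read-only "view" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: github-team-pvb-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: "pvb"  # must match a value in the configured oidcGroupsClaim
```

In real usage the binding would more likely be a namespaced RoleBinding scoped to the team's own namespaces.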
For Kubernetes to consume the user and group information in the Auth0 `id_token`, configuration options for Auth0 and user/group mapping must be passed to the Kubernetes master servers. This config can be provided in the Kops cluster specification, e.g.:

```yaml
kubeAPIServer:
  oidcClientID: nEmRfCMu80zLneVipmzevohG0ECL1Sig
  oidcIssuerURL: https://moj-cloud-platforms-dev.eu.auth0.com/
  oidcUsernameClaim: nickname
  oidcGroupsClaim: https://api.cluster1.kops.integration.dsd.io/groups
```

This is included in the full `cluster1.yaml` specification.
End users require credentials on their local machines in order to be able to use `kubectl`, `helm` etc. Those credentials contain information obtained from the identity provider - in our case Auth0 - which requires a browser-based authentication flow.
To handle user authentication and generation of cluster credentials, a webapp called Kuberos has been deployed at https://kuberos.apps.cluster1.kops.integration.dsd.io. Kuberos is a per-cluster service, so must be deployed into any other test clusters that are created - see Kuberos.
Important note - the instructions provided by Kuberos will overwrite either your local `kubectl` config entirely, or any existing config for this specific cluster. If you have already obtained static cluster credentials with `kops export kubecfg`, you can instead reference the downloaded `kubecfg.yaml` credentials as flags to `kubectl`:

```
$ KUBECONFIG=/path/to/kubecfg.yaml \
    kubectl --context cluster1.kops.integration.dsd.io \
    get pods
```
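For reference, the credentials produced by an OIDC flow like this amount to a `users` entry in kubeconfig using kubectl's `oidc` auth-provider, roughly of this shape (the user name and token values are placeholders; the issuer URL and client ID are the ones shown earlier in this readme):

```yaml
# Sketch of an OIDC kubeconfig user entry - token values are placeholders.
users:
- name: alice@github
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://moj-cloud-platforms-dev.eu.auth0.com/
        client-id: nEmRfCMu80zLneVipmzevohG0ECL1Sig
        client-secret: <client-secret>
        id-token: <id-token>
        refresh-token: <refresh-token>
```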
Example cluster components/services in `cluster-components/` - intended as a starting point for experimentation and discussion, but also providing some useful functionality.
For convenience, most cluster components are installed using Helm/Tiller, so that must be installed first. As Helm packages often require arguments to be provided at installation, a `values.yml` file has been provided for each component that uses Helm.
The example cluster components and applications config contain references to cluster-specific domain names, so when creating a new cluster you should copy the contents of `cluster-components/cluster1` to `cluster-components/$YOUR_CLUSTER` and replace references to `cluster1.kops.integration.dsd.io` with `$YOUR_CLUSTER.kops.integration.dsd.io` in all YAML files before creating any of the components below. For helm-managed components (nginx-ingress and external-dns) these references are in their respective `values.yml` files; for `kuberos` and `example-apps/nginx`, these references are in `ingress.yml` in both cases.
Helm - the package manager for Kubernetes. It includes two components: `helm`, the command line client, and `tiller`, the server-side component.

- Install the command line client:
  ```
  $ brew install kubernetes-helm
  ```
- Create a ServiceAccount and RoleBinding for Tiller:
  ```
  $ kubectl apply -f cluster-components/$YOUR_CLUSTER/helm/rbac-config.yml
  ```
- Install Tiller, with the ServiceAccount config:
  ```
  $ helm init --service-account tiller
  ```
This installs Tiller as a cluster-wide service with `cluster-admin` permissions. These permissions are too broad for a production multi-tenant environment and are used here for test purposes only - in real-world usage, Tiller should be deployed into each tenant namespace, with permissions scoped to that namespace only.
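The `rbac-config.yml` applied above will be along these lines - a sketch of the common Tiller RBAC setup, not necessarily the exact file in this repo:

```yaml
# ServiceAccount for Tiller in kube-system, bound to cluster-admin.
# Too permissive for production - see the note above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```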
- `$ helm repo update` - update the package list, a la `apt-get update`
- `$ helm search` - see what's available in the public repo
- `$ helm inspect values wordpress` - see what arguments are available for this package
- `$ helm install wordpress` - install Wordpress
To specify arguments when installing, either:

- use the helm `--set` flag:
  ```
  helm install wordpress --set wordpressUsername=joebloggs --set mariadb.enabled=false
  ```
- or save the output of `helm inspect values` to a yaml file, change values as required, and pass the filepath to helm:
  ```
  helm install wordpress --values values.yml
  ```
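For the wordpress example above, a `values.yml` equivalent to those two `--set` flags would look like this (keys taken from the flags shown; the rest of the chart's values are left at their defaults):

```yaml
# values.yml - overrides for the wordpress chart, equivalent to:
#   --set wordpressUsername=joebloggs --set mariadb.enabled=false
wordpressUsername: joebloggs
mariadb:
  enabled: false
```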
An ingress controller based on nginx. Ingress controllers serve as HTTP routers/proxies that watch the Kubernetes API for `Ingress` rules, and create routes/proxy configs to route traffic from the internet to services and pods.
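As an illustration of the kind of `Ingress` rule the controller acts on (the hostname and service name here are made up for the example):

```yaml
# Hypothetical Ingress: route traffic for one app hostname to a Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app
spec:
  rules:
  - host: example.apps.cluster1.kops.integration.dsd.io
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app
          servicePort: 80
```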
```
$ helm install nginx-ingress -f cluster-components/$YOUR_CLUSTER/nginx-ingress/values.yml
```

This will deploy the nginx-ingress controller using the arguments specified in `values.yml`. By default, nginx-ingress specifies a `Service` with `type=LoadBalancer`, so in AWS it will automatically create, configure and manage an ELB.
The configuration in `cluster-components/cluster1/nginx-ingress/values.yml` includes annotations to attach a pre-existing AWS ACM certificate to nginx-ingress's ELB, and to configure SSL termination and HTTP->HTTPS redirect. To use this configuration, you must create an ACM certificate with a wildcard domain matching the one specified for external-dns / Route53, e.g. `*.apps.cluster1.kops.integration.dsd.io`.
external-dns is a Kubernetes incubator project that automatically creates/updates/deletes DNS entries in Route53 based on declared hostnames in `Ingress` and `Service` objects. To install with Helm:

```
$ helm install external-dns -f cluster-components/$YOUR_CLUSTER/external-dns/values.yml
```
The configuration in `values.yml` allows DNS to be created for `Service` objects only, as these examples use a single `nginx-ingress` service and ELB; that nginx-ingress `Service` object has an `external-dns.alpha.kubernetes.io/hostname` annotation to create a wildcard DNS record for that ELB.
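A sketch of what that annotation looks like on a Service (the annotation key is the real external-dns one; the Service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    # external-dns watches for this annotation and creates a matching
    # Route53 record pointing at the Service's ELB.
    external-dns.alpha.kubernetes.io/hostname: "*.apps.cluster1.kops.integration.dsd.io"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-ingress
```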
Kuberos is a simple app to handle cluster credential generation when using OIDC authentication - it's not great, but good enough for now.
As no Helm chart is available for Kuberos, YAML resource definitions have been created in `cluster-components/$YOUR_CLUSTER/kuberos` instead. To create or update these resources, run:

```
$ kubectl apply -f cluster-components/$YOUR_CLUSTER/kuberos/
```