weight | title
---|---
50 | Configuration reference
Please use the configuration reference below to prepare your application for deployment in Kubernetes.
Project-wide configuration is OPTIONAL and can be defined in one (or more) of the source compose files used to initialise the project. It is applied to all environments unless a specific environment overrides a setting with its own value.
Environment configuration lives in a dedicated docker compose override file, which is automatically applied on top of the project's source docker compose files at the render phase. Any project-wide configuration found will be overridden by environment-specific values.
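To illustrate how the two layers interact, here is a minimal sketch; the file names (including the environment override name) are purely illustrative, and replicas is just an example parameter:

# docker-compose.yml - source compose file (project-wide configuration)
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        replicas: 1 # applies to all environments by default

# docker-compose.staging.yaml - hypothetical environment override file
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        replicas: 3 # overrides the project-wide value for this environment only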
Configuration is divided into groups of parameters covering application components, workloads, services, volumes, and environment variables, described in the sections below.
This configuration group contains settings related to application composition. Configuration parameters can be individually defined for each application stack component.
Defines whether a component is disabled. All application components are enabled by default.
disabled
version: 3.7
services:
  my-service:
    x-k8s:
      disabled: true
...
This configuration group contains Kubernetes workload specific settings. Configuration parameters can be individually defined for each application stack component.
Defines a command to run for a given workload. If defined, it takes precedence over the default docker image command.
workload.command:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        command:
          - /bin/sh
          - -c
          - /do/something && /run/my/program
...
Defines command arguments for a given workload. If defined, they take precedence over the default docker image arguments.
workload.commandArgs:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        commandArgs:
          - -c
          - sleep 10 && /run/my/program
...
A key/value map used to attach metadata to the K8s Pod spec of a deployable object, e.g. Deployment, StatefulSet, etc. See the official K8s documentation.
workload.annotations:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        annotations:
          key1: value 1
          key2: value 2
          key3: |
            value 3 and value 4
...
Defines the docker image pull policy and, if applicable, the secret required to access the container registry.
Defines the docker image pull policy from the container registry. See the official K8s documentation.
workload.imagePull.policy:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        imagePull:
          policy: IfNotPresent
...
Defines the docker image pull secret which should be used to pull images from the container registry. See the official K8s documentation.
workload.imagePull.secret:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        imagePull:
          secret: my-image-pull-secret-name
...
Defines the restart policy for an individual application component in the event of a container crash. This setting is inferred for each compose service defined; however, in some cases a manual override might be necessary. See the official K8s documentation.
workload.restartPolicy:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        restartPolicy: Always
...
Defines the Kubernetes Service Account name to run a workload with. Useful when a specific access level associated with a Service Account is required for a given workload. See the official K8s documentation.
workload.serviceAccountName:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        serviceAccountName: my-special-service-account-name
...
Defines the Pod Security Context for the Kubernetes workload.
This option sets the user ID (the runAsUser field): all processes in any container of the Pod will run with the specified user ID.
workload.podSecurity.runAsUser:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        podSecurity:
          runAsUser: 1000
...
This option sets the primary group ID (the runAsGroup field) for all processes within any container of the Pod. If this field is omitted (currently the default), the primary group ID of the containers will be root (0). When runAsGroup is specified, any files created will also be owned by the specified user ID (runAsUser field) and group ID (runAsGroup field).
workload.podSecurity.runAsGroup:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        podSecurity:
          runAsGroup: 2000
...
This option sets a supplementary group via the fsGroup field. If specified, all processes of the container are also part of this supplementary group ID. The owner of any attached volumes, and of any files created in those volumes, will be the group ID specified by this option.
workload.podSecurity.fsGroup:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        podSecurity:
          fsGroup: 3000
...
Defines the Kubernetes workload controller type. See the official K8s documentation. The workload type will be inferred from the information specified in the compose file.
The following rules are used to derive the workload type:
If the compose file(s) specify the deploy.mode attribute key in a compose project service config and it is set to "global", then the DaemonSet workload type is assumed. Otherwise, the workload type defaults to Deployment, unless volumes are in use, in which case it defaults to StatefulSet.
workload.type:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        type: StatefulSet
...
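For reference, the same outcome can also be reached without an x-k8s override by relying on the inference rules above; a minimal sketch (the image name is a placeholder) where deploy.mode: global results in a DaemonSet:

version: 3.7
services:
  my-service:
    image: my-service:latest # placeholder image
    deploy:
      mode: global # inferred as a DaemonSet workload type
...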
Defines the number of instances (replicas) for each application component. See the official K8s documentation.
The following rules are used to derive the number of replicas for each service:
If the compose file(s) specify the deploy.replicas attribute key in a project service config, its value is used. Otherwise, the number of replicas defaults to 1.
workload.replicas:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        replicas: 1
...
Enables application horizontal pod autoscaling. See the official K8s documentation.
Autoscaling assumes that a workload's number of replicas is smaller than the maximum desired number of replicas; otherwise autoscaling won't be enabled. It then periodically adjusts the number of replicas, up to the specified maximum, to match observed metrics such as average CPU utilisation, average memory utilisation, or any other custom metric.
Defines the maximum number of instances (replicas) the application component should automatically scale up to. See the official K8s documentation. This setting is only taken into account when the initial number of replicas is lower than this value.
workload.autoscale.maxReplicas:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        autoscale:
          maxReplicas: 10
        replicas: 2
...
Defines the CPU utilisation threshold for the application component's horizontal pod autoscaler. See the official K8s documentation. This setting is only taken into account when the maximum number of replicas for the application component is defined.
workload.autoscale.cpuThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        autoscale:
          maxReplicas: 10
          cpuThreshold: 70
        replicas: 2
...
Defines the memory utilisation threshold for the application component's horizontal pod autoscaler. See the official K8s documentation. This setting is only taken into account when the maximum number of replicas for the application component is defined.
workload.autoscale.memThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        autoscale:
          maxReplicas: 10
          memThreshold: 70
        replicas: 2
...
Defines the number of pods that can be created above the desired amount of pods during an update. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the deploy.update_config.parallelism attribute key in a service config, its value is used. Otherwise it defaults to 1.
workload.rollingUpdateMaxSurge:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        rollingUpdateMaxSurge: 2
...
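Similarly, both the replica count and the rolling update surge can be derived from the standard compose deploy configuration referenced above; a minimal sketch:

version: 3.7
services:
  my-service:
    deploy:
      replicas: 3 # inferred as workload.replicas: 3
      update_config:
        parallelism: 2 # inferred as workload.rollingUpdateMaxSurge: 2
...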
Defines the resource share requests and limits for a given workload using the parameters below.
Defines the CPU share request for a given workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the deploy.resources.reservations.cpus attribute key in a project service config, its value is used. Otherwise a sensible default of 0.1 (equivalent to 100m in Kubernetes) is assumed.
Possible options: arbitrary CPU units, e.g. 0.2 == 200m.
workload.resource.cpu:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          cpu: 0.1
...
Defines the CPU share limit for a given workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the deploy.resources.limits.cpus attribute key in a service config, its value is used. Otherwise it defaults to a sensible value of 0.5 (equivalent to 500m in Kubernetes).
Possible options: arbitrary CPU units, e.g. 0.2 == 200m.
workload.resource.maxCpu:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          maxCpu: 2
...
Defines the memory request for a given workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the deploy.resources.reservations.memory attribute key in a service config, its value is used. Otherwise it defaults to a sensible quantity of 10Mi.
Possible options: arbitrary memory units, e.g. 64Mi, 1Gi.
workload.resource.memory:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          memory: 200Mi
...
Defines the memory limit for a given workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the deploy.resources.limits.memory attribute key in a service config, its value is used. Otherwise it defaults to a sensible quantity of 500Mi.
Possible options: arbitrary memory units, e.g. 64Mi, 1Gi.
workload.resource.maxMemory:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          maxMemory: 0.3Gi
...
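For reference, the CPU and memory requests and limits above can also be inferred from a standard compose deploy.resources block; a minimal sketch:

version: 3.7
services:
  my-service:
    deploy:
      resources:
        reservations:
          cpus: "0.1"  # inferred as workload.resource.cpu
          memory: 200M # inferred as workload.resource.memory
        limits:
          cpus: "0.5"  # inferred as workload.resource.maxCpu
          memory: 500M # inferred as workload.resource.maxMemory
...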
Defines the ephemeral storage size request for a given workload. See the official K8s documentation.
Possible options: arbitrary storage units, e.g. 64Mi, 1Gi.
workload.resource.storage:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          storage: 200M
...
Defines the ephemeral storage size limit for a given workload. See the official K8s documentation.
Possible options: arbitrary storage units, e.g. 64Mi, 1Gi.
workload.resource.maxStorage:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        resource:
          maxStorage: 10Gi
...
Defines the workload's liveness probe.
This setting defines the workload's liveness probe type.
The following rules are used to derive that information for each service:
If the compose file(s) specify the healthcheck.disable attribute key in a service config, the probe type is set to none. Otherwise it defaults to exec (liveness probe active).
workload.livenessProbe.type:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          type: none
...
Defines the liveness probe command to be run for the workload when the type is exec. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the healthcheck.test attribute key in a service config, its value is used. If no probe is defined, the user is prompted to define one by injecting a generic echo command.
workload.livenessProbe.exec.command
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          type: exec
          exec:
            command:
              - /is-my-service-alive.sh
...
Defines the liveness probe port to be used for the workload when the type is http. See the official K8s documentation.
workload.livenessProbe.http.port:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          type: http
          http:
            port: 8080
...
Defines the liveness probe path to be used for the workload when the type is http. See the official K8s documentation.
workload.livenessProbe.http.path:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          type: http
          http:
            port: 8080
            path: /status
...
Defines the liveness probe port to be used for the workload when the type is tcp. See the official K8s documentation.
workload.livenessProbe.tcp.port:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          type: tcp
          tcp:
            port: 8080
...
Defines the failure threshold (number of retries) for the workload before giving up. See the official K8s documentation.
workload.livenessProbe.failureThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          ...
          failureThreshold: 3
...
Defines the minimum consecutive successes for the probe to be considered successful. See the official K8s documentation.
workload.livenessProbe.successThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          ...
          successThreshold: 1
...
Defines how long to wait before the first liveness probe runs for the workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the healthcheck.start_period attribute key in a service config, its value is used. Otherwise it defaults to 1m (1 minute).
workload.livenessProbe.initialDelay:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          ...
          initialDelay: 2m
...
Defines how often the liveness probe should run for the workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the healthcheck.interval attribute key in a service config, its value is used. Otherwise it defaults to 1m (1 minute).
workload.livenessProbe.period:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          ...
          period: 1m0s
...
Defines the timeout for the liveness probe for the workload. See the official K8s documentation.
The following rules are used to derive that information for each service:
If the compose file(s) specify the healthcheck.timeout attribute key in a service config, its value is used. Otherwise it defaults to 10s (10 seconds).
workload.livenessProbe.timeout:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        livenessProbe:
          ...
          timeout: 30s
...
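For reference, the liveness probe settings above map from a standard compose healthcheck block; a minimal sketch (setting healthcheck.disable: true would instead result in a probe type of none):

version: 3.7
services:
  my-service:
    healthcheck:
      test: ["CMD", "/is-my-service-alive.sh"] # inferred as the exec probe command
      start_period: 2m                         # inferred as livenessProbe.initialDelay
      interval: 1m                             # inferred as livenessProbe.period
      timeout: 30s                             # inferred as livenessProbe.timeout
...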
Defines the workload's readiness probe.
Defines the workload's readiness probe type. See the official K8s documentation.
workload.readinessProbe.type:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          type: none
...
Defines the readiness probe command to be run for the workload when the type is exec. See the official K8s documentation.
workload.readinessProbe.exec.command:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          type: exec
          exec:
            command:
              - /is-my-service-ready.sh
...
Defines the readiness probe port to be used for the workload when the type is http. See the official K8s documentation.
workload.readinessProbe.http.port:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          type: http
          http:
            port: 8080
...
Defines the readiness probe path to be used for the workload when the type is http. See the official K8s documentation.
workload.readinessProbe.http.path:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          type: http
          http:
            port: 8080
            path: /status
...
Defines the readiness probe port to be used for the workload when the type is tcp. See the official K8s documentation.
workload.readinessProbe.tcp.port:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          type: tcp
          tcp:
            port: 8080
...
Defines how often a readiness probe should run for the workload. See the official K8s documentation.
workload.readinessProbe.period:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          ...
          period: 30s
...
Defines how long to wait before the first readiness probe runs for the workload. See the official K8s documentation.
workload.readinessProbe.initialDelay:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          ...
          initialDelay: 10s
...
Defines the timeout for the readiness probe for the workload. See the official K8s documentation.
workload.readinessProbe.timeout:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          ...
          timeout: 10s
...
Defines the failure threshold (number of retries) for the workload before giving up. See the official K8s documentation.
workload.readinessProbe.failureThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          ...
          failureThreshold: 3
...
Defines the minimum consecutive successes for the probe to be considered successful. See the official K8s documentation.
workload.readinessProbe.successThreshold:
version: 3.7
services:
  my-service:
    x-k8s:
      workload:
        readinessProbe:
          ...
          successThreshold: 1
...
The service group contains configuration details around Kubernetes services and how they get exposed externally.
IMPORTANT: Only the first port for each service is processed and used to infer the initial configuration!
Defines the type of Kubernetes service for a specific workload. See the official K8s documentation.
Although a variety of service types are supported, only two types of services will be automatically inferred from the compose configuration, namely None or ClusterIP. If you need a different type, please configure it manually. The different types are listed and explained below.
Here is the heuristic used to infer the service type:
- If a compose project service publishes a port (i.e. defines a port mapping between host and container ports), a ClusterIP service type is assumed.
- If a compose project service does not publish a port, a None service type is assumed.
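For example, a compose service that publishes a port would be inferred as a ClusterIP service, while one without published ports gets no service at all; a minimal sketch (service names and images are placeholders):

version: 3.7
services:
  web:
    image: nginx:alpine # placeholder image
    ports:
      - "8080:80" # published port -> ClusterIP service type assumed
  worker:
    image: my-worker:latest # placeholder image; no published ports -> None (no service created)
...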
These options are useful for exposing a Service either internally within the cluster or externally on an external IP address outside of your cluster.
service.type:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
...
None: Simply, no service will be created.
ClusterIP: Choosing this type makes the Service only reachable internally from within the cluster by other services; there is no external access. In development, you can access this service on your localhost using port forwarding. It is ideal for an internal service, or for locally testing an app before exposing it externally.
NodePort: This service type is the most basic way to get external traffic directly to your service. It opens a specific port on each of the K8s cluster Nodes, and any traffic sent to this port is forwarded to the ClusterIP service which is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>. It is ideal for running a service with limited availability, e.g. a demo app.
Headless: This is the same as a ClusterIP service type, but without load balancing or proxying, allowing you to connect to a Pod directly. Specifically, it does not get its own service IP; instead, it returns the IPs of the associated Pods. It is ideal for scenarios such as Pod to Pod communication or clustered application node discovery.
LoadBalancer: This service type is the standard way to expose a service to the internet. All traffic on the port you specify will be forwarded to the service, allowing any kind of traffic: HTTP, TCP, UDP, WebSockets, gRPC, etc. It is ideal for exposing a service or app to the internet under a single IP address. Practically, in non-development environments, a LoadBalancer will be used to route traffic to an Ingress, exposing multiple services under the same IP address and keeping your costs down.
Defines the node port value for a Kubernetes service of type NodePort. See the official K8s documentation.
NOTE: the nodeport attribute will be ignored for any other service type!
service.nodeport:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: nodeport
        nodeport: 5555
...
Defines how to expose the service externally. By default, component services aren't exposed, i.e. have no ingress attached to them. Accepted values:
- "default" - an ingress will be created with the Kubernetes cluster defaults.
- "domain.com" - a single domain name for the ingress.
- "domain.com/foo" - a single domain name with a path.
- "domain.com,otherdomain.com/bar,..." - a comma-separated list of domains (with or without paths) for the ingress.
service.expose.domain:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
        expose:
          domain: my-awesome-service.com
...
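As listed above, multiple domains (with or without paths) can also be supplied as a comma-separated list; a minimal sketch:

version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
        expose:
          domain: "my-awesome-service.com,otherdomain.com/bar"
...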
When specified, the domain will be prefixed with the value of this attribute, i.e. the prefix is prepended to the specified domain name.
Example: domainPrefix: "hello." and domain: world.my-awesome-service.com will result in hello.world.my-awesome-service.com.
service.expose.domainPrefix:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
        expose:
          domainPrefix: hello
          domain: world.my-awesome-service.com
...
Defines whether to use TLS for the exposed service, and which secret contains the certificates for the service. See the official K8s documentation.
NOTE: This option is only relevant when the service is exposed; see service.expose.domain above.
service.expose.tlsSecret:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
        expose:
          domain: "my-domain.com"
          tlsSecret: "my-service-tls-secret-name"
...
Ingress annotations are used to configure options that depend on the Ingress controller; different Ingress controllers support different annotations. See the official K8s documentation.
NOTE: This option is only relevant when the service is exposed; see service.expose.domain above.
service.expose.ingressAnnotations:
version: 3.7
services:
  my-service:
    x-k8s:
      service:
        type: LoadBalancer
        expose:
          domain: "my-domain.com"
          tlsSecret: "my-service-tls-secret-name"
          ingressAnnotations:
            kubernetes.io/ingress.class: external
            cert-manager.io/cluster-issuer: prod-le-dns01
...
This configuration group contains Kubernetes persistent volume claim specific settings. Configuration parameters can be individually defined for each volume referenced in the project compose file(s).
Defines the class of persistent volume. See the official K8s documentation.
volume.storageClass:
version: 3.7
volumes:
  vol1:
    x-k8s:
      storageClass: my-custom-storage-class
...
Defines the size of persistent volume. See the official K8s documentation.
volume.size:
version: 3.7
volumes:
  vol1:
    x-k8s:
      size: 10Gi
...
Defines a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. See the official K8s documentation.
volume.selector:
version: 3.7
volumes:
  vol1:
    x-k8s:
      selector: my-volume-selector
...
This group allows configuring environment variables for application components.
To set an environment variable with an explicit string value:
Environment variable as a literal string:
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_A: some-literal-value # Literal value
When there is a need to reference dependent environment variables, this can be achieved by using double curly braces.
Environment variable as a literal string referencing dependent environment variables:
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_A: foo
      ENV_VAR_B: bar
      ENV_VAR_C: "{{ENV_VAR_A}}/{{ENV_VAR_B}}" # referencing other dependent environment variables
To set an environment variable with a value taken from a Kubernetes secret, use the following shortcut: secret.{secret-name}.{secret-key}.
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_B: secret.{secret-name}.{secret-key} # Refer to a value stored in a secret key
To set an environment variable with a value taken from a Kubernetes config map, use the following shortcut: config.{config-name}.{config-key}.
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_C: config.{config-name}.{config-key} # Refer to a value stored in a configmap key
To set an environment variable with a value referencing a K8s Pod field value, use the following shortcut: pod.{field-path}.
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_D: pod.{field-path} # Refer to the value of the K8s workload Pod field path,
                                  # e.g. `pod.metadata.namespace` to get the K8s namespace
                                  # name in which the Pod operates
Supported field paths:
- metadata.name - returns the current app component's K8s Pod name
- metadata.namespace - returns the K8s namespace name in which the Pod operates
- metadata.labels - returns the current app component's labels
- metadata.annotations - returns the current app component's annotations
- spec.nodeName - returns the K8s cluster node name
- spec.serviceAccountName - returns the K8s service account name with which the Pod runs
- status.hostIP - returns the K8s cluster Node IP address
- status.podIP - returns the K8s Pod IP address
To set an environment variable with a value referencing a K8s Container resource field value, use the following shortcut: container.{container-name}.{resource-field}.
version: 3.7
services:
  my-service:
    x-k8s:
      ...
    environment:
      ENV_VAR_E: container.{container-name}.{resource-field} # Refer to the value of the K8s workload Container resource field,
                                                             # e.g. `limits.cpu` to get the max CPU allocatable to the container
Supported resource fields:
- limits.cpu, limits.memory, limits.ephemeral-storage - return the value of the selected container's limits field
- requests.cpu, requests.memory, requests.ephemeral-storage - return the value of the selected container's requests field