---
title: Container Deployer
sidebar_position: 4
---
The container deployer is a controller that lets you run whatever code you need to set up your cloud environment. You just provide your program as a container image, which is then executed by Landscaper in combination with the container deployer.

Your image is provided as a reference in a DeployItem of type `landscaper.gardener.cloud/container`. The DeployItem itself must be embedded in a Blueprint and specifies the input data required by your program. Landscaper, in combination with the container deployer, provides this input data at runtime. The output or export data of your program must be stored in a predefined file and is collected by Landscaper/container deployer once your program, and thus the container, terminates.
This section describes the provider-specific configuration for a container DeployItem:
```yaml
apiVersion: landscaper.gardener.cloud/v1alpha1
kind: DeployItem
metadata:
  name: my-custom-container
spec:
  type: landscaper.gardener.cloud/container

  target:
    # only used to determine which instance of the container deployer is responsible,
    # unless it is used explicitly in the custom code
    import: my-cluster

  config:
    apiVersion: container.deployer.landscaper.gardener.cloud/v1alpha1
    kind: ProviderConfiguration
    componentDescriptor:
      # inline: # define an inline component descriptor instead of referencing a remote
      ref:
        componentName: example.com/my-comp
        version: v0.1.2
        repositoryContext: <abc...>
    blueprint:
      # inline: # define an inline Blueprint instead of referencing a remote
      ref:
        resourceName: <abc....>
    importValues:
      {{ toJson . | indent 6 }}
    image: <image ref>
    command: ["my command"]
    args: ["--flag1", "my arg"]
```
When the image with your program is executed, it gets access to particular information via environment variables:
- The current operation that the image should execute is defined by the env var `OPERATION`, which can be `RECONCILE` or `DELETE`. `RECONCILE` means that the program should execute its usual installation, whereas `DELETE` signals that the corresponding DeployItem was deleted and optional cleanup can be done.
- Imports are provided as a JSON file at the path given by the env var `IMPORTS_PATH`.
- Exports should be written to a JSON or YAML file at the path given by the env var `EXPORTS_PATH`.
- The content of the Target referenced in `.spec.target` is stored in a file at the path given by the env var `TARGET_PATH`.
  - The file contains a JSON struct with two fields, `target` and `content`. The first one contains the actual target, as it was read from the cluster, marshalled into JSON. The second one contains the content of the target: if the target's `spec.secretRef` is set, this is the content of the resolved secret reference; otherwise, the value is identical (JSON-wise) to the target's `spec.config`.

    Example Target:

    ```yaml
    apiVersion: landscaper.gardener.cloud/v1alpha1
    kind: Target
    metadata:
      name: my-target
      namespace: my-namespace
    spec:
      type: landscaper.gardener.cloud/my-target-type
      secretRef:
        key: secret1
        name: my-secret
    ```

    Secret:

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: my-secret
      namespace: my-namespace
    type: Opaque
    data:
      secret1: "eyJmb28iOiAiYmFyIn0=" # {"foo": "bar"}
    ```

    Content of the file at `TARGET_PATH`:

    ```json
    {
      "target": {
        "kind": "Target",
        "apiVersion": "landscaper.gardener.cloud/v1alpha1",
        "metadata": {
          "name": "my-target",
          "namespace": "my-namespace"
        },
        "spec": {
          "type": "landscaper.gardener.cloud/my-target-type",
          "secretRef": {
            "name": "my-secret",
            "key": "secret1"
          }
        }
      },
      "content": "{\"foo\": \"bar\"}"
    }
    ```

    If the target didn't contain a secret reference, but `config: {foo: bar}` instead, the value of the `content` field would be the same.
- An optional state should be written to the directory given by the env var `STATE_PATH`. The complete state directory will be tarred and managed by Landscaper (:warning: no symlinks). The last state data is provided in the next execution of your program.
- If a componentDescriptor is specified in the DeployItem, the Component Descriptor can be expected as a JSON file at the path given by the env var `COMPONENT_DESCRIPTOR_PATH`. The JSON file contains a resolved component descriptor list, which means that all transitive component descriptors are included in a list:

  ```json
  {
    "meta": {
      "schemaVersion": "v2"
    },
    "components": [
      {
        "meta": {
          "schemaVersion": "v2"
        },
        "component": {}
      }
      ...
    ]
  }
  ```
- If componentDescriptor and blueprint are specified, a content blob can be accessed at the directory given by the env var `CONTENT_PATH`. The content blob consists of all data stored with the blueprint: the blueprint YAML file and all other files and folders stored alongside it.
- For security reasons, the container is not executed as the root user. Instead, the following user/group is used (see the Kubernetes documentation for details):
  - User: 1000
  - Group: 3000
  - FSGroup: 2000
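Putting the contract above together, a container program could look like the following minimal sketch. This is illustrative only: the export key `installedRelease`, the import key `releaseName`, and the installation logic are hypothetical, not part of the container deployer's contract.

```python
import json
import os


def main() -> None:
    # The deployer tells us whether to install or clean up.
    operation = os.environ.get("OPERATION", "RECONCILE")

    if operation == "DELETE":
        # Optional cleanup of whatever the RECONCILE run created.
        return

    # Imports are provided as a JSON file at IMPORTS_PATH.
    with open(os.environ["IMPORTS_PATH"]) as f:
        imports = json.load(f)

    # TARGET_PATH (if set) holds a JSON struct with "target" and "content".
    target_path = os.environ.get("TARGET_PATH")
    if target_path:
        with open(target_path) as f:
            target_info = json.load(f)
        # "content" is the resolved secret (or spec.config), JSON-encoded as a string.
        content = json.loads(target_info["content"])

    # ... run the actual installation logic here (hypothetical) ...

    # Exports must be written as JSON (or YAML) to EXPORTS_PATH.
    exports = {"installedRelease": imports.get("releaseName", "unknown")}
    with open(os.environ["EXPORTS_PATH"], "w") as f:
        json.dump(exports, f)


if __name__ == "__main__":
    main()
```

Such a program only needs to be packaged into an image and referenced in the DeployItem's `image` field.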
This section describes the provider-specific status of the resource.
```yaml
status:
  providerStatus:
    apiVersion: container.deployer.landscaper.gardener.cloud
    kind: ProviderStatus
    # A human readable message indicating details about why the pod is in this condition.
    message: string
    # A brief CamelCase message indicating details about why the pod is in this state.
    # e.g. 'Evicted'
    reason: string
    # Details about the container's current condition.
    state: corev1.ContainerState
    # The image the container is running.
    image: string
    # ImageID of the container's image.
    imageID: string
```
The container deployer reacts to specific annotations that instruct it to perform specific actions:

- `container.deployer.landscaper.gardener.cloud/force-cleanup=true`: triggers the force deletion of the DeployItem. Force deletion means that the delete container is skipped and all other resources are cleaned up.
When deploying the container deployer controller, it can be configured using the `--config` flag and providing a configuration file. The structure of the configuration file is defined as follows:
```yaml
apiVersion: container.deployer.landscaper.gardener.cloud/v1alpha1
kind: Configuration
namespace: "default" # namespace on the host cluster where the pods should be executed
initContainer:
  # image that should be used as the init container.
  # this option should only be used for development purposes;
  # it defaults to the images shipped with Landscaper.
  image: "someimage"
  command: ""
  args: ""
  imagePullPolicy: IfNotPresent
waitContainer:
  # image that should be used as the wait container.
  # this option should only be used for development purposes;
  # it defaults to the images shipped with Landscaper.
  image: "someimage"
  command: ""
  args: ""
  imagePullPolicy: IfNotPresent
defaultImage:
  # default image that is used if the provider configuration does not specify an image.
  image: "someimage"
  command: ""
  args: ""
  imagePullPolicy: IfNotPresent
oci:
  # allow plain http connections to the oci registry.
  # Use with care as the default docker registry does not serve http with any authentication.
  allowPlainHttp: false
  # skip the tls validation
  insecureSkipVerify: false
  # path to docker compatible auth configuration files.
  # configFiles:
  # - "somepath"
# target selector to only react on specific deploy items.
# see the common config in "./README.md" for detailed documentation.
targetSelector:
  annotations: []
  labels: []
debug:
  # keep the pod and do not delete it after it finishes.
  keepPod: false
```
The container deployer is a Landscaper deployer that can execute arbitrary containers. The deployer itself is a Kubernetes controller that runs inside a Kubernetes cluster. This cluster can be the same as the Landscaper cluster but does not have to be. In the example, this execution cluster is called the *Deployer Host Cluster*.

As the deployer already runs inside a Kubernetes cluster that is able to execute containers, this cluster is used to execute the container specified in the DeployItem's ProviderConfig. The containers are not executed in the *Landscaper Cluster*, as that cluster might not be able to execute containers (virtual/nodeless cluster). With that approach, it is also possible to run the container deployer in a fenced environment and access APIs/resources that are not accessible from the Landscaper cluster.

As for the other deployers, the target is used to determine the responsible container deployer.

Note: The target does not need to be a Kubernetes cluster target. The container deployer does not execute the container in the target cluster; it always executes it in its host cluster (although that host cluster is configurable).
When a Container DeployItem is reconciled, basically the following steps are performed:

- The container deployer watches the DeployItem.
- When the DeployItem is ready to be executed (its configuration has changed), the container deployer schedules a pod.
- That pod contains an init container that fetches the targets, import values, blueprint, and the resolved component descriptor. The data is stored in a shared local volume.
- The actual application code is executed in a main container as soon as the init container has finished. The main container has access to the shared volume, which makes it possible for the application to access runtime data and configuration.
- When the main container has successfully finished, data is optionally exported. See Container Export for the detailed export architecture.
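The "configuration has changed" check in the second step can be thought of as comparing a stable hash of the current provider configuration with the one recorded for the last pod run. The following is a rough conceptual sketch with hypothetical helper names, not the deployer's actual implementation:

```python
import hashlib
import json
from typing import Optional


def config_hash(provider_config: dict) -> str:
    """Stable hash over a DeployItem's provider configuration."""
    # Canonicalize with sorted keys so semantically equal configs hash equally.
    canonical = json.dumps(provider_config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def needs_new_pod(provider_config: dict, last_applied_hash: Optional[str]) -> bool:
    # A pod is (re)scheduled only when the configuration differs from
    # what the last pod execution was started with (or no run exists yet).
    return config_hash(provider_config) != last_applied_hash
```

This captures the idea that an unchanged DeployItem does not trigger a new pod execution.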
When the main container in a pod execution of the container deployer has successfully completed, data can optionally be exported and made available to other DeployItems/Installations in the cluster.

The export process looks as follows:

1. The data to be exported is written to the shared volume.
2. The shared volume is also accessible from the sidecar container, which picks up the export and creates a secret in the host cluster.
3. The secret in the host cluster is then synced by the container deployer to the virtual cluster. This is done to keep the credentials in the sidecar container as minimal as possible (because the main container executes arbitrary code, we do not want to expose credentials in the sidecar).
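Conceptually, the sidecar's job in step 2 boils down to wrapping the export file into a Kubernetes secret manifest. A simplified sketch, assuming the real sidecar does this via the Kubernetes API and uses its own naming scheme (the `-export` suffix here is illustrative):

```python
import base64


def export_to_secret(export_json: str, deploy_item_name: str, namespace: str) -> dict:
    """Build a secret manifest holding the raw export data (illustrative)."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {
            "name": f"{deploy_item_name}-export",
            "namespace": namespace,
        },
        "type": "Opaque",
        "data": {
            # Kubernetes secrets carry base64-encoded values in .data.
            "config": base64.b64encode(export_json.encode()).decode(),
        },
    }
```

The container deployer then only needs to copy this secret from the host cluster to the virtual cluster, so the sidecar never needs credentials for the virtual cluster.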
When executing a container with the container deployer, optionally a state can be used that is handled by the container deployer and made available for subsequent runs of the container.

- When the container deployer runs a DeployItem, the pod contains an init container as described above in the general process.
- That init container tries to read the state from a secret in the host cluster. If the secret does not exist, it assumes that no state has been written yet or that this is the first run. Otherwise, the state is read from the secret and written to the shared volume so that the application can access the data.
- As soon as the main container has finished and written a state, that state is again on the shared volume; the sidecar container reads it and creates the state secret.
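From the container program's perspective, the state contract is simply "anything you leave in the `STATE_PATH` directory reappears on the next run". A minimal sketch, assuming the program chooses its own file name (`state.json` here is an arbitrary choice, not mandated by Landscaper):

```python
import json
import os


def read_state(state_dir: str) -> dict:
    """Return the state from a previous run, or an empty dict on the first run."""
    path = os.path.join(state_dir, "state.json")
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


def write_state(state_dir: str, state: dict) -> None:
    # Everything in STATE_PATH is tarred by Landscaper after the run
    # and restored into the same directory before the next run.
    with open(os.path.join(state_dir, "state.json"), "w") as f:
        json.dump(state, f)
```

A program could, for example, keep a run counter or remember resource identifiers it created, so a later `DELETE` operation knows what to clean up.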