This repository contains a custom Kubernetes controller that can be used to make secrets and config maps available in multiple namespaces.
The replicator can be deployed using the provided Helm chart:

- Add the Mittwald Helm repository:

  ```shell
  $ helm repo add mittwald https://helm.mittwald.de
  "mittwald" has been added to your repositories

  $ helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "mittwald" chart repository
  Update Complete. ⎈ Happy Helming!⎈
  ```

- Upgrade or install `kubernetes-replicator`:

  ```shell
  $ helm upgrade --install kubernetes-replicator mittwald/kubernetes-replicator
  ```
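To confirm that the release was created, `helm status` can be used (add `-n <namespace>` if the chart was installed into a specific namespace):

```shell
$ helm status kubernetes-replicator
```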
Alternatively, the replicator can be deployed manually from the raw manifests:

```shell
$ # Create roles and service accounts
$ kubectl apply -f https://raw.githubusercontent.com/mittwald/kubernetes-replicator/master/deploy/rbac.yaml

$ # Create actual deployment
$ kubectl apply -f https://raw.githubusercontent.com/mittwald/kubernetes-replicator/master/deploy/deployment.yaml
```
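In either case, the controller should end up running as a deployment in the cluster. A quick sanity check, as a sketch that assumes the manifests created a deployment named `kubernetes-replicator` in the namespace of your current context:

```shell
$ kubectl get deployment kubernetes-replicator
```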
To create a new role, your own account needs to have at least the same set of privileges as the role you're trying to create. The chart currently offers two options to grant these permissions to the service account used by the replicator:
- Set the value `grantClusterAdmin` to `true`, which grants the service account admin privileges. This is set to `false` by default, as having a service account with that level of access might be undesirable due to the potential security risks attached.

- Set the lists of needed API groups and resources explicitly. These can be specified using the value `privileges`. `privileges` is a list that contains pairs of API group and resource lists. Example:

  ```yaml
  serviceAccount:
    create: true
    annotations: {}
    name:
    privileges:
      - apiGroups: [ "", "apps", "extensions" ]
        resources: ["secrets", "configmaps", "roles", "rolebindings", "cronjobs", "deployments", "events", "ingresses", "jobs", "pods", "pods/attach", "pods/exec", "pods/log", "pods/portforward", "services"]
      - apiGroups: [ "batch" ]
        resources: ["configmaps", "cronjobs", "deployments", "events", "ingresses", "jobs", "pods", "pods/attach", "pods/exec", "pods/log", "pods/portforward", "services"]
  ```
These settings permit the replication of Roles and RoleBindings with privileges for the API groups `""`, `apps`, `batch` and `extensions` on the resources specified.
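If you save the `serviceAccount` snippet above to a local file, for example `values.yaml` (the file name is just an illustration), the adjusted privileges can be applied when installing or upgrading the chart:

```shell
$ helm upgrade --install kubernetes-replicator mittwald/kubernetes-replicator -f values.yaml
```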
Push-based replication will "push out" secrets, config maps, roles and role bindings into target namespaces when new namespaces are created or when the source secret/config map/role/role binding changes.
There are two general methods for push-based replication:
-
name-based; this allows you to either specify your target namespaces by name or by regular expression (which should match the namespace name). To use name-based push replication, add a
replicator.v1.mittwald.de/replicate-to
annotation to your secret, role(binding) or configmap. The value of this annotation should contain a comma separated list of permitted namespaces or regular expressions. (Example:namespace-1,my-ns-2,app-ns-[0-9]*
will replicate only into the namespacesnamespace-1
andmy-ns-2
as well as any namespace that matches the regular expressionapp-ns-[0-9]*
).Example:
apiVersion: v1 kind: Secret metadata: annotations: replicator.v1.mittwald.de/replicate-to: "my-ns-1,namespace-[0-9]*" data: key1: <value>
- Label-based: this allows you to specify a label selector that a namespace should match in order for a secret, role(binding) or config map to be replicated into it. To use label-based push replication, add a `replicator.v1.mittwald.de/replicate-to-matching` annotation to the object you want to replicate. The value of this annotation should contain an arbitrary label selector.

  Example:

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    annotations:
      replicator.v1.mittwald.de/replicate-to-matching: >
        my-label=value,my-other-label,my-other-label notin (foo,bar)
  data:
    key1: <value>
  ```
When the labels of a namespace are changed, any resources that were replicated by labels into the namespace and no longer qualify for replication under the new set of labels will be deleted. Afterwards any resources that now match the updated labels will be replicated into the namespace.
It is possible to use both methods of push-based replication together in a single resource, by specifying both annotations.
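Both annotations can also be added to existing objects from the command line. A minimal sketch, assuming a secret named `my-secret` in the current namespace (the namespace names, label and label value are placeholders):

```shell
# Name-based: replicate into the listed / matching namespaces
$ kubectl annotate secret my-secret \
    "replicator.v1.mittwald.de/replicate-to=my-ns-1,namespace-[0-9]*" --overwrite

# Label-based: replicate into every namespace matching the label selector
$ kubectl annotate secret my-secret \
    "replicator.v1.mittwald.de/replicate-to-matching=my-label=value" --overwrite

# Adding or changing namespace labels triggers (or revokes) label-based replication
$ kubectl label namespace my-ns-1 my-label=value --overwrite
```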
Pull-based replication makes it possible to create a secret/config map/role/role binding and select a "source" resource from which the data is replicated. If a secret or config map needs to be replicated to other namespaces, annotations should be added to that object permitting replication:
- Add the `replicator.v1.mittwald.de/replication-allowed` annotation with the value `true`, indicating that the object can be replicated.

- Add the `replicator.v1.mittwald.de/replication-allowed-namespaces` annotation. The value of this annotation should contain a comma-separated list of permitted namespaces or regular expressions. For example, `namespace-1,my-ns-2,app-ns-[0-9]*`: in this case, replication will be performed only into the namespaces `namespace-1` and `my-ns-2`, as well as any namespace that matches the regular expression `app-ns-[0-9]*`.

```yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: "my-ns-1,namespace-[0-9]*"
data:
  key1: <value>
```
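These annotations can also be added to an existing source object from the command line; a sketch, assuming a source secret named `some-secret` in the `default` namespace:

```shell
$ kubectl -n default annotate secret some-secret \
    "replicator.v1.mittwald.de/replication-allowed=true" \
    "replicator.v1.mittwald.de/replication-allowed-namespaces=my-ns-1,namespace-[0-9]*" \
    --overwrite
```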
Add the annotation `replicator.v1.mittwald.de/replicate-from` to any Kubernetes secret or config map object. The value of that annotation should contain the name of another secret or config map (using `<namespace>/<name>` notation).
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-secret
data: {}
```
The replicator will then copy the `data` attribute of the referenced object into the annotated object and keep them in sync.
By default, the replicator adds an annotation `replicator.v1.mittwald.de/replicated-from-version` to the target object. This annotation contains the resource version of the source object at the time of replication. When the target object is re-applied with an empty `data` attribute, the replicator will not automatically perform replication. The reason is that the target already has the `replicated-from-version` annotation with a matching source resource version.
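You can inspect this annotation on the target object; a sketch using the `secret-replica` example from above (dots in the annotation key have to be escaped in the JSONPath expression):

```shell
$ kubectl get secret secret-replica \
    -o jsonpath='{.metadata.annotations.replicator\.v1\.mittwald\.de/replicated-from-version}'
```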
For Secrets and ConfigMaps, there is the option to synchronize based on the content, ignoring the `replicated-from-version` annotation. To activate this mode, start the replicator with the `--sync-by-content` flag.
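How the flag is passed depends on how the replicator is deployed. If you manage the deployment manifest yourself, one way is to append the flag to the container arguments, for example with a JSON patch; this is only a sketch and assumes a deployment named `kubernetes-replicator` in the current namespace whose container already defines an `args` list:

```shell
$ kubectl patch deployment kubernetes-replicator --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--sync-by-content"}]'
```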
Secrets of type `kubernetes.io/tls` are treated in a special way and need to have a `data["tls.crt"]` and a `data["tls.key"]` property to begin with. In the replicated secrets, these properties also need to be present, but they may be empty:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-tls-secret
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""
```
Secrets of type `kubernetes.io/dockerconfigjson` also require special treatment. These secrets are required to have a `.dockerconfigjson` key that contains valid JSON. For this reason, a replicated secret of this type should be created as follows:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret-replica
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/some-docker-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: e30K
```
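The value `e30K` used above is simply the Base64 encoding of an empty JSON object (`{}` followed by a newline), which satisfies the "valid JSON" requirement until the actual content is replicated. It can be reproduced on the command line:

```shell
$ echo '{}' | base64
e30K
```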
Operators like the strimzi-kafka-operator (https://github.com/strimzi/strimzi-kafka-operator) implement their own garbage collection based on specific labels defined on resources. If the kubernetes-replicator replicates secrets to a different namespace, the strimzi-kafka-operator will remove the replicated secrets because, from the operator's point of view, the secret is a left-over. To mitigate this issue, set the annotation `replicator.v1.mittwald.de/strip-labels=true` to remove all labels on the replicated resource:
```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/managed-by: "strimzi-kafka-operator"
  name: cluster-ca-certs
  annotations:
    replicator.v1.mittwald.de/strip-labels: "true"
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""
```
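After replication, you can confirm that the labels were removed from the copy by listing the replicated secret together with its labels (a sketch; `<target-namespace>` stands for one of the namespaces the secret was replicated into):

```shell
$ kubectl -n <target-namespace> get secret cluster-ca-certs --show-labels
```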
Sometimes, secrets are generated by external components. Such secrets are configured with an `ownerReference`. By default, the kubernetes-replicator will delete the `ownerReference` in the target namespace, since owner references do not work across namespaces and the secret at the destination would otherwise be removed by the Kubernetes garbage collection. To keep `ownerReferences` at the destination, set the annotation `replicator.v1.mittwald.de/keep-owner-references=true`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret-replica
  annotations:
    replicator.v1.mittwald.de/keep-owner-references: "true"
  ownerReferences:
    - apiVersion: v1
      kind: Deployment
      name: owner
      uid: "1234"
type: kubernetes.io/tls
data:
  tls.key: ""
  tls.crt: ""
```
See also: #120