# Development Guide
If you would like to contribute to the KubeFed project, this guide will help you get started.
## Prerequisites

The KubeFed deployment depends on `kubebuilder`, `etcd`, `kubectl`, and
`kube-apiserver` >= v1.16 being installed in the path. The `kubebuilder`
release (v2.3.1 as of this writing) packages all of these dependencies together.
These binaries can be installed via the `download-binaries.sh` script, which
downloads them to `./bin`:
```bash
./scripts/download-binaries.sh
export PATH=$(pwd)/bin:${PATH}
```
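To confirm the binaries are picked up from `./bin`, you can query their
versions (a quick sanity check; output formats vary by release):

```bash
kubebuilder version
etcd --version
kube-apiserver --version
kubectl version --client
```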
The KubeFed deployment requires Kubernetes version >= 1.16. To see a detailed
list of binaries required, see the prerequisites section in the user guide.
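You can verify that the cluster referenced by your kubeconfig meets the
version requirement (this assumes kubectl is already pointed at the cluster):

```bash
kubectl version --short | grep 'Server Version'
```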
This repo requires `docker` >= 1.12 to build the Docker images. Set up your
Docker environment accordingly.
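A quick way to check that your Docker daemon meets the version requirement:

```bash
docker version --format '{{.Server.Version}}'
```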
## Adding a new API type

As per the docs for kubebuilder, bootstrapping a new KubeFed API type can be
accomplished as follows:
```bash
# Bootstrap and commit a new type
kubebuilder create api --group <your-group> --version v1alpha1 --kind <your-kind>
git add .
git commit -m 'Bootstrapped a new api resource <your-group>.kubefed.io/v1alpha1/<your-kind>'

# Modify and commit the bootstrapped type
vi pkg/apis/<your-group>/v1alpha1/<your-kind>_types.go
git commit -a -m 'Added fields to <your-kind>'

# Update the generated code and commit
make generate
git add .
git commit -m 'Updated generated code'
```
The generated code will need to be updated whenever the code for a type is modified. Care should be taken to separate generated from non-generated code in the commit history.
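Since CI will typically flag drift between the committed sources and freshly
generated code, a simple local check (a sketch; your CI may verify this
differently) is to regenerate and confirm the working tree stays clean:

```bash
make generate
git diff --exit-code || echo 'generated code is out of date; commit the changes'
```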
## Running E2E Tests

The KubeFed E2E tests must be executed against a KubeFed control plane with
one or more registered clusters. Optionally, the KubeFed controllers can be
run in-memory to enable debugging.
Many of the tests validate CRUD operations for each of the federated types enabled by default:
- the objects are created in the target clusters.
- a label update is reflected in the objects stored in the target clusters.
- a placement update for the object is reflected in the target clusters.
- optionally, if the `RawResourceStatusCollection` feature is enabled, tests check the value of the `remoteStatus` field of federated resources.
- deleted resources are removed from the target clusters.
The read operation is implicit.
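To mirror by hand what these CRUD tests verify, you can inspect a federated
resource and its propagated counterpart directly. The contexts, namespace, and
resource name below (`cluster1`, `cluster2`, `test-namespace`, `test-secret`)
are illustrative:

```bash
# Inspect the federated resource in the host cluster; with
# RawResourceStatusCollection enabled, look for the remoteStatus
# field in the resource's status
kubectl --context=cluster1 -n test-namespace get federatedsecret test-secret -o yaml

# Confirm the propagated object exists in a member cluster
kubectl --context=cluster2 -n test-namespace get secret test-secret
```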
### Setup

In order to run E2E tests, you first need to:

- Create clusters
  - See the user guide for a way to deploy clusters for testing KubeFed.
- Deploy the KubeFed control plane
  - To deploy the latest version of the KubeFed control plane, follow the
    Helm chart deployment in the user guide.
  - To deploy your own changes, follow the Test Your Changes section of this
    guide.

Once completed, return here for instructions on running the e2e tests.
Follow the instructions below to run E2E tests against a KubeFed control plane.
To run E2E tests for all types:
```bash
cd test/e2e
go test -args -kubeconfig=/path/to/kubeconfig -test.v
```
To run E2E tests for a single type:
```bash
cd test/e2e
go test -args -kubeconfig=/path/to/kubeconfig -test.v \
    --ginkgo.focus='Federated "secrets"'
```
It may be helpful to use the delve debugger to gain insight into the components involved in the test:
```bash
cd test/e2e
dlv test -- -kubeconfig=/path/to/kubeconfig -test.v \
    --ginkgo.focus='Federated "secrets"'
```
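Inside the resulting delve session, the usual delve commands apply; the
function name below is a hypothetical placeholder, so list the actual test
entry points first:

```
(dlv) funcs TestE2E      # list functions matching a pattern
(dlv) break <function>   # set a breakpoint at one of them
(dlv) continue
```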
### Running Tests With In-Memory Controllers

Running the KubeFed controllers in-memory for a test run allows the
controllers to be targeted by a debugger (e.g. delve) or the Go race
detector. The prerequisite for this mode is scaling down the KubeFed
controller manager:
- Reduce the `kubefed-controller-manager` deployment replicas to 0. This way
  we can launch the necessary KubeFed controllers ourselves via the test
  binary.

  ```bash
  kubectl scale deployments kubefed-controller-manager -n kube-federation-system --replicas=0
  ```

  Once you've reduced the replicas to 0, you should see the
  `kubefed-controller-manager` deployment update to show 0 pods running:

  ```bash
  kubectl -n kube-federation-system get deployment.apps kubefed-controller-manager
  NAME                         DESIRED   CURRENT   AGE
  kubefed-controller-manager   0         0         14s
  ```
- Run tests.

  ```bash
  cd test/e2e
  go test -race -args -kubeconfig=/path/to/kubeconfig -in-memory-controllers=true \
      -test.v --ginkgo.focus='Federated "secrets"'
  ```
  Additionally, you can run delve to debug the test:

  ```bash
  cd test/e2e
  dlv test -- -kubeconfig=/path/to/kubeconfig -in-memory-controllers=true \
      -test.v --ginkgo.focus='Federated "secrets"'
  ```
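When you're finished running tests in this mode, you'll likely want to
restore the control plane. A replica count of 1 is assumed here; match
whatever your deployment originally used:

```bash
kubectl scale deployments kubefed-controller-manager -n kube-federation-system --replicas=1
```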
### Simulated Scale Testing

The simulated scale e2e test supports exploring control plane behavior with
an arbitrary number of registered clusters. Sync controller behavior is
modified by the test to allow each simulated cluster to be represented as a
namespace in the host cluster.
To run the test, deploy a namespace-scoped control plane, scale it down as per the section on running tests with in-memory controllers, and execute the following:
```bash
cd test/e2e
go test -args -kubeconfig=/path/to/kubeconfig -ginkgo.focus=Scale -scale-test=true -scale-cluster-count=<number>
```
### Cleanup

Follow the cleanup instructions in the user guide.
This project uses the `go-bindata` tool to embed static files into its e2e
test suite, enabling the creation of a self-contained e2e binary. You can
install this utility using the `download-binaries.sh` script.
Use `make generate` to regenerate the `bindata.go` file whenever the bundled
content changes. Following this step is necessary to ensure that the e2e test
suite passes the CI build. Please refer to this script for more information.
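A quick local check that the embedded data is current (the `bindata.go`
location varies, so the grep pattern here is a heuristic):

```bash
make generate
git status --porcelain | grep -i bindata || echo 'bindata is up to date'
```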
## Test Your Changes

In order to test your changes on your Kubernetes cluster, you'll need to
build an image and a deployment config.

NOTE: When KubeFed CRDs are changed, you need to run:

```bash
make generate
```

This step ensures that the CRD resources in the Helm chart are synced.
Ensure binaries from kubebuilder for `etcd` and `kube-apiserver` are in the
path (see prerequisites).
If you just want to have this automated, then run the following command,
specifying your own image. This assumes you've used the steps documented
above to set up two `kind` or `minikube` clusters (`cluster1` and
`cluster2`):

```bash
./scripts/deploy-kubefed.sh <containerregistry>/<username>/kubefed:test cluster2
```
NOTE: You can list multiple joining cluster names in the above command. Also,
please make sure the joining cluster name(s) provided match the joining
cluster contexts from your kubeconfig. This will already be the case if you
used the steps documented above to create your clusters.
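You can list the context names in your kubeconfig to confirm they line up
with the cluster names you pass to the script:

```bash
kubectl config get-contexts -o name
```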
As a convenience, when deploying the KubeFed control plane on a kind cluster
without pushing the Docker images, you can also run:

```bash
make deploy.kind
```

This command can be run multiple times to redeploy KubeFed when any code
changes have been made.
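After a deployment, you can confirm the control plane pods are up (the
namespace matches the default used elsewhere in this guide):

```bash
kubectl -n kube-federation-system get pods
```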
## Test Latest Master Changes (`canary`)

In order to test the latest master changes (tagged as `canary`) on your
Kubernetes cluster, you'll need to generate a config that specifies the
correct image and generated CRDs. To do that, run the following commands:

```bash
make generate
./scripts/deploy-kubefed.sh <containerregistry>/<username>/kubefed:canary cluster2
```
## Test Latest Stable Version (`latest`)

In order to test the latest stable released version (tagged as `latest`) on
your Kubernetes cluster, follow the Helm Chart Deployment instructions from
the user guide.
## Updating the Table of Contents

If you are going to add new sections to this document, make sure to update
the table of contents. This can be done manually or with doctoc.
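doctoc is distributed as an npm package, so one convenient way to run it is
via npx (the file path below is illustrative; point it at this document):

```bash
npx doctoc docs/development.md
```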