Tidy up demo scripts and instructions
 - fix a dependency loop in the demo with CRDs and their use
 - fix a typo in the enrol script
 - give some clues about manually cleaning up

Signed-off-by: Michael Bridgen <mikeb@squaremobius.net>
squaremo committed Apr 13, 2022
1 parent 77bddb3 commit 7755045
Showing 4 changed files with 60 additions and 11 deletions.
demo/delete-cluster.sh (4 additions, 3 deletions)
@@ -10,10 +10,11 @@ name="$1"

echo "--> removing Cluster object and secret"
kubectl delete --ignore-not-found secret "$name-kubeconfig"
-kubectl delete --ignore-not-found -f "$name.yaml"
+if [ -f "$name.yaml" ]; then kubectl delete --ignore-not-found -f "$name.yaml"; fi

echo "--> deleting kind cluster $name"
kind delete cluster --name "$name"

echo "--> cleaning up kubeconfig"
rm "$name.kubeconfig"
echo "--> cleaning up kubeconfig and YAML"
rm -f "$name.kubeconfig"
rm -f "$name.yaml"
demo/enrol-kind-cluster.sh (1 addition, 1 deletion)
@@ -17,7 +17,7 @@ echo "--> create secret $name-kubeconfig"
kubectl create secret generic "$name-kubeconfig" --from-file="value=$kubeconfig"

host=$(yq eval '.networking.apiServerAddress' "kind.config")
-port=$(yq eval '.clusters[0].cluster.server'"$kubeconfig" | sed 's#https://.*:\([0-9]\{4,5\}\)#\1#')
+port=$(yq eval '.clusters[0].cluster.server' "$kubeconfig" | sed 's#https://.*:\([0-9]\{4,5\}\)#\1#')

echo "<!> Using host $host from kind.config apiServerAddress, this is assumed to be"
echo " an IP address accessible from the control cluster. For example, the IP address"
demo/kind.config (1 addition, 1 deletion)
@@ -1,4 +1,4 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
-apiServerAddress: 192.168.86.33
+apiServerAddress: 192.168.86.84 # needs to be the IP of a local interface
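As a hint for choosing that address, one way to list candidate interface IPs is sketched below; this assumes a Linux host with `iproute2`, and any non-loopback address reachable from the control cluster should work:

```bash
# list global-scope IPv4 addresses on local interfaces
ip -4 -o addr show scope global | awk '{print $4}'
# e.g. 192.168.86.84/24
```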
demo/transcript.md (54 additions, 6 deletions)
@@ -136,12 +136,22 @@ This ties another knot: how does a downstream cluster start syncing anything?
a `BootstrapModule`, which will install the required GitOps Toolkit and Fleeet machinery on each
downstream cluster.

-Now create bootstrap modules referring to these bits of repository:
+This goes in another directory, with an indirection in the form of a Kustomization to sync it. That
+is to avoid a chicken-and-egg problem of having a BootstrapModule object in an "earlier" sync (the
+one created by `flux bootstrap`) than the one that defines its type (the `Kustomization` object
+`fleeet-control`, created above).
+
+Create bootstrap modules referring to these bits of repository:

```bash
# A directory for the fleet objects
mkdir fleet
# Sync the fleet directory
flux create kustomization fleet-objects --source=flux-system --path=fleet --prune=true --depends-on=fleeet-control --export > upstream/fleet-objects-sync.yaml
#
# This bootstrap module will be applied to all downstream clusters that show up in the namespace. The module must be given a particular revision or tag (but not a branch -- that would be the same as using image:latest).
CONFIG_VERSION=v0.1
-cat > upstream/bootstrap-worker.yaml <<EOF
+cat > fleet/bootstrap-worker.yaml <<EOF
---
apiVersion: fleet.squaremo.dev/v1alpha1
kind: BootstrapModule
@@ -179,8 +189,8 @@ spec:
EOF
#
# Add it to git, to be synced to the management cluster
-git add upstream/bootstrap-worker.yaml
-git commit -s -m "Add bootstrap modules for downstream"
+git add fleet/bootstrap-worker.yaml upstream/fleet-objects-sync.yaml
+git commit -s -m "Add bootstrap modules and a sync for them"
git push
```
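To make the indirection concrete, the repository layout after this step would look roughly like the sketch below; the exact file names for the earlier syncs may differ from what is assumed here:

```bash
# Repo layout after this step:
# upstream/                   -- covered by the sync created by `flux bootstrap`
#   fleet-objects-sync.yaml   -- Kustomization "fleet-objects": path ./fleet, depends on "fleeet-control"
# fleet/                      -- synced by "fleet-objects", so applied only once the Fleeet CRDs exist
#   bootstrap-worker.yaml     -- the BootstrapModule for the downstream clusters
```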

@@ -193,18 +203,37 @@ git push
sh create-cluster.sh cluster-1
```

This creates a `kind` cluster and saves a kubeconfig for it to the current directory (e.g., as
`cluster-1.kubeconfig`). So that you can create a bunch of clusters, which takes a while, and make
it look like they are coming online later quickly, there's a separate step to make the cluster
appear in the control plane:

```bash
sh enrol-cluster.sh cluster-1.kubeconfig
```

(NB you supply the kubeconfig file; the script will work out the cluster name from that.)
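For illustration, deriving the cluster name from the supplied kubeconfig file might look something like the following; this is a sketch, not necessarily the exact logic in the script:

```bash
kubeconfig="$1"
# strip the directory and the .kubeconfig suffix: ./cluster-1.kubeconfig -> cluster-1
name=$(basename "$kubeconfig" .kubeconfig)
```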

See what happened in the downstream cluster:

```bash
kubectl --kubeconfig ./cluster-1.kubeconfig get namespace
# and explore from there ...
```

If something doesn't seem like it's working, you can have a look at the cluster nodes:

```bash
kubectl --kubeconfig ./cluster-1.kubeconfig describe nodes
```

You may see problems with the CNI plugin not working. Upgrading `kind` and trying again might help.

### Create a module

```bash
# Create a module and add that to syncing
-cat > upstream/podinfo-module.yaml <<EOF
+cat > fleet/podinfo-module.yaml <<EOF
apiVersion: fleet.squaremo.dev/v1alpha1
kind: Module
metadata:
@@ -222,7 +251,7 @@ spec:
kustomize:
path: ./kustomize
EOF
-git add upstream/podinfo-module.yaml
+git add fleet/podinfo-module.yaml
git commit -s -m "Add module for podinfo app"
git push
```
@@ -247,3 +276,22 @@ kubectl --kubeconfig ./cluster-a.kubeconfig get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
podinfo 2/2 2 2 5m17s
```

# Cleaning up

The script `delete-cluster.sh` will remove the Kubernetes resources associated with a cluster, the
files created by scripts, and delete the `kind` cluster itself.
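For example, tearing down the cluster created earlier would look like this, assuming the same working directory as the other scripts:

```bash
sh delete-cluster.sh cluster-1
```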

If you need to clean things up manually, this is what might need to be deleted for, say,
`cluster-1`:

In the directory in which you ran scripts:
- a file `cluster-1.kubeconfig`
- a file `cluster-1.yaml`

In the cluster:
- a `Cluster` object named `cluster-1`
- a `Secret` named `cluster-1-kubeconfig`
- a `RemoteAssemblage` named `cluster-1`
- Kustomization objects with names including `cluster-1`
- a `ProxyAssemblage` named `cluster-1`
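A rough command-line equivalent of that manual clean-up for `cluster-1` is sketched below; the lowercase resource names are assumptions about how the CRDs register with `kubectl`, so adjust them to whatever `kubectl api-resources` reports:

```bash
# files left behind by the scripts
rm -f cluster-1.kubeconfig cluster-1.yaml

# objects in the control cluster
kubectl delete --ignore-not-found cluster cluster-1
kubectl delete --ignore-not-found secret cluster-1-kubeconfig
kubectl delete --ignore-not-found remoteassemblage cluster-1
kubectl delete --ignore-not-found proxyassemblage cluster-1

# find any leftover Kustomization objects named after the cluster
kubectl get kustomizations --all-namespaces | grep cluster-1

# the kind cluster itself
kind delete cluster --name cluster-1
```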
