- Clone this repo to your local machine
git clone https://github.com/rucio/k8s-tutorial/
- Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
- Install helm: https://helm.sh/docs/intro/install/
- (Optional) Install minikube if you do not have a pre-existing Kubernetes cluster: https://kubernetes.io/docs/tasks/tools/install-minikube/
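You can quickly confirm that the tools are available on your PATH (the exact version output will differ):
kubectl version --client
helm version
minikube version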
NOTE: All following commands should be run from the top-level directory of this repository.
You can skip this step if you have already set up a Kubernetes cluster.
- Run the minikube setup script:
./scripts/setup-minikube.sh
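If you prefer to set up the cluster by hand, a minimal manual equivalent is sketched below (assuming the default minikube driver; the authoritative steps are in ./scripts/setup-minikube.sh):
minikube start # start a local single-node cluster
kubectl get nodes # the node should eventually report STATUS: Ready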
You can perform either an automatic deployment or a manual deployment, as documented below.
- Run the Rucio deployment script:
./scripts/deploy-rucio.sh
- For a manual deployment, first add the helm chart repositories:
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add rucio https://rucio.github.io/helm-charts
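After adding the repositories, refresh the local chart index and check that the Rucio charts are visible:
helm repo update
helm search repo rucio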
- Apply the Rucio secrets:
kubectl apply -k ./secrets
If you have done this step in a previous tutorial deployment on this cluster, the existing Postgres PersistentVolumeClaim must be deleted.
- Check whether the PVC exists:
kubectl get pvc data-postgres-postgresql-0
If the PVC exists, the command will return the following message:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-postgres-postgresql-0 Bound ... 8Gi RWO standard <unset> 4s
If the PVC does not exist, the command will return this message:
Error from server (NotFound): persistentvolumeclaims "data-postgres-postgresql-0" not found
You can skip to the next section if the PVC does not exist.
- If the PVC exists, patch it to allow deletion:
kubectl patch pvc data-postgres-postgresql-0 -p '{"metadata":{"finalizers":null}}'
- Delete the PVC:
kubectl delete pvc data-postgres-postgresql-0
- You might also need to uninstall postgres if it is already installed (a combined cleanup sketch follows below):
helm uninstall postgres
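The checks above can be condensed into one guarded snippet (a sketch, assuming the PVC and release names used in this tutorial):
# remove leftovers from a previous tutorial deployment, if any
if kubectl get pvc data-postgres-postgresql-0 >/dev/null 2>&1; then
  kubectl patch pvc data-postgres-postgresql-0 -p '{"metadata":{"finalizers":null}}'
  kubectl delete pvc data-postgres-postgresql-0
  helm uninstall postgres || true # ignore the error if postgres is not installed
fi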
- Install Postgres:
helm install postgres bitnami/postgresql -f manifests/values-postgres.yaml
- Check the status of the Postgres pod:
kubectl get pod postgres-postgresql-0
Once the Postgres setup is complete, you should see STATUS: Running.
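Instead of polling with the command above, you can block until the pod reports ready:
kubectl wait --for=condition=Ready pod/postgres-postgresql-0 --timeout=300s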
- Once Postgres is running, start the init container pod to set up the Rucio database:
kubectl apply -f manifests/init-pod.yaml
- This command will take some time to complete. You can follow the relevant logs via:
kubectl logs -f init
- Check the status of the init container pod:
kubectl get pod init
Once the init container pod setup is complete, you should see STATUS: Completed.
- Install the Rucio server:
helm install server rucio/rucio-server -f manifests/values-server.yaml
- You can check the deployment status via:
kubectl rollout status deployment server-rucio-server
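Once the rollout has finished, the server pods (prefixed with the deployment name) should all be in the Running state:
kubectl get pods | grep server-rucio-server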
- Deploy the three XRD storage container pods:
kubectl apply -f manifests/xrd.yaml
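You can confirm the storage pods are up (assuming they are named after the xrd1, xrd2 and xrd3 hostnames used further below):
kubectl get pods | grep xrd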
- Install the FTS database:
kubectl apply -f manifests/ftsdb.yaml
- You can check the deployment status via:
kubectl rollout status deployment fts-mysql
- Once the FTS database deployment is complete, install the FTS server:
kubectl apply -f manifests/fts.yaml
- You can check the deployment status via:
kubectl rollout status deployment fts-server
- Install the Rucio daemons:
helm install daemons rucio/rucio-daemons -f manifests/values-daemons.yaml
This command might take a few minutes.
- If at any point a helm installation fails, remove the failed installation before re-installing:
helm list # list all helm installations
helm delete $installation
- You might also get errors that a job already exists. You can remove it via:
kubectl get jobs # get all jobs
kubectl delete jobs/$jobname
Once the setup is complete, you can use Rucio by interacting with it via a client.
You can either run the provided script to showcase the usage of Rucio, or you can manually run the Rucio commands described in the Manual client usage section.
- Run the Rucio usage script:
./scripts/use-rucio.sh
- Start the client container pod:
kubectl apply -f manifests/client.yaml
- You can verify that the client container is running via:
kubectl get pod client
Once the client container pod setup is complete, you should see STATUS: Running.
- Open an interactive shell in the client container:
kubectl exec -it client -- /bin/bash
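Inside the container, a quick sanity check is to confirm that the client can authenticate against the Rucio server:
rucio whoami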
- Create the Rucio Storage Elements (RSEs):
rucio-admin rse add XRD1
rucio-admin rse add XRD2
rucio-admin rse add XRD3
- Add the protocol definitions for the storage servers:
rucio-admin rse add-protocol --hostname xrd1 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD1
rucio-admin rse add-protocol --hostname xrd2 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD2
rucio-admin rse add-protocol --hostname xrd3 --scheme root --prefix //rucio --port 1094 --impl rucio.rse.protocols.gfal.Default --domain-json '{"wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy_read": 1, "third_party_copy_write": 1}, "lan": {"read": 1, "write": 1, "delete": 1}}' XRD3
- Set the FTS attribute on the RSEs:
rucio-admin rse set-attribute --rse XRD1 --key fts --value https://fts:8446
rucio-admin rse set-attribute --rse XRD2 --key fts --value https://fts:8446
rucio-admin rse set-attribute --rse XRD3 --key fts --value https://fts:8446
Note that 8446 is the port exposed by the fts-server pod. You can view the ports opened by a pod via kubectl describe pod PODNAME.
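If you want to double-check an RSE's protocol and attribute configuration, the admin client can print it (shown here for XRD1):
rucio-admin rse info XRD1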
- Set the distances between the RSEs:
rucio-admin rse add-distance --distance 1 --ranking 1 XRD1 XRD2
rucio-admin rse add-distance --distance 1 --ranking 1 XRD1 XRD3
rucio-admin rse add-distance --distance 1 --ranking 1 XRD2 XRD1
rucio-admin rse add-distance --distance 1 --ranking 1 XRD2 XRD3
rucio-admin rse add-distance --distance 1 --ranking 1 XRD3 XRD1
rucio-admin rse add-distance --distance 1 --ranking 1 XRD3 XRD2
- Set the quota for the root account on each RSE:
rucio-admin account set-limits root XRD1 -1
rucio-admin account set-limits root XRD2 -1
rucio-admin account set-limits root XRD3 -1
- Create a scope for the test data:
rucio-admin scope add --account root --scope test
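You can list the existing scopes to confirm it was created:
rucio list-scopes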
- Create some files filled with random data:
dd if=/dev/urandom of=file1 bs=10M count=1
dd if=/dev/urandom of=file2 bs=10M count=1
dd if=/dev/urandom of=file3 bs=10M count=1
dd if=/dev/urandom of=file4 bs=10M count=1
- Upload the files to the RSEs:
rucio upload --rse XRD1 --scope test file1
rucio upload --rse XRD1 --scope test file2
rucio upload --rse XRD2 --scope test file3
rucio upload --rse XRD2 --scope test file4
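You can verify an upload by listing the replicas of one of the files:
rucio list-file-replicas test:file1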
- Group the files into datasets and the datasets into a container:
rucio add-dataset test:dataset1
rucio attach test:dataset1 test:file1 test:file2
rucio add-dataset test:dataset2
rucio attach test:dataset2 test:file3 test:file4
rucio add-container test:container
rucio attach test:container test:dataset1 test:dataset2
rucio add-dataset test:dataset3
rucio attach test:dataset3 test:file4
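To inspect the resulting hierarchy, list the content of the container and of one of the datasets:
rucio list-content test:container
rucio list-content test:dataset1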
- Create a replication rule that replicates the container to XRD3:
rucio add-rule test:container 1 XRD3
This command will output a rule ID, which can also be obtained via:
rucio list-rules test:container
- You can check the details of the rule that has been created:
rucio rule-info <rule_id>
As the daemons run with long sleep cycles (e.g. 30 seconds, 60 seconds) by default, the transfers can take a while to complete. You can monitor the output of the daemon containers to see what they are doing.
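Once the daemons have processed the rule, the new replicas should appear on XRD3. Inside the client container you can check both the rule state and the replica locations:
rucio list-rules test:container # the rule state should eventually become OK
rucio list-file-replicas test:file1 # XRD3 should appear among the replicas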
- Activate kubectl completion:
Bash:
source <(kubectl completion bash)
Zsh:
source <(kubectl completion zsh)
- View all pods:
kubectl get pods
- View all pods in all namespaces:
kubectl get pods --all-namespaces
- View logfiles of a pod:
kubectl logs <NAME>
- Tail logfiles of a pod:
kubectl logs -f <NAME>
- Update helm repositories:
helm repo update
- Shut down minikube:
minikube stop
- Command references: