The data analysis workflows platform deployment consists of a Kubernetes virtual cluster (vcluster) in which Argo Workflows is deployed.
Refer to https://diamondlightsource.github.io/workflows/docs for further explanations and tutorials for the workflows.
The workflow engine can be deployed using Helm:

```shell
helm install workflows-cluster charts/workflows-cluster
```
This will install a virtual cluster together with Argo CD, which then installs all other services inside the vcluster, including the workflow engine itself.
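To check that Argo CD has synced these services, you can list its applications from inside the vcluster. A minimal sketch, assuming Argo CD runs in the argocd namespace of the virtual cluster (as suggested by the admin password command further below):

```shell
# List the Argo CD applications managing the in-vcluster services
# (assumes Argo CD is installed in the argocd namespace)
vcluster connect workflows-cluster -- kubectl get applications -n argocd
```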
To connect to the virtual cluster and run a command inside it, use:

```shell
vcluster connect workflows-cluster -- <COMMAND>
```
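For example, to list all pods running inside the virtual cluster:

```shell
vcluster connect workflows-cluster -- kubectl get pods -A
```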
A development deployment can be installed by supplying the dev values:

```shell
helm install workflows-cluster charts/workflows-cluster -f charts/workflows-cluster/dev-values.yaml
```
Note that to get the workflows-server running inside the dev environment, it is necessary to extract the argo-server-sso secret, delete the deployed sealed secret, and then deploy a new sealed secret inside the virtual cluster using kubectl create -f <SEALED-SECRET>.
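A minimal sketch of these steps, assuming the secret and sealed secret both live in the argocd namespace of the vcluster and share the argo-server-sso name (the namespace and sealed secret name are assumptions, not confirmed by this README):

```shell
# Extract the existing argo-server-sso secret (namespace is an assumption)
vcluster connect workflows-cluster -- kubectl get secret argo-server-sso -n argocd -o yaml

# Delete the deployed sealed secret (resource name is an assumption)
vcluster connect workflows-cluster -- kubectl delete sealedsecret argo-server-sso -n argocd

# Deploy the new sealed secret; <SEALED-SECRET> is your regenerated manifest
vcluster connect workflows-cluster -- kubectl create -f <SEALED-SECRET>
```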
First, install mkdocs and the requisite dependencies in docs/requirements.txt. For this you may wish to use pipx:

```shell
pipx install mkdocs
pipx runpip mkdocs install -r docs/requirements.txt
```
Now, serve the docs with mkdocs:

```shell
mkdocs serve
```
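By default, this serves the documentation at http://127.0.0.1:8000 and rebuilds it whenever the source files change.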
To access the Argo CD dashboard, we need to use port-forwarding to connect to the argocd-server inside the vcluster:

```shell
kubectl -n workflows port-forward svc/argocd-server-x-argocd-x-workflows-cluster 8080:80 &
```
and then open the dashboard on localhost:8080. To obtain the admin password, you can use:

```shell
vcluster connect workflows-cluster -- argocd admin initial-password -n argocd
```
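You can then log in to the dashboard as the admin user with this password.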
The frontend directory contains all the React components for the workflows. The workflows-lib subdirectory contains all the pure components, whereas relay-workflows-lib contains Relay components that fetch their data from a workflows proxy.
Refer to https://diamondlightsource.github.io/workflows/storybook to see all the components in Storybook.