This repository defines the AWS infrastructure required to operate the ml-streaming showcase.
- AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html
- AWS CLI Session Manager plugin: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
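To confirm both prerequisites are installed, you can run two quick checks (running `session-manager-plugin` without arguments should simply print a confirmation message):

```sh
# Quick checks that the prerequisites are on PATH.
aws --version
session-manager-plugin
```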
```sh
brew tap gjbae1212/gossm
brew install gossm terragrunt tfenv jq awscli fluxcd/tap/flux
tfenv install 0.14.1
tfenv use 0.14.1
```
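After the install it is worth verifying that the pinned Terraform version is active (an optional sanity check, not part of the original setup):

```sh
# Sanity check: tfenv should have pinned Terraform to 0.14.1.
terraform version     # expected to report Terraform v0.14.1
terragrunt --version
flux --version
```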
You might run into issues because both `terragrunt` and `tfenv` depend on `terraform`. Some solutions to this are outlined in gruntwork-io/terragrunt#580.
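One commonly suggested workaround for this conflict (a sketch only; verify it against the linked issue and the current Homebrew formulae) is to let `tfenv` own the `terraform` binary and install `terragrunt` without its Homebrew `terraform` dependency:

```sh
# Hedged workaround: skip terragrunt's Homebrew terraform dependency
# so the terraform binary managed by tfenv is used instead.
brew install tfenv
brew install --ignore-dependencies terragrunt
```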
Ensure you have an AWS profile configured that is named `datareply`. We do not yet use role assumption for this use case, so the only thing declared in `~/.aws/config` for the `datareply` profile is:

```ini
[profile datareply]
region = eu-central-1
```
Additionally, make sure to have set proper access credentials in `~/.aws/credentials`:

```ini
[datareply]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```
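To verify that the profile and credentials resolve correctly, you can ask STS who you are (an optional check, not required by the setup):

```sh
# Should print the account, user ID and ARN associated with the datareply profile.
aws sts get-caller-identity --profile datareply
```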
Terraform will pick up the profile called `datareply` and assumes you have sufficient privileges to apply the infrastructure spec. This repository will soon have an automated CD pipeline which will make this partly redundant.
```sh
cd env/dev/k3s
terragrunt apply
```
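If you want to review the changes before applying them, a plan run works as usual (standard Terragrunt behaviour rather than anything specific to this repository):

```sh
cd env/dev/k3s
# Preview the changes without applying them.
terragrunt plan
```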
This section explains the steps needed to run kubectl against the cluster locally. You need to set this up once to get a local copy of the kubeconfig.
- Run the bundled `bash bin/bootstrap.sh`
- (optionally) Run `bash bin/bootstrap.sh k3s-arm-cluster` to set up the ARM cluster. You need to run this if you are planning to connect to that cluster via kubectl.
- Run the bundled `bash bin/connect.sh` or `bash bin/connect.sh k3s-arm-cluster`
- Use kubectl as usual
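A minimal usage sketch, assuming `bin/connect.sh` leaves a kubeconfig named `k3s-cluster.yaml` in the repository root (the same file the Flux bootstrap commands further below reference; adjust the path if your copy lives elsewhere):

```sh
# Point kubectl at the kubeconfig produced by the connect script (assumed location).
export KUBECONFIG="$PWD/k3s-cluster.yaml"
kubectl get nodes
kubectl get pods --all-namespaces
```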
- Make sure you have initialized Kubernetes connectivity
- Make sure you are connected to the cluster
- Run `bash bin/extract_kafka_clientconfig.sh k3s-arm-cluster`, which yields the Kafka client config
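As a purely illustrative sketch of how the extracted config could be used, assuming it is a standard Kafka client properties file and that the Kafka CLI tools are installed locally; the file name, broker address, and topic below are placeholders, not values produced by the script:

```sh
# Hypothetical example: consume from a topic using the extracted client config.
# client.properties, BROKER:9092 and my-topic are placeholders.
kafka-console-consumer.sh \
  --bootstrap-server BROKER:9092 \
  --consumer.config client.properties \
  --topic my-topic \
  --from-beginning
```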
Please review the cifar10 workload section.
The installed Grafana dashboard provides an indication of the model performance:

```sh
kubectl port-forward svc/seldon-core-analytics-grafana 3000:80 -n seldon-system
```

Open localhost:3000 in your web browser to view the dashboard. Log in with admin:password.
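To confirm the port-forward is actually serving Grafana before opening the browser, you can hit its health endpoint (optional check):

```sh
# Should return a small JSON document with the Grafana version and database status.
curl -s http://localhost:3000/api/health
```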
This only needs to be done once by the cluster admin in order to bootstrap Flux on the cluster and connect the app-configuration-layer-ml-streaming repository with it.
```sh
export GITHUB_TOKEN="TODO REPLACE BY TOKEN"
export GITHUB_USER="TODO REPLACE BY USER"

flux bootstrap github \
  --owner=DataReply \
  --repository=app-configuration-layer-ml-streaming \
  --branch=master \
  --kubeconfig k3s-cluster.yaml \
  --context=default \
  --path=clusters/dev
```
For the ARM cluster:

```sh
flux bootstrap github \
  --owner=DataReply \
  --repository=app-configuration-layer-ml-streaming \
  --branch=master \
  --kubeconfig k3s-arm-cluster.yaml \
  --context=default \
  --path=clusters/dev-arm
```
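After bootstrapping, a quick way to verify that Flux is healthy and has picked up the configuration is via the flux CLI installed earlier (an optional check; swap in `k3s-arm-cluster.yaml` for the ARM cluster):

```sh
# Verify the Flux installation and list the reconciled kustomizations.
flux check --kubeconfig k3s-cluster.yaml --context default
flux get kustomizations --kubeconfig k3s-cluster.yaml --context default
```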