Demo application for the blogpost: "Optimizing Kafka Streams Topologies running on Kubernetes".

Requirements:

- Kubernetes Cluster
- Helm
- Kafka ( e.g., via confluent helm charts )
- [KEDA](https://keda.sh/)
- bakdata Helm charts, which can be added with:

  ```shell
  helm repo add bakdata-common https://raw.githubusercontent.com/bakdata/streams-bootstrap/master/charts
  ```
We provide schemas for the Avro Random Generator in `./data-gen`. Confluent's Datagen Source Connector and their ksql-datagen tool can use these schemas to generate data into the input topics.
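As a sketch, one of the provided schemas can be fed to ksql-datagen like this. The schema filename, topic name, and key field below are hypothetical placeholders; substitute the actual files from `./data-gen`:

```shell
# Generate 1000 Avro records into an input topic using a provided schema.
# Schema path, topic, and key field are placeholders, not actual repo contents.
ksql-datagen schema=./data-gen/impressions.avro \
    format=avro \
    topic=impressions \
    key=impression_id \
    iterations=1000
```

This assumes a running Kafka cluster and Schema Registry reachable with the tool's default configuration.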
We use Jib to build the application and push it to a Docker registry.
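Assuming the Jib Gradle plugin is applied, building and pushing the image can be sketched as follows; the registry path is a placeholder:

```shell
# Build the container image and push it to the given registry,
# without requiring a local Docker daemon.
# "my-registry/streams-demo" is a placeholder image name.
./gradlew jib --image=my-registry/streams-demo:latest
```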
We provide three example deployments in `./deployments`:

- `values-all.yaml`: Deployment for the application running the whole topology
- `values-customer-lookup.yaml`: Deployment for the first sub-topology, performing the customer lookup
- `values-long-running.yaml`: Deployment for the second, long-running sub-topology
Using Helm, we can then deploy the application to Kubernetes:

```shell
helm upgrade --debug --install --force \
    --values values-all.yaml \
    --set image={image} \
    --set streams.brokers={broker} \
    --set streams.schemaRegistryUrl={schemaRegistryUrl} \
    complete-topology bakdata-common/streams-app --namespace {namespace}
```
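Once the release is installed, its status and pods can be checked with standard Helm and kubectl commands; `{namespace}` is the same placeholder used above:

```shell
# Show the state of the Helm release and the pods it created.
helm status complete-topology --namespace {namespace}
kubectl get pods --namespace {namespace}
```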