From 11cb11f1be3e9a8889e1b71a2891d088e5a8a052 Mon Sep 17 00:00:00 2001
From: Eitan Suez
The Prometheus dashboard
Before deploying Prometheus, patch the prometheus deployment to use the latest version of the image:
Launch the Prometheus dashboard:
-istioctl dash prometheus
+
Here are some PromQL queries you can try out, that will fetch metrics from Prometheus' metrics store:
-
The number of requests made by petclinic-frontend
to the customers
service:
-istio_requests_total{source_app="petclinic-frontend",destination_app="customers-service",reporter="source"}
+istio_requests_total{source_app="petclinic-frontend",destination_app="customers-service",reporter="source"}
-
A business metric exposed by the application proper: the number of calls to the findPet
method:
-petclinic_pet_seconds_count{method="findPet"}
+
Istio's Grafana metrics dashboards¶
Istio provides standard service mesh dashboards, based on the standard metrics collected by Envoy and sent to Prometheus.
Deploy Grafana:
-kubectl apply -f samples/addons/grafana.yaml
+
Launch the Grafana dashboard:
-istioctl dash grafana
+
Navigate to the dashboards section; you will see an Istio folder.
Select the Istio service dashboard.
@@ -787,17 +802,17 @@ Kiali&
-
Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:
-siege --concurrent=6 --delay=2 --file=./frontend-urls.txt
+
-
Deploy Kiali:
-kubectl apply -f samples/addons/kiali.yaml
+
-
Launch the Kiali dashboard:
-istioctl dashboard kiali
+
Select the Graph view and the default
namespace.
The flow of requests through the applications call graph will be rendered.
diff --git a/search/search_index.json b/search/search_index.json
index 061547c..21dbd7e 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud
: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away
The following instructions will walk you through deploying spring-petclinic-istio
either using a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Let's get started.
"},{"location":"api/","title":"API Endpoints","text":"Below, we demonstrate calling endpoints on the application in either of two ways:
- Internally from within the Kubernetes cluster
-
Through the \"front door\", via the ingress gateway
The environment variable LB_IP
captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep
client","text":"We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep
deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep
to your cluster:
kubectl apply -f manifests/sleep.yaml\n
Wait for the sleep pod to be ready (2/2 containers).
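One way to block until it is ready (a sketch, assuming the sample's standard app=sleep label):
kubectl wait --for=condition=Ready pod -l app=sleep --timeout=120s\n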
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"We assume that you have the excellent jq utility already installed.
"},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
curl -s http://$LB_IP/api/vet/vets | jq\n
"},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"Here are a couple of customers-service
endpoints to test:
InternalExternal kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
curl -s http://$LB_IP/api/customer/owners | jq\n
curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n
Give the owner George Franklin a new pet, Sir Hiss (a snake):
InternalExternal kubectl exec deploy/sleep -- curl -s -v \\\n -X POST -H 'Content-Type: application/json' \\\n customers-service:8080/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
curl -v -X POST -H 'Content-Type: application/json' \\\n http://$LB_IP/api/customer/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
This can also be performed directly from the UI.
"},{"location":"api/#the-visits-service","title":"The Visits service","text":"Test one of the visits-service
endpoints:
InternalExternal kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
"},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"Call petclinic-frontend
endpoint that calls both the customers and visits services:
InternalExternal kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
"},{"location":"api/#summary","title":"Summary","text":"Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"Deployment decisions:
- We use mysql, installed with helm using the charts from the bitnami repository.
- We deploy a separate database statefulset for each service.
- Inside each statefulset we name the database \"service_instance_db\".
- Apps use the root username \"root\".
- The helm installation will generate a root user password in a secret.
- The applications reference the secret name to get at the database credentials.
"},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"We assume you already have helm installed.
-
Add the helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami\n
-
Update it:
helm repo update\n
"},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"Deploy the databases with a helm install
command, one for each app/service:
-
Vets:
helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Visits:
helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Customers:
helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
The databases should be up after ~ 1-2 minutes.
Wait for the pods to be ready (2/2 containers).
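One way to block until all three are ready (a sketch; it assumes the Bitnami chart names each StatefulSet after its release, e.g. vets-db-mysql):
for db in vets visits customers; do kubectl rollout status statefulset ${db}-db-mysql; done\n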
"},{"location":"deploy/#build-the-apps-docker-images-and-push-them-to-image-registry","title":"Build the apps, docker images, and push them to image registry","text":"We assume you already have maven installed locally.
-
Compile the apps and run the tests:
mvn clean package\n
-
Build the images
mvn spring-boot:build-image\n
-
Publish the images
./push-images.sh\n
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"The deployment manifests are located in manifests/deploy
.
The services are vets
, visits
, customers
, and petclinic-frontend
. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
vets-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: vets-service\n labels:\n account: vets-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: vets-service\n labels:\n app: vets-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: vets-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: vets-v1\n labels:\n app: vets-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: vets-service\n version: v1\n template:\n metadata:\n labels:\n app: vets-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: vets-service\n containers:\n - name: vets-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-vets-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://vets-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: vets-db-mysql\n key: mysql-root-password\n
visits-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: visits-service\n labels:\n account: visits-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: visits-service\n labels:\n app: visits-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: visits-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: visits-v1\n labels:\n app: visits-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: visits-service\n version: v1\n template:\n metadata:\n labels:\n app: visits-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: visits-service\n containers:\n - name: visits-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-visits-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: DELAY_MILLIS\n value: \"0\"\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://visits-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: visits-db-mysql\n key: mysql-root-password\n
customers-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: customers-service\n labels:\n account: customers-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: customers-service\n labels:\n app: customers-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: customers-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: customers-v1\n labels:\n app: customers-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: customers-service\n version: v1\n template:\n metadata:\n labels:\n app: customers-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: customers-service\n containers:\n - name: customers-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-customers-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://customers-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: customers-db-mysql\n key: mysql-root-password\n
petclinic-frontend.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: petclinic-frontend\n labels:\n account: petclinic-frontend\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: petclinic-frontend\n labels:\n app: petclinic-frontend\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: petclinic-frontend\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: petclinic-frontend-v1\n labels:\n app: petclinic-frontend\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: petclinic-frontend\n version: v1\n template:\n metadata:\n labels:\n app: petclinic-frontend\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: petclinic-frontend\n containers:\n - name: petclinic-frontend\n image: ${PULL_IMAGE_REGISTRY}/petclinic-frontend:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n
Apply the deployment manifests:
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n
The manifests reference the image registry environment variable, and so are passed through envsubst
for resolution before being applied to the Kubernetes cluster.
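If you want to preview the result of the substitution before applying anything, the same pipeline can be inspected, for example by filtering for the resolved image references:
cat manifests/deploy/*.yaml | envsubst | grep image:\n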
Wait for the pods to be ready (2/2 containers).
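One way to block until each Deployment reports its pod ready, using the deployment names defined in the manifests above:
for d in vets-v1 visits-v1 customers-v1 petclinic-frontend-v1; do kubectl rollout status deploy $d; done\n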
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
kubectl logs --follow svc/customers-service\n
"},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"The below instructions are taken from the output from the prior helm install
command.
Connect directly to the vets-db-mysql
database:
-
Obtain the root password from the Kubernetes secret:
bash shell / fish shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
-
Create, and shell into a mysql client pod:
kubectl run vets-db-mysql-client \\\n --rm --tty -i --restart='Never' \\\n --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n --namespace default \\\n --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n --command -- bash\n
-
Use the mysql
client to connect to the database:
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n
At the mysql prompt:
-
Select the database:
use service_instance_db;\n
-
List the tables:
show tables;\n
-
Query vet records:
select * from vets;\n
Exit the mysql prompt with \\q
, then exit the pod with exit
.
One can similarly connect to and inspect the customers-db-mysql
and visits-db-mysql
databases.
"},{"location":"deploy/#summary","title":"Summary","text":"At this point you should have all applications deployed and running, connected to their respective databases.
But we cannot access the application's UI until we configure ingress, which is our next topic.
"},{"location":"ingress/","title":"Configure Ingress","text":"The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter: Envoy provides those capabilities. The Spring Cloud Gateway dependency was therefore removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system
namespace with:
kubectl get deploy -n istio-system\n
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
gateway.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n name: main-gateway\nspec:\n selector:\n istio: ingressgateway\n servers:\n - port:\n number: 80\n name: http\n protocol: HTTP\n hosts:\n - \"*\"\n
Apply the gateway configuration to your cluster:
kubectl apply -f manifests/ingress/gateway.yaml\n
Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:
curl -v http://$LB_IP/\n
"},{"location":"ingress/#configure-routing","title":"Configure routing","text":"The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml
:
routes.yaml
configures routing for the Istio ingress gateway (which replaces spring cloud gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway
prefix to the petclinic-frontend
application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.
routes.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: petclinic-routes\nspec:\n hosts:\n - \"*\"\n gateways:\n - main-gateway\n http:\n - match:\n - uri:\n prefix: \"/api/customer/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: customers-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/visit/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: visits-service.default.svc.cluster.local\n port:\n number: 8080\n timeout: 4s\n - match:\n - uri:\n prefix: \"/api/vet/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: vets-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/gateway\"\n route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n - route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n
Apply the routing rules for the gateway:
kubectl apply -f manifests/ingress/routes.yaml\n
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
Next, let us explore some of the API endpoints exposed by the PetClinic application.
"},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#distributed-tracing","title":"Distributed Tracing","text":"The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.
In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.
Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.
See the application.yaml
resource files and the property management.tracing.baggage.remote-fields
which configures the fields to propagate.
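To locate these settings in the repository, a recursive search for the property name works (run from the repository's base directory):
grep -Rn remote-fields .\n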
To make testing this easier, Istio is configured with 100% trace sampling.
"},{"location":"observability/#observe-distributed-traces","title":"Observe distributed traces","text":"In its samples
directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.
Deploy Jaeger to your Kubernetes cluster:
-
Navigate to the base directory of your Istio distribution:
cd istio-1.20.2\n
-
Deploy jaeger:
kubectl apply -f samples/addons/jaeger.yaml\n
-
Wait for the Jaeger pod to be ready:
kubectl get pod -n istio-system\n
Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:
-
Call the petclinic-frontend
endpoint that calls both the customers
and visits
services. Feel free to make multiple requests to generate multiple traces.
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
-
Launch the jaeger dashboard:
istioctl dashboard jaeger\n
-
In Jaeger, search for traces involving the services petclinic-frontend
, customers
, and visits
.
You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.
Close the Jaeger dashboard.
"},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"Istio has built-in support for Prometheus as a mechanism for metrics collection.
Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.
Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n
Separately, Envoy collects a variety of metrics, often referred to as RED metrics, for: Requests, Errors, and Durations.
Inspect the metrics collected and exposed by the Envoy sidecar:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n
One common metric to note is the counter istio_requests_total
:
kubectl exec deploy/customers-v1 -c istio-proxy -- \\\n curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n
Both the application metrics and envoy's metrics are aggregated (merged) and exposed on port 15020:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n
What allows Istio to aggregate both scrape endpoints are annotations placed in the pod template specification for each application, communicating the URL of the Prometheus scrape endpoint.
For example, here are the prometheus annotations for the customers
service.
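They can also be read directly off the running Deployment, for example:
kubectl get deploy customers-v1 -o jsonpath='{.spec.template.metadata.annotations}'\n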
For more information on metrics merging and Prometheus, see the Istio documentation.
"},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"To send a steady stream of requests through the petclinic-frontend
application, we use siege. Feel free to use other tools, or maybe a simple bash while
loop.
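For instance, a minimal loop of that kind might look like this (a sketch; it assumes LB_IP is set as described earlier and repeatedly hits one of the endpoints used before):
while true; do curl -s -o /dev/null http://$LB_IP/api/gateway/owners/6; sleep 1; done\n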
Run the following siege
command to send requests to various endpoints in our application:
siege --concurrent=6 --delay=2 --file=./urls.txt\n
Leave the siege command running.
Open a separate terminal in which to run subsequent commands.
"},{"location":"observability/#the-prometheus-dashboard","title":"The Prometheus dashboard","text":"Deploy Prometheus to your Kubernetes cluster:
kubectl apply -f samples/addons/prometheus.yaml\n
The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as \"exemplars.\" The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.
Before deploying Prometheus, patch the prometheus deployment to use the latest version of the image:
kubectl patch deploy -n istio-system prometheus --patch-file=manifests/config/prom-patch.yaml\n
Launch the Prometheus dashboard:
istioctl dash prometheus\n
Here are some PromQL queries you can try out; they fetch metrics from Prometheus' metrics store:
-
The number of requests made by petclinic-frontend
to the customers
service:
istio_requests_total{source_app=\"petclinic-frontend\",destination_app=\"customers-service\",reporter=\"source\"}\n
-
A business metric exposed by the application proper: the number of calls to the findPet
method:
petclinic_pet_seconds_count{method=\"findPet\"}\n
"},{"location":"observability/#istios-grafana-metrics-dashboards","title":"Istio's Grafana metrics dashboards","text":"Istio provides standard service mesh dashboards, based on the standard metrics collected by Envoy and sent to Prometheus.
Deploy Grafana:
kubectl apply -f samples/addons/grafana.yaml\n
Launch the Grafana dashboard:
istioctl dash grafana\n
Navigate to the dashboards section; you will see an Istio folder.
Select the Istio service dashboard.
Review the Istio Service Dashboards for the services petclinic-frontend
, vets
, customers
, and visits
.
The dashboard exposes metrics such as the client request volume, client success rate, and client request durations.
"},{"location":"observability/#petclinic-custom-grafana-dashboard","title":"PetClinic custom Grafana dashboard","text":"The version of PetClinic from which this version derives already contained a custom Grafana dashboard.
To import the dashboard into Grafana:
- Navigate to \"Dashboards\"
- Click the \"New\" pulldown button, and select \"Import\"
- Select \"Upload dashboard JSON file\", and select the file
grafana-petclinic-dashboard.json
from the repository's base directory. - Select \"Prometheus\" as the data source
- Finally, click \"Import\"
The top two panels, showing request latencies and request volumes, are now technically redundant: both are subsumed by the standard Istio dashboards.
Below those panels are custom application metrics, such as the number of owners, pets, and visits created or updated.
Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.
"},{"location":"observability/#kiali","title":"Kiali","text":"Kiali is a bespoke \"console\" for Istio Service Mesh. One of the features of Kiali that stands out are the visualizations of requests making their way through the call graph.
-
Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:
siege --concurrent=6 --delay=2 --file=./frontend-urls.txt\n
-
Deploy Kiali:
kubectl apply -f samples/addons/kiali.yaml\n
-
Launch the Kiali dashboard:
istioctl dashboard kiali\n
Select the Graph view and the default
namespace.
The flow of requests through the applications call graph will be rendered.
"},{"location":"resilience/","title":"Resilience","text":"The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.
In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml
configures the equivalent 4s timeout on requests to the visits
service, replacing the previous Resilience4j-based implementation.
timeouts.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: visits\nspec:\n hosts:\n - visits-service.default.svc.cluster.local\n http:\n - route:\n - destination:\n host: visits-service.default.svc.cluster.local\n timeout: 4s\n
Apply the timeout configuration to your cluster:
kubectl apply -f manifests/config/timeouts.yaml\n
The fallback logic in PetClinicController.getOwnerDetails
was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
Here is how to test the behavior:
-
Call visits-service
directly:
bash shell / fish shell kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits\\?petId=8 | jq\n
Observe the call succeed and return a list of visits for this particular pet.
-
Call the petclinic-frontend
endpoint, and note that for each pet, we see a list of visits:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
-
Edit the deployment manifest for the visits-service
so that the environment variable DELAY_MILLIS
is set to the value \"5000\" (which is 5 seconds). One way to do this is to edit the file with (then save and exit):
kubectl edit deploy visits-v1\n
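Alternatively, the same change can be made non-interactively; this updates the pod template and triggers a new rollout:
kubectl set env deploy/visits-v1 DELAY_MILLIS=5000\n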
Wait until the new pod has rolled out and become ready.
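One way to block until the rollout completes:
kubectl rollout status deploy/visits-v1\n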
-
Once the new visits-service
pod reaches Ready status, make the same call again:
bash shell / fish shell kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8\n
kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits\\?petId=8\n
Observe the 504 (Gateway timeout) response this time around (because it exceeds the 4-second timeout).
-
Call the petclinic-frontend
endpoint once more, and note that for each pet, the list of visits is empty:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place.
-
Tail the logs of petclinic-frontend
and observe a log message indicating the fallback was triggered.
kubectl logs --follow svc/petclinic-frontend\n
Restore the original behavior with no delay: edit the visits-v1
deployment again and set the environment variable value to \"0\".
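If you used kubectl set env earlier, the same command restores the original value:
kubectl set env deploy/visits-v1 DELAY_MILLIS=0\n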
Let us next turn our attention to security-related configuration.
"},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"Workloads in Istio are assigned a SPIFFE identity.
Authorization policies can be applied that allow or deny access to a service as a function of that identity.
For example, we can restrict access to each database exclusively to its corresponding service, i.e.:
- Only the visits service can access the visits db
- Only the vets service can access the vets db
- Only the customers service can access the customers db
The above policy is specified in the file authorization-policies.yaml
:
authorization-policies.yaml ---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: vets-db-allow-vets-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: vets-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/vets-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: customers-db-allow-customers-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: customers-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/customers-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: visits-db-allow-visits-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: visits-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/visits-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n
The main aspects of each authorization policy are:
- The
selector
identifies the workload to apply the policy to - The
action
in this case is to Allow requests that match the given rules - The
rules
section specifies the source principal
, aka workload identity. - The
to
section applies the policy to requests on port 3306, the port that mysqld
listens on.
"},{"location":"security/#exercise","title":"Exercise","text":" -
Use the previous \"Test database connectivity\" instructions to create a client pod and to use it to connect to the \"vets\" database. This operation should succeed. You should be able to see the \"service_instance_db\" and see the tables and query them.
-
Apply the authorization policies:
kubectl apply -f manifests/config/authorization-policies.yaml\n
-
Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
-
Verify that the application itself continues to function because all database queries are performed via its associated service.
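For example, the vets endpoint exercised earlier should still return data, since that request reaches the database through the vets-service workload:
kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n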
"},{"location":"security/#summary","title":"Summary","text":"One problem in the enterprise is enforcing access to data via microservices. Giving another team direct access to data is a well-known anti-pattern, as it couples multiple applications to a specific storage technology, a specific database schema, one that cannot be allowed to evolve without impacting everyone.
With the aid of Istio and workload identity, we can make sure that the manner in which data is stored by a microservice is an entirely internal concern, one that can be modified at a later time, perhaps to use a different storage backend, or perhaps simply to allow for the evolution of the schema without \"breaking all the clients\".
After traffic management, resilience, and security, it is time to discuss the other important facet that service meshes help with: Observability.
"},{"location":"setup/","title":"Setup","text":"Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
"},{"location":"setup/#kubernetes","title":"Kubernetes","text":"Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
Local Setup / Remote Setup On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seem to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
k3d cluster create my-istio-cluster \\\n --api-port 6443 \\\n --k3s-arg \"--disable=traefik@server:0\" \\\n --port 80:80@loadbalancer \\\n --registry-create my-cluster-registry:0.0.0.0:5010\n
Above, we:
- Disable the default traefik load balancer and configure local port 80 to instead forward to the \"istio-ingressgateway\" load balancer.
- Create a registry we can push to locally on port 5010 that is accessible from the Kubernetes cluster at \"my-cluster-registry:5000\".
Provision a k8s cluster in the cloud of your choice. For example, on GCP:
gcloud container clusters create my-istio-cluster \\\n --cluster-version latest \\\n --machine-type \"e2-standard-2\" \\\n --num-nodes \"3\" \\\n --network \"default\"\n
"},{"location":"setup/#environment-variables","title":"Environment variables","text":"Use envrc-template.sh
as the basis for configuring environment variables.
Be sure to:
- Set the local variable
local_setup
to either \"true\" or \"false\", depending on your choice of a local or remote cluster. - If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.
I highly recommend using direnv
, a convenient way of associating setting environment variables with a specific directory.
If you choose to use direnv
, then the variables can be automatically set by renaming the file to .envrc
and running the command direnv allow
.
"},{"location":"setup/#istio","title":"Istio","text":" -
Follow the Istio documentation's instructions to download Istio.
-
After you have added the istioctl
CLI to your PATH, run the following command to install Istio:
istioctl install -f manifests/istio-install-manifest.yaml\n
The above-referenced configuration manifest configures certain facets of the mesh, namely:
- Setting trace sampling at 100%, for ease of obtaining distributed traces
- Deploying sidecars (envoy proxies) not only alongside workloads, but also in front of mysql databases.
istio-install-manifest.yaml ---\napiVersion: install.istio.io/v1alpha1\nkind: IstioOperator\nspec:\n meshConfig:\n accessLogFile: /dev/stdout # (1)\n extensionProviders:\n - name: otel\n envoyOtelAls:\n service: opentelemetry-collector.istio-system.svc.cluster.local\n port: 4317\n\n components:\n pilot:\n k8s:\n env:\n - name: PILOT_TRACE_SAMPLING # (2)\n value: \"100\"\n resources:\n requests:\n cpu: 10m\n memory: 100Mi\n\n values:\n global:\n proxy:\n resources:\n requests:\n cpu: 10m\n memory: 40Mi\n\n pilot:\n autoscaleEnabled: false\n env:\n PILOT_ENABLE_MYSQL_FILTER: \"true\" # (3)\n\n gateways:\n istio-egressgateway:\n autoscaleEnabled: false\n istio-ingressgateway:\n autoscaleEnabled: false\n
- Turns on sidecar access logging to stdout
- Sets trace sampling to 100% to easily see distributed traces (for testing)
- Enables mysql filter, see protocol selection and env vars
Once Istio is installed, feel free to verify the installation with:
istioctl verify-install\n
In the next section, you will work on deploying the microservices to the default
namespace.
As a final step, label the default
namespace for sidecar injection with:
kubectl label ns default istio-injection=enabled\n
"},{"location":"summary/","title":"Summary","text":"Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
In spring-petclinic-istio
, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
- Spring Boot and actuator are the foundation of modern Spring applications.
- Spring Data JPA and the mysql connector for database access.
- Micrometer for exposing application metrics via a Prometheus endpoint.
- Micrometer-tracing for propagating trace headers through these applications.
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud
: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away
The following instructions will walk you through deploying spring-petclinic-istio
either using a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Let's get started.
"},{"location":"api/","title":"API Endpoints","text":"Below, we demonstrate calling endpoints on the application in either of two ways:
- Internally from within the Kubernetes cluster
-
Through the \"front door\", via the ingress gateway
The environment variable LB_IP
captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep
client","text":"We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep
deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep
to your cluster:
kubectl apply -f manifests/sleep.yaml\n
Wait for the sleep pod to be ready (2/2 containers).
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"We assume that you have the excellent jq utility already installed.
"},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
curl -s http://$LB_IP/api/vet/vets | jq\n
"},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"Here are a couple of customers-service
endpoints to test:
InternalExternal kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
curl -s http://$LB_IP/api/customer/owners | jq\n
curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n
Give the owner George Franklin a new pet, Sir Hiss (a snake):
InternalExternal kubectl exec deploy/sleep -- curl -s -v \\\n -X POST -H 'Content-Type: application/json' \\\n customers-service:8080/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
curl -v -X POST -H 'Content-Type: application/json' \\\n http://$LB_IP/api/customer/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
This can also be performed directly from the UI.
"},{"location":"api/#the-visits-service","title":"The Visits service","text":"Test one of the visits-service
endpoints:
InternalExternal kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
"},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"Call petclinic-frontend
endpoint that calls both the customers and visits services:
InternalExternal kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
"},{"location":"api/#summary","title":"Summary","text":"Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"Deployment decisions:
- We use mysql, installed with helm using the charts from the bitnami repository.
- We deploy a separate database statefulset for each service.
- Inside each statefulset we name the database \"service_instance_db\".
- Apps use the root username \"root\".
- The helm installation will generate a root user password in a secret.
- The applications reference the secret name to get at the database credentials.
"},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"We assume you already have helm installed.
-
Add the helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami\n
-
Update it:
helm repo update\n
"},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"Deploy the databases with a helm install
command, one for each app/service:
-
Vets:
helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Visits:
helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Customers:
helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
The databases should be up after ~ 1-2 minutes.
Wait for the pods to be ready (2/2 containers).
"},{"location":"deploy/#build-the-apps-docker-images-and-push-them-to-image-registry","title":"Build the apps, docker images, and push them to image registry","text":"We assume you already have maven installed locally.
-
Compile the apps and run the tests:
mvn clean package\n
-
Build the images
mvn spring-boot:build-image\n
-
Publish the images
./push-images.sh\n
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"The deployment manifests are located in manifests/deploy
.
The services are vets
, visits
, customers
, and petclinic-frontend
. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
vets-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: vets-service\n labels:\n account: vets-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: vets-service\n labels:\n app: vets-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: vets-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: vets-v1\n labels:\n app: vets-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: vets-service\n version: v1\n template:\n metadata:\n labels:\n app: vets-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: vets-service\n containers:\n - name: vets-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-vets-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://vets-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: vets-db-mysql\n key: mysql-root-password\n
visits-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: visits-service\n labels:\n account: visits-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: visits-service\n labels:\n app: visits-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: visits-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: visits-v1\n labels:\n app: visits-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: visits-service\n version: v1\n template:\n metadata:\n labels:\n app: visits-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: visits-service\n containers:\n - name: visits-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-visits-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: DELAY_MILLIS\n value: \"0\"\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://visits-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: visits-db-mysql\n key: mysql-root-password\n
customers-service.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: customers-service\n labels:\n account: customers-service\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: customers-service\n labels:\n app: customers-service\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: customers-service\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: customers-v1\n labels:\n app: customers-service\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: customers-service\n version: v1\n template:\n metadata:\n labels:\n app: customers-service\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: customers-service\n containers:\n - name: customers-service\n image: ${PULL_IMAGE_REGISTRY}/petclinic-customers-service:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n env:\n - name: SPRING_DATASOURCE_URL\n value: jdbc:mysql://customers-db-mysql.default.svc.cluster.local:3306/service_instance_db\n - name: SPRING_DATASOURCE_USERNAME\n value: root\n - name: SPRING_DATASOURCE_PASSWORD\n valueFrom:\n secretKeyRef:\n name: customers-db-mysql\n key: mysql-root-password\n
petclinic-frontend.yaml ---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: petclinic-frontend\n labels:\n account: petclinic-frontend\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: petclinic-frontend\n labels:\n app: petclinic-frontend\nspec:\n ports:\n - name: http\n port: 8080\n selector:\n app: petclinic-frontend\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: petclinic-frontend-v1\n labels:\n app: petclinic-frontend\n version: v1\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: petclinic-frontend\n version: v1\n template:\n metadata:\n labels:\n app: petclinic-frontend\n version: v1\n annotations:\n prometheus.io/scrape: \"true\"\n prometheus.io/port: \"8080\"\n prometheus.io/path: \"/actuator/prometheus\"\n spec:\n serviceAccountName: petclinic-frontend\n containers:\n - name: petclinic-frontend\n image: ${PULL_IMAGE_REGISTRY}/petclinic-frontend:latest\n imagePullPolicy: Always\n ports:\n - containerPort: 8080\n livenessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/liveness\n initialDelaySeconds: 90\n periodSeconds: 5\n readinessProbe:\n httpGet:\n port: 8080\n path: /actuator/health/readiness\n initialDelaySeconds: 15\n lifecycle:\n preStop:\n exec:\n command: [\"sh\", \"-c\", \"sleep 10\"]\n resources:\n requests:\n cpu: 500m\n memory: 1Gi\n limits:\n memory: 1Gi\n
Apply the deployment manifests:
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n
The manifests reference the image registry environment variable, and so are passed through envsubst
for resolution before being applied to the Kubernetes cluster.
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
kubectl logs --follow svc/customers-service\n
"},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"The below instructions are taken from the output from the prior helm install
command.
Connect directly to the vets-db-mysql
database:
-
Obtain the root password from the Kubernetes secret:
bash shell / fish shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
-
Create, and shell into a mysql client pod:
kubectl run vets-db-mysql-client \\\n --rm --tty -i --restart='Never' \\\n --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n --namespace default \\\n --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n --command -- bash\n
-
Use the mysql
client to connect to the database:
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n
At the mysql prompt:
-
Select the database:
use service_instance_db;\n
-
List the tables:
show tables;\n
-
Query vet records:
select * from vets;\n
Exit the mysql prompt with \\q
, then exit the pod with exit
.
One can similarly connect to and inspect the customers-db-mysql
and visits-db-mysql
databases.
"},{"location":"deploy/#summary","title":"Summary","text":"At this point you should have all applications deployed and running, connected to their respective databases.
But we cannot access the application's UI until we configure ingress, which is our next topic.
"},{"location":"ingress/","title":"Configure Ingress","text":"The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter: Envoy provides those capabilities. The Spring Cloud Gateway dependency was therefore removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system
namespace with:
kubectl get deploy -n istio-system\n
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
gateway.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n name: main-gateway\nspec:\n selector:\n istio: ingressgateway\n servers:\n - port:\n number: 80\n name: http\n protocol: HTTP\n hosts:\n - \"*\"\n
Apply the gateway configuration to your cluster:
kubectl apply -f manifests/ingress/gateway.yaml\n
Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:
curl -v http://$LB_IP/\n
"},{"location":"ingress/#configure-routing","title":"Configure routing","text":"The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml
:
routes.yaml
configures routing for the Istio ingress gateway (which replaces spring cloud gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway
prefix to the petclinic-frontend
application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.
routes.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: petclinic-routes\nspec:\n hosts:\n - \"*\"\n gateways:\n - main-gateway\n http:\n - match:\n - uri:\n prefix: \"/api/customer/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: customers-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/visit/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: visits-service.default.svc.cluster.local\n port:\n number: 8080\n timeout: 4s\n - match:\n - uri:\n prefix: \"/api/vet/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: vets-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/gateway\"\n route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n - route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n
Apply the routing rules for the gateway:
kubectl apply -f manifests/ingress/routes.yaml\n
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
Next, let us explore some of the API endpoints exposed by the PetClinic application.
"},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#distributed-tracing","title":"Distributed Tracing","text":"The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.
In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.
Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.
See the application.yaml
resource files and the property management.tracing.baggage.remote-fields
which configures the fields to propagate.
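As a rough sketch, the relevant configuration looks something like the following; the exact list of propagated fields is an assumption here, so check the repository's application.yaml resources for the authoritative values:
management:\n  tracing:\n    baggage:\n      remote-fields:\n        # assumed header list, based on Istio's standard trace headers\n        - x-request-id\n        - x-b3-traceid\n        - x-b3-spanid\n        - x-b3-parentspanid\n        - x-b3-sampled\n        - x-b3-flags\n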
To make testing this easier, Istio is configured with 100% trace sampling.
"},{"location":"observability/#observe-distributed-traces","title":"Observe distributed traces","text":"In its samples
directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.
Deploy Jaeger to your Kubernetes cluster:
-
Navigate to the base directory of your Istio distribution:
cd istio-1.20.2\n
-
Deploy jaeger:
kubectl apply -f samples/addons/jaeger.yaml\n
-
Wait for the Jaeger pod to be ready:
kubectl get pod -n istio-system\n
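If you prefer a single blocking command, something like the following should work, assuming the sample manifest labels the Jaeger pod with app=jaeger:
kubectl wait --namespace istio-system --for=condition=Ready pod --selector=app=jaeger --timeout=120s\n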
Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:
-
Call the petclinic-frontend
endpoint that calls both the customers
and visits
services. Feel free to make multiple requests to generate multiple traces.
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
-
Launch the jaeger dashboard:
istioctl dashboard jaeger\n
-
In Jaeger, search for traces involving the services petclinic-frontend
, customers
, and visits
.
You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.
Close the Jaeger dashboard.
"},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"Istio has built-in support for Prometheus as a mechanism for metrics collection.
Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.
Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n
Separately, Envoy collects a variety of metrics, often referred to as RED metrics: request Rate, Errors, and Duration.
Inspect the metrics collected and exposed by the Envoy sidecar:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n
One common metric to note is the counter istio_requests_total
:
kubectl exec deploy/customers-v1 -c istio-proxy -- \\\n curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n
Both the application metrics and envoy's metrics are aggregated (merged) and exposed on port 15020:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n
What allows Istio to aggregate both scrape endpoints is a set of annotations placed in the pod template specification for each application, communicating the URL of the Prometheus scrape endpoint.
For example, here are the prometheus annotations for the customers
service.
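As a sketch, these annotations typically look like the following in the pod template metadata (path and port inferred from the scrape command above; check the deployment manifests for the exact values):
annotations:\n  prometheus.io/scrape: \"true\"\n  prometheus.io/path: \"/actuator/prometheus\"\n  prometheus.io/port: \"8080\"\n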
For more information on metrics merging and Prometheus, see the Istio documentation.
"},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"To send a steady stream of requests through the petclinic-frontend
application, we use siege. Feel free to use other tools, or maybe a simple bash while
loop.
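For example, a minimal bash loop against the aggregate endpoint used earlier would do the job:
while true; do curl -s http://$LB_IP/api/gateway/owners/6 > /dev/null; sleep 1; done\n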
Run the following siege
command to send requests to various endpoints in our application:
siege --concurrent=6 --delay=2 --file=./urls.txt\n
Leave the siege command running.
Open a separate terminal in which to run subsequent commands.
"},{"location":"observability/#the-prometheus-dashboard","title":"The Prometheus dashboard","text":"Deploy Prometheus to your Kubernetes cluster:
kubectl apply -f samples/addons/prometheus.yaml\n
The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as \"exemplars.\" The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.
After deploying Prometheus, patch the Prometheus deployment to use the latest version of the image:
kubectl patch deploy -n istio-system prometheus --patch-file=manifests/config/prom-patch.yaml\n
prom-patch.yaml spec:\n template:\n spec:\n containers:\n - name: prometheus-server\n image: prom/prometheus:latest\n
Launch the Prometheus dashboard:
istioctl dash prometheus\n
Here are some PromQL queries you can try out to fetch metrics from Prometheus' metrics store:
-
The number of requests made by petclinic-frontend
to the customers
service:
istio_requests_total{source_app=\"petclinic-frontend\",destination_app=\"customers-service\",reporter=\"source\"}\n
-
A business metric exposed by the application proper: the number of calls to the findPet
method:
petclinic_pet_seconds_count{method=\"findPet\"}\n
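To view traffic as a rate rather than a raw counter, a standard PromQL variation of the first query is:
rate(istio_requests_total{destination_app=\"customers-service\",reporter=\"source\"}[5m])\n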
"},{"location":"observability/#istios-grafana-metrics-dashboards","title":"Istio's Grafana metrics dashboards","text":"Istio provides standard service mesh dashboards, based on the standard metrics collected by Envoy and sent to Prometheus.
Deploy Grafana:
kubectl apply -f samples/addons/grafana.yaml\n
Launch the Grafana dashboard:
istioctl dash grafana\n
Navigate to the dashboards section, where you will see an Istio folder.
Select the Istio service dashboard.
Review the Istio Service Dashboards for the services petclinic-frontend
, vets
, customers
, and visits
.
The dashboard exposes metrics such as the client request volume, client success rate, and client request durations:
"},{"location":"observability/#petclinic-custom-grafana-dashboard","title":"PetClinic custom Grafana dashboard","text":"The version of PetClinic from which this version derives already contained a custom Grafana dashboard.
To import the dashboard into Grafana:
- Navigate to \"Dashboards\"
- Click the \"New\" pulldown button, and select \"Import\"
- Select \"Upload dashboard JSON file\", and select the file
grafana-petclinic-dashboard.json
from the repository's base directory. - Select \"Prometheus\" as the data source
- Finally, click \"Import\"
The top two panels showing request latencies and request volumes are technically now redundant: both are now subsumed by the standard Istio dashboards.
Below those panels are custom application metrics, such as the number of owners, pets, and visits created or updated.
Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.
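If you prefer the command line, an owner can also be created through the ingress gateway. The path below follows the /api/customer/ route configured earlier, but the payload fields are assumptions based on the PetClinic REST API, so adjust them if they differ:
curl -s -X POST http://$LB_IP/api/customer/owners -H \"Content-Type: application/json\" -d '{\"firstName\":\"Ada\",\"lastName\":\"Lovelace\",\"address\":\"10 Downing St\",\"city\":\"London\",\"telephone\":\"1234567890\"}'\n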
"},{"location":"observability/#kiali","title":"Kiali","text":"Kiali is a bespoke \"console\" for Istio Service Mesh. One of the features of Kiali that stands out are the visualizations of requests making their way through the call graph.
-
Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:
siege --concurrent=6 --delay=2 --file=./frontend-urls.txt\n
-
Deploy Kiali:
kubectl apply -f samples/addons/kiali.yaml\n
-
Launch the Kiali dashboard:
istioctl dashboard kiali\n
Select the Graph view and the default
namespace.
The flow of requests through the application's call graph will be rendered.
"},{"location":"resilience/","title":"Resilience","text":"The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.
In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml
configures the equivalent 4s timeout on requests to the visits
service, replacing the previous Resilience4j-based implementation.
timeouts.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: visits\nspec:\n hosts:\n - visits-service.default.svc.cluster.local\n http:\n - route:\n - destination:\n host: visits-service.default.svc.cluster.local\n timeout: 4s\n
Apply the timeout configuration to your cluster:
kubectl apply -f manifests/config/timeouts.yaml\n
The fallback logic in PetClinicController.getOwnerDetails
was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
Here is how to test the behavior:
-
Call visits-service
directly:
bash shell: kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
fish shell: kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits\\?petId=8 | jq\n
Observe the call succeed and return a list of visits for this particular pet.
-
Call the petclinic-frontend
endpoint, and note that for each pet, we see a list of visits:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
-
Edit the deployment manifest for the visits-service
so that the environment variable DELAY_MILLIS
is set to the value \"5000\" (which is 5 seconds). One way to do this is to edit the deployment directly (then save and exit):
kubectl edit deploy visits-v1\n
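Alternatively, a non-interactive way to make the same change, which updates the pod template and triggers a new rollout:
kubectl set env deploy/visits-v1 DELAY_MILLIS=5000\n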
Wait until the new pod has rolled out and become ready.
-
Once the new visits-service
pod reaches Ready status, make the same call again:
bash shell: kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8\n
fish shell: kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits\\?petId=8\n
Observe the 504 (Gateway Timeout) response this time around, because the injected 5-second delay exceeds the 4-second timeout.
-
Call the petclinic-frontend
endpoint once more, and note that for each pet, the list of visits is empty:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place.
-
Tail the logs of petclinic-frontend
and observe a log message indicating the fallback was triggered.
kubectl logs --follow svc/petclinic-frontend\n
Restore the original behavior with no delay: edit the visits-v1
deployment again and set the environment variable value to \"0\".
Let us next turn our attention to security-related configuration.
"},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"Workloads in Istio are assigned a SPIFFE identity.
Authorization policies can be applied that allow or deny access to a service as a function of that identity.
For example, we can restrict access to each database exclusively to its corresponding service, i.e.:
- Only the visits service can access the visits db
- Only the vets service can access the vets db
- Only the customers service can access the customers db
The above policy is specified in the file authorization-policies.yaml
:
authorization-policies.yaml ---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: vets-db-allow-vets-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: vets-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/vets-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: customers-db-allow-customers-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: customers-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/customers-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: visits-db-allow-visits-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: visits-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/visits-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n
The main aspects of each authorization policy are:
- The
selector
identifies the workload to apply the policy to - The
action
in this case is to Allow requests that match the given rules - The
rules
section specifies the source principal
, aka workload identity. - The
to
section applies the policy to requests on port 3306, the port that mysqld
listens on.
"},{"location":"security/#exercise","title":"Exercise","text":" -
Use the previous \"Test database connectivity\" instructions to create a client pod and use it to connect to the \"vets\" database. This operation should succeed: you should be able to see the \"service_instance_db\" database, list its tables, and query them.
-
Apply the authorization policies:
kubectl apply -f manifests/config/authorization-policies.yaml\n
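You can confirm that the three policies were created with:
kubectl get authorizationpolicies\n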
-
Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
-
Verify that the application itself continues to function, since each database is queried only through its associated service.
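For example, the aggregate endpoint used earlier should still return each owner's pets and visits:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n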
"},{"location":"security/#summary","title":"Summary","text":"One problem in the enterprise is enforcing access to data via microservices. Giving another team direct access to data is a well-known anti-pattern, as it couples multiple applications to a specific storage technology, a specific database schema, one that cannot be allowed to evolve without impacting everyone.
With the aid of Istio and workload identity, we can make sure that the manner in which data is stored by a microservice is an entirely internal concern, one that can be modified at a later time, perhaps to use a different storage backend, or perhaps simply to allow for the evolution of the schema without \"breaking all the clients\".
After traffic management, resilience, and security, it is time to discuss the other important facet that service meshes help with: Observability.
"},{"location":"setup/","title":"Setup","text":"Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
"},{"location":"setup/#kubernetes","title":"Kubernetes","text":"Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
Local setup: On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory; 16GB of memory and 6 CPUs seems to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
k3d cluster create my-istio-cluster \\\n --api-port 6443 \\\n --k3s-arg \"--disable=traefik@server:0\" \\\n --port 80:80@loadbalancer \\\n --registry-create my-cluster-registry:0.0.0.0:5010\n
Above, we:
- Disable the default traefik load balancer and configure local port 80 to instead forward to the \"istio-ingressgateway\" load balancer.
- Create a registry we can push to locally on port 5010 that is accessible from the Kubernetes cluster at \"my-cluster-registry:5000\".
Remote setup: Provision a Kubernetes cluster in the cloud of your choice. For example, on GCP:
gcloud container clusters create my-istio-cluster \\\n --cluster-version latest \\\n --machine-type \"e2-standard-2\" \\\n --num-nodes \"3\" \\\n --network \"default\"\n
"},{"location":"setup/#environment-variables","title":"Environment variables","text":"Use envrc-template.sh
as the basis for configuring environment variables.
Be sure to:
- Set the local variable
local_setup
to either \"true\" or \"false\", depending on your choice of a local or remote cluster. - If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.
I highly recommend using direnv
, a convenient way of associating setting environment variables with a specific directory.
If you choose to use direnv
, then the variables can be automatically set by renaming the file to .envrc
and running the command direnv allow
.
"},{"location":"setup/#istio","title":"Istio","text":" -
Follow the Istio documentation's instructions to download Istio.
-
After you have added the istioctl
CLI to your PATH, run the following command to install Istio:
istioctl install -f manifests/istio-install-manifest.yaml\n
The above-referenced configuration manifest configures certain facets of the mesh, namely:
- Setting trace sampling at 100%, for ease of obtaining distributed traces
- Deploying sidecars (envoy proxies) not only alongside workloads, but also in front of mysql databases.
istio-install-manifest.yaml ---\napiVersion: install.istio.io/v1alpha1\nkind: IstioOperator\nspec:\n meshConfig:\n accessLogFile: /dev/stdout # (1)\n extensionProviders:\n - name: otel\n envoyOtelAls:\n service: opentelemetry-collector.istio-system.svc.cluster.local\n port: 4317\n\n components:\n pilot:\n k8s:\n env:\n - name: PILOT_TRACE_SAMPLING # (2)\n value: \"100\"\n resources:\n requests:\n cpu: 10m\n memory: 100Mi\n\n values:\n global:\n proxy:\n resources:\n requests:\n cpu: 10m\n memory: 40Mi\n\n pilot:\n autoscaleEnabled: false\n env:\n PILOT_ENABLE_MYSQL_FILTER: \"true\" # (3)\n\n gateways:\n istio-egressgateway:\n autoscaleEnabled: false\n istio-ingressgateway:\n autoscaleEnabled: false\n
- Turns on sidecar access logging to stdout
- Sets trace sampling to 100% to easily see distributed traces (for testing)
- Enables mysql filter, see protocol selection and env vars
Once Istio is installed, feel free to verify the installation with:
istioctl verify-install\n
In the next section, you will work on deploying the microservices to the default
namespace.
As a final step, label the default
namespace for sidecar injection with:
kubectl label ns default istio-injection=enabled\n
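You can confirm that the label is in place with:
kubectl get namespace default --show-labels\n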
"},{"location":"summary/","title":"Summary","text":"Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
In spring-petclinic-istio
, those dependencies have been removed. What remains inside each service are the dependencies you'd expect to find:
- Spring Boot and actuator are the foundation of modern Spring applications.
- Spring Data JPA and the mysql connector for database access.
- Micrometer for exposing application metrics via a Prometheus endpoint.
- Micrometer-tracing for propagating trace headers through these applications.
"}]}
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 605c7883eddf1a41306b1f8cde8210d050abc61f..11c39cf05be88dfcea29c38cb34eb4e3498e9ec9 100644
GIT binary patch
delta 12
Tcmb=gXOr*d;8;;Kk*yK{7|;Xh
delta 12
Tcmb=gXOr*d;5e2)k*yK{8CnDc