diff --git a/docs/get-started.md b/docs/get-started.md
index 29afe92b..4d5671e5 100644
--- a/docs/get-started.md
+++ b/docs/get-started.md
@@ -27,7 +27,7 @@ import TabItem from '@theme/TabItem';
Aiven provides managed open source services for streaming, storing and analyzing data on all major clouds.
All services run reliably and securely in the clouds of your choice, are observable, and can easily be integrated with each other and with external third-party tools.
-
+
## Discover the Aiven platform
diff --git a/docs/platform/howto/use-aws-privatelinks.md b/docs/platform/howto/use-aws-privatelinks.md
index 1dfe9fbf..add10042 100644
--- a/docs/platform/howto/use-aws-privatelinks.md
+++ b/docs/platform/howto/use-aws-privatelinks.md
@@ -164,7 +164,7 @@ AWS PrivateLink is not supported in:
As a result, PrivateLink connection details are added to the **Connection information**
section on the service.
-
+
It takes a couple of minutes before connectivity is available after
you enable a service component. This is because AWS requires an AWS
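
While you wait, you can check for yourself when the endpoint starts accepting connections. The sketch below is a plain TCP probe, not Aiven or AWS tooling, and the host and port are placeholders for the PrivateLink connection details shown for your service:

```python
# Poll until the PrivateLink endpoint accepts TCP connections; connectivity
# can take a few minutes after a service component is enabled.
import socket
import time

HOST = "my-kafka-myproject.privatelink.aivencloud.com"  # placeholder host
PORT = 12345                                            # placeholder port

deadline = time.monotonic() + 600  # give up after 10 minutes
while time.monotonic() < deadline:
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("PrivateLink endpoint is reachable")
            break
    except OSError as exc:
        print(f"Not reachable yet ({exc}); retrying in 30 seconds")
        time.sleep(30)
else:
    print("Endpoint still not reachable; check the VPC endpoint status in AWS")
```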
diff --git a/docs/platform/howto/vpc-peering-upcloud.md b/docs/platform/howto/vpc-peering-upcloud.md
index 7c9309a8..3e813824 100644
--- a/docs/platform/howto/vpc-peering-upcloud.md
+++ b/docs/platform/howto/vpc-peering-upcloud.md
@@ -95,7 +95,7 @@ The peering becomes active and the traffic is shared only after you create the p
both from the source network and from the target network.
:::
-
+
### Use the UpCloud API{#upcloud-api}
diff --git a/docs/products/kafka/concepts/upgrade-procedure.md b/docs/products/kafka/concepts/upgrade-procedure.md
index a3075182..9d261873 100644
--- a/docs/products/kafka/concepts/upgrade-procedure.md
+++ b/docs/products/kafka/concepts/upgrade-procedure.md
@@ -25,7 +25,7 @@ This example demonstrates the steps in the automated upgrade procedure for
a 3-node Apache Kafka service, visualized below:
-
+
During an upgrade procedure:
@@ -40,7 +40,7 @@ During an upgrade procedure:
1. **Transfer data and leadership:** The partition data and leadership are transferred
to new nodes.
-
+
:::warning
This step is CPU intensive due to the additional data movement
@@ -58,7 +58,7 @@ During an upgrade procedure:
1. **Complete process:** The upgrade is complete once the last old node has been
   removed from the cluster.
-
+
## No downtime during upgrade
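
To see the transfer happen while an upgrade runs, you can poll cluster metadata and watch broker IDs and partition leaders change as old nodes are replaced. This is a minimal sketch using the confluent-kafka Python client, not part of the documented procedure; the service URI and certificate paths are placeholders for your own connection details:

```python
# Print the current broker IDs and the leader of each partition.
# During an upgrade, new broker IDs appear and leadership moves to them
# before the old nodes are removed.
from confluent_kafka.admin import AdminClient

admin = AdminClient({
    "bootstrap.servers": "my-kafka-myproject.aivencloud.com:12345",  # placeholder service URI
    "security.protocol": "SSL",
    "ssl.ca.location": "ca.pem",
    "ssl.certificate.location": "service.cert",
    "ssl.key.location": "service.key",
})

metadata = admin.list_topics(timeout=10)
print("Brokers:", sorted(metadata.brokers))

for name, topic in metadata.topics.items():
    for pid, partition in topic.partitions.items():
        print(f"{name}[{pid}]: leader={partition.leader} "
              f"replicas={partition.replicas} isr={partition.isrs}")
```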
diff --git a/docs/products/kafka/howto/kafka-quix.md b/docs/products/kafka/howto/kafka-quix.md
index e8f94f59..73273849 100644
--- a/docs/products/kafka/howto/kafka-quix.md
+++ b/docs/products/kafka/howto/kafka-quix.md
@@ -40,17 +40,17 @@ To configure and connect Aiven for Apache Kafka® with Quix:
1. Create an environment.
- If you're editing an existing project, open the project settings and click **+ New environment**.
-
+
- Follow the setup wizard until you get to the broker settings.
1. When you get to the broker settings, select **Aiven** as your broker provider.
-
+
1. Configure the required settings:
-
+
- **Service URI**: Enter the service URI for your Apache Kafka service. Find the
  service URI on the **Connection information** page of your service in the Aiven Console
@@ -72,7 +72,7 @@ can deploy in a few clicks.
To test your Aiven for Apache Kafka® connection, you can use the [_Hello Quix_
template](https://quix.io/templates/hello-quix), which is a three-step pipeline:
-
+
1. Click [**Clone this project**](https://portal.platform.quix.io/signup?projectName=Hello%20Quix&httpsUrl=https://github.com/quixio/template-hello-quix&branchName=tutorial).
1. On the **Import Project** screen, click **Quix advanced configuration** to ensure
@@ -85,7 +85,7 @@ template](https://quix.io/templates/hello-quix), which is a three-step pipeline:
In the Quix portal, wait for the services to deploy and their status to become **Running**.
-
+
Ensure the required topics `csv-data` and `counted-names` appear in both Quix
and Aiven. In Aiven, topics that originate from Quix have the Quix workspace