diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 1f239d042d..f3146d4356 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -197,7 +197,7 @@ Howto article template Title template: Start with a verb (e.g. *Connect with Go*, *Install or upgrade an extension*). -:: +.. code:: Article title ############# @@ -227,7 +227,7 @@ Concept article template Title template: *About [subject]* (if this is a background information for a task, e.g. *About migrating to Aiven*) / *Subject* (use noun or noun phrase, e.g. *Authentication*, *High availability*) -:: +.. code:: Article title ############# @@ -255,7 +255,7 @@ Limited availability note template For features that are in the limited availability stage, add the following admonition directly undert the article title: -:: +.. code:: .. important:: {feature name} is a :doc:`limited availability feature `. If you're interested in trying out this feature, contact the sales team at sales@Aiven.io. @@ -266,7 +266,7 @@ Early availability note template For features that are in the early availability stage and can be enabled in the Console, add the following admonition directly under the article title: -:: +.. code:: .. important:: {feature name} is an :doc:`early availability feature `. To use it, :doc:`enable the feature preview ` in your user profile. diff --git a/docs/community/challenge/the-rolling-challenge.rst b/docs/community/challenge/the-rolling-challenge.rst index 7db61ad920..1a7439a705 100644 --- a/docs/community/challenge/the-rolling-challenge.rst +++ b/docs/community/challenge/the-rolling-challenge.rst @@ -60,13 +60,13 @@ The goal is to make sense of the incoming stream of data. 7. Build the Docker image - :: + .. code:: docker build -t fake-data-producer-for-apache-kafka-docker . 8. Run the Docker image - :: + .. code:: docker run fake-data-producer-for-apache-kafka-docker diff --git a/docs/integrations/cloudwatch/cloudwatch-logs-cli.rst b/docs/integrations/cloudwatch/cloudwatch-logs-cli.rst index b7eadc7aea..5efc9a6f8f 100644 --- a/docs/integrations/cloudwatch/cloudwatch-logs-cli.rst +++ b/docs/integrations/cloudwatch/cloudwatch-logs-cli.rst @@ -30,11 +30,13 @@ This is what you'll need to send your logs from the AWS CloudWatch using the :do Configure the integration ------------------------- -1. Open the Aiven client, and log in:: +1. Open the Aiven client, and log in: - avn user login --token + .. code:: + + avn user login --token -.. seealso:: Learn more about :doc:`/docs/tools/cli/user/user-access-token` + .. seealso:: Learn more about :doc:`/docs/tools/cli/user/user-access-token` 2. Collect the following information for the creation of the endpoint between your Aiven account and AWS CloudWatch. These are the placeholders you will need to replace in the code sample: diff --git a/docs/integrations/datadog/datadog-logs.rst b/docs/integrations/datadog/datadog-logs.rst index a152874278..3b2e3c1426 100644 --- a/docs/integrations/datadog/datadog-logs.rst +++ b/docs/integrations/datadog/datadog-logs.rst @@ -43,7 +43,9 @@ Start by configuring the link between Aiven and Datadog for logs. This setup onl * - ``AIVEN_PROJECT_NAME`` - Found in the web console -This is the format to use, replacing the variables listed. Don't edit the values surrounded by ``%`` signs, such as ``%msg%`` as these are used in constructing the log line:: +This is the format to use, replacing the variables listed. Don't edit the values surrounded by ``%`` signs, such as ``%msg%`` as these are used in constructing the log line: + +.. 
code:: DATADOG_API_KEY <%pri%>1 %timestamp:::date-rfc3339% %HOSTNAME%.AIVEN_PROJECT_NAME %app-name% - - - %msg% @@ -51,7 +53,10 @@ An example of the correct format, using an example API key and ``my_project`` as ``01234567890123456789abcdefabcdef <%pri%>1 %timestamp:::date-rfc3339% %HOSTNAME%.my_project %app-name% - - - %msg%`` -.. note:: Metrics and logs are correlated in Datadog by hostname. The metrics integration is currently configured to append the project name to the hostname in order to disambiguate between services that have the same name in different projects. Adding the project name to the hostname in the syslog integration to Datadog assures that they can be correlated again in the Datadog dashboard. Not doing so will not result in missing logs, but the logs that appear in Datadog will miss tags that come from this correlation with the metrics. See https://docs.datadoghq.com/integrations/rsyslog. +.. note:: + + Metrics and logs are correlated in Datadog by hostname. The metrics integration is currently configured to append the project name to the hostname in order to disambiguate between services that have the same name in different projects. Adding the project name to the hostname in the syslog integration to Datadog assures that they can be correlated again in the Datadog dashboard. Not doing so will not result in missing logs, but the logs that appear in Datadog will miss tags that come from this correlation with the metrics. + See the `Datadog documentation `_. 4. Select **Create** to save the endpoint. @@ -63,8 +68,8 @@ Follow the steps in this section for each of the services whose logs should be s 1. From the **Service Overview** page, select **Manage integrations** and choose the **Rsyslog** option. -.. image:: /images/integrations/rsyslog-service-integration.png - :alt: Screenshot of system integrations including rsyslog + .. image:: /images/integrations/rsyslog-service-integration.png + :alt: Screenshot of system integrations including rsyslog 2. Pick the log integration you created earlier from the dropdown and choose **Enable**. diff --git a/docs/integrations/google-bigquery.rst b/docs/integrations/google-bigquery.rst index 0621e9ea21..27e0bbb56a 100644 --- a/docs/integrations/google-bigquery.rst +++ b/docs/integrations/google-bigquery.rst @@ -25,7 +25,7 @@ Step 1: Create integration endpoints * **GCP Project ID**: The identifier associated with your Google Cloud Project where BigQuery is set up. For example, ``my-gcp-project-12345``. * **Google Service Account Credentials**: The JSON formatted credentials obtained from your Google Cloud Console for service account authentication. For example: - :: + .. code:: { "type": "service_account", @@ -51,7 +51,7 @@ Step 1. Create integration endpoints `````````````````````````````````````` To create a new integration endpoint that can be used to connect to a BigQuery service, use the :ref:`avn service integration-endpoint-create ` command with the required parameters. -:: +.. code:: avn service integration-endpoint-create \ --project \ @@ -100,13 +100,13 @@ Step 2: Add your service to the integration endpoint `````````````````````````````````````````````````````` 1. Retrieve the endpoint identifier using the following command: - :: + .. code:: avn service integration-endpoint-list --project your-project-name 2. Using this ``endpoint_id``, connect your Aiven service to the endpoint with the following command: - :: + .. 
code:: avn service integration-create --project your-project-name \ -t external_google_bigquery -s your-service-name \ diff --git a/docs/integrations/rsyslog.rst b/docs/integrations/rsyslog.rst index bbb5a4c5b5..d9a018138d 100644 --- a/docs/integrations/rsyslog.rst +++ b/docs/integrations/rsyslog.rst @@ -25,7 +25,7 @@ Console. Another option is to use the `Aiven Client `__ . -:: +.. code:: avn service integration-endpoint-create --project your-project \     -d example-syslog -t rsyslog \ @@ -97,7 +97,7 @@ integration by clicking **Use integration** in the modal window. Alternately, with the Aiven Client, first you need the id of the endpoint previously created -:: +.. code:: avn service integration-endpoint-list --project your-project ENDPOINT_ID                           ENDPOINT_NAME   ENDPOINT_TYPE @@ -106,7 +106,7 @@ endpoint previously created Then you can link the service to the endpoint -:: +.. code:: avn service integration-create --project your-project \     -t rsyslog -s your-service \ @@ -143,7 +143,7 @@ The Syslog Endpoint to use for ``server`` depends on your account: See the Coralogix `Rsyslog `_ documentation for more information. -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d coralogix -t rsyslog \ @@ -162,7 +162,7 @@ For `Loggly `_ integration, you need to use a custom ``logline`` format with your token. -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d loggly -t rsyslog \ @@ -177,7 +177,7 @@ Mezmo (LogDNA) For `Mezmo `_ syslog integration you need to use a custom ``logline`` format with your key. -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d logdna -t rsyslog \ @@ -202,7 +202,7 @@ The value to use for ``server`` depends on the account location: For more information see `Use TCP endpoint to forward logs to New Relic `_ -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d newrelic -t rsyslog \ @@ -222,7 +222,7 @@ respectively. You **do not need** the ca-bundle as the Papertrail servers use certificates signed by a known CA. You also need to set the format to ``rfc3164`` . -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d papertrail -t rsyslog \ @@ -238,7 +238,7 @@ For `Sumo Logic `_ you need to use a custom ``logline`` format with your collector token, use the server and port of the collector, and replace ``YOUR_DEPLOYMENT`` with one of ``au``, ``ca``, ``de``, ``eu``, ``fed``, ``in``, ``jp``, ``us1`` or ``us2``. See `Cloud Syslog Source `_ for more information. -:: +.. code:: avn service integration-endpoint-create --project your-project \ -d sumologic -t rsyslog \ diff --git a/docs/integrations/rsyslog/loggly.rst b/docs/integrations/rsyslog/loggly.rst index 33ef9b23bd..c1ba7e66ca 100644 --- a/docs/integrations/rsyslog/loggly.rst +++ b/docs/integrations/rsyslog/loggly.rst @@ -6,8 +6,8 @@ systems that support rsyslog protocol, including `Loggly `. +can be done using through Aiven console or command line using +`Aiven CLI _`. 
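For reference, the CLI route uses ``avn service integration-endpoint-create`` with an ``rsyslog`` endpoint type. The sketch below only illustrates the shape of the command: the ``server``, ``port``, and ``logline`` values are placeholders, and the actual Loggly values (including your own token) need to be filled in.

.. code::

   avn service integration-endpoint-create --project your-project \
       -d loggly -t rsyslog \
       -c server=LOGGLY_SERVER -c port=LOGGLY_PORT \
       -c tls=true -c format=custom \
       -c logline='LOGLINE_TEMPLATE_WITH_YOUR_LOGGLY_TOKEN'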
Prerequisites ------------- diff --git a/docs/integrations/rsyslog/logtail.rst b/docs/integrations/rsyslog/logtail.rst index 81c18ea2d4..5d6eb5dc2d 100644 --- a/docs/integrations/rsyslog/logtail.rst +++ b/docs/integrations/rsyslog/logtail.rst @@ -15,9 +15,11 @@ Send Aiven logs to Logtail * **Server**: ``in.logtail.com`` * **Port**: ``6514`` * **Format**: ``custom`` - * Now replace ``YOUR_LOGTAIL_SOURCE_TOKEN`` in the log template below with the token you copied in step 2, and paste into the **Log template** field:: + * Now replace ``YOUR_LOGTAIL_SOURCE_TOKEN`` in the log template below with the token you copied in step 2, and paste into the **Log template** field: - <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [logtail@11993 source_token="YOUR_LOGTAIL_SOURCE_TOKEN"] %msg% + .. code:: + + <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [logtail@11993 source_token="YOUR_LOGTAIL_SOURCE_TOKEN"] %msg% 5. Add your new logs integration to any of your Aiven services (more information :ref:`in the Rsyslog article`) @@ -26,13 +28,15 @@ Send Aiven logs to Logtail Create the Logtail service integration endpoint with Aiven client ----------------------------------------------------------------- -If you would rather use the CLI, you can use the following command to create the service integration endpoint. Replace the placeholder with your token:: +If you would rather use the CLI, you can use the following command to create the service integration endpoint. Replace the placeholder with your token: - avn service integration-endpoint-create --project your-project \ - -d logtail -t rsyslog \ - -c server=in.logtail.com -c port=6514 \ - -c tls=true -c format=custom \ - -c logline='<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [logtail@11993 source_token="TOKEN-FROM-LOGTAIL"] %msg%' +.. code:: + + avn service integration-endpoint-create --project your-project \ + -d logtail -t rsyslog \ + -c server=in.logtail.com -c port=6514 \ + -c tls=true -c format=custom \ + -c logline='<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [logtail@11993 source_token="TOKEN-FROM-LOGTAIL"] %msg%' This replaces steps 3 and 4 above. diff --git a/docs/platform/concepts/beta_services.rst b/docs/platform/concepts/beta_services.rst index 89fd6f26af..0d79ac1e95 100644 --- a/docs/platform/concepts/beta_services.rst +++ b/docs/platform/concepts/beta_services.rst @@ -6,7 +6,7 @@ Before general availability, the lifecycle of new services and features includes Limited availability (private beta) ----------------------------------- -The limited availability (private beta) stage is an initial release of a new functionality that you can try out by invitation only. If you are interested in trying a service or feature in this stage, contact the sales team at sales@Aiven.io. +The limited availability (private beta) stage is an initial release of a new functionality that you can try out by invitation only. If you are interested in trying a service or feature in this stage, contact the `sales team `_ . 
Early availability (public beta) -------------------------------- diff --git a/docs/platform/concepts/disaster-recovery-test-scenarios.rst b/docs/platform/concepts/disaster-recovery-test-scenarios.rst index 23ab6235f8..f834abde45 100644 --- a/docs/platform/concepts/disaster-recovery-test-scenarios.rst +++ b/docs/platform/concepts/disaster-recovery-test-scenarios.rst @@ -8,13 +8,13 @@ These situations are uncommon but can have a huge impact, so they need to be pre What is a Disaster Recovery scenario? ----------------------------------------------------------- -This is a preset scenario where an Aiven specialist will simulate an issue with your service and `sabotage` one (or more) of your Virtual Machines. For example, with an Aiven for PostgreSQL® service, we can `sabotage` the Primary instance and test the failover functionality or we can sabotage both nodes to test recovery time for a complete outage. +This is a preset scenario where an Aiven specialist will simulate an issue with your service and ``sabotage`` one (or more) of your Virtual Machines. For example, with an Aiven for PostgreSQL® service, we can ``sabotage`` the Primary instance and test the failover functionality or we can sabotage both nodes to test recovery time for a complete outage. What is needed? ----------------------------------------------------------- 1. At least 7 working days notice and the time (plus timezone) that you would like this carried out. -2. A `throwaway` service (i.e. one that is created specifically for this scenario and not a service used in Production). +2. A ``throwaway`` service (i.e. one that is created specifically for this scenario and not a service used in Production). 3. The virtual machine and/or the availability zone that you would like to target. 4. An Enterprise Support contract. diff --git a/docs/platform/concepts/service-level-agreement.rst b/docs/platform/concepts/service-level-agreement.rst index 63c282a00f..9a1799dd11 100644 --- a/docs/platform/concepts/service-level-agreement.rst +++ b/docs/platform/concepts/service-level-agreement.rst @@ -1,6 +1,6 @@ Service level agreement ======================= -The Aiven service level agreement (SLA) details can be found at `https://aiven.io/sla `_. +The Aiven service level agreement (SLA) details can be found at `aiven.io/sla `_. Custom SLAs are available for premium plans. Contact us at sales@Aiven.io for more details. diff --git a/docs/platform/howto/download-ca-cert.rst b/docs/platform/howto/download-ca-cert.rst index 45e6ba3e0b..aa2add2149 100644 --- a/docs/platform/howto/download-ca-cert.rst +++ b/docs/platform/howto/download-ca-cert.rst @@ -3,8 +3,10 @@ Download CA certificates If your service needs a CA certificate, download it through the `Aiven Console `_ by accessing the **Overview** page for the specific service. In the **Connection information** section, find **CA Certificate**, and select the download icon in the same line. -Or, you can use the ``avn`` :doc:`command-line tool ` with the following command:: +Or, you can use the ``avn`` :doc:`command-line tool ` with the following command: - avn service user-creds-download --username +.. 
code:: + + avn service user-creds-download --username Read more: :doc:`../concepts/tls-ssl-certificates` diff --git a/docs/platform/howto/integrations/datadog-increase-metrics-limit.rst b/docs/platform/howto/integrations/datadog-increase-metrics-limit.rst index b2fb185e29..42dbb086b1 100644 --- a/docs/platform/howto/integrations/datadog-increase-metrics-limit.rst +++ b/docs/platform/howto/integrations/datadog-increase-metrics-limit.rst @@ -9,7 +9,7 @@ Identify that metrics have been dropped ---------------------------------------- The following is an example log of a large Apache Kafka® service cluster where some metrics are missing and cannot be found in the Datadog dashboards after service integration. These metrics have been dropped by user Telegraf. -:: +.. code:: 2022-02-15T22:47:30.601220+0000 scoober-kafka-3c1132a3-82 user-telegraf: 2022-02-15T22:47:30Z W! [outputs.prometheus_client] Metric buffer overflow; 3378 metrics have been dropped 2022-02-15T22:47:30.625696+0000 scoober-kafka-3c1132a3-86 user-telegraf: 2022-02-15T22:47:30Z W! [outputs.prometheus_client] Metric buffer overflow; 1197 metrics have been dropped @@ -27,13 +27,13 @@ The ``max_jmx_metrics`` is not exposed in the Aiven Console yet, but you can cha 1. Find the ``SERVICE_INTEGRATION_ID`` for your Datadog integration with -:: +.. code:: avn service integration-list --project=PROJECT_NAME SERVICE_NAME 2. Change the value of ``max_jmx_metrics`` to the new LIMIT: -:: +.. code:: avn service integration-update SERVICE_INTEGRATION_ID --project PROJECT_NAME -c max_jmx_metrics=LIMIT diff --git a/docs/platform/howto/pause-from-cli.rst b/docs/platform/howto/pause-from-cli.rst index 623b13a300..70e100d2a2 100644 --- a/docs/platform/howto/pause-from-cli.rst +++ b/docs/platform/howto/pause-from-cli.rst @@ -7,14 +7,14 @@ One option is to power the service off temporarily. This way you can come back a You can update the state of your service either through the service overview page in `Aiven Console `_ or by using Aiven command line interface: -:: +.. code:: avn service update demo-open-search --power-off When you're ready to continue using the service run the command to power it on. Use ``wait`` command to easily see when the service is up and running. -:: +.. code:: avn service update demo-open-search --power-on avn service wait demo-open-search @@ -22,7 +22,7 @@ When you're ready to continue using the service run the command to power it on. If you have finished exploring your OpenSearch® service, you can destroy or "terminate" the service. To terminate the service completely use the following command: -:: +.. code:: avn service terminate demo-open-search diff --git a/docs/platform/howto/private-ip-resolution.rst b/docs/platform/howto/private-ip-resolution.rst index e0e0a11cf0..ad3de40617 100644 --- a/docs/platform/howto/private-ip-resolution.rst +++ b/docs/platform/howto/private-ip-resolution.rst @@ -22,13 +22,13 @@ DNS-rebinding protection on your network. To verify this assumption: ``8.8.8.8``. This has no rebinding protection so serves as a good test. You can use the ``dig`` command: -:: +.. code:: dig +short myservice-myproject.aivencloud.com @8.8.8.8 3. Compare the output of the above command with the response from your default DNS resolver: -:: +.. 
code:: dig +short myservice-myproject.aivencloud.com diff --git a/docs/platform/howto/static-ip-addresses.rst b/docs/platform/howto/static-ip-addresses.rst index b417313f2f..60603ed99c 100644 --- a/docs/platform/howto/static-ip-addresses.rst +++ b/docs/platform/howto/static-ip-addresses.rst @@ -45,7 +45,7 @@ times as you need to create enough IP addresses for your service. Specify the name of the cloud that the IP address should be created in, to match the service that will use it. -:: +.. code:: avn static-ip create --cloud azure-westeurope @@ -95,7 +95,7 @@ Configure service to use static IP Enable static IP addresses for the service by setting the ``static_ips`` user configuration option: -:: +.. code:: avn service update -c static_ips=true my-static-pg @@ -125,13 +125,13 @@ Static IP addresses are removed by first dissociating them from a service, while they are not in use. This returns them back to the ``created`` state to either be associated with another service, or deleted. -:: +.. code:: avn static-ip dissociate ip358375b2765 To delete a static IP: -:: +.. code:: avn static-ip delete ip358375b2765 diff --git a/docs/platform/howto/use-aws-privatelinks.rst b/docs/platform/howto/use-aws-privatelinks.rst index 170e6334c2..1675f4d87f 100644 --- a/docs/platform/howto/use-aws-privatelinks.rst +++ b/docs/platform/howto/use-aws-privatelinks.rst @@ -37,13 +37,13 @@ currently support AWS PrivateLink. - Using the Aiven CLI, run the following command including your AWS account ID, the access scope, and the name of your Aiven service: - :: + .. code:: $ avn service privatelink aws create --principal arn:aws:iam::$AWS_account_ID:$access_scope $Aiven_service_name For example: - :: + .. code:: $ avn service privatelink aws create --principal arn:aws:iam::012345678901:user/mwf my-kafka @@ -70,7 +70,7 @@ currently support AWS PrivateLink. #. In the AWS CLI, run the following command to create a VPC endpoint: - :: + .. code:: $ aws ec2 --region eu-west-1 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id $your_vpc_id --subnet-ids $space_separated_list_of_subnet_ids --security-group-ids $security_group_ids --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-0b16e88f3b706aaf1 @@ -79,7 +79,7 @@ currently support AWS PrivateLink. **Network** > **AWS service name** in `Aiven Console `__ or by running the following command in the Aiven CLI: - :: + .. code:: $ avn service privatelink aws get aws_service_name @@ -111,7 +111,7 @@ currently support AWS PrivateLink. ``user_config.privatelink_access.`` to ``true`` for the components that you want to enable. For example: - :: + .. code:: $ avn service update -c privatelink_access.kafka=true $Aiven_service_name $ avn service update -c privatelink_access.kafka_connect=true $Aiven_service_name @@ -199,7 +199,7 @@ allowed to connect a VPC endpoint: - Use the ``update`` command of the Aiven CLI: - :: + .. code:: # avn service privatelink aws update --principal arn:aws:iam::$AWS_account_ID:$access_scope $Aiven_service_name @@ -224,11 +224,11 @@ Deleting a privatelink connection - Using the Aiven CLI, run the following command: - :: + .. code:: $ avn service privatelink aws delete $Aiven_service_name - :: + .. 
code:: AWS_SERVICE_ID AWS_SERVICE_NAME PRINCIPALS STATE ========================== ======================================================= ================================== ======== diff --git a/docs/platform/howto/vnet-peering-azure.rst b/docs/platform/howto/vnet-peering-azure.rst index ab5fcaf7e7..6486fbcc4e 100644 --- a/docs/platform/howto/vnet-peering-azure.rst +++ b/docs/platform/howto/vnet-peering-azure.rst @@ -43,7 +43,7 @@ as well as the :doc:`Aiven CLI ` to follow this guide. Using the Azure CLI: -:: +.. code:: az account clear az login @@ -56,7 +56,7 @@ If you manage multiple Azure subscriptions, also configure the Azure CLI to default to the correct subscription for the subsequent commands. This is not needed if there's only one subscription: -:: +.. code:: az account set --subscription   @@ -67,7 +67,7 @@ is not needed if there's only one subscription: Create an application object in your AD tenant. Using the Azure CLI, this can be done with: -:: +.. code:: az ad app create --display-name "" --sign-in-audience AzureADMultipleOrgs --key-type Password @@ -84,7 +84,7 @@ Create a service principal for the app object you created. The service principal should be created to the Azure subscription the VNet you wish to peer is located in: -:: +.. code:: az ad sp create --id $user_app_id @@ -97,7 +97,7 @@ shown in the output. 4. set a password for your app object ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -:: +.. code:: az ad app credential reset --id $user_app_id @@ -110,7 +110,7 @@ as ``$user_app_secret`` below This can be found in the Azure portal in "Virtual networks" -> name of your network -> “JSON View” -> "Resource ID", or using -:: +.. code:: az network vnet list @@ -143,7 +143,7 @@ app object and service principal has, you can create a custom role with just that permission. The built-in *Network Contributor* role includes that permission, and can be found using the Azure CLI with -:: +.. code:: az role definition list --name "Network Contributor" @@ -151,7 +151,7 @@ The ``id`` field from the output will be used as ``$network_contributor_role_id`` to assign the service principal that role: -:: +.. code:: az role assignment create --role $network_contributor_role_id --assignee-object-id $user_sp_id --scope $user_vnet_id @@ -170,7 +170,7 @@ from the Project VPC VNet in the Aiven subscription to the VNet from step 5 in your subscription. For this the Aiven app object needs a service principal in your subscription: -:: +.. code:: az ad sp create --id 55f300d4-fc50-4c5e-9222-e90a6e2187fb @@ -192,7 +192,7 @@ permissions. In order to target a network in your subscription with a peering and nothing else, we'll create a this a custom role definition, with only a single action allowing to do that and only that: -:: +.. code:: az role definition create --role-definition '{"Name": "", "Description": "Allows creating a peering to vnets in scope (but not from)", "Actions": ["Microsoft.Network/virtualNetworks/peer/action"], "AssignableScopes": ["/subscriptions/'$user_subscription_id'"]}' @@ -211,7 +211,7 @@ peer with your VNet, assign the role created in the previous step to the Aiven service principal (step 7) with the scope of your VNet (step 5) with -:: +.. code:: az role assignment create --role $aiven_role_id --assignee-object-id $aiven_sp_id --scope $user_vnet_id @@ -223,7 +223,7 @@ The ID of your AD tenant will be needed in the next step. Find it from the Azure portal from "Azure Active Directory" -> "Properties" -> "Directory ID" or with the Azure CLI using -:: +.. 
code:: az account list @@ -247,7 +247,7 @@ your tenant to give it access to the service principal created in step 7 be found with ``avn vpc list`` | Using the Aiven CLI: -:: +.. code:: avn vpc peering-connection create --project-vpc-id $aiven_project_vpc_id --peer-cloud-account $user_subscription_id --peer-resource-group $user_resource_group --peer-vpc $user_vnet_name --peer-azure-app-id $user_app_id --peer-azure-tenant-id $user_tenant_id @@ -263,7 +263,7 @@ peering connection is being set up by the Aiven platform. Run the following command until the state is no longer ``APPROVED`` , but ``PENDING_PEER`` : -:: +.. code:: avn vpc peering-connection get -v --project-vpc-id $aiven_project_vpc_id --peer-cloud-account $user_subscription_id --peer-resource-group $user_resource_group --peer-vpc $user_vnet_name @@ -286,20 +286,20 @@ output is referred to as the ``$aiven_vnet_id`` Log out the Azure user you logged in with in step 1 using -:: +.. code:: az account clear Log in the application object you created with in step 2 to your AD tenant with -:: +.. code:: az login --service-principal -u $user_app_id -p $user_app_secret --tenant $user_tenant_id Log in the same application object to the Aiven AD tenant -:: +.. code:: az login --service-principal -u $user_app_id -p $user_app_secret --tenant $aiven_tenant_id @@ -307,7 +307,7 @@ Now that your application object has a session with both AD tenants, create a peering from your VNet to the VNet in the Aiven subscription with -:: +.. code:: az network vnet peering create --name --remote-vnet $aiven_vnet_id --vnet-name $user_vnet_name --resource-group $user_resource_group --subscription $user_subscription_id --allow-vnet-access @@ -322,7 +322,7 @@ again and creating the peering again after waiting a bit by repeating the commands in this step. If the error message persists, please check the role assignment in step 6 was correct. -:: +.. code:: The client '' with object id '' does not have authorization to perform action 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write' over scope '$user_vnet_id' If access was recently granted, please refresh your credentials. @@ -338,6 +338,6 @@ in the Project VPC can be reached through the peering. To check if the peering connection is ``ACTIVE`` , run the same Aiven CLI ``avn vpc peering-connection get`` command from step 12. In some cases it has taken up to 15 minutes for the state to update: -:: +.. code:: avn vpc peering-connection get -v --project-vpc-id $aiven_project_vpc_id --peer-cloud-account $user_subscription_id --peer-resource-group $user_resource_group --peer-vpc $user_vnet_name diff --git a/docs/platform/reference/service-ip-address.rst b/docs/platform/reference/service-ip-address.rst index 7b0e3cbf2a..8070b820c3 100644 --- a/docs/platform/reference/service-ip-address.rst +++ b/docs/platform/reference/service-ip-address.rst @@ -8,16 +8,18 @@ When a new Aiven service is created, the chosen cloud service provider will dyna .. Note:: - Aiven also offer the ability to define :doc:`static IP addresses ` in case you need them a service. For more information about how to obtain a static IP and assign it to a particular service, please check the :doc:`related guide `. + Aiven also offer the ability to define :doc:`static IP addresses ` in case you need them a service. For more information about how to obtain a static IP and assign it to a particular service, please check the :doc:`related guide `. 
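As a rough sketch of that flow, a static IP is first created in the same cloud as the service, then the service is switched over to static addresses (the created IP also needs to be associated with the service, as described in the guide). The cloud region and service name below are only examples:

.. code::

   # create a static IP in the same cloud region as the service
   avn static-ip create --cloud azure-westeurope

   # enable static IP addresses on the service
   avn service update -c static_ips=true my-static-pg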
If you have your own cloud account and want to keep your Aiven services isolated from the public internet, you can however create a VPC and a peering connection to your own cloud account. For more information on how to setup the VPC peering, check `the related article `_. Default service hostname ------------------------ -When a new service is being provisioned, its hostname is defined as follows:: +When a new service is being provisioned, its hostname is defined as follows: --.aivencloud.com +.. code:: + + -.aivencloud.com where: diff --git a/docs/products/cassandra/concepts/cross-cluster-replication.rst b/docs/products/cassandra/concepts/cross-cluster-replication.rst index 721d0fe559..01a0afb8bb 100644 --- a/docs/products/cassandra/concepts/cross-cluster-replication.rst +++ b/docs/products/cassandra/concepts/cross-cluster-replication.rst @@ -15,11 +15,8 @@ Cross-cluster replication (CCR) is a configuration of Apache Cassandra services Why use CCR ----------- -Improved data availability - CCR improves the disaster recovery capability for your service. Even if one service (cloud provider or region) goes down, your data stays safe and available with the CCR peer, which is another service with a different cloud provider or region. - -Improved performance - Enabling CCR on your service, you can set up your client to interact with the service that is geographically close. The data locality benefit translates into a lower latency and improved data processing performance. +- **Improved data availability**: CCR improves the disaster recovery capability for your service. Even if one service (cloud provider or region) goes down, your data stays safe and available with the CCR peer, which is another service with a different cloud provider or region. +- **Improved performance**: Enabling CCR on your service, you can set up your client to interact with the service that is geographically close. The data locality benefit translates into a lower latency and improved data processing performance. Data flow architecture ---------------------- @@ -86,7 +83,6 @@ CCR setup To make CCR work on your services, you need a cluster comprising two Apache Cassandra services with CCR enabled. On the cluster, you need to issue the CREATE KEYSPACE request, specifying ``NetworkTopologyStrategy`` as a replication strategy along with desired replication factors. .. code-block:: bash - :caption: Example CREATE KEYSPACE test WITH replication = / { / @@ -102,18 +98,13 @@ CCR in action With CCR enabled and configured, Apache Cassandra replicates each write in the keyspace to both services (datacenters) with an appropriate number of copies as per replication factor. -Active-active model - Apache Cassandra uses an active-active model: clients have the choice of reading/writing either from one service or the other. - -Consistency level - The consistency level regulates how many nodes need to confirm they executed an operation for this operation to be considered successfully completed by the client. You can set up the consistency level to one of the allowed consistency level arguments depending on your needs. +- **Active-active model**: Apache Cassandra uses an active-active model: clients have the choice of reading/writing either from one service or the other. +- **Consistency level**: The consistency level regulates how many nodes need to confirm they executed an operation for this operation to be considered successfully completed by the client. 
You can set up the consistency level to one of the allowed consistency level arguments depending on your needs. .. topic:: Examples - * LOCAL_QUORUM consistency level - The read is contained within the service you connect to (completes faster). - * QUORUM consistency level - Replies from nodes of both services are required. The read produces more consistent results but fails if one of the regions is unavailable. + * LOCAL_QUORUM consistency level: The read is contained within the service you connect to (completes faster). + * QUORUM consistency level: Replies from nodes of both services are required. The read produces more consistent results but fails if one of the regions is unavailable. .. seealso:: @@ -125,11 +116,10 @@ Limitations ----------- * It is not possible to connect two existing services to become a CCR pair. - -.. topic:: But you still can + But you still can: - * Create a CCR pair from scratch or - * Add a new region to an existing service (create a new service that replicates from your existing service). + * Create a CCR pair from scratch or, + * Add a new region to an existing service (create a new service that replicates from your existing service). * Enabling CCR on an existing service is only possible if this service has a keyspace that uses ``NetworkTopologyStrategy`` as a replication strategy. * Two CCR services need to use an identical service plan and the same amount of dynamic disk space. diff --git a/docs/products/cassandra/get-started.rst b/docs/products/cassandra/get-started.rst index 63c0985357..b1d7b46501 100644 --- a/docs/products/cassandra/get-started.rst +++ b/docs/products/cassandra/get-started.rst @@ -19,7 +19,7 @@ If you prefer launching a new service from the CLI, `Aiven CLI @@ -47,7 +47,7 @@ Set the `SSL_CERTFILE` environment variable to the location of the *CA Certifica Navigate to the directory of your local Cassandra installation and execute the following from a terminal window: -:: +.. code:: ./cqlsh --ssl -u avnadmin -p diff --git a/docs/products/cassandra/howto/connect-go.rst b/docs/products/cassandra/howto/connect-go.rst index 6974469e79..ae2ac428df 100644 --- a/docs/products/cassandra/howto/connect-go.rst +++ b/docs/products/cassandra/howto/connect-go.rst @@ -21,26 +21,32 @@ Variable Description Pre-requisites '''''''''''''' -Get the ``gocql`` library:: +Get the ``gocql`` library: - go get github.com/gocql/gocql +.. code:: + + go get github.com/gocql/gocql Code '''' 1. Create a new file named ``main.go`` and add the following content: -.. literalinclude:: /code/products/cassandra/connect.go - :language: go + .. literalinclude:: /code/products/cassandra/connect.go + :language: go + + This code first creates a keyspace named ``example_keyspace`` and a table named ``example_go`` that contains an ``id`` and a ``message``. Then, it writes a new + entry into the table with the values ``1`` and ``hello world``. Finally, it reads the entry from the table and prints it. -This code first creates a keyspace named ``example_keyspace`` and a table named ``example_go`` that contains an ``id`` and a ``message``. Then, it writes a new -entry into the table with the values ``1`` and ``hello world``. Finally, it reads the entry from the table and prints it. +2. Execute the following from a terminal window to build an executable: + + .. code:: -2. Execute the following from a terminal window to build an executable:: + go build main.go - go build main.go +3. Run the program with the required flags to pass the necessary connection details: -3. 
Run the program with the required flags to pass the necessary connection details:: + .. code:: - ./main --host --port --user avnadmin --password --ssl-certfile + ./main --host --port --user avnadmin --password --ssl-certfile diff --git a/docs/products/cassandra/howto/disable-cross-cluster-replication.rst b/docs/products/cassandra/howto/disable-cross-cluster-replication.rst index 5ab4aa14e6..87dc2e6def 100644 --- a/docs/products/cassandra/howto/disable-cross-cluster-replication.rst +++ b/docs/products/cassandra/howto/disable-cross-cluster-replication.rst @@ -29,7 +29,7 @@ Prerequisites * Depending on the method you choose to use for disabling CCR * Access to `Aiven Console `_ - * `cURL` CLI tool + * ``cURL`` CLI tool * `Aiven CLI tool `_ * CCR enabled on a pair of Aiven for Apache Cassandra services @@ -93,7 +93,7 @@ You can disable CCR for your Aiven for Apache Cassandra service(s) by calling th .. note:: - In this instruction, the `curl` command line tool is used to interact with Aiven APIs. + In this instruction, the ``curl`` command line tool is used to interact with Aiven APIs. .. tip:: diff --git a/docs/products/cassandra/howto/enable-cross-cluster-replication.rst b/docs/products/cassandra/howto/enable-cross-cluster-replication.rst index 9cb7dd273b..345a46e10f 100644 --- a/docs/products/cassandra/howto/enable-cross-cluster-replication.rst +++ b/docs/products/cassandra/howto/enable-cross-cluster-replication.rst @@ -35,7 +35,7 @@ Prerequisites * Depending on the method you choose to use for enabling CCR * Access to `Aiven Console `_ - * `cURL` CLI tool + * ``cURL`` CLI tool * `Aiven CLI tool `_ * See :ref:`Limitations `. @@ -176,7 +176,7 @@ Using :doc:`Aiven APIs `, you can enable CCR for .. note:: - In this instruction, the `curl` command line tool is used to interact with Aiven APIs. + In this instruction, the ``curl`` command line tool is used to interact with Aiven APIs. .. topic:: Understand parameters to be supplied diff --git a/docs/products/cassandra/howto/manage-cross-cluster-replication.rst b/docs/products/cassandra/howto/manage-cross-cluster-replication.rst index 7090d15e14..267b2ae131 100644 --- a/docs/products/cassandra/howto/manage-cross-cluster-replication.rst +++ b/docs/products/cassandra/howto/manage-cross-cluster-replication.rst @@ -172,7 +172,7 @@ To configure the consistency level in a client library, add an extra parameter o .. topic:: Example:: - In Python, you can specify `consistency_level`` as a parameter for the `SimpleStatement` object. + In Python, you can specify `consistency_level`` as a parameter for the ``SimpleStatement`` object. .. code-block:: bash diff --git a/docs/products/cassandra/howto/use-dsbulk-with-cassandra.rst b/docs/products/cassandra/howto/use-dsbulk-with-cassandra.rst index 8baf2e8c77..671a117049 100644 --- a/docs/products/cassandra/howto/use-dsbulk-with-cassandra.rst +++ b/docs/products/cassandra/howto/use-dsbulk-with-cassandra.rst @@ -40,14 +40,16 @@ In order for ``dsbulk`` to read the security certificate to connect to Aiven ser 1. Go to `Aiven Console `_ and download the certificate from the **Overview** page of your Aiven for Apache Cassandra service. Save the CA certificate in a file called ``cassandra-certificate.pem`` in a directory on the linux system where ``dsbulk`` runs. -2. Run this command line to create a truststore file and import the certificate in it:: +2. Run this command line to create a truststore file and import the certificate in it: + + .. 
code:: - keytool -import -v \ - -trustcacerts \ - -alias CARoot \ - -file cassandra-certificate.pem \ - -keystore client.truststore \ - -storepass KEYSTORE_PASSWORD + keytool -import -v \ + -trustcacerts \ + -alias CARoot \ + -file cassandra-certificate.pem \ + -keystore client.truststore \ + -storepass KEYSTORE_PASSWORD A truststore file called ``client.truststore`` is created in the directory where the ``keytool`` command has been launched. @@ -58,7 +60,9 @@ In order for ``dsbulk`` to read the security certificate to connect to Aiven ser By creating a configuration file, the ``dsbulk`` command line is more readable and it doesn't show passwords in clear text. If you don't create a configuration file, every option must be explicitly provided on the command line. -4. Create a file that contains the connection configuration like the following:: +4. Create a file that contains the connection configuration like the following: + + .. code:: datastax-java-driver { advanced { @@ -89,17 +93,19 @@ Run a ``dsbulk`` command to count records in a Cassandra table Once the configuration file is created, you can run the ``dsbulk``. -1. Navigate to the `bin` subdirectory of the downloaded ``dsbulk`` package. +1. Navigate to the ``bin`` subdirectory of the downloaded ``dsbulk`` package. -2. Run the following command:: +2. Run the following command: - ./dsbulk count \ - -f /full/path/to/conf.file \ - -k baselines \ - -t keyvalue \ - -h HOST \ - -port PORT \ - --log.verbosity 2 + .. code:: + + ./dsbulk count \ + -f /full/path/to/conf.file \ + -k baselines \ + -t keyvalue \ + -h HOST \ + -port PORT \ + --log.verbosity 2 where: @@ -112,7 +118,9 @@ Once the configuration file is created, you can run the ``dsbulk``. Extract data from a Cassandra table in CSV format ------------------------------------------------- -To extract the data from a table, you can use the following command:: +To extract the data from a table, you can use the following command: + +.. code:: ./dsbulk unload \ -f /full/path/to/conf.file \ @@ -128,8 +136,10 @@ This command will extract all records from the table and output in a CSV format Load data into a Cassandra table from a CSV file ------------------------------------------------ -To load data into a Cassandra table, the command line is very similar to the previous command:: +To load data into a Cassandra table, the command line is very similar to the previous command: +.. code:: + ./dsbulk load \ -f /full/path/to/conf.file \ -k baselines \ diff --git a/docs/products/cassandra/howto/use-nosqlbench-with-cassandra.rst b/docs/products/cassandra/howto/use-nosqlbench-with-cassandra.rst index 874c951069..f3eacecbc3 100644 --- a/docs/products/cassandra/howto/use-nosqlbench-with-cassandra.rst +++ b/docs/products/cassandra/howto/use-nosqlbench-with-cassandra.rst @@ -42,8 +42,10 @@ Create a schema and load data Nosqlbench can be used to create a sample schema and load data. -The schema can be created with the following command, after substituting the placeholders for ``HOST``, ``PORT``, ``PASSWORD`` and ``SSL_CERTFILE``:: +The schema can be created with the following command, after substituting the placeholders for ``HOST``, ``PORT``, ``PASSWORD`` and ``SSL_CERTFILE``: +.. code:: + ./nb run \ host=HOST \ port=PORT \ @@ -94,11 +96,15 @@ Nosqlbench uses workflows to define the load activity. You can define your own w Check the workflow details ~~~~~~~~~~~~~~~~~~~~~~~~~~ -To check the details of the several predefined workloads and activities, you can dump the definition to a file. 
To have the list of all the pre-compiled workloads execute:: +To check the details of the several predefined workloads and activities, you can dump the definition to a file. To have the list of all the pre-compiled workloads execute: + +.. code:: ./nb --list-workloads -The above command will generate the list of pre-compiled workloads like:: +The above command will generate the list of pre-compiled workloads like: + +.. code:: # An IOT workload with more optimal settings for DSE /activities/baselines/cql-iot-dse.yaml @@ -110,7 +116,9 @@ The above command will generate the list of pre-compiled workloads like:: /activities/baselines/cql-keyvalue.yaml -To edit a particular workload file locally, you execute the following, replacing the placeholder ``WORKLOAD_NAME`` with the name of the workload:: +To edit a particular workload file locally, you execute the following, replacing the placeholder ``WORKLOAD_NAME`` with the name of the workload: + +.. code:: ./nb --copy WORKLOAD_NAME @@ -124,8 +132,10 @@ Create your own workload Workload files can be modified and then executed with ``nb`` using the command option ``workload=WORKLOAD_NAME``. The tool expects the file ``WORKLOAD_NAME.yaml`` to be in the same directory of the ``nb`` command. -If you create the file called ``my-workload.yaml`` in the same directory of ``nb`` command, the new workload can be run with this command line:: +If you create the file called ``my-workload.yaml`` in the same directory of ``nb`` command, the new workload can be run with this command line: +.. code:: + ./nb run \ driver=cql \ workload=my-workload diff --git a/docs/products/cassandra/howto/zdm-proxy.rst b/docs/products/cassandra/howto/zdm-proxy.rst index 3339e98218..34512e7e24 100644 --- a/docs/products/cassandra/howto/zdm-proxy.rst +++ b/docs/products/cassandra/howto/zdm-proxy.rst @@ -74,7 +74,7 @@ Check if the binary has been downloaded successfully using ``ls`` in the relevan Run ZDM Proxy ''''''''''''' -To run ZDM Proxy, specify connection information by setting ZDM_* environment variables using the ``export`` command. Next, run the binary. +To run ZDM Proxy, specify connection information by setting ``ZDM_*`` environment variables using the ``export`` command. Next, run the binary. .. code-block:: bash diff --git a/docs/products/clickhouse/howto/configure-tiered-storage.rst b/docs/products/clickhouse/howto/configure-tiered-storage.rst index 31e7c9d9a8..a983a41aaf 100644 --- a/docs/products/clickhouse/howto/configure-tiered-storage.rst +++ b/docs/products/clickhouse/howto/configure-tiered-storage.rst @@ -16,7 +16,7 @@ You may want to change this default data distribution behavior by :ref:`configur To enable this time-based data distribution mechanism, you can set up a retention policy (threshold) on a table level by using the TTL clause. 
For data retention control purposes, the TTL clause uses the following: -* Data item of the `Date` or `DateTime` type as a reference point in time +* Data item of the ``Date`` or ``DateTime`` type as a reference point in time * INTERVAL clause as a time period to elapse between the reference point and the data transfer to object storage Prerequisites diff --git a/docs/products/clickhouse/howto/connect-with-clickhouse-cli.rst b/docs/products/clickhouse/howto/connect-with-clickhouse-cli.rst index 61fd2672eb..29486d0a50 100644 --- a/docs/products/clickhouse/howto/connect-with-clickhouse-cli.rst +++ b/docs/products/clickhouse/howto/connect-with-clickhouse-cli.rst @@ -56,7 +56,9 @@ Alternatively, sometimes you might want to run individual queries and be able to --secure \ --query="YOUR SQL QUERY GOES HERE" -Similar to above example, you can request the list of present databases directly:: +Similar to above example, you can request the list of present databases directly: + +.. code:: docker run --interactive \ --rm clickhouse/clickhouse-server clickhouse-client \ diff --git a/docs/products/clickhouse/howto/connect-with-jdbc.rst b/docs/products/clickhouse/howto/connect-with-jdbc.rst index 75fa27a566..11a209ee1a 100644 --- a/docs/products/clickhouse/howto/connect-with-jdbc.rst +++ b/docs/products/clickhouse/howto/connect-with-jdbc.rst @@ -17,11 +17,15 @@ Variable Description Connection string -------------------- -Replace ``CLICKHOUSE_HTTPS_HOST`` and ``CLICKHOUSE_HTTPS_PORT`` with your connection values:: +Replace ``CLICKHOUSE_HTTPS_HOST`` and ``CLICKHOUSE_HTTPS_PORT`` with your connection values: - jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT +.. code:: + + jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT -You'll also need to provide user name and password to establish the connection. For example, if you use Java:: +You'll also need to provide user name and password to establish the connection. For example, if you use Java: - Connection connection = dataSource.getConnection("CLICKHOUSE_USER", "CLICKHOUSE_PASSWORD"); +.. code:: + + Connection connection = dataSource.getConnection("CLICKHOUSE_USER", "CLICKHOUSE_PASSWORD"); diff --git a/docs/products/clickhouse/howto/load-dataset.rst b/docs/products/clickhouse/howto/load-dataset.rst index 24d240bbb1..9377319e46 100644 --- a/docs/products/clickhouse/howto/load-dataset.rst +++ b/docs/products/clickhouse/howto/load-dataset.rst @@ -115,7 +115,7 @@ You should now see the two tables in your database and you are ready to try out Run queries ----------- -Once the data is loaded, you can run queries against the sample data you imported. For example, here is a command to query the number of items in the `hits_v1` table: +Once the data is loaded, you can run queries against the sample data you imported. For example, here is a command to query the number of items in the ``hits_v1`` table: .. code:: sql diff --git a/docs/products/clickhouse/howto/manage-databases-tables.rst b/docs/products/clickhouse/howto/manage-databases-tables.rst index 250c270963..bb687eb1cf 100644 --- a/docs/products/clickhouse/howto/manage-databases-tables.rst +++ b/docs/products/clickhouse/howto/manage-databases-tables.rst @@ -8,7 +8,7 @@ Databases and tables are at the core of any Database Management System. 
ClickHou Create a database ----------------- -Creating databases in an Aiven for ClickHouse service can only be done via the Aiven platform; the `admin` user is not allowed to create databases directly for security and reliability reasons. However, you can create a new database through the web interface of `Aiven console `_: +Creating databases in an Aiven for ClickHouse service can only be done via the Aiven platform; the ``admin`` user is not allowed to create databases directly for security and reliability reasons. However, you can create a new database through the web interface of `Aiven console `_: 1. Log in to the `Aiven Console `_, and select your service from the **Services** page. 2. In your service's page, select **Databases and tables** from the sidebar. diff --git a/docs/products/clickhouse/howto/manage-users-roles.rst b/docs/products/clickhouse/howto/manage-users-roles.rst index 84b818ccd7..1ae7f5780c 100644 --- a/docs/products/clickhouse/howto/manage-users-roles.rst +++ b/docs/products/clickhouse/howto/manage-users-roles.rst @@ -41,9 +41,11 @@ This article shows you examples of how to create roles and grant privileges. The Create a new role ^^^^^^^^^^^^^^^^^ -To create a new role named `auditor`, run the following command:: +To create a new role named `auditor`, run the following command: - CREATE ROLE auditor; +.. code:: + + CREATE ROLE auditor; You can find more information `on role creation here `_. @@ -52,7 +54,7 @@ Grant permissions You can grant permissions both to specific roles and to individual users. The grants can be also granular, targeting specific databases, tables, columns, or rows. -For example, the following request grants the `auditor` role permissions to select data from the `transactions` database:: +For example, the following request grants the ``auditor`` role permissions to select data from the ``transactions`` database:: GRANT SELECT ON transactions.* TO auditor; @@ -64,7 +66,7 @@ Or to particular columns of a table:: GRANT SELECT(date,description,amount) ON transactions.expenses TO auditor -To grant the `auditor` and `external` roles to several users, run:: +To grant the ``auditor`` and ``external`` roles to several users, run:: GRANT auditor, external TO Mary.Anderson, James.Miller; @@ -93,13 +95,15 @@ Set roles A single user can be assigned different roles, either individually or simultaneously. -:: +.. code:: SET ROLE auditor; -You can also specify a role to be activated by default when the user logs in:: +You can also specify a role to be activated by default when the user logs in: - SET DEFAULT ROLE auditor, external TO Mary.Anderson, James.Miller; +.. code:: + + SET DEFAULT ROLE auditor, external TO Mary.Anderson, James.Miller; Delete a role ^^^^^^^^^^^^^ @@ -128,11 +132,11 @@ Run the following commands to see all available grants, users, and roles:: SHOW GRANTS; -:: +.. code:: SHOW USERS; -:: +.. code:: SHOW ROLES; diff --git a/docs/products/clickhouse/howto/run-federated-queries.rst b/docs/products/clickhouse/howto/run-federated-queries.rst index 07ccf2ccd0..3c53b060ea 100644 --- a/docs/products/clickhouse/howto/run-federated-queries.rst +++ b/docs/products/clickhouse/howto/run-federated-queries.rst @@ -68,7 +68,7 @@ Query using SELECT and the s3Cluster function ''''''''''''''''''''''''''''''''''''''''''''' The ``s3Cluster`` function allows all cluster nodes to participate in the query execution. 
-Using `default` for the cluster name parameter, we can compute the same aggregations as above as follows: +Using ``default`` for the cluster name parameter, we can compute the same aggregations as above as follows: .. code-block:: sql diff --git a/docs/products/clickhouse/reference/limitations.rst b/docs/products/clickhouse/reference/limitations.rst index bbdd1732ca..a7622fc514 100644 --- a/docs/products/clickhouse/reference/limitations.rst +++ b/docs/products/clickhouse/reference/limitations.rst @@ -33,7 +33,7 @@ From the information about restrictions on using Aiven for ClickHouse, you can e * Some special table engines and the Log engine are not supported in Aiven for ClickHouse. - * Some engines are remapped to their `Replicated` alternatives, for example, `MergeTree` -> `ReplicatedMergeTree`. + * Some engines are remapped to their ``Replicated`` alternatives, for example, ``MergeTree`` **>** ``ReplicatedMergeTree``. - * For storing data, use the `Buffer engine `_ instead of the Log engine. * Use the available table engines listed in :doc:`Supported table engines in Aiven for ClickHouse `. @@ -45,10 +45,10 @@ From the information about restrictions on using Aiven for ClickHouse, you can e - \- * - Querying all shards at once - If you have a sharded plan, you must use a Distributed view on top of your MergeTree table to query all the shards at the same time, and you should use it for inserts too. - - Use the `Distributed` view with sharded plans. + - Use the ``Distributed`` view with sharded plans. * - ON CLUSTER queries - Aiven for ClickHouse doesn't support ON CLUSTER queries because it actually runs each data definition query on all the servers of the cluster without using `ON CLUSTER`. - - Run queries without `ON CLUSTER`. + - Run queries without ``ON CLUSTER``. * - Creating a database using SQL - You cannot create a database directly using SQL, for example, if you'd like to add a non-default database. - Use the Aiven's public API. diff --git a/docs/products/flink/concepts/managed-service-features.rst b/docs/products/flink/concepts/managed-service-features.rst index 5c25b375d1..7d446471dc 100644 --- a/docs/products/flink/concepts/managed-service-features.rst +++ b/docs/products/flink/concepts/managed-service-features.rst @@ -19,7 +19,7 @@ By default, each TaskManager is configured with a single slot for maximum job is Cluster restart strategy ------------------------ -The default restart strategy of the cluster is set to `Failure Rate`. This controls how Apache Flink restarts in case of failures during job execution. Administrators can change this setting in the advanced configuration options of the service. +The default restart strategy of the cluster is set to ``Failure Rate``. This controls how Apache Flink restarts in case of failures during job execution. Administrators can change this setting in the advanced configuration options of the service. For more information on available options, refer to `Apache Flink fault tolerance `_ documentation. diff --git a/docs/products/flink/howto/connect-bigquery.rst b/docs/products/flink/howto/connect-bigquery.rst index ba1eebe75f..6699166df0 100644 --- a/docs/products/flink/howto/connect-bigquery.rst +++ b/docs/products/flink/howto/connect-bigquery.rst @@ -26,13 +26,13 @@ Step 1: Create or use an Aiven for Apache Flink service You can use an existing Aiven for Apache Flink service. To get a list of all your existing Flink services, use the following command: -:: +.. 
code:: avn service list --project --service-type flink Alternatively, if you need to create a new Aiven for Apache Flink service, you can use the following command: -:: +.. code:: avn service create -t flink -p --cloud @@ -56,7 +56,7 @@ Step 3: Create an external Google BigQuery endpoint `````````````````````````````````````````````````````` To integrate Google BigQuery with Aiven for Apache Flink, you need to create an external BigQuery endpoint. You can use the :ref:`avn service integration-endpoint-create ` command with the required parameters. This command will create a new integration endpoint that can be used to connect to a BigQuery service. -:: +.. code:: avn service integration-endpoint-create \ --project \ @@ -104,7 +104,7 @@ where: **Aiven CLI Example: Creating an external BigQuery integration endpoint** -:: +.. code:: avn service integration-endpoint-create --project aiven-test --endpoint-name my-bigquery-endpoint --endpoint-type external_bigquery @@ -130,7 +130,7 @@ Step 4: Create an integration for Google BigQuery ````````````````````````````````````````````````````` Now, create an integration between your Aiven for Apache Flink service and your BigQuery endpoint: -:: +.. code:: avn service integration-create --source-endpoint-id @@ -139,7 +139,7 @@ Now, create an integration between your Aiven for Apache Flink service and your For example, -:: +.. code:: avn service integration-create --source-endpoint-id eb870a84-b91c-4fd7-bbbc-3ede5fafb9a2 @@ -159,13 +159,13 @@ After creating the integration between Aiven for Apache Flink and and Google Big To verify that the integration has been created successfully, run the following command: -:: +.. code:: avn service integration-list --project For example: -:: +.. code:: avn service integration-list --project systest-project flink-1 @@ -220,7 +220,7 @@ If you're using Google BigQuery for your data storage and analysis, you can seam * **GCP Project ID**: The identifier associated with your Google Cloud Project where BigQuery is set up. For example, ``my-gcp-project-12345``. * **Google Service Account Credentials**: The JSON formatted credentials obtained from your Google Cloud Console for service account authentication. For example: - :: + .. code:: { "type": "service_account", diff --git a/docs/products/flink/howto/connect-kafka.rst b/docs/products/flink/howto/connect-kafka.rst index f9bdc8752c..bb062abadb 100644 --- a/docs/products/flink/howto/connect-kafka.rst +++ b/docs/products/flink/howto/connect-kafka.rst @@ -25,7 +25,7 @@ To create a Apache Flink® table based on an Aiven for Apache Kafka® topic via 5. In the **Add new source table** or **Edit source table** screen, select the Aiven for Apache Kafka service as the integrated service. 6. In the **Table SQL** section, enter the SQL statement below to create the Apache Kafka-based Apache Flink: - :: + .. code:: CREATE TABLE kafka ( diff --git a/docs/products/flink/howto/connect-pg.rst b/docs/products/flink/howto/connect-pg.rst index 5cff322eed..84034fb91b 100644 --- a/docs/products/flink/howto/connect-pg.rst +++ b/docs/products/flink/howto/connect-pg.rst @@ -50,7 +50,7 @@ Example: Define a Flink table over a PostgreSQL® table The Aiven for PostgreSQL® service named ``pg-demo`` contains a table named ``students`` in the ``public`` schema with the following structure: -:: +.. 
code:: CREATE TABLE students_tbl ( student_id INT, diff --git a/docs/products/flink/howto/datagen-connector.rst b/docs/products/flink/howto/datagen-connector.rst index 548a9d5141..6217232eff 100644 --- a/docs/products/flink/howto/datagen-connector.rst +++ b/docs/products/flink/howto/datagen-connector.rst @@ -17,7 +17,7 @@ To configure DataGen as the source using the DataGen built-in connector for Apac 4. Select **Add new table** or select **Edit** if you want to edit an existing source table. 5. In the **Table SQL** section of the **Add new source table** or **Edit source table** screen, set the connector to **datagen** as shown in the example below: -:: +.. code:: CREATE TABLE `gen_me` ( diff --git a/docs/products/flink/howto/pg-cdc-connector.rst b/docs/products/flink/howto/pg-cdc-connector.rst index 79de4cb797..7dd0bf27ca 100644 --- a/docs/products/flink/howto/pg-cdc-connector.rst +++ b/docs/products/flink/howto/pg-cdc-connector.rst @@ -28,7 +28,8 @@ In addition to the above, gather the following information about the source Post * ``Decoding plugin name``: The decoding plugin name to use for capturing the changes. For PostgreSQL CDC, set it as ``pgoutput``. .. important:: - To create a PostgreSQL CDC source connector in Aiven for Apache Flink with Aiven for PostgreSQL using the pgoutput plugin, you need to have superuser privileges. For more information, see :ref:`Troubleshooting`. + To create a PostgreSQL CDC source connector in Aiven for Apache Flink with Aiven for PostgreSQL using the pgoutput plugin, you need to have superuser privileges. + For more information, see :ref:`Troubleshooting`. Configure the PostgreSQL CDC connector diff --git a/docs/products/flink/howto/slack-connector.rst b/docs/products/flink/howto/slack-connector.rst index e156cad40b..9183919cb9 100644 --- a/docs/products/flink/howto/slack-connector.rst +++ b/docs/products/flink/howto/slack-connector.rst @@ -30,7 +30,7 @@ To configure Slack as the target using the Slack connector for Apache Flink, fol 5. In the **Table SQL** section, set the connector to **slack** and enter the necessary token as shown in the example below: -:: +.. code:: CREATE TABLE channel_name ( channel_id STRING, diff --git a/docs/products/flink/howto/timestamps_opensearch.rst b/docs/products/flink/howto/timestamps_opensearch.rst index d942da070d..8223c600b4 100644 --- a/docs/products/flink/howto/timestamps_opensearch.rst +++ b/docs/products/flink/howto/timestamps_opensearch.rst @@ -18,7 +18,7 @@ Define Apache Flink® target tables including timestamps for OpenSearch® When the result of the data pipeline contains a timestamp column like the below: -:: +.. code:: EVENT_TIME TIMESTAMP(3), HOSTNAME STRING, @@ -26,7 +26,7 @@ When the result of the data pipeline contains a timestamp column like the below: to push the data correctly to an OpenSearch® index, you'll need to set the target column format as ``STRING`` in the Flink table definition, like: -:: +.. code:: EVENT_TIME STRING, HOSTNAME STRING, @@ -34,7 +34,7 @@ to push the data correctly to an OpenSearch® index, you'll need to set the targ and, assuming the ``EVENT_TIME`` is a timestamp, you'll need to specify it in the format understood by OpenSearch® using the ``DATE_FORMAT`` function, like: -:: +.. 
code:: DATE_FORMAT(EVENT_TIME, 'yyyy/MM/dd HH:mm:ss') diff --git a/docs/products/grafana/howto/rotating-grafana-service-credentials.rst b/docs/products/grafana/howto/rotating-grafana-service-credentials.rst index e389ef2ff7..9a38f787ef 100644 --- a/docs/products/grafana/howto/rotating-grafana-service-credentials.rst +++ b/docs/products/grafana/howto/rotating-grafana-service-credentials.rst @@ -17,18 +17,22 @@ and to have installed ``avn``, the `Aiven CLI tool \ - + .. code:: - For example :: + avn service user-password-reset \ + --username avnadmin \ + --new-password \ + - avn service user-password-reset \ - --username avnadmin \ - --new-password my_super_secure_password \ - my-grafana-service + For example: + + .. code:: + + avn service user-password-reset \ + --username avnadmin \ + --new-password my_super_secure_password \ + my-grafana-service 6. Refresh the Aiven Console and the new password should now be displayed for the ``avnadmin`` user. diff --git a/docs/products/grafana/howto/send-emails.rst b/docs/products/grafana/howto/send-emails.rst index b11ce24a2f..21336f3e3b 100644 --- a/docs/products/grafana/howto/send-emails.rst +++ b/docs/products/grafana/howto/send-emails.rst @@ -28,9 +28,11 @@ Configure the SMTP server for Grafana To configure the Aiven for Grafana service: -1. Open the Aiven client, and log in:: +1. Open the Aiven client, and log in: - avn user login --token + .. code:: + + avn user login --token 2. configure the service using your own SMTP values:: diff --git a/docs/products/influxdb/howto/migrate-data-self-hosted-influxdb-aiven.rst b/docs/products/influxdb/howto/migrate-data-self-hosted-influxdb-aiven.rst index 8b5da39607..922667edc9 100644 --- a/docs/products/influxdb/howto/migrate-data-self-hosted-influxdb-aiven.rst +++ b/docs/products/influxdb/howto/migrate-data-self-hosted-influxdb-aiven.rst @@ -14,7 +14,7 @@ Create the data export file To export the data from a self-hosted InfluxDB service, you will first need to run the ``influx_inspect export`` command. This command will create a dump file of your data. The following is an example command: -:: +.. code:: influx_inspect export -datadir "/var/lib/influxdb/data" -waldir "/var/lib/influxdb/wal" -out "/scratch/weather.influx.gz" -database weather -compress @@ -23,7 +23,7 @@ where, * ``-datadir`` and ``-waldir`` : specifies the directories where your data and write-ahead log files are stored, respectively. These paths may differ on your system, so double-check your settings before running the command. * ``-out``: specifies where the export file will be saved. -* ``-database``: specifies which database you want to export. In this example, the database named `weather` is being exported. +* ``-database``: specifies which database you want to export. In this example, the database named ``weather`` is being exported. * ``-compress``: implies the command to compress the data. If you have a large database and only need a specific part of the data, you can optionally define a time span using the ``-start`` and ``-end`` switches to reduce the dump size. This will make the export process faster and take up less space. @@ -44,9 +44,10 @@ Now that you have successfully created the export file, you can proceed to impor The ``avnadmin`` admin user does not have full superuser access, so it is necessary to pre-create the database before transferring the data. 3. **Import the data:** You can now push the exported data to the destination Aiven service using the ``influx -import`` command. 
You will need to specify the host, port, username, and password of the Aiven for InfluxDB service and the path to the exported data. The following is an example command: -:: - influx -import -host influx-testuser-business-demo.aivencloud.com -port 12691 -username 'avnadmin' -password 'secret' -ssl -precision rfc3339 -compressed -path ./weather.influx.gz + .. code:: + + influx -import -host influx-testuser-business-demo.aivencloud.com -port 12691 -username 'avnadmin' -password 'secret' -ssl -precision rfc3339 -compressed -path ./weather.influx.gz where, diff --git a/docs/products/kafka/concepts/acl.rst b/docs/products/kafka/concepts/acl.rst index c41b6eddd6..b94f78614b 100644 --- a/docs/products/kafka/concepts/acl.rst +++ b/docs/products/kafka/concepts/acl.rst @@ -33,7 +33,7 @@ Examples: .. Warning:: - By default, Aiven adds an ``avnadmin`` service user to every new service and adds `admin` permission for all topics to that user. When you create your own ACLs to restrict access, you probably want to remove this ACL entry. + By default, Aiven adds an ``avnadmin`` service user to every new service and adds ``admin`` permission for all topics to that user. When you create your own ACLs to restrict access, you probably want to remove this ACL entry. .. Note:: diff --git a/docs/products/kafka/concepts/log-compaction.rst b/docs/products/kafka/concepts/log-compaction.rst index bcfc611372..19538e7e51 100644 --- a/docs/products/kafka/concepts/log-compaction.rst +++ b/docs/products/kafka/concepts/log-compaction.rst @@ -13,7 +13,7 @@ An Apache Kafka topic represents a continuous stream of messages that typically For example, if there is a topic containing a user's home address, on every update, a message is sent using ``user_id`` as the primary key and home address as the value: -:: +.. code:: 1001 -> "4 Privet Dr" 1002 -> "221B Baker Street" @@ -158,10 +158,10 @@ The compaction thread then scans the **tail**, removing every record having a ke - Key - Value * - 1 - - 1001 :bdg-secondary:`delete` + - 1001 (``delete``) - 4 Privet Dr * - 2 - - 1002 :bdg-secondary:`delete` + - 1002 (``delete``) - 221B Baker Street * - 3 - 1003 diff --git a/docs/products/kafka/concepts/non-leader-for-partition.rst b/docs/products/kafka/concepts/non-leader-for-partition.rst index c625f48559..231ddc92e0 100644 --- a/docs/products/kafka/concepts/non-leader-for-partition.rst +++ b/docs/products/kafka/concepts/non-leader-for-partition.rst @@ -5,7 +5,7 @@ Aiven continuously monitors services to ensure they are healthy; if problems ari The exact error message depends on your client library and log formatting, but should be similar to the following: -:: +.. code:: [2021-02-04 09:01:20,118] WARN [Producer clientId=test-producer] Received invalid metadata error in produce request on partition topic1-25 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender) diff --git a/docs/products/kafka/howto/configure-with-kafka-cli.rst b/docs/products/kafka/howto/configure-with-kafka-cli.rst index 5af97e8e18..2b3c6c72a3 100644 --- a/docs/products/kafka/howto/configure-with-kafka-cli.rst +++ b/docs/products/kafka/howto/configure-with-kafka-cli.rst @@ -18,7 +18,7 @@ The same can be achieved using the ``kafka-topics.sh`` script included in the `A 4. Create a :doc:`client configuration file ` pointing at the keystore and truststore created at the previous steps 5. 
Run the following command to check the connectivity to the Aiven for Apache Kafka service, replacing the ```` with the URI of the service available in the `Aiven Console `_. - :: + .. code:: ./kafka-topics.sh \ --bootstrap-server \ @@ -31,7 +31,7 @@ The same can be achieved using the ``kafka-topics.sh`` script included in the `A 6. Run the following command to create a new topic named ``new-test-topic`` with a retention rate of 30 minutes. Use the kafka-topics script for this and set the retention value in milliseconds ``((100 * 60) * 30 = 180000)``. - :: + .. code:: ./kafka-topics.sh \ --bootstrap-server \ diff --git a/docs/products/kafka/howto/create-topics-automatically.rst b/docs/products/kafka/howto/create-topics-automatically.rst index c9bc7ede46..01b3921aa7 100644 --- a/docs/products/kafka/howto/create-topics-automatically.rst +++ b/docs/products/kafka/howto/create-topics-automatically.rst @@ -34,6 +34,6 @@ The :ref:`Aiven CLI service update command ` enables to You can enable the automatic creation of topics on an existing Aiven for Apache Kafka service by using the `Aiven CLI service update `command. Set the ``auto_create_topics_enable`` parameter to ``true`` with the following command, replacing ``SERVICE_NAME`` with the name of your service: -:: +.. code:: avn service update SERVICE_NAME -c kafka.auto_create_topics_enable=true diff --git a/docs/products/kafka/howto/fake-sample-data.rst b/docs/products/kafka/howto/fake-sample-data.rst index dae99dbb91..add5d79142 100644 --- a/docs/products/kafka/howto/fake-sample-data.rst +++ b/docs/products/kafka/howto/fake-sample-data.rst @@ -17,7 +17,7 @@ To learn data streaming, you need a continuous flow of data and for that you can 1. Clone the repository: -:: +.. code:: git clone https://github.com/aiven/fake-data-producer-for-apache-kafka-docker @@ -25,7 +25,7 @@ To learn data streaming, you need a continuous flow of data and for that you can 3. Create a new access token via the `Aiven Console `_ or the following command in the :doc:`Aiven CLI `, changing the ``max-age-seconds`` appropriately for the duration of your test: -:: +.. code:: avn user access-token create \ --description "Token used by Fake data generator" \ @@ -46,7 +46,7 @@ To learn data streaming, you need a continuous flow of data and for that you can 5. Build the Docker image with: -:: +.. code:: docker build -t fake-data-producer-for-apache-kafka-docker . @@ -56,7 +56,7 @@ To learn data streaming, you need a continuous flow of data and for that you can 6. Start the streaming data flow with: -:: +.. code:: docker run fake-data-producer-for-apache-kafka-docker diff --git a/docs/products/kafka/howto/kafdrop.rst b/docs/products/kafka/howto/kafdrop.rst index fbd43195f2..b68a5a82a6 100644 --- a/docs/products/kafka/howto/kafdrop.rst +++ b/docs/products/kafka/howto/kafdrop.rst @@ -16,7 +16,7 @@ Kafdrop supports both :doc:`SASL and SSL authentication methods<../concepts/auth Once the keystore and truststore are created, you can define a Kafdrop configuration file named ``kafdrop.properties`` with the following content, replacing the ``KEYSTORE_PWD`` and ``TRUSTSTORE_PWD`` with the keystore and truststore passwords respectively: -:: +.. 
code:: security.protocol=SSL ssl.keystore.password=KEYSTORE_PWD @@ -28,7 +28,7 @@ Run Kafdrop on Docker You can run Kafdrop in a Docker/Podman container with the following command, by replacing the ``KAFKA_SERVICE_URI`` with the Aiven for Apache Kafka® service URI available in the service Overview tab of the Aiven console, and the ``client.truststore.jks`` and ``client.keystore.p12`` with the keystores and truststores file names: -:: +.. code:: docker run -p 9000:9000 \ -e KAFKA_BROKERCONNECT=KAFKA_SERVICE_URI \ @@ -39,7 +39,7 @@ You can run Kafdrop in a Docker/Podman container with the following command, by If you're also interested in Kafdrop to de-serialize Avro messages using `Karapace `_ schema registry, add the following two lines to the ``docker run`` command: -:: +.. code:: -e SCHEMAREGISTRY_AUTH="avnadmin:SCHEMA_REGISTRY_PWD" \ -e SCHEMAREGISTRY_CONNECT="https://SCHEMA_REGISTRY_URI" \ diff --git a/docs/products/kafka/howto/kafka-klaw.rst b/docs/products/kafka/howto/kafka-klaw.rst index a1cfaef81e..f9a3a90c8c 100644 --- a/docs/products/kafka/howto/kafka-klaw.rst +++ b/docs/products/kafka/howto/kafka-klaw.rst @@ -10,7 +10,7 @@ Prerequisites To connect Aiven for Apache Kafka® and Klaw, you need to have the following setup: * A running Aiven for Apache Kafka® service. See :doc:`Getting started with Aiven for Apache Kafka ` for more information. -* A running Klaw cluster. See `Run Klaw from the source `_ for more information. +* A running Klaw cluster. See `Run Klaw from the source `_ for more information. * Configured :doc:`Java keystore and truststore containing the service SSL certificates `. Connect Aiven for Apache Kafka® to Klaw @@ -41,7 +41,7 @@ Follow the below steps to configure and connect Aiven for Apache Kafka® with Kl - **Environment Name:** Select environment from the drop-down list .. note:: - To learn more, see `Clusters and environments `__ in Klaw documentation. + To learn more, see `Clusters and environments `__ in Klaw documentation. - **Select Cluster:** Select the cluster you added from the drop-down list. The bootstrap servers and protocol details are automatically populated - **Default Partitions:** Enter the number of partitions based on your requirements. The default value is set to 2 @@ -51,7 +51,7 @@ Follow the below steps to configure and connect Aiven for Apache Kafka® with Kl - **Topic prefix (optional):** Enter a topic prefix - **Tenant:** The value is set to default Tenant - .. note:: Klaw is multi-tenant by default. Each tenant manages topics with their own teams in isolation. Every tenant has its own set of Apache Kafka® environments, and users of one tenant cannot view/access topics, or ACLS from other tenants. It provides isolation avoiding any security breach. For this topic, I have used the default tenant configuration. For more information, see `Klaw documentation `__. + .. note:: Klaw is multi-tenant by default. Each tenant manages topics with their own teams in isolation. Every tenant has its own set of Apache Kafka® environments, and users of one tenant cannot view/access topics, or ACLS from other tenants. It provides isolation avoiding any security breach. For this topic, I have used the default tenant configuration. For more information, see `Klaw documentation `__. 8. Click **Save**. @@ -102,7 +102,7 @@ After retrieving the SSL certificate files and configuring the SSL keystore and 2. Next, open the ``application.properties`` file located in the ``klaw/cluster-api/src/main/resources`` directory. 3. 
Configure the SSL properties to connect to Apache Kafka® clusters by editing the following lines: - :: + .. code:: klawssl.kafkassl.keystore.location=client.keystore.p12 klawssl.kafkassl.keystore.pwd=klaw1234 @@ -119,7 +119,7 @@ After retrieving the SSL certificate files and configuring the SSL keystore and The following is an example of an ``application.properties`` file configured with Klaw Cluster ID, keystore, and truststore paths and passwords. - :: + .. code:: demo_cluster.kafkassl.keystore.location=/Users/demo.user/Documents/Klaw/demo-certs/client.keystore.p12 demo_cluster.kafkassl.keystore.pwd=Aiventest123! diff --git a/docs/products/kafka/howto/kcat.rst b/docs/products/kafka/howto/kcat.rst index ca4abd3e5f..39dc33a212 100644 --- a/docs/products/kafka/howto/kcat.rst +++ b/docs/products/kafka/howto/kcat.rst @@ -31,7 +31,7 @@ A ``kcat`` configuration file enabling the connection to an Aiven for Apache Kaf An example of the ``kcat`` configuration file is provided below: -:: +.. code:: bootstrap.servers=demo-kafka.my-demo-project.aivencloud.com:17072 security.protocol=ssl @@ -41,13 +41,13 @@ An example of the ``kcat`` configuration file is provided below: Once the content is stored in a file named ``kcat.config``, this can be referenced using the ``-F`` flag: -:: +.. code:: kcat -F kcat.config Alternatively, the same settings can be specified directly on the command line with: -:: +.. code:: kcat \     -b demo-kafka.my-demo-project.aivencloud.com:17072 \ @@ -58,7 +58,7 @@ Alternatively, the same settings can be specified directly on the command line w If :doc:`SASL authentication ` is enabled, then the ``kcat`` configuration file requires the following entries: -:: +.. code:: bootstrap.servers=demo-kafka.my-demo-project.aivencloud.com:17072 ssl.ca.location=ca.pem @@ -76,7 +76,7 @@ Produce data to an Apache Kafka® topic Use the following code to produce a single message into topic named ``test-topic``: -:: +.. code:: echo test-message-content | kcat -F kcat.config -P -t test-topic -k test-message-key @@ -95,7 +95,7 @@ Consume data from an Apache Kafka® topic Use the following code to consume messages coming from a topic named ``test-topic``: -:: +.. code:: kcat -F kcat.config -C -t test-topic -o -1 -e diff --git a/docs/products/kafka/howto/keystore-truststore.rst b/docs/products/kafka/howto/keystore-truststore.rst index 2a78b3c1c8..b0269f15ff 100644 --- a/docs/products/kafka/howto/keystore-truststore.rst +++ b/docs/products/kafka/howto/keystore-truststore.rst @@ -17,7 +17,7 @@ To create these files: 3. Use the ``openssl`` utility to create the keystore with the ``service.key`` and ``service.cert`` files downloaded previously: -:: +.. code:: openssl pkcs12 -export \ -inkey service.key \ @@ -32,7 +32,7 @@ To create these files: 6. In the folder where the certificates are stored, use the ``keytool`` utility to create the truststore with the ``ca.pem`` file as input: -:: +.. code:: keytool -import \ -file ca.pem \ @@ -41,7 +41,7 @@ To create these files: 7. Enter a password to protect the truststores, when prompted -8. Reply to `yes` to confirm trusting the CA certificate, when prompted +8. Reply to ``yes`` to confirm trusting the CA certificate, when prompted The result are the keystore named ``client.keystore.p12`` and truststore named ``client.truststore.jks`` that can be used for client applications configuration. 
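As an optional sanity check (an editorial addition, not part of the original steps), the generated stores can be inspected with the same ``keytool`` and ``openssl`` utilities used above; both commands prompt for the passwords chosen during creation:

.. code::

   keytool -list -keystore client.truststore.jks
   openssl pkcs12 -info -in client.keystore.p12 -nokeys

``keytool -list`` should report a single trusted certificate entry (the imported CA), and ``openssl pkcs12 -info`` should print the certificate material stored in the keystore.
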
diff --git a/docs/products/kafka/howto/kpow.rst b/docs/products/kafka/howto/kpow.rst index 302fe285ef..90074b45b8 100644 --- a/docs/products/kafka/howto/kpow.rst +++ b/docs/products/kafka/howto/kpow.rst @@ -45,7 +45,7 @@ Kpow supports both :doc:`SASL and SSL authentication methods<../concepts/auth-ty Once the keystore and truststore are created, define a Kpow configuration file named ``kpow.env`` with the following content, replacing the ``APACHE_KAFKA_HOST``, ``APACHE_KAFKA_PORT``, ``KPOW_LICENSE_ID``, ``KPOW_LICENSE_CODE``, ``KPOW_LICENSEE``, ``KPOW_LICENSE_EXPIRY_DATE``, ``KPOW_LICENSE_SIGNATURE``, ``SSL_KEYSTORE_FILE_NAME``, ``SSL_KEYSTORE_PASSWORD``, ``SSL_KEY_PASSWORD``, ``SSL_TRUSTSTORE_FILE_NAME`` and ``SSL_TRUSTSTORE_PASSWORD`` with the the respective values taken from the prerequisites section: -:: +.. code:: BOOTSTRAP=APACHE_KAFKA_HOST:APACHE_KAFKA_PORT LICENSE_ID=KPOW_LICENSE_ID @@ -71,7 +71,7 @@ Run Kpow on Docker You can run Kpow in a Docker/Podman container with the following command, by replacing the ``SSL_STORE_FOLDER`` with the name of the folder containing the Java keystore and truststore: -:: +.. code:: docker run -p 3000:3000 -m2G \ -v SSL_STORE_FOLDER:/ssl \ diff --git a/docs/products/kafka/howto/ksql-docker.rst b/docs/products/kafka/howto/ksql-docker.rst index 42f221aaff..92a37f9b9c 100644 --- a/docs/products/kafka/howto/ksql-docker.rst +++ b/docs/products/kafka/howto/ksql-docker.rst @@ -34,17 +34,21 @@ ksqlDB by default uses the ``ssl.truststore`` settings for the Schema Registry c To have ksqlDB working with Aiven's `Karapace `__ Schema Registry you need to explicitly define a truststore that contains the commonly trusted root CA of Schema Registry server. To create such a truststore: -1. Obtain the root CA of the server with the following ``openssl`` command by replacing the ``APACHE_KAFKA_HOST`` and ``SCHEMA_REGISTRY_PORT`` placeholders:: +1. Obtain the root CA of the server with the following ``openssl`` command by replacing the ``APACHE_KAFKA_HOST`` and ``SCHEMA_REGISTRY_PORT`` placeholders: - openssl s_client -connect APACHE_KAFKA_HOST:SCHEMA_REGISTRY_PORT \ + .. code:: + + openssl s_client -connect APACHE_KAFKA_HOST:SCHEMA_REGISTRY_PORT \ -showcerts < /dev/null 2>/dev/null | \ awk '/BEGIN CERT/{s=1}; s{t=t "\n" $0}; /END CERT/ {last=t; t=""; s=0}; END{print last}' \ > ca_schema_registry.cert -2. Create the truststore with the following ``keytool`` command by replacing the ``TRUSTSTORE_SCHEMA_REGISTRY_FILE_NAME`` and ``TRUSTSTORE_SCHEMA_REGISTRY_PASSWORD`` placeholders:: +2. Create the truststore with the following ``keytool`` command by replacing the ``TRUSTSTORE_SCHEMA_REGISTRY_FILE_NAME`` and ``TRUSTSTORE_SCHEMA_REGISTRY_PASSWORD`` placeholders: - keytool -import -file ca_schema_registry.cert \ + .. code:: + + keytool -import -file ca_schema_registry.cert \ -alias CA \ -keystore TRUSTSTORE_SCHEMA_REGISTRY_FILE_NAME \ -storepass TRUSTSTORE_SCHEMA_REGISTRY_PASSWORD \ @@ -73,7 +77,7 @@ You can run ksqlDB on Docker with the following command, by replacing the placeh * ``TRUSTSTORE_SCHEMA_REGISTRY_FILE_NAME`` * ``TRUSTSTORE_SCHEMA_REGISTRY_PASSWORD`` -:: +.. code:: docker run -d --name ksql \ -v SSL_STORE_FOLDER/:/ssl_settings/ \ @@ -102,8 +106,10 @@ You can run ksqlDB on Docker with the following command, by replacing the placeh .. Warning:: - Some docker setups have issues using the ``-v`` mounting options. In those cases copying the Keystore and Truststore in the container can be an easier option. 
This can be achieved with the following:: + Some docker setups have issues using the ``-v`` mounting options. In those cases copying the Keystore and Truststore in the container can be an easier option. This can be achieved with the following: + .. code:: + docker container create --name ksql \ -p 127.0.0.1:8088:8088 \ -e KSQL_BOOTSTRAP_SERVERS=APACHE_KAFKA_HOST:APACHE_KAFKA_PORT \ @@ -130,6 +136,8 @@ You can run ksqlDB on Docker with the following command, by replacing the placeh -Once the Docker image is up and running you should be able to access ksqlDB at ``localhost:8088`` or connect via terminal with the following command:: +Once the Docker image is up and running you should be able to access ksqlDB at ``localhost:8088`` or connect via terminal with the following command: + +.. code:: - docker exec -it ksql ksql + docker exec -it ksql ksql diff --git a/docs/products/kafka/howto/manage-acls.rst b/docs/products/kafka/howto/manage-acls.rst index b87ed1330c..5d22274775 100644 --- a/docs/products/kafka/howto/manage-acls.rst +++ b/docs/products/kafka/howto/manage-acls.rst @@ -31,35 +31,35 @@ To add new access control list, follow these steps: You can add a new access control list grant via the `Aiven Console `_ with: -1. Log in to `Aiven Console `_ and select your service. +#. Log in to `Aiven Console `_ and select your service. -2. Select **ACL** from the left sidebar and select **Add entry**. -3. On the **Add access control entry** screen, select the desired ACL type: +#. Select **ACL** from the left sidebar and select **Add entry**. +#. On the **Add access control entry** screen, select the desired ACL type: - a. For **ACL for Topics**, enter the following details: + a. For **ACL for Topics**, enter the following details: - * Username - * Topic - * Permissions + * Username + * Topic + * Permissions - b. For ACL for Schema Registry, enter the following details: + b. For ACL for Schema Registry, enter the following details: - * Username - * Resources - * Permissions + * Username + * Resources + * Permissions - Refer to :doc:`Access control lists and permission mapping <../concepts/acl>` section for more information on permission mapping. + Refer to the :doc:`Access control lists and permission mapping <../concepts/acl>` section for more information. -6. Click **Add ACL entry**. +#. Click **Add ACL entry**. -.. Tip:: + .. Tip:: - When using the :doc:`Aiven Terraform Provider `, you can add the ``default_acl`` key to your ``resource`` and set it to ``false`` if you do not want to create the admin user with wildcard permissions. + When using the :doc:`Aiven Terraform Provider `, you can add the ``default_acl`` key to your ``resource`` and set it to ``false`` if you do not want to create the admin user with wildcard permissions. -5. Once you start defining custom ACLs, it's recommended to delete the default ``avnadmin`` rule by clicking the **Remove** icon. +#. Once you start defining custom ACLs, it's recommended to delete the default ``avnadmin`` rule by clicking the **Remove** icon. -.. Warning:: + .. Warning:: - ACL restrictions currently do not apply to Kafka REST. Rules are applied based on the username and topic names, but there are no restrictions on consumer group names. + ACL restrictions currently do not apply to Kafka REST. Rules are applied based on the username and topic names, but there are no restrictions on consumer group names. - We are working on extending the same restrictions to Kafka REST. + We are working on extending the same restrictions to Kafka REST. 
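The steps above use the Aiven Console; as a rough sketch of the equivalent Aiven CLI call (the service name, username, and topic pattern below are placeholders, and ``avn service acl-add --help`` lists the exact options available in your CLI version):

.. code::

   avn service acl-add kafka-demo \
       --username app-user \
       --topic 'orders*' \
       --permission readwrite

Created entries can then be reviewed with ``avn service acl-list kafka-demo`` before removing the default ``avnadmin`` rule.
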
diff --git a/docs/products/kafka/howto/monitor-logs-acl-failure.rst b/docs/products/kafka/howto/monitor-logs-acl-failure.rst index 0170281f4f..76318c75fd 100644 --- a/docs/products/kafka/howto/monitor-logs-acl-failure.rst +++ b/docs/products/kafka/howto/monitor-logs-acl-failure.rst @@ -11,7 +11,7 @@ Failed producer A producer creates the following log in case the client has no privilege to write to a specific topic: -:: +.. code:: HOSTNAME: kafka-pi-3141592-75 SYSTEMD_UNIT: kafka.service @@ -22,7 +22,7 @@ Failed consumer A consumer creates the following log in case the client has no privilege to describe a specific topic: -:: +.. code:: HOSTNAME: kafka-pi-3141592-74 SYSTEMD_UNIT: kafka.service @@ -35,7 +35,7 @@ Valid certificate with invalid key A client creates the following log when using a valid certificate with an invalid key to perform a describe operation over a topic: -:: +.. code:: HOSTNAME: kafka-pi-3141592-75 SYSTEMD_UNIT: kafka.service diff --git a/docs/products/kafka/howto/prevent-full-disks.rst b/docs/products/kafka/howto/prevent-full-disks.rst index 3e21cbaf41..d73e5dbfbb 100644 --- a/docs/products/kafka/howto/prevent-full-disks.rst +++ b/docs/products/kafka/howto/prevent-full-disks.rst @@ -11,7 +11,9 @@ If any node in the service surpasses the critical threshold of disk usage (more When the disk space is insufficient, and the ACL blocks write operations, you will encounter an error. For example, if you are using the Python client for Apache Kafka, you may receive the following error message: - TopicAuthorizationFailedError: [Error 29] TopicAuthorizationFailedError: you-topic +.. code:: + + TopicAuthorizationFailedError: [Error 29] TopicAuthorizationFailedError: your-topic Upgrade to a larger service plan diff --git a/docs/products/kafka/howto/provectus-kafka-ui.rst b/docs/products/kafka/howto/provectus-kafka-ui.rst index 411723330f..961b5fd317 100644 --- a/docs/products/kafka/howto/provectus-kafka-ui.rst +++ b/docs/products/kafka/howto/provectus-kafka-ui.rst @@ -28,21 +28,27 @@ Share keystores with non-root user Since container for Provectus® UI for Apache Kafka® uses non-root user, to avoid permission problems, while keeping the secrets safe, perform the following steps (see example commands below): -1. Create separate directory for secrets:: +1. Create separate directory for secrets: - mkdir SSL_STORE_FOLDER + .. code:: + + mkdir SSL_STORE_FOLDER -2. Restrict the directory to current user:: +2. Restrict the directory to current user: - chmod 700 SSL_STORE_FOLDER + .. code:: + + chmod 700 SSL_STORE_FOLDER 3. Copy secrets there (replace the ``SSL_KEYSTORE_FILE_NAME`` and ``SSL_TRUSTSTORE_FILE_NAME`` with the keystores and truststores file names):: cp SSL_KEYSTORE_FILE_NAME SSL_TRUSTSTORE_FILE_NAME SSL_STORE_FOLDER -4. Give read permissions for secret files for everyone:: +4. Give read permissions for secret files for everyone: - chmod +r SSL_STORE_FOLDER/* + .. code:: + + chmod +r SSL_STORE_FOLDER/* Execute Provectus® UI for Apache Kafka® on Docker or Podman @@ -59,7 +65,7 @@ You can run Provectus® UI for Apache Kafka® in a Docker/Podman container with * ``SSL_TRUSTSTORE_PASSWORD`` -:: +.. 
code:: docker run -p 8080:8080 \ -v SSL_STORE_FOLDER/SSL_TRUSTSTORE_FILE_NAME:/client.truststore.jks:ro \ diff --git a/docs/products/kafka/howto/renew-ssl-certs.rst b/docs/products/kafka/howto/renew-ssl-certs.rst index 8e728d11ca..df06e6112e 100644 --- a/docs/products/kafka/howto/renew-ssl-certs.rst +++ b/docs/products/kafka/howto/renew-ssl-certs.rst @@ -49,7 +49,7 @@ To acknowledge the new SSL certificate with the `Aiven Console `_: - :: + .. code:: curl --request PUT \ --url https://api.aiven.io/v1/project//service//user/ \ diff --git a/docs/products/kafka/howto/schema-registry.rst b/docs/products/kafka/howto/schema-registry.rst index 07664ee63e..9c0b92d660 100644 --- a/docs/products/kafka/howto/schema-registry.rst +++ b/docs/products/kafka/howto/schema-registry.rst @@ -48,7 +48,7 @@ Create version 1 of the Avro schema To create an Avro schema, you need a definition file. As example you can use a **click record** schema defined in JSON and stored in a file named ``ClickRecord.avsc`` containing the following: -:: +.. code:: {"type": "record", "name": "ClickRecord", @@ -86,7 +86,9 @@ Auto schema compilation ~~~~~~~~~~~~~~~~~~~~~~~~~~ With auto, the schema is compiled during the project build with, for example, ``maven-avro-plugin`` or ``gradle-avro-plugin``. -The following is a configuration example for ``maven-avro-plugin`` when ``ClickRecord.avsc`` is stored in the path ``src/main/avro/ClickRecord.avsc``:: +The following is a configuration example for ``maven-avro-plugin`` when ``ClickRecord.avsc`` is stored in the path ``src/main/avro/ClickRecord.avsc``: + +.. code:: org.apache.avro @@ -116,7 +118,9 @@ Set consumer and producer properties for schema registry The full code to create consumer and producers using the Schema Registry in Aiven for Apache Kafka can be found in the `Aiven examples GitHub repository `_. The following contains a list of the properties required. -For producers you need to specify:: +For producers you need to specify: + +.. code:: props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, [BOOTSTRAPSERVERS]); props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); @@ -132,7 +136,9 @@ For producers you need to specify:: props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName()); -For consumers you need to specify:: +For consumers you need to specify: + +.. code:: props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, [BOOTSTRAPSERVERS]); props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); diff --git a/docs/products/kafka/howto/set-kafka-parameters.rst b/docs/products/kafka/howto/set-kafka-parameters.rst index 9268806895..234ce803e6 100644 --- a/docs/products/kafka/howto/set-kafka-parameters.rst +++ b/docs/products/kafka/howto/set-kafka-parameters.rst @@ -19,7 +19,7 @@ Retrieve the current service parameters with Aiven CLI To retrieve the existing Aiven for Apache Kafka configuration use the following command: -:: +.. code:: avn service get SERVICE_NAME --json @@ -30,7 +30,7 @@ Retrieve the customizable parameters with Aiven CLI Not all Aiven for Apache Kafka parameters are customizable, to retrieve the list of those parameters you can change use the following command: -:: +.. code:: avn service types -v @@ -41,7 +41,7 @@ Update a service parameter with the Aiven CLI To modify a service parameter use the :ref:`Aiven CLI service update command `. E.g. 
to modify the ``message.max.bytes`` parameter use the following command: -:: +.. code:: avn service update SERVICE_NAME -c "kafka.message_max_bytes=newmaximumbytelimit" diff --git a/docs/products/kafka/howto/viewing-resetting-offset.rst b/docs/products/kafka/howto/viewing-resetting-offset.rst index 2110cb3ae6..f0b375d15e 100644 --- a/docs/products/kafka/howto/viewing-resetting-offset.rst +++ b/docs/products/kafka/howto/viewing-resetting-offset.rst @@ -18,7 +18,7 @@ List active consumer groups To list the currently active consumer groups use the following command replacing the ``demo-kafka.my-project.aivencloud.com:17072`` with your service URI: -:: +.. code:: kafka-consumer-groups.sh \ --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ @@ -30,7 +30,7 @@ Retrieve the details of a consumer group To retrieve the details of a consumer group use the following command replacing the ``demo-kafka.my-project.aivencloud.com:17072`` with the Aiven for Apache Kafka service URI and the ``my-group`` with the required consumer group name: -:: +.. code:: kafka-consumer-groups.sh \ --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ @@ -50,7 +50,7 @@ List the current members of a consumer group To retrieve the current members of a consumer group use the following command replacing the ``demo-kafka.my-project.aivencloud.com:17072`` with the Aiven for Apache Kafka service URI and the ``my-group`` with the required consumer group name: -:: +.. code:: kafka-consumer-groups.sh \ --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ @@ -82,7 +82,7 @@ To reset the offset use the following command replacing: The consumer group must be inactive when you make offset changes. -:: +.. code:: kafka-consumer-groups.sh \     --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ diff --git a/docs/products/kafka/kafka-connect/concepts/jdbc-source-modes.rst b/docs/products/kafka/kafka-connect/concepts/jdbc-source-modes.rst index 855c38c68f..747919fcee 100644 --- a/docs/products/kafka/kafka-connect/concepts/jdbc-source-modes.rst +++ b/docs/products/kafka/kafka-connect/concepts/jdbc-source-modes.rst @@ -19,7 +19,7 @@ Thus, if the source table contains ``100.000`` rows, the connector will insert ` Incrementing mode ----------------- -Using the ``incrementing`` mode, the connector will query the table and append a `WHERE` condition based on an **incrementing column** in order to fetch new rows. The incrementing mode requires that a column containing an always growing number (like a series) is present in the source table. The incrementing column is used to check which rows have been added since last query. +Using the ``incrementing`` mode, the connector will query the table and append a ``WHERE`` condition based on an **incrementing column** in order to fetch new rows. The incrementing mode requires that a column containing an always growing number (like a series) is present in the source table. The incrementing column is used to check which rows have been added since last query. .. Note:: @@ -103,7 +103,7 @@ The columns ``created_date`` and ``modified_date`` can be used as timestamp colu The following polls will append a ``WHERE`` condition to the query selecting only rows with ``modified_date`` or ``created_date`` greater than the previously recorded maximum value using the ``COALESCENCE`` function. In the example below, the condition will be: -:: +.. 
code:: WHERE COALESCENCE(modified_date, created_date) > '2021-04-06' diff --git a/docs/products/kafka/kafka-connect/howto/couchbase-sink.rst b/docs/products/kafka/kafka-connect/howto/couchbase-sink.rst index f26ed64c51..02de6b76ea 100644 --- a/docs/products/kafka/kafka-connect/howto/couchbase-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/couchbase-sink.rst @@ -97,7 +97,7 @@ The example creates an Couchbase sink connector with the following properties: The connector configuration is the following: -:: +.. code:: { "name": "couchbase_sink", diff --git a/docs/products/kafka/kafka-connect/howto/couchbase-source.rst b/docs/products/kafka/kafka-connect/howto/couchbase-source.rst index 70fbd17a76..e5f1637f27 100644 --- a/docs/products/kafka/kafka-connect/howto/couchbase-source.rst +++ b/docs/products/kafka/kafka-connect/howto/couchbase-source.rst @@ -115,7 +115,7 @@ The example creates an Couchbase source connector with the following properties: The connector configuration is the following: -:: +.. code:: { "name": "couchbase_source", diff --git a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.rst b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.rst index 06951a4d69..c30e4e0b29 100644 --- a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.rst +++ b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.rst @@ -25,7 +25,7 @@ In cases when the Debezium connector can't recover during or after the PostgreSQ The above errors are unrecoverable, meaning that they require a restart of the connector task(s) in order to resume operations again. -A restart can be performed manually either through the `Aiven Console `_, in under the `Connectors` tab console or via the `Apache Kafka® Connect REST API `__. You can get the service URI from the `Aiven Console `_, in the service detail page. +A restart can be performed manually either through the `Aiven Console `_, in under the ``Connectors`` tab console or via the `Apache Kafka® Connect REST API `__. You can get the service URI from the `Aiven Console `_, in the service detail page. .. image:: /images/products/postgresql/pg-debezium-cdc_image.png :alt: The Aiven Console page showing the Debezium connector error diff --git a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg.rst b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg.rst index 7623e65e84..07911e2243 100644 --- a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg.rst +++ b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg.rst @@ -107,13 +107,13 @@ Create a Kafka Connect connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the Aiven service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @debezium_source_pg.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the Aiven service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -130,7 +130,7 @@ Solve the error ``must be superuser to create FOR ALL TABLES publication`` When creating a Debezium source connector pointing to Aiven for PostgreSQL using the ``pgoutput`` plugin, you could get the following error: -:: +.. 
code:: Caused by: org.postgresql.util.PSQLException: ERROR: must be superuser to create FOR ALL TABLES publication @@ -141,7 +141,7 @@ The error is due to Debezium trying to create a publication and failing because Note that with older versions of Debezium, there was a bug preventing the addition of more tables to the filter with ``filtered`` mode. As a result, this configuration was not conflicting with a publication ``FOR ALL TABLES``. Starting with Debezium 1.9.7, those configurations are conflicting and you could get the following error: -:: +.. code:: Caused by: org.postgresql.util.PSQLException: ERROR: publication "dbz_publication" is defined as FOR ALL TABLES Detail: Tables cannot be added to or dropped from FOR ALL TABLES publications. @@ -157,13 +157,13 @@ To create the publication in PostgreSQL: * Installing the ``aiven-extras`` extension: -:: +.. code:: CREATE EXTENSION aiven_extras CASCADE; * Create a publication (with name e.g. ``my_test_publication``) for all the tables: -:: +.. code:: SELECT * FROM aiven_extras.pg_create_publication_for_all_tables( diff --git a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-sql-server.rst b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-sql-server.rst index 501c210eac..996f349e20 100644 --- a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-sql-server.rst +++ b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-sql-server.rst @@ -17,7 +17,9 @@ To use the Debezium source connector for SQL server, you need to enabled at data Enable CDC at database level '''''''''''''''''''''''''''' -To enable the CDC at database level, you can use the following command:: +To enable the CDC at database level, you can use the following command: + +.. code:: USE GO @@ -26,16 +28,20 @@ To enable the CDC at database level, you can use the following command:: .. Note:: - If you're using GCP Cloud SQL for SQL Server, you can enable database CDC with:: + If you're using GCP Cloud SQL for SQL Server, you can enable database CDC with: + + .. code:: - EXEC msdb.dbo.gcloudsql_cdc_enable_db '' + EXEC msdb.dbo.gcloudsql_cdc_enable_db '' Once the CDC is enabled, a new schema called ``cdc`` is created for the target database, containing all the required tables. Enable CDC at table level ''''''''''''''''''''''''' -To enable CDC for a table you can execute the following command:: +To enable CDC for a table you can execute the following command: + +.. code:: USE GO diff --git a/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst b/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst index 1710c20058..050b8ea8ef 100644 --- a/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst @@ -111,7 +111,7 @@ To create a Kafka Connect connector, follow these steps: .. Note:: - You can also create connectors using the :ref:`Aiven CLI command `. + You can also create connectors using the :ref:`Aiven CLI command `. Create daily Elasticsearch indices ---------------------------------- @@ -119,7 +119,7 @@ Create daily Elasticsearch indices You might need to create a new Elasticsearch index on daily basis to store the Apache Kafka messages. Adding the following ``TimestampRouter`` transformation in the connector properties file provides a way to define the index name as concatenation of the topic name and message date. -:: +.. 
code:: "transforms": "TimestampRouter", "transforms.TimestampRouter.topic.format": "${topic}-${timestamp}", diff --git a/docs/products/kafka/kafka-connect/howto/gcp-bigquery-sink.rst b/docs/products/kafka/kafka-connect/howto/gcp-bigquery-sink.rst index beee231ead..a65b70a601 100644 --- a/docs/products/kafka/kafka-connect/howto/gcp-bigquery-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/gcp-bigquery-sink.rst @@ -114,13 +114,17 @@ The configuration file contains the following entries: The configuration of the BigQuery connector in Aiven has a non-backward-compatible change between versions ``1.2.0`` and ``1.6.5``: - * version ``1.2.0`` uses the ``credentials`` field to specify the Google Cloud credentials in JSON format:: + * version ``1.2.0`` uses the ``credentials`` field to specify the Google Cloud credentials in JSON format: + + .. code:: ... "credentials": "{...}", ... - * from version ``1.6.5`` on, use the ``keyfield`` field and set the ``keySource`` parameter to ``JSON``:: + * from version ``1.6.5`` on, use the ``keyfield`` field and set the ``keySource`` parameter to ``JSON``: + + .. code:: ... "keyfile": "{...}", diff --git a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-lite-source.rst b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-lite-source.rst index 3fac2d3dd9..14b83b1833 100644 --- a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-lite-source.rst +++ b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-lite-source.rst @@ -103,7 +103,8 @@ To create a Kafka Connect connector, follow these steps: 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. -10. Verify the presence of the data in the target Pub/Sub dataset, the table name is equal to the Apache Kafka topic name. If you need to change the target table name, you can do so using the Kafka Connect ``RegexRouter`` transformation. +10. Verify the presence of the data in the target Pub/Sub dataset, the table name is equal to the Apache Kafka topic name. + If you need to change the target table name, you can do so using the Kafka Connect ``RegexRouter`` transformation. .. note:: diff --git a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-sink.rst b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-sink.rst index 2215b00efa..fa8d43dbe2 100644 --- a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-sink.rst @@ -112,9 +112,9 @@ To create a Apache Kafka Connect connector, follow these steps: 9. Verify the connector status under the **Connectors** screen. 10. Verify the presence of the data in the target Pub/Sub dataset, the table name is equal to the Apache Kafka topic name. - .. note:: + .. note:: - You can also create connectors using the :ref:`Aiven CLI command `. + You can also create connectors using the :ref:`Aiven CLI command `. Example: Create a Google Pub/Sub sink connector ------------------------------------------------- diff --git a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-source.rst b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-source.rst index abbb8c79eb..ee583e2fcb 100644 --- a/docs/products/kafka/kafka-connect/howto/gcp-pubsub-source.rst +++ b/docs/products/kafka/kafka-connect/howto/gcp-pubsub-source.rst @@ -106,15 +106,18 @@ To create a Kafka Connect connector, follow these steps: .. note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. 
You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. + You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. -10. Verify the presence of the data in the target Pub/Sub dataset, the table name is equal to the Apache Kafka topic name. If you need to change the target table name, you can do so using the Kafka Connect ``RegexRouter`` transformation. +10. Verify the presence of the data in the target Pub/Sub dataset, the table + name is equal to the Apache Kafka topic name. If you need to change the target table name, you can do so using + the Kafka Connect ``RegexRouter`` transformation. - .. note:: + .. note:: - You can also create connectors using the :ref:`Aiven CLI command `. + You can also create connectors using the :ref:`Aiven CLI command `. Example: Create a Google Pub/Sub source connector ------------------------------------------------- diff --git a/docs/products/kafka/kafka-connect/howto/gcs-sink.rst b/docs/products/kafka/kafka-connect/howto/gcs-sink.rst index ec02256157..a306188fa9 100644 --- a/docs/products/kafka/kafka-connect/howto/gcs-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/gcs-sink.rst @@ -36,7 +36,7 @@ Define an Apache Kafka Connect® configuration file Define the connector configurations in a file (we'll refer to it with the name ``gcs_sink.json``) with the following content: -:: +.. code:: { "name": "my-gcs-connector", @@ -116,7 +116,7 @@ The example creates an GCS sink connector with the following properties: The connector configuration is the following: -:: +.. code:: { "name": "my_gcs_sink", diff --git a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-mysql.rst b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-mysql.rst index e97f0a24e3..cf7e6079bb 100644 --- a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-mysql.rst +++ b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-mysql.rst @@ -41,7 +41,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``jdbc_source_mysql.json``) with the following content: -:: +.. code:: { "name":"CONNECTOR_NAME", @@ -72,13 +72,13 @@ Create a Kafka Connect connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the Aiven service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @jdbc_source_mysql.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the Aiven service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -102,7 +102,7 @@ The example creates an :doc:`incremental <../concepts/jdbc-source-modes>` JDBC c The connector configuration is the following: -:: +.. 
code:: { "name":"jdbc_source_mysql_increment", @@ -120,6 +120,6 @@ The connector configuration is the following: With the above configuration stored in a ``jdbc_incremental_source_mysql.json`` file, you can create the connector in the ``demo-kafka`` instance with: -:: +.. code:: avn service connector create demo-kafka @jdbc_incremental_source_mysql.json diff --git a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.rst b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.rst index 4e176db1e3..4c496d9928 100644 --- a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.rst +++ b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.rst @@ -42,7 +42,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``jdbc_source_pg.json``) with the following content: -:: +.. code:: { "name":"CONNECTOR_NAME", @@ -78,13 +78,13 @@ Create a Kafka Connect connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the Aiven service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @jdbc_source_pg.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the Aiven service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -108,7 +108,7 @@ The example creates an :doc:`incremental <../concepts/jdbc-source-modes>` JDBC c The connector configuration is the following: -:: +.. code:: { "name":"jdbc_source_pg_increment", @@ -126,6 +126,6 @@ The connector configuration is the following: With the above configuration stored in a ``jdbc_incremental_source_pg.json`` file, you can create the connector in the ``demo-kafka`` instance with: -:: +.. code:: avn service connector create demo-kafka @jdbc_incremental_source_pg.json diff --git a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-sql-server.rst b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-sql-server.rst index 7533ff0e3a..a45ac6afd3 100644 --- a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-sql-server.rst +++ b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-sql-server.rst @@ -41,7 +41,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``jdbc_source_sqlserver.json``) with the following content: -:: +.. code:: { "name":"CONNECTOR_NAME", @@ -72,13 +72,13 @@ Create a Kafka Connect connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the Aiven service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @jdbc_source_sqlserver.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the Aiven service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -102,7 +102,7 @@ The example creates an :doc:`incremental <../concepts/jdbc-source-modes>` JDBC c The connector configuration is the following: -:: +.. 
code:: { "name":"jdbc_source_sqlserver_increment", @@ -120,6 +120,6 @@ The connector configuration is the following: With the above configuration stored in the ``jdbc_incremental_source_sqlserver.json`` file, you can create the connector in the ``demo-kafka`` instance with: -:: +.. code:: avn service connector create demo-kafka @jdbc_incremental_source_sqlserver.json diff --git a/docs/products/kafka/kafka-connect/howto/manage-logging-level.rst b/docs/products/kafka/kafka-connect/howto/manage-logging-level.rst index cffa38a5ec..5d83ae0088 100644 --- a/docs/products/kafka/kafka-connect/howto/manage-logging-level.rst +++ b/docs/products/kafka/kafka-connect/howto/manage-logging-level.rst @@ -21,7 +21,7 @@ Get the Kafka Connect nodes connection URI To update the logging level in all the Kafka Connect nodes, you need to get their connection URI using the :ref:`Aiven CLI service get command ` -:: +.. code:: avn service get SERVICE_NAME --format '{connection_info}' @@ -34,7 +34,7 @@ Retrieve the list of loggers and connectors You can retrieve the list of loggers, connectors and their current logging level on each worker using the dedicated ``/admin/loggers`` Kafka Connect API -:: +.. code:: curl https://avnadmin:PASSWORD@IP_ADDRESS:PORT/admin/loggers --insecure @@ -69,7 +69,7 @@ Change the logging level for a particular logger To change the logging level for a particular logger you can use the same ``admin/loggers`` endpoint, specifying the logger name (``LOGGER_NAME`` in the following command) -:: +.. code:: curl -X PUT -H "Content-Type:application/json" \ -d '{"level": "TRACE"}' \ diff --git a/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst b/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst index 9a2e3af9d4..745e3b38a8 100644 --- a/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst +++ b/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst @@ -46,7 +46,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``mongodb_source.json``) with the following content, creating a file is not strictly necessary but allows to have all the information in one place before copy/pasting them in the `Aiven Console `_: -:: +.. code:: { "name":"CONNECTOR_NAME", diff --git a/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.rst b/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.rst index aaa286eb32..801da8afd9 100644 --- a/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.rst +++ b/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.rst @@ -21,7 +21,7 @@ To set up an MQTT sink connector, you need an Aiven for Apache Kafka service :do .. Tip:: - The connector will write to a topic defined in the ``"connect.mqtt.kcql"`` configuration, so either create the topic in your Kafka service, or enable the ``auto_create_topic`` parameter so that the topic will be created automatically. + The connector will write to a topic defined in the ``"connect.mqtt.kcql"`` configuration, so either create the topic in your Kafka service, or enable the ``auto_create_topic`` parameter so that the topic will be created automatically. Furthermore you need to collect the following information about the sink MQTT server upfront: @@ -49,7 +49,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``mqtt_sink.json``) with the following content. 
Creating a file is not strictly necessary but allows to have all the information in one place before copy/pasting them in the `Aiven Console `_: -:: +.. code:: { "name": "CONNECTOR_NAME", @@ -83,11 +83,11 @@ To create a Apache Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``mqtt_sink.json`` file) in the form. 7. Select **Apply**. -To create the connector, access the `Aiven Console `_ and select the Aiven for Apache Kafka® or Aiven for Apache Kafka® Connect service where the connector needs to be defined, then: + To create the connector, access the `Aiven Console `_ and select the Aiven for Apache Kafka® or Aiven for Apache Kafka® Connect service where the connector needs to be defined, then: .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. @@ -95,4 +95,4 @@ To create the connector, access the `Aiven Console `_ .. Tip:: - You can also create connectors using the :ref:`Aiven CLI command `. + You can also create connectors using the :ref:`Aiven CLI command `. diff --git a/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst b/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst index a090a2f585..5fb13dfe5d 100644 --- a/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst +++ b/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst @@ -46,7 +46,7 @@ Define a Kafka Connect configuration file Define the connector configurations in a file (we'll refer to it with the name ``mqtt_source.json``) with the following content, creating a file is not strictly necessary but allows to have all the information in one place before copy/pasting them in the `Aiven Console `_: -:: +.. code:: { "name": "CONNECTOR_NAME", diff --git a/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst b/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst index 3decdb6ad3..fbb275e15b 100644 --- a/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst @@ -157,7 +157,7 @@ The configuration file contains the following peculiarities: Once the connector is created successfully, you should see the following three entries in the target Redis database. -:: +.. 
code:: 1) "students-1" containing "{\"name\":\"carlo\",\"id\":1,\"age\":77}" 2) "students-2" containing "{\"name\":\"lucy\",\"id\":2,\"age\":21}" diff --git a/docs/products/kafka/kafka-connect/howto/s3-sink-connector-aiven.rst b/docs/products/kafka/kafka-connect/howto/s3-sink-connector-aiven.rst index 344afd79fd..88e31d936d 100644 --- a/docs/products/kafka/kafka-connect/howto/s3-sink-connector-aiven.rst +++ b/docs/products/kafka/kafka-connect/howto/s3-sink-connector-aiven.rst @@ -37,7 +37,7 @@ Define a Kafka Connect® configuration file Define the connector configurations in a file (we'll refer to it with the name ``s3_sink.json``) with the following content: -:: +.. code:: { "name": "", @@ -75,13 +75,13 @@ Create an S3 sink connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the existing Aiven for Apache Kafka® service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @s3_sink.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the existing Aiven for Apache Kafka® service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -102,7 +102,7 @@ The example creates an S3 sink connector with the following properties: The connector configuration is the following: -:: +.. code:: { "name": "my_s3_sink", @@ -118,6 +118,6 @@ The connector configuration is the following: With the above configuration stored in a ``s3_sink.json`` file, you can create the connector in the ``demo-kafka`` instance with: -:: +.. code:: avn service connector create demo-kafka @s3_sink.json diff --git a/docs/products/kafka/kafka-connect/howto/s3-sink-connector-confluent.rst b/docs/products/kafka/kafka-connect/howto/s3-sink-connector-confluent.rst index 8f47100c2d..b73870121b 100644 --- a/docs/products/kafka/kafka-connect/howto/s3-sink-connector-confluent.rst +++ b/docs/products/kafka/kafka-connect/howto/s3-sink-connector-confluent.rst @@ -33,7 +33,7 @@ Define a Kafka Connect® configuration file Define the connector configurations in a file (we'll refer to it with the name ``s3_sink.json``) with the following content: -:: +.. code:: { "name": "", @@ -72,13 +72,13 @@ Create an S3 sink connector with Aiven CLI To create the connector, execute the following :ref:`Aiven CLI command `, replacing the ``SERVICE_NAME`` with the name of the existing Aiven for Apache Kafka® service where the connector needs to run: -:: +.. code:: avn service connector create SERVICE_NAME @s3_sink.json Check the connector status with the following command, replacing the ``SERVICE_NAME`` with the existing Aiven for Apache Kafka® service and the ``CONNECTOR_NAME`` with the name of the connector defined before: -:: +.. code:: avn service connector status SERVICE_NAME CONNECTOR_NAME @@ -100,7 +100,7 @@ The example creates an S3 sink connector with the following properties: The connector configuration is the following: -:: +.. code:: { "name": "my_s3_sink", @@ -120,6 +120,6 @@ The connector configuration is the following: With the above configuration stored in a ``s3_sink.json`` file, you can create the connector in the ``demo-kafka`` instance with: -:: +.. 
code:: avn service connector create demo-kafka @s3_sink.json diff --git a/docs/products/kafka/kafka-connect/howto/s3-sink-prereq.rst b/docs/products/kafka/kafka-connect/howto/s3-sink-prereq.rst index b422a31143..99281a2817 100644 --- a/docs/products/kafka/kafka-connect/howto/s3-sink-prereq.rst +++ b/docs/products/kafka/kafka-connect/howto/s3-sink-prereq.rst @@ -30,7 +30,7 @@ The Apache Kafka Connect® S3 sink connector needs the following permission to t The following is an example of AWS inline policy that can be added to the IAM user by replacing the ```` placeholder: -:: +.. code:: { "Version": "2012-10-17", diff --git a/docs/products/kafka/kafka-connect/howto/snowflake-sink-prereq.rst b/docs/products/kafka/kafka-connect/howto/snowflake-sink-prereq.rst index a166e8dec6..b3fdaa9ddf 100644 --- a/docs/products/kafka/kafka-connect/howto/snowflake-sink-prereq.rst +++ b/docs/products/kafka/kafka-connect/howto/snowflake-sink-prereq.rst @@ -81,9 +81,11 @@ Creating a new role is strongly suggested to provide the minimal amount of privi grant role aiven_snowflake_sink_connector_role to user aiven; -4. Run the following query to alter the user making the new role default when logging in:: +4. Run the following query to alter the user making the new role default when logging in: - alter user aiven set default_role=aiven_snowflake_sink_connector_role; + .. code:: + + alter user aiven set default_role=aiven_snowflake_sink_connector_role; Grant the Snowflake role access to the required database -------------------------------------------------------- diff --git a/docs/products/kafka/kafka-connect/reference/gcs-sink-formats.rst b/docs/products/kafka/kafka-connect/reference/gcs-sink-formats.rst index 4c456f60bf..ec9e60afa7 100644 --- a/docs/products/kafka/kafka-connect/reference/gcs-sink-formats.rst +++ b/docs/products/kafka/kafka-connect/reference/gcs-sink-formats.rst @@ -8,7 +8,7 @@ File name format The connector uses the following format for output files (blobs) -:: +.. code:: --[.gz] diff --git a/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters-confluent.rst b/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters-confluent.rst index 08135fa022..2a7baf6dcd 100644 --- a/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters-confluent.rst +++ b/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters-confluent.rst @@ -15,7 +15,7 @@ S3 naming format The Apache Kafka Connect® S3 sink connector by Confluent stores a series of files as objects in the specified S3 bucket. By default, each object is named using the pattern: -:: +.. code:: topics//partition=/++. @@ -28,7 +28,7 @@ The placeholders are the following: For example, a topic with 3 partitions generates initially the following files in the destination S3 bucket: -:: +.. code:: topics//partition=0/+0+0000000000.bin topics//partition=1/+1+0000000000.bin @@ -41,7 +41,7 @@ By default, data is stored in binary format, one line per message. The connector In the above example, having a topic with 3 partitions and 10 messages, setting the ``flush.size`` parameter to 1 generates the following files (one per message) in the destination S3 bucket: -:: +.. 
code:: topics//partition=0/+0+0000000000.bin topics//partition=0/+0+0000000001.bin diff --git a/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters.rst b/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters.rst index 0d9e2ef269..1513fb07f7 100644 --- a/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters.rst +++ b/docs/products/kafka/kafka-connect/reference/s3-sink-additional-parameters.rst @@ -15,7 +15,7 @@ S3 naming format The Apache Kafka Connect® S3 sink connector by Aiven stores a series of files as objects in the specified S3 bucket. By default, each object is named using the pattern: -:: +.. code:: --. @@ -46,10 +46,10 @@ You can define the output data fields with the ``format.output.fields`` connecto For example, setting ``format.output.fields`` to ``value,key,timestamp`` results in rows in the S3 files like the following: -:: +.. code:: bWVzc2FnZV9jb250ZW50,cGFydGl0aW9uX2tleQ==,1511801218777 .. Tip:: - You can disable the `Base64` encoding by setting the ``format.output.fields.value.encoding`` to ``none`` + You can disable the ``Base64`` encoding by setting the ``format.output.fields.value.encoding`` to ``none`` diff --git a/docs/products/kafka/kafka-mirrormaker/concepts/mirrormaker2-tuning.rst b/docs/products/kafka/kafka-mirrormaker/concepts/mirrormaker2-tuning.rst index 9e8e968cff..4acf0fd467 100644 --- a/docs/products/kafka/kafka-mirrormaker/concepts/mirrormaker2-tuning.rst +++ b/docs/products/kafka/kafka-mirrormaker/concepts/mirrormaker2-tuning.rst @@ -41,7 +41,7 @@ To ensure MirrorMaker 2 is up-to-date with message processing, monitor these: 3. **Retrieve latest messages with `kt`**: Use `kt `_ to retrieve the latest messages from all partitions with the following command: - :: + .. code:: kt consume -auth ./mykafka.conf \ -brokers SERVICE-PROJECT.aivencloud.com:PORT \ diff --git a/docs/products/kafka/kafka-mirrormaker/howto/remove-mirrormaker-prefix.rst b/docs/products/kafka/kafka-mirrormaker/howto/remove-mirrormaker-prefix.rst index afee56d1c6..470a19115b 100644 --- a/docs/products/kafka/kafka-mirrormaker/howto/remove-mirrormaker-prefix.rst +++ b/docs/products/kafka/kafka-mirrormaker/howto/remove-mirrormaker-prefix.rst @@ -15,7 +15,7 @@ Remove topic prefix from a replication flow To remove the source cluster alias as topic prefix in an existing replication flow via the :doc:`Aiven CLI ` execute the following command, replacing the ````, ```` and ```` placeholders: -:: +.. code:: avn MirrorMaker replication-flow update \ --source-cluster \ diff --git a/docs/products/kafka/kafka-mirrormaker/howto/setup-mirrormaker-monitoring.rst b/docs/products/kafka/kafka-mirrormaker/howto/setup-mirrormaker-monitoring.rst index 7f94b1429d..4f8dbad1ef 100644 --- a/docs/products/kafka/kafka-mirrormaker/howto/setup-mirrormaker-monitoring.rst +++ b/docs/products/kafka/kafka-mirrormaker/howto/setup-mirrormaker-monitoring.rst @@ -11,7 +11,7 @@ To set up an integration to push Aiven for Apache Kafka MirrorMaker 2 metrics to The following example demonstrates how to push the metrics of an Aiven for Apache Kafka MirrorMaker 2 service named ``mirrormaker-demo`` into an Aiven for InfluxDB service named ``influxdb-demo`` via the :ref:`Aiven CLI `. -:: +.. code:: avn service integration-create \ -t influxdb-demo \ @@ -32,7 +32,7 @@ Other methods to monitor the replication * Monitor the latest messages from all partitions. An example using ``kt`` and ``jq``: -:: +.. 
code:: kt consume -auth ./kafka.conf -brokers service-project.aivencloud.com:24949 \ -topic topicname -offsets all=newest:newest | \ diff --git a/docs/products/kafka/kafka-mirrormaker/reference/terminology.rst b/docs/products/kafka/kafka-mirrormaker/reference/terminology.rst index 2046176290..7373d88490 100644 --- a/docs/products/kafka/kafka-mirrormaker/reference/terminology.rst +++ b/docs/products/kafka/kafka-mirrormaker/reference/terminology.rst @@ -3,18 +3,12 @@ Terminology for Aiven for Apache Kafka® MirrorMaker 2 .. _Terminology MM2ClusterAlias: -Cluster alias - The name alias defined in MirrorMaker 2 for a certain Apache Kafka® source or target cluster. +Cluster alias: The name alias defined in MirrorMaker 2 for a certain Apache Kafka® source or target cluster. .. _Terminology MM2ReplicationFlow: -Replication flow - The flow of data between two Apache Kafka® clusters (called source and target) executed by Apache Kafka® MirrorMaker 2. - One Apache Kafka® MirrorMaker 2 service can execute multiple replication flows. +Replication flow: The flow of data between two Apache Kafka® clusters (called source and target) executed by Apache Kafka® MirrorMaker 2. One Apache Kafka® MirrorMaker 2 service can execute multiple replication flows. .. _Terminology MM2RemoteTopics: -Remote topics - Topics replicated by MirrorMaker 2 from a source Apache Kafka® cluster to a target Apache Kafka® cluster. - There is only one source topic for each remote topic. - Remote topics refer to the source cluster by the topic name prefix: ``{source_cluster_alias}.{source_topic_name}``. +Remote topics: Topics replicated by MirrorMaker 2 from a source Apache Kafka® cluster to a target Apache Kafka® cluster. There is only one source topic for each remote topic. Remote topics refer to the source cluster by the topic name prefix: ``{source_cluster_alias}.{source_topic_name}``. diff --git a/docs/products/m3db/howto/telegraf_local_example.rst b/docs/products/m3db/howto/telegraf_local_example.rst index 566e9e5998..40a394805f 100644 --- a/docs/products/m3db/howto/telegraf_local_example.rst +++ b/docs/products/m3db/howto/telegraf_local_example.rst @@ -30,15 +30,19 @@ To simplify this example, we will install the Telegraf agent on a MacBook to col Of course, Telegraf can also be installed on `Windows and Linux `_ machines. Assuming you have Homebrew installed on a MacBook, simply run the following command at the Terminal -to install Telegraf (https://formulae.brew.sh/formula/telegraf):: +to install Telegraf (https://formulae.brew.sh/formula/telegraf): - brew update && brew install telegraf + .. code:: + + brew update && brew install telegraf Configure Telegraf and integrate it with M3 ------------------------------------------- -Use the Telegraf agent to generate a default configuration file for editing:: +Use the Telegraf agent to generate a default configuration file for editing: + +.. code:: - telegraf config > telegraf.conf + telegraf config > telegraf.conf Modify the ``telegraf.conf`` configuration file to change the output endpoint to that of our M3 instance. @@ -46,7 +50,9 @@ Change the URL under the ``outputs.influxdb`` section to that of your Aiven for **NOTE:** The URL prefix should simply be ``https://`` and remove the ``username:password`` from the URI (see snippet below). Specify the service username/password and set the database name to ``default`` -(the database that is automatically created when your service is provisioned):: +(the database that is automatically created when your service is provisioned): + +.. 
code:: [[outputs.influxdb]] urls = ["https://my-M3-service-my-project.aivencloud.com:24947/api/v1/influxdb"] diff --git a/docs/products/m3db/howto/write-php.rst b/docs/products/m3db/howto/write-php.rst index fb1261b200..c37e8a2407 100644 --- a/docs/products/m3db/howto/write-php.rst +++ b/docs/products/m3db/howto/write-php.rst @@ -40,8 +40,10 @@ Add the following to ``index.php`` and replace the placeholders with values for This code creates an InfluxDBClient and connects to the InfluxDB-literate endpoint on the M3DB. Then the code constructs the expected data format, and writes it to the client. -To run the code:: +To run the code: - php -f index.php +.. code:: + + php -f index.php If the script outputs ``bool(true)`` then there is data in your M3DB. If you'd like to you can take a look at :doc:`grafana` to see how to inspect your data with Grafana®. diff --git a/docs/products/mysql/howto/connect-from-cli.rst b/docs/products/mysql/howto/connect-from-cli.rst index 8682f6ec25..4fbebbf32c 100644 --- a/docs/products/mysql/howto/connect-from-cli.rst +++ b/docs/products/mysql/howto/connect-from-cli.rst @@ -36,13 +36,13 @@ Code Execute the following from a terminal window to connect to the MySQL database: -:: +.. code:: mysqlsh --sql SERVICE_URI You can execute this query to test: -:: +.. code:: MySQL ssl defaultdb SQL> select 1 + 2 as three; +-------+ @@ -90,7 +90,7 @@ This step requires to manually specify individual parameters. You can find those Once you have these parameters, execute the following from a terminal window to connect to the MySQL database: -:: +.. code:: mysql --user avnadmin --password=USER_PASSWORD --host USER_HOST --port USER_PORT DB_NAME diff --git a/docs/products/mysql/howto/connect-with-java.rst b/docs/products/mysql/howto/connect-with-java.rst index 27c35a62fc..f56d9ad6bd 100644 --- a/docs/products/mysql/howto/connect-with-java.rst +++ b/docs/products/mysql/howto/connect-with-java.rst @@ -47,9 +47,11 @@ Add the following to ``MySqlExample.java``: This code creates a MySQL client and connects to the database. It fetches version of MySQL and prints it the output. -Run the code after replacement of the placeholders with values for your project:: +Run the code after replacement of the placeholders with values for your project: - javac MySqlExample.java && java -cp mysql-driver-8.0.28.jar:. MySqlExample -host MYSQL_HOST -port MYSQL_PORT -database MYSQL_DATABASE -username avnadmin -password MYSQL_PASSWORD +.. code:: + + javac MySqlExample.java && java -cp mysql-driver-8.0.28.jar:. MySqlExample -host MYSQL_HOST -port MYSQL_PORT -database MYSQL_DATABASE -username avnadmin -password MYSQL_PASSWORD If the script runs successfully, the output will be the values that were inserted into the table:: diff --git a/docs/products/mysql/howto/connect-with-php.rst b/docs/products/mysql/howto/connect-with-php.rst index 6adc3e548c..2bea1b8ed8 100644 --- a/docs/products/mysql/howto/connect-with-php.rst +++ b/docs/products/mysql/howto/connect-with-php.rst @@ -35,10 +35,14 @@ This code creates a MySQL client and opens a connection to the database. It then .. note:: This example replaces the query string parameter to specify ``sslmode=verify-ca`` to make sure that the SSL certificate is verified, and adds the location of the cert. -Run the following code:: +Run the following code: + +.. 
code:: php index.php -If the script runs successfully, the output is the MySQL version running in your service like:: +If the script runs successfully, the output is the MySQL version running in your service like: + +.. code:: - 8.0.28 + 8.0.28 diff --git a/docs/products/mysql/howto/connect-with-python.rst b/docs/products/mysql/howto/connect-with-python.rst index d7907c6bb5..d9cdd40fee 100644 --- a/docs/products/mysql/howto/connect-with-python.rst +++ b/docs/products/mysql/howto/connect-with-python.rst @@ -45,13 +45,17 @@ Add the following to ``main.py`` and replace the placeholders with values for yo This code creates a MySQL client and connects to the database. It creates a table, inserts some values, fetches them and prints the output. -To run the code:: +To run the code: - python main.py +.. code:: -If the script runs successfully, the output will be the values that were inserted into the table:: + python main.py - [{'id': 1}, {'id': 2}] +If the script runs successfully, the output will be the values that were inserted into the table: + +.. code:: + + [{'id': 1}, {'id': 2}] Now that your application is connected, you are all set to use Python with Aiven for MySQL. diff --git a/docs/products/mysql/howto/migrate-from-external-mysql.rst b/docs/products/mysql/howto/migrate-from-external-mysql.rst index 9466989551..fc02ce499c 100644 --- a/docs/products/mysql/howto/migrate-from-external-mysql.rst +++ b/docs/products/mysql/howto/migrate-from-external-mysql.rst @@ -70,13 +70,17 @@ Perform the migration -c migration.ssl=SRC_SSL \ DEST_NAME -4. Check the migration status via the dedicated ``avn service migration-status`` :ref:`Aiven CLI command `:: +4. Check the migration status via the dedicated ``avn service migration-status`` :ref:`Aiven CLI command `: - avn --show-http service migration-status DEST_NAME + .. code:: + + avn --show-http service migration-status DEST_NAME -Whilst the migration process is ongoing, the ``migration_detail.status`` will be ``syncing``:: - - { +Whilst the migration process is ongoing, the ``migration_detail.status`` will be ``syncing``: + + .. code:: + + { "migration": { "error": null, "method": "replication", @@ -92,7 +96,7 @@ Whilst the migration process is ongoing, the ``migration_detail.status`` will be "status": "syncing" } ] - } + } .. Note:: @@ -102,7 +106,9 @@ Whilst the migration process is ongoing, the ``migration_detail.status`` will be Stop the replication -------------------- -If you reach a point where you no longer need the ongoing replication to happen, you can remove the configuration from the destination service via the ``avn service update`` :ref:`Aiven CLI command `:: +If you reach a point where you no longer need the ongoing replication to happen, you can remove the configuration from the destination service via the ``avn service update`` :ref:`Aiven CLI command `: + +.. code:: - avn service update --remove-option migration DEST_NAME + avn service update --remove-option migration DEST_NAME diff --git a/docs/products/opensearch/howto/migrating_elasticsearch_data_to_aiven.rst b/docs/products/opensearch/howto/migrating_elasticsearch_data_to_aiven.rst index 6bdec15079..50291fbfb4 100644 --- a/docs/products/opensearch/howto/migrating_elasticsearch_data_to_aiven.rst +++ b/docs/products/opensearch/howto/migrating_elasticsearch_data_to_aiven.rst @@ -26,7 +26,7 @@ To migrate or copy data: #. Use the `Aiven CLI client `_ to set the ``reindex.remote.whitelist`` parameter to point to your source Elasticsearch service: - :: + .. 
code:: avn service update your-service-name -c opensearch.reindex_remote_whitelist=your.non-aiven-service.example.com:9200 @@ -44,33 +44,33 @@ To migrate or copy data: #. Export mapping from your source Elasticsearch index. For example, using ``curl``: - :: + .. code:: curl https://avnadmin:yourpassword@os-123-demoprj.aivencloud.com:23125/logs-2024-09-21/_mapping > mapping.json if you have ``jq`` you can run the following or else you need to manually edit ``mapping.json`` to remove the wrapping ``{"logs-2024-09-21":{"mappings": ... }}`` and keep ``{"properties":...}}`` - :: + .. code:: jq .[].mappings mapping.json > src_mapping.json #. Create the empty index on your destination OpenSearch service. - :: + .. code:: curl -XPUT https://avnadmin:yourpassword@os-123-demoprj.aivencloud.com:23125/logs-2024-09-21 #. Import mapping on your destination OpenSearch index. - :: + .. code:: curl -XPUT https://avnadmin:yourpassword@os-123-demoprj.aivencloud.com:23125/logs-2024-09-21/_mapping \ -H 'Content-type: application/json' -T src_mapping.json #. Submit the reindexing request. - :: + .. code:: curl -XPOST https://avnadmin:yourpassword@os-123-demoprj.aivencloud.com:23125/_reindex \ -H 'Content-type: application/json' \ @@ -89,7 +89,7 @@ To migrate or copy data: #. Wait for the reindexing process to finish. If you see a message like the following in the response, check that the host name and port match the ones that you set earlier: - :: + .. code:: [your.non-aiven-service.example.com:9200] not whitelisted in reindex.remote.whitelist diff --git a/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst b/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst index 2db792d010..cbd44def80 100644 --- a/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst +++ b/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst @@ -121,15 +121,17 @@ Using the draft structure of an aggregation we can create a method to calculate ); }; -Run the method from the command line:: +Run the method from the command line: - run-func aggregate averageRating +.. code:: + + run-func aggregate averageRating You'll see a calculated numeric value, the average of all values from the rating field across the documents. -:: +.. code:: - { value: 3.7130597014925373 } + { value: 3.7130597014925373 } ``avg`` is one of many metric aggregation functions offered by OpenSearch. We can also use ``max``, ``min``, ``sum`` and others. @@ -139,7 +141,6 @@ To have a possibility to easily change aggregation function and aggregation fiel * generate name dynamically based on field name * separate the callback function and use the dynamically generated name to print out the result - With these changes our method looks like this: .. code-block:: javascript @@ -177,9 +178,11 @@ With these changes our method looks like this: ); }; -Run the method to make sure that we still can calculate the average rating :: +Run the method to make sure that we still can calculate the average rating: - run-func aggregate metric avg rating +.. code:: + + run-func aggregate metric avg rating And because we like clean code, move and export the ``logAggs`` function from ``helpers.js`` and reference it in ``aggregate.js``. @@ -192,7 +195,7 @@ Other simple metrics We can use the method we created to run other types of metric aggregations, for example, to find what the minimum sodium value is, in any of the recipes: -:: +.. 
code:: run-func aggregate metric min sodium @@ -203,15 +206,15 @@ Cardinality Another interesting single-value metric is ``cardinality``. Cardinality is an estimated number of distinct values found in a field of a document. -For example, by calculating the cardinality of the rating field, you will learn that there are only eight distinct rating values over all 20k recipes. Which makes me suspect that the rating data was added artificially later into the data set. The cardinality of `calories`, `sodium` and `fat` field contain more realistic diversity: +For example, by calculating the cardinality of the rating field, you will learn that there are only eight distinct rating values over all 20k recipes. Which makes me suspect that the rating data was added artificially later into the data set. The cardinality of ``calories``, ``sodium`` and ``fat`` field contain more realistic diversity: -:: +.. code:: - run-func aggregate metric cardinality rating + run-func aggregate metric cardinality rating -:: +.. code:: - { value: 8 } + { value: 8 } Calculating cardinality for sodium and other fields and see what conclusions you can make! @@ -222,21 +225,21 @@ A multi-value aggregation returns an object rather than a single value. An examp Get a set of metrics (``avg``, ``count``, ``max``, ``min`` and ``sum``) by using ``stats`` aggregation type: -:: +.. code:: run-func aggregate metric stats rating -:: +.. code:: { count: 20100, min: 0, max: 5, avg: 3.7130597014925373, sum: 74632.5 } To get additional information, such as standard deviation, variance and bounds, use ``extended_stats``: -:: +.. code:: run-func aggregate metric extended_stats rating -:: +.. code:: { count: 20100, @@ -266,13 +269,13 @@ Percentiles Another example of a multi-value aggregation are ``percentiles``. Percentiles are used to interpret and understand data indicating how a given data point compares to other values in a data set. For example, if you take a test and score on the 80th percentile, it means that you did better than 80% of participants. Similarly, when a provider measures internet usage and peaks, the 90th percentile indicates that 90% of time the usage falls below that amount. -Calculate percentiles for `calories`: +Calculate percentiles for ``calories``: -:: +.. code:: run-func aggregate metric percentiles calories -:: +.. code:: { values: { @@ -288,11 +291,11 @@ Calculate percentiles for `calories`: From the returned result you can see that 50% of recipes have less than 331 calories. Interestingly, only one percent of the meals is more than 3256 calories. You must be curious what falls within that last percentile ;) Now that we know the value to look for, we can use `a range query `_ to find the recipes. Set the minimum value, but keep the maximum empty to allow no bounds: -:: +.. code:: run-func search range calories 3256 -:: +.. code:: [ 'Ginger Crunch Cake with Strawberry Sauce ', @@ -356,12 +359,16 @@ We use ``range`` aggregation and add a property ``ranges`` to describe how we wa ); }; -Run it with :: +Run it with: + +.. code:: - run-func aggregate sodiumRange + run-func aggregate sodiumRange -And then check the results:: +And then check the results: +.. code:: + { buckets: [ { key: '*-500.0', to: 500, doc_count: 10411 }, @@ -418,16 +425,22 @@ However, our method is narrowed down to a specific scenario. We want to refactor ); }; -To make sure that the upgraded function works just like the one one, run:: +To make sure that the upgraded function works just like the one one, run: + +.. 
code:: + + run-func aggregate range sodium 500 1000 + +Now you can run the method with other fields and custom ranges, for example, split recipes into buckets based on values in the field ``fat``: - run-func aggregate range sodium 500 1000 +.. code:: -Now you can run the method with other fields and custom ranges, for example, split recipes into buckets based on values in the field `fat`:: + run-func aggregate range fat 1 5 10 30 50 100 - run-func aggregate range fat 1 5 10 30 50 100 +The returned buckets are: -The returned buckets are:: +.. code:: { buckets: [ @@ -441,7 +454,7 @@ The returned buckets are:: ] } -Why not experiment more with the range aggregation? We still have `protein` values, and can also play with the values for the ranges to learn more about recipes from our dataset. +Why not experiment more with the range aggregation? We still have ``protein`` values, and can also play with the values for the ranges to learn more about recipes from our dataset. Buckets for every unique value ------------------------------ @@ -449,7 +462,7 @@ Buckets for every unique value Sometimes we want to divide the data into buckets, where each bucket corresponds to a unique value present in a field. This type of aggregations is called ``terms`` aggregation and is helpful when we need to have more granular understanding of a dataset. For example, we can learn how many recipes belong to each category. -The structure of the method for `terms aggregation` will be similar to what we wrote for the ranges, with a couple of differences: +The structure of the method for ``terms aggregation`` will be similar to what we wrote for the ranges, with a couple of differences: * use aggregation type ``terms`` * use an optional property ``size``, which specifies the upper limit of the buckets we want to create. @@ -481,11 +494,15 @@ The structure of the method for `terms aggregation` will be similar to what we w ); }; -To get the buckets created for different categories run:: +To get the buckets created for different categories run: - run-func aggregate terms categories.keyword +.. code:: -Here are the resulting delicious categories:: + run-func aggregate terms categories.keyword + +Here are the resulting delicious categories: + +.. code:: { doc_count_error_upper_bound: 0, @@ -506,9 +523,11 @@ Here are the resulting delicious categories:: We can see a couple of interesting things in the response. First, there were just 10 buckets created, each of which contains ``doc_count`` indicating number of recipes within particular category. Second, ``sum_other_doc_count`` is the sum of documents which are left out of response, this number is high because almost every recipe is assigned to more than one category. -We can increase the number of created buckets by using the ``size`` property:: +We can increase the number of created buckets by using the ``size`` property: + +.. code:: - run-func aggregate terms categories.keyword 30 + run-func aggregate terms categories.keyword 30 Now the list of buckets contains 30 items. @@ -519,7 +538,7 @@ Did you notice that the buckets created with the help of ``terms`` aggregation a You can use the ``rare_terms`` aggregation! This creates a set of buckets sorted by number of documents in ascending order. As a result, the most rarely used items will be at the top of the response. 
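Before moving on to ``rare_terms``, it can help to see the raw search request that a ``terms`` aggregation like the one above boils down to. The following is only an illustrative sketch: ``OPENSEARCH_URI`` and the ``recipes`` index name are placeholders, not values taken from the demo project.

.. code::

    # Roughly the request behind "run-func aggregate terms categories.keyword 30":
    # a terms aggregation on categories.keyword, with document hits suppressed via "size": 0.
    curl -H "Content-Type: application/json" \
      OPENSEARCH_URI/recipes/_search \
      -d '{
        "size": 0,
        "aggs": {
          "top-categories": {
            "terms": { "field": "categories.keyword", "size": 30 }
          }
        }
      }'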
-``rare_terms`` request is very similar to ``terms``, however, instead of `size` property which defines total number of created buckets, ``rare_terms`` relies on ``max_doc_count``, which sets upper limit for number of documents per bucket. +``rare_terms`` request is very similar to ``terms``, however, instead of ``size`` property which defines total number of created buckets, ``rare_terms`` relies on ``max_doc_count``, which sets upper limit for number of documents per bucket. .. code-block:: javascript @@ -549,7 +568,7 @@ You can use the ``rare_terms`` aggregation! This creates a set of buckets sorted }; -:: +.. code:: run-func aggregate rareTerms categories.keyword 3 @@ -558,7 +577,7 @@ The result will return us all the categories with at most three documents each. Histograms ---------- -The story of bucket aggregations won't be complete without speaking about histograms. Histograms aggregate date based on provided interval. And since we have a `date` property, we'll build a date histogram. +The story of bucket aggregations won't be complete without speaking about histograms. Histograms aggregate date based on provided interval. And since we have a ``date`` property, we'll build a date histogram. The format of the histogram aggregation is similar to what we saw so far, so we can create a new method almost identical to previous ones: @@ -589,13 +608,15 @@ The format of the histogram aggregation is similar to what we saw so far, so we ); }; -Values for the interval field can be from `minute` up to a `year`. +Values for the interval field can be from ``minute`` up to a ``year``. -:: +.. code:: run-func aggregate dateHistogram date year -The results when we use a year:: +The results when we use a year: + +.. code:: { buckets: [ @@ -689,11 +710,15 @@ When put these pieces together we can write this method: ); }; -Run it on the command line:: +Run it on the command line: + +.. code:: + + run-func aggregate movingAverage - run-func aggregate movingAverage +The returned data for every year including a value ``moving_average``: -The returned data for every year including a value ``moving_average``:: +.. code:: [ { @@ -764,7 +789,7 @@ We used one of existing built-in functions ``MovingFunctions.unweightedAvg(value You can also use other functions such as max(), min(), stdDev() and sum(). Additionally, you can write your own functions, such as -:: +.. code:: moving_fn: { script: "return values.length === 1 ? 1 : 0" diff --git a/docs/products/opensearch/howto/opensearch-alerting-api.rst b/docs/products/opensearch/howto/opensearch-alerting-api.rst index 016cb33268..c856886736 100644 --- a/docs/products/opensearch/howto/opensearch-alerting-api.rst +++ b/docs/products/opensearch/howto/opensearch-alerting-api.rst @@ -10,8 +10,8 @@ We are using a ``sample-host-health`` index as datasource to create a simple ale OpenSearch API Alerting API URL can be copied from Aiven console: -Click the **Overview** tab -> **OpenSearch** under `Connection Information` -> **Service URI** -append ``_plugins/_alerting/monitors`` to the **Service URI** +Click the **Overview** tab > **OpenSearch** under ``Connection Information`` > **Service URI** +append ``_plugins/_alerting/monitors`` to the **Service URI**. 
Example: @@ -30,5 +30,5 @@ Use ``curl`` to create the alert https://username:password@os-name-myproject.aivencloud.com:24947/_plugins/_alerting/monitors \ -H 'Content-type: application/json' -T cpu_alert.json -* The required JSON request format can be found in `OpenSearch Alerting API documentation `_ +* The required JSON request format can be found in `OpenSearch Alerting API documentation `_ diff --git a/docs/products/opensearch/howto/opensearch-and-nodejs.rst b/docs/products/opensearch/howto/opensearch-and-nodejs.rst index 55cbdf6428..0df3b0812a 100644 --- a/docs/products/opensearch/howto/opensearch-and-nodejs.rst +++ b/docs/products/opensearch/howto/opensearch-and-nodejs.rst @@ -134,7 +134,7 @@ One of the examples of a term-level query is searching for all entries containin ); }; -:: +.. code:: run-func search term sodium 0 @@ -174,7 +174,7 @@ When dealing with numeric values, naturally we want to be able to search for cer ); }; -:: +.. code:: run-func search range sodium 0 10 @@ -216,7 +216,7 @@ When searching for terms inside text fields, we can take into account typos and See if you can find recipes with misspelled pineapple 🍍 -:: +.. code:: run-func search fuzzy title pinapple 2 @@ -255,7 +255,7 @@ To see ``match`` in action use the method below to search for "Tomato garlic sou ); }; -:: +.. code:: run-func search match title 'Tomato-garlic soup with dill' @@ -300,7 +300,7 @@ When the order of the words is important, use ``match_phrase`` instead of ``matc We can use this method to find some recipes for pizza with pineapple. I learned from my Italian colleague that this considered a combination only for tourists, not a true pizza recipe. We'll do it by searching the ``directions`` field for words "pizza" and "pineapple" with top-most distance of 10 words in between. -:: +.. code:: run-func search slop directions "pizza pineapple" 10 @@ -345,7 +345,7 @@ This example also sets ``size`` to demonstrate how we can get more than 10 resul To find recipes with tomato, salmon or tuna and no onion run this query: -:: +.. code:: run-func search query ingredients "(salmon|tuna) +tomato -onion" 100 @@ -389,7 +389,7 @@ In the next method we combine what we learned so far, using both term-level and ); }; -:: +.. code:: run-func search boolean diff --git a/docs/products/opensearch/howto/opensearch-search-and-python.rst b/docs/products/opensearch/howto/opensearch-search-and-python.rst index 915656bb4e..e8736cd30a 100644 --- a/docs/products/opensearch/howto/opensearch-search-and-python.rst +++ b/docs/products/opensearch/howto/opensearch-search-and-python.rst @@ -23,7 +23,7 @@ We use ``Typer`` Python `library `_ to create CLI co 1. Clone the repository and install the dependencies -:: +.. code:: git clone https://github.com/aiven/demo-opensearch-python pip install -r requirements.txt @@ -60,15 +60,19 @@ Upload data to OpenSearch using Python Once you're connected, the next step should be to :ref:`inject data into our cluster `. This is done in our demo with the `load_data function `__. -You can inject the data to your cluster by running:: +You can inject the data to your cluster by running: - python index.py load-data +.. code:: + + python index.py load-data Once the data is loaded, we can :ref:`retrieve the data mapping ` to explore the structure of the data, with their respective fields and types. You can find the code implementation in the `get_mapping function `__. -Check the structure of your data by running:: +Check the structure of your data by running: - python index.py get-mapping +.. 
code:: + + python index.py get-mapping You should be able to see the fields' output: @@ -121,10 +125,11 @@ Use the ``search()`` method You have an OpenSearch client and data injected in your cluster, so you can start writing search queries. Python OpenSearch client has a handy method called ``search()``, which we'll use to run our queries. -We can check the method signature to understand the function and which parameters we'll use. As you can see, all the parameters are optional in the ``search()`` method. Find below the method signature:: +We can check the method signature to understand the function and which parameters we'll use. As you can see, all the parameters are optional in the ``search()`` method. Find below the method signature: - client.search: (body=None, index=None, doc_type=None, params=None, headers=None) +.. code:: + client.search: (body=None, index=None, doc_type=None, params=None, headers=None) To run the search queries, we'll use two of these parameters - ``index`` and ``body``: @@ -161,9 +166,11 @@ For the **Query DSL**, the field ``body`` expects a dictionary object which can } } -In this example, we are searching for "Garlic-Lemon" across ``title`` and ``ingredients`` fields. Try out yourself using our demo:: +In this example, we are searching for "Garlic-Lemon" across ``title`` and ``ingredients`` fields. Try out yourself using our demo: - python search.py multi-match title ingredients Garlic-Lemon +.. code:: + + python search.py multi-match title ingredients Garlic-Lemon Check what comes out from this interesting combination 🧄 🍋 : @@ -244,7 +251,7 @@ This is possible because `full-text queries `__. +The default standard analyzer drops most punctuation, breaks up text into individual words, and lower cases them to optimize the search. If you want to choose a different analyzer, check out the available ones in the `OpenSearch documentation `__. You can find out how a customized match query can be written with your Python OpenSearch client in the `search_match() `__ function. You can run yourself the code to explore the ``match`` function. For example, if you want to find out recipes with the name "Spring" on them: @@ -271,7 +278,7 @@ As a result of the "Spring" search recipes, you'll find: .. seealso:: - Find out more about `match queries `_. + Find out more about `match queries `_. Use a ``multi_match`` query --------------------------- @@ -292,7 +299,7 @@ In our demo, we have a function called `search_multi_match() `_ as: -:: +.. code:: python search.py multi-match title ingredients lemon @@ -323,7 +330,7 @@ If you know exactly which phrases you're looking for, you can try out our ``matc For example, try searching for ``pannacotta with lemon marmalade`` in the title: -:: +.. code:: python search.py match-phrase title "Pannacotta with lemon marmalade" @@ -337,7 +344,7 @@ Match phrases and add some ``slop`` You can use the ``slop`` parameter to create more flexible searches. Suppose you're searching for ``pannacotta marmalade`` with the ``match_phrase`` query, and no results are found. This happens because you are looking for exact phrases, as discussed in :ref:`match phrase query ` section. You can expand your searches by configuring the ``slop`` parameter. The default value for the ``slop`` parameter is 0. 
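To make the effect of ``slop`` more concrete, this is roughly the query body that a ``match_phrase`` search with a non-zero ``slop`` sends to the cluster. It is a sketch only: ``OPENSEARCH_URI`` and ``INDEX_NAME`` are illustrative placeholders rather than names from the demo repository.

.. code::

    # A match_phrase query that tolerates up to two positional moves between the terms.
    curl -H "Content-Type: application/json" \
      OPENSEARCH_URI/INDEX_NAME/_search \
      -d '{
        "query": {
          "match_phrase": {
            "title": { "query": "pannacotta marmalade", "slop": 2 }
          }
        }
      }'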
-The ``slop`` parameter allows to control the degree of disorder in your search as explained in the `OpenSearch documentation for the slop feature `_: +The ``slop`` parameter allows to control the degree of disorder in your search as explained in the `OpenSearch documentation for the slop feature `_: ``slop`` is the number of other words allowed between words in the query phrase. For example, to switch the order of two words requires two moves (the first move places the words atop one another), so to permit re-orderings of phrases, the slop must be at least two. A value of zero requires an exact match. @@ -374,13 +381,13 @@ So with ``slop`` parameter adjusted, you're may be able to find results even wit .. seealso:: - Read more about ``slop`` parameter on the `OpenSearch project specifications `_. + Read more about ``slop`` parameter on the `OpenSearch project specifications `_. Use a ``term`` query -------------------- -If you want results with a precise value in a ``field``, the `term query `_ is the right choice. The term query can be used to find documents according to a precise value such as a price or product ID, for example. +If you want results with a precise value in a ``field``, the `term query `_ is the right choice. The term query can be used to find documents according to a precise value such as a price or product ID, for example. This query can be constructed as: @@ -402,7 +409,7 @@ You can look the `search_term() `_. + See more about the range query in the `OpenSearch documentation `_. .. _fuzzy-query: diff --git a/docs/products/opensearch/howto/opensearch-with-curl.rst b/docs/products/opensearch/howto/opensearch-with-curl.rst index 9ce9dbb2b3..9a930f223c 100644 --- a/docs/products/opensearch/howto/opensearch-with-curl.rst +++ b/docs/products/opensearch/howto/opensearch-with-curl.rst @@ -18,9 +18,11 @@ Variable Description Connect to OpenSearch --------------------- -Connect to your service with:: +Connect to your service with: - curl OPENSEARCH_URI +.. code:: + + curl OPENSEARCH_URI If the connection is successful, one of the nodes in your cluster will respond with some information including: @@ -36,13 +38,17 @@ OpenSearch groups data into an index rather than a table. Create an index ''''''''''''''' -Create an index by making a ``PUT`` call to it:: +Create an index by making a ``PUT`` call to it: - curl -X PUT OPENSEARCH_URI/shopping-list +.. code:: + + curl -X PUT OPENSEARCH_URI/shopping-list The response should have status 200 and the body data will have ``acknowledged`` set to true. -If you already know something about the fields that will be in the documents you'll store, you can create an index with mappings to describe those known fields:: +If you already know something about the fields that will be in the documents you'll store, you can create an index with mappings to describe those known fields: + +.. code:: curl -X PUT -H "Content-Type: application/json" \ OPENSEARCH_URI/shopping-list \ @@ -60,15 +66,19 @@ This example creates the shopping list example but adds information to help the List of indices ''''''''''''''' -To list the indices do:: +To list the indices do: + +.. code:: - curl OPENSEARCH_URI/_cat/indices + curl OPENSEARCH_URI/_cat/indices Add an item to the index '''''''''''''''''''''''' -OpenSearch is a document database so there is no enforced schema structure for the data you store. To add an item, ``POST`` the JSON data that should be stored:: +OpenSearch is a document database so there is no enforced schema structure for the data you store. 
To add an item, ``POST`` the JSON data that should be stored: + +.. code:: curl -H "Content-Type: application/json" \ OPENSEARCH_URI/shopping-list/_doc \ @@ -77,7 +87,9 @@ OpenSearch is a document database so there is no enforced schema structure for t "quantity": 2 }' -Other data fields don't need to match in format:: +Other data fields don't need to match in format: + +.. code:: curl -H "Content-Type: application/json" \ OPENSEARCH_URI/shopping-list/_doc \ @@ -115,15 +127,19 @@ Search results include some key fields to look at when you try this example: Simple search ''''''''''''' -For the most simple search to match a string, you can use:: +For the most simple search to match a string, you can use: + +.. code:: - curl OPENSEARCH_URI/_search?q=apple + curl OPENSEARCH_URI/_search?q=apple Advanced search options ''''''''''''''''''''''' -For more advanced searches, you can send a more detailed payload to specify which fields to search among other options:: +For more advanced searches, you can send a more detailed payload to specify which fields to search among other options: +.. code:: + curl -H "Content-Type: application/json" \ OPENSEARCH_URI/_search \ -d '{ diff --git a/docs/products/opensearch/howto/resolve-shards-too-large.rst b/docs/products/opensearch/howto/resolve-shards-too-large.rst index 77c99c1739..8b8fb8c20f 100644 --- a/docs/products/opensearch/howto/resolve-shards-too-large.rst +++ b/docs/products/opensearch/howto/resolve-shards-too-large.rst @@ -16,7 +16,7 @@ When dealing with excessively large shards, you can consider the one of the foll ````````````````````````````````` If your application permits, permanently delete records, such as old logs or unnecessary records, from your index. For example, to delete records older than five days, use the following query: -:: +.. code:: POST /my-index/_delete_by_query { @@ -34,9 +34,10 @@ If your application permits, permanently delete records, such as old logs or unn 2. Re-index into several small indices ``````````````````````````````````````` -You can split your index into several smaller indices based on certain criteria. For example, to create an index for each ``event_type``, you can use following script:: - +You can split your index into several smaller indices based on certain criteria. For example, to create an index for each ``event_type``, you can use following script: +.. code:: + POST _reindex { diff --git a/docs/products/opensearch/howto/set_index_retention_patterns.rst b/docs/products/opensearch/howto/set_index_retention_patterns.rst index b5d96c460a..f6874b746d 100644 --- a/docs/products/opensearch/howto/set_index_retention_patterns.rst +++ b/docs/products/opensearch/howto/set_index_retention_patterns.rst @@ -16,8 +16,10 @@ To create cleanup patterns for OpenSearch indices: #. Enter the pattern that you want to use and the maximum index count for the pattern, then select **Create**. -Alternatively, you can use our `API `_ with a request similar to the following:: +Alternatively, you can use our `API `_ with a request similar to the following: - curl -X PUT --data '{"user_config":{"index_patterns": [{"pattern": "logs*", "max_index_count": 2},{"pattern":"test.?", "max_index_count": 3}]}' header "content-type: application-json" --header "authorization: aivenv1 " https://api.aiven.io/v1beta/project//service/ +.. 
code:: + + curl -X PUT --data '{"user_config":{"index_patterns": [{"pattern": "logs*", "max_index_count": 2},{"pattern":"test.?", "max_index_count": 3}]}' header "content-type: application-json" --header "authorization: aivenv1 " https://api.aiven.io/v1beta/project//service/ diff --git a/docs/products/opensearch/howto/setup-cross-cluster-replication-opensearch.rst b/docs/products/opensearch/howto/setup-cross-cluster-replication-opensearch.rst index 4e8c87e995..199e776ccc 100644 --- a/docs/products/opensearch/howto/setup-cross-cluster-replication-opensearch.rst +++ b/docs/products/opensearch/howto/setup-cross-cluster-replication-opensearch.rst @@ -26,7 +26,7 @@ Follow these steps to set up :doc:`cross cluster replication <../concepts/cross- * Add additional disk storage based on your business requirements 4. Select **Create**. -The follower cluster service will be in a `Rebuilding` state, and, once complete, the follower cluster will be ready to pull all data and indexes from the leader service. +The follower cluster service will be in a ``Rebuilding`` state, and, once complete, the follower cluster will be ready to pull all data and indexes from the leader service. .. note:: To learn about the current limitations with cross cluster replications for Aiven for OpenSearch, see the :ref:`Limitations ` section. diff --git a/docs/products/opensearch/reference/restapi-limited-access.rst b/docs/products/opensearch/reference/restapi-limited-access.rst index dd289d9d5c..253fdf78ad 100644 --- a/docs/products/opensearch/reference/restapi-limited-access.rst +++ b/docs/products/opensearch/reference/restapi-limited-access.rst @@ -4,7 +4,7 @@ For operational reasons, Aiven for OpenSearch® limits access to REST API endpoi The following endpoints are allowed: -:: +.. code:: GET /_cluster/health GET /_cluster/pending_tasks @@ -17,7 +17,7 @@ The following endpoints are allowed: The following API endpoint hierarchies are blocked: -:: +.. code:: /_cat/repositories /_cluster diff --git a/docs/products/postgresql/concepts/pg-shared-buffers.rst b/docs/products/postgresql/concepts/pg-shared-buffers.rst index 13fdaf5148..bafbf57875 100644 --- a/docs/products/postgresql/concepts/pg-shared-buffers.rst +++ b/docs/products/postgresql/concepts/pg-shared-buffers.rst @@ -116,7 +116,7 @@ Calculate how many blocks from tables (r), indexes (i), sequences (S), and other ---------+---------+----------+-----------------+--------------------- records | r | 781 MB | 99.7 | 27.2 -Relations with object IDs (``oid``) below `16384` are reserved system objects. +Relations with object IDs (``oid``) below ``16384`` are reserved system objects. Inspecting the query cache performance -------------------------------------- diff --git a/docs/products/postgresql/howto/enable-jit.rst b/docs/products/postgresql/howto/enable-jit.rst index 29e1c26426..b4bf5ad38c 100644 --- a/docs/products/postgresql/howto/enable-jit.rst +++ b/docs/products/postgresql/howto/enable-jit.rst @@ -21,7 +21,7 @@ To enable JIT in the `Aiven console `_, take the foll To enable JIT via :doc:`Aiven CLI `, you can use the :ref:`service update command `: -:: +.. code:: avn service update -c pg.jit=true PG_SERVICE_NAME @@ -32,13 +32,13 @@ You might not want to use JIT for most simple queries since it would increase th 1. Connect to the database where you want to enable JIT. E.g. with ``psql`` and the service URI available in the Aiven for PostgreSQL service overview console page -:: +.. code:: psql PG_CONNECTION_URI 2. 
Alter the database (in the example ``mytestdb``) and enable JIT -:: +.. code:: alter database mytestdb set jit=on; @@ -53,13 +53,13 @@ JIT can be enabled also for a specific user: 1. Connect to the database where you want to enable JIT using, for example, ``psql`` and the service URI available in `Aiven Console `_ > the **Overview** page of your Aiven for PostgreSQL service. -:: +.. code:: psql PG_CONNECTION_URI 2. Alter the role (in the example: ``mytestrole``), and enable JIT. -:: +.. code:: alter role mytestrole set jit=on; @@ -69,13 +69,13 @@ JIT can be enabled also for a specific user: 3. Start a new session with the role, and check that JIT is running. -:: +.. code:: show jit; The result should be: -:: +.. code:: jit ----- @@ -84,7 +84,7 @@ The result should be: 4. Run a simple query to test JIT is applied properly. -:: +.. code:: defaultdb=> explain analyze select sum(row) from table; QUERY PLAN diff --git a/docs/products/postgresql/howto/manage-extensions.rst b/docs/products/postgresql/howto/manage-extensions.rst index 51047c2a5c..9cc79bf740 100644 --- a/docs/products/postgresql/howto/manage-extensions.rst +++ b/docs/products/postgresql/howto/manage-extensions.rst @@ -6,17 +6,21 @@ Aiven for PostgreSQL® allows a series of pre-approved extensions to be installe Install an extension -------------------- -Any available extension can be installed by the ``avnadmin`` user with the following ``CREATE EXTENSION`` command:: +Any available extension can be installed by the ``avnadmin`` user with the following ``CREATE EXTENSION`` command: - CREATE EXTENSION CASCADE; +.. code:: + + CREATE EXTENSION CASCADE; Update an extension ------------------- -To upgrade an already-installed extension to the latest version available, run as the ``avnadmin`` user:: +To upgrade an already-installed extension to the latest version available, run as the ``avnadmin`` user: + +.. code:: - ALTER EXTENSION UPDATE; + ALTER EXTENSION UPDATE; If you want to experiment with upgrading, remember that you can fork your existing database to try this operation on a copy rather than your live database. @@ -31,5 +35,6 @@ We are always open to suggestions of additional extensions that could be useful * which database service and user database should have them .. warning:: - "Untrusted" language extensions such as ``plpythonu`` cannot be supported as they would compromise our ability to guarantee the highest possible service level. + + "Untrusted" language extensions such as ``plpythonu`` cannot be supported as they would compromise our ability to guarantee the highest possible service level. diff --git a/docs/products/postgresql/howto/migrate-aiven-db-migrate.rst b/docs/products/postgresql/howto/migrate-aiven-db-migrate.rst index 984b1e65f4..968f961c35 100644 --- a/docs/products/postgresql/howto/migrate-aiven-db-migrate.rst +++ b/docs/products/postgresql/howto/migrate-aiven-db-migrate.rst @@ -34,9 +34,11 @@ In order to use the **logical replication** method, you'll need the following: * An available replication slot on the destination cluster for each database migrated from the source cluster. -1. If you don't have an Aiven for PostgreSQL database yet, run the following command to create a couple of PostgreSQL services via :doc:`../../../tools/cli` substituting the parameters accordingly:: - - avn service create -t pg -p DEST_PG_PLAN DEST_PG_NAME +1. 
If you don't have an Aiven for PostgreSQL database yet, run the following command to create a couple of PostgreSQL services via :doc:`../../../tools/cli` substituting the parameters accordingly: + + .. code:: + + avn service create -t pg -p DEST_PG_PLAN DEST_PG_NAME 2. Enable the ``aiven_extras`` extension in the Aiven for PostgreSQL® target database as written in the :ref:`dedicated document `. @@ -47,13 +49,14 @@ In order to use the **logical replication** method, you'll need the following: * :doc:`Google Cloud SQL <./logical-replication-gcp-cloudsql>` .. Note:: - Aiven for PostgreSQL has ``wal_level`` set to ``logical`` by default + + Aiven for PostgreSQL has ``wal_level`` set to ``logical`` by default To review the current ``wal_level``, run the following command on the source cluster via ``psql`` .. code:: sql - show wal_level; + show wal_level; .. _pg_migrate_wal: @@ -115,10 +118,13 @@ You can check the migration status using the :doc:`Aiven CLI <../../../tools/cli .. Note:: - There maybe delay for migration status to update the current progress, keep running this command to see the most up-to-date status. + + There may be delay for migration status to update the current progress, keep running this command to see the most up-to-date status. + +The output should be similar to the following, which mentions that the ``pg_dump`` migration of the ``defaultdb`` database is ``done`` and the logical ``replication`` of the ``has_aiven_extras`` database is syncing: -The output should be similar to the following, which mentions that the ``pg_dump`` migration of the ``defaultdb`` database is ``done`` and the logical ``replication`` of the ``has_aiven_extras`` database is syncing:: +.. code:: -----Response Begin----- { @@ -149,7 +155,8 @@ The output should be similar to the following, which mentions that the ``pg_dump .. Note:: - The overall ``method`` field is left empty due to the mixed methods used to migrate each database. + + The overall ``method`` field is left empty due to the mixed methods used to migrate each database. Stop the migration process using the Aiven CLI '''''''''''''''''''''''''''''''''''''''''''''' diff --git a/docs/products/postgresql/howto/migrate-pg-dump-restore.rst b/docs/products/postgresql/howto/migrate-pg-dump-restore.rst index 00b8e1f31e..bad602f10f 100644 --- a/docs/products/postgresql/howto/migrate-pg-dump-restore.rst +++ b/docs/products/postgresql/howto/migrate-pg-dump-restore.rst @@ -42,9 +42,11 @@ Perform the migration Aiven automatically creates a ``defaultdb`` database and ``avnadmin`` user account, which are used by default. -2. Run the ``pg_dump`` command substituting the ``SRC_SERVICE_URI`` with the service URI of your source PostgreSQL service, and ``DUMP_FOLDER`` with the folder where you want to store the dump in:: +2. Run the ``pg_dump`` command substituting the ``SRC_SERVICE_URI`` with the service URI of your source PostgreSQL service, and ``DUMP_FOLDER`` with the folder where you want to store the dump in: - pg_dump -d 'SRC_SERVICE_URI' --jobs 4 --format directory -f DUMP_FOLDER + .. code:: + + pg_dump -d 'SRC_SERVICE_URI' --jobs 4 --format directory -f DUMP_FOLDER The ``--jobs`` option in this command instructs the operation to use 4 CPUs to dump the database. Depending on the number of CPUs you have available, you can use this option to adjust the performance to better suit your server. 
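As a rough variant of the command above — a minimal sketch, assuming the dump runs on a Linux client where ``nproc`` is available — you could size ``--jobs`` to the CPU count of the machine running ``pg_dump``; ``SRC_SERVICE_URI`` and ``DUMP_FOLDER`` are the same placeholders as above:

.. code::

   # Sketch only: match dump parallelism to the client machine's CPU count.
   # The directory format is kept because parallel dumps require it.
   pg_dump -d 'SRC_SERVICE_URI' --jobs "$(nproc)" --format directory -f DUMP_FOLDER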
diff --git a/docs/products/postgresql/howto/migrate-using-bucardo.rst b/docs/products/postgresql/howto/migrate-using-bucardo.rst index 7bf2b285ef..2c57e968ea 100644 --- a/docs/products/postgresql/howto/migrate-using-bucardo.rst +++ b/docs/products/postgresql/howto/migrate-using-bucardo.rst @@ -54,7 +54,7 @@ To migrate your data using Bucardo: b. In line 5359 in `Bucardo.pm `_, change ``SET session_replication_role = default`` to the following: - :: + .. code:: $dbh->do(q{select aiven_extras.session_replication_role('replica');}); @@ -63,7 +63,7 @@ To migrate your data using Bucardo: d. On line 5428, change ``SET session_replication_role = default`` to the following: - :: + .. code:: $dbh->do(q{select aiven_extras.session_replication_role('origin');}); @@ -72,7 +72,7 @@ To migrate your data using Bucardo: #. | Add your source and destination databases. | For example: - :: + .. code:: bucardo add db srcdb dbhost=0.0.0.0 dbport=5432 dbname=all_your_base dbuser=$DBUSER dbpass=$DBPASS @@ -80,7 +80,7 @@ To migrate your data using Bucardo: #. Add the tables that you want to replicate: - :: + .. code:: bucardo add table belong to us herd=$HERD db=srcdb @@ -88,7 +88,7 @@ To migrate your data using Bucardo: #. Dump and restore the database from your source to your Aiven service: - :: + .. code:: pg_dump --schema-only --no-owner all_your_base > base.sql psql "$AIVEN_DB_URL" < base.sql @@ -98,7 +98,7 @@ To migrate your data using Bucardo: #. Create the ``dbgroup`` for Bucardo: - :: + .. code:: bucardo add dbgroup src_to_dest srcdb:source destdb:target bucardo add sync sync_src_to_dest relgroup=$HERD db=srcdb,destdb diff --git a/docs/products/postgresql/howto/monitor-database-with-datadog.rst b/docs/products/postgresql/howto/monitor-database-with-datadog.rst index 44b07fc189..7ba8c3b5dd 100644 --- a/docs/products/postgresql/howto/monitor-database-with-datadog.rst +++ b/docs/products/postgresql/howto/monitor-database-with-datadog.rst @@ -13,7 +13,7 @@ To use Datadog Database Monitoring with your Aiven for PostgreSQL® services, yo * Ensure the :doc:`Datadog Metrics integration ` is enabled. * The :doc:`PostgreSQL extensions <../reference/list-of-extensions>` - ``pg_stat_statements`` and ``aiven_extras``, must be enabled by executing the following `CREATE EXTENSION `_ SQL commands directly on the Aiven for PostgreSQL® database service. -:: +.. code:: CREATE EXTENSION pg_stat_statements; CREATE EXTENSION aiven_extras; @@ -33,17 +33,21 @@ Using the ``avn service integration-list`` :ref:`Aiven CLI command -* Check if user-config ``datadog_dbm_enabled`` set correctly:: +* Check if user-config ``datadog_dbm_enabled`` set correctly: + + .. code:: - avn service integration-list \ + avn service integration-list \ --project \ --json | jq '.[] | select(.integration_type=="datadog").user_config' - ``datadog_dbm_enabled`` should be set to ``true``:: + ``datadog_dbm_enabled`` should be set to ``true``: + + .. code:: - { - "datadog_dbm_enabled": true - } + { + "datadog_dbm_enabled": true + } Executing the steps successfully results in enabling Datadog Database Monitoring for your service. 
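If ``datadog_dbm_enabled`` turns out to be missing or ``false``, one possible fix — a sketch only, assuming ``INTEGRATION_ID`` stands for the Datadog integration ID reported by ``avn service integration-list`` — is to set the flag on the existing integration with ``avn service integration-update``:

.. code::

   # Sketch only: INTEGRATION_ID is a placeholder for the Datadog integration ID.
   avn service integration-update INTEGRATION_ID \
       -c datadog_dbm_enabled=true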
diff --git a/docs/products/postgresql/howto/pg-long-running-queries.rst b/docs/products/postgresql/howto/pg-long-running-queries.rst index 0aa9e90a31..b5884004e6 100644 --- a/docs/products/postgresql/howto/pg-long-running-queries.rst +++ b/docs/products/postgresql/howto/pg-long-running-queries.rst @@ -20,31 +20,41 @@ Terminate long running queries from the Aiven Console Detect and terminate long running queries with ``psql`` ------------------------------------------------------- -You can :doc:`login to your service <./connect-psql>` by running on the terminal ``psql ``. Once connected, you can call the following function on the ``psql`` shell to terminate a query manually:: +You can :doc:`login to your service <./connect-psql>` by running on the terminal ``psql ``. Once connected, you can call the following function on the ``psql`` shell to terminate a query manually: - SELECT pg_terminate_backend(pid); +.. code:: + + SELECT pg_terminate_backend(pid); You can learn more about the ``pg_terminate_backend()`` function from the `official documentation `_. -You can then use the following query to monitor currently running queries:: +You can then use the following query to monitor currently running queries: + +.. code:: - SELECT * FROM pg_stat_activity WHERE state <> 'idle'; + SELECT * FROM pg_stat_activity WHERE state <> 'idle'; Client applications can use the ``statement_timeout`` session variable to voluntarily request the server to automatically cancel any query using the current connection that runs over a specified length of time. For example, the following would cancel any query that runs for more 15 seconds automatically:: - SET statement_timeout = 15000 +.. code:: + + SET statement_timeout = 15000 You may check the `client connection defaults `_ documentation for more information on the available session variables. Database user error ------------------- -If you run the above command using a database user not being a member of the database you're connecting to, you will encounter the error:: +If you run the above command using a database user not being a member of the database you're connecting to, you will encounter the error: + +.. code:: + + ERROR: must be a member of the role whose process is being terminated or member of pg_signal_backend - ERROR: must be a member of the role whose process is being terminated or member of pg_signal_backend +You can check the roles assigned to each user with the following command: -You can check the roles assigned to each user with the following command:: +.. code:: SELECT r.rolname as username,r1.rolname as "role" FROM pg_catalog.pg_roles r @@ -54,7 +64,9 @@ You can check the roles assigned to each user with the following command:: WHERE r.rolcanlogin ORDER BY 1; -where you would see the following:: +where you would see the following: + +.. code:: username | role ----------+--------------------- @@ -62,16 +74,22 @@ where you would see the following:: avnadmin | pg_stat_scan_tables (3 rows) -To be able to check the database owner and grant the role, you can run the following:: +To be able to check the database owner and grant the role, you can run the following: +.. code:: + \l -which you should see the role:: +which you should see the role: + +.. code:: Name | Owner | -----------+----------+ testdb | testrole | -To resolve the permission issue, you may grant the user the appropriate role as per below:: +To resolve the permission issue, you may grant the user the appropriate role as per below: - grant testrole to avnadmin; +.. 
code:: + + grant testrole to avnadmin; diff --git a/docs/products/postgresql/howto/prevent-full-disk.rst b/docs/products/postgresql/howto/prevent-full-disk.rst index c47f1a0db9..40c684499f 100644 --- a/docs/products/postgresql/howto/prevent-full-disk.rst +++ b/docs/products/postgresql/howto/prevent-full-disk.rst @@ -40,7 +40,7 @@ Enable database writes for a specific session If you want to enable writes for a session, login to the required database and execute the following command: -:: +.. code:: SET default_transaction_read_only = OFF; @@ -51,7 +51,7 @@ Enable database writes for a limited amount of time If you want to enable any writes to the database for a limited amount of time, send the following ``POST`` request using :doc:`Aiven APIs ` and replacing the ``PROJECT_NAME`` and ``SERVICE_NAME`` placeholders: -:: +.. code:: https://api.aiven.io/v1/project//service//enable-writes diff --git a/docs/products/postgresql/howto/repair-pg-index.rst b/docs/products/postgresql/howto/repair-pg-index.rst index 9057f11159..f783abc203 100644 --- a/docs/products/postgresql/howto/repair-pg-index.rst +++ b/docs/products/postgresql/howto/repair-pg-index.rst @@ -8,7 +8,7 @@ Rebuild non-unique indexes You can rebuild corrupted indexes that do not have ``UNIQUE`` in their definition using the following command, that creates a new index replacing the old one: -:: +.. code:: REINDEX INDEX ; @@ -17,7 +17,7 @@ You can rebuild corrupted indexes that do not have ``UNIQUE`` in their definitio Re-indexing applies locks to the table and may interfere with normal use of the database. In some cases, it can be useful to manually build a second index concurrently alongside the old index and then remove the old index: - :: + .. code:: CREATE INDEX CONCURRENTLY foo_index_new ON table_a (...); DROP INDEX CONCURRENTLY foo_index_old; @@ -42,7 +42,7 @@ To identify conflicting duplicate rows, you need to run a query that counts the For example, the following ``route`` table has a ``unique_route_index`` index defining unique rows based on the combination of the ``source`` and ``destination`` columns: -:: +.. code:: CREATE TABLE route( source TEXT, @@ -55,7 +55,7 @@ For example, the following ``route`` table has a ``unique_route_index`` index de If the ``unique_route_index`` is corrupted, you can find duplicated rows in the ``route`` table by issuing the following query: -:: +.. code:: SELECT source, diff --git a/docs/products/postgresql/howto/restore-backup.rst b/docs/products/postgresql/howto/restore-backup.rst index 6f2202f955..9304724260 100644 --- a/docs/products/postgresql/howto/restore-backup.rst +++ b/docs/products/postgresql/howto/restore-backup.rst @@ -13,10 +13,10 @@ To restore a PostgreSQL database, take the following steps: 3. In the **Overview** page of your service, select **New database fork**. 4. Enter a service name and choose a project name, database version, cloud region and plan for the new instance. 5. Select the **Source service state** defining the backup point, the options are as follows: - * **Latest transaction** - * **Point in time** - the date selector allows to chose a precise point in time within the available backup retention period. + * **Latest transaction** + * **Point in time** - the date selector allows to chose a precise point in time within the available backup retention period. -Once the new service is running, you can change your application’s connection settings to point to it. +Once the new service is running, you can change your application's connection settings to point to it. .. 
Tip:: Forked services can also be very useful for testing purposes, allowing you to create a completely realistic, separate copy of the actual production database with its data. diff --git a/docs/products/postgresql/howto/use-dblink-extension.rst b/docs/products/postgresql/howto/use-dblink-extension.rst index 75d6b50467..9898b0a37e 100644 --- a/docs/products/postgresql/howto/use-dblink-extension.rst +++ b/docs/products/postgresql/howto/use-dblink-extension.rst @@ -28,7 +28,7 @@ To enable the ``dblink`` extension on an Aiven for PostgreSQL service: * Connect to the database with the ``avnadmin`` user. The following shows how to do it with ``psql``, the service URI can be found in the `Aiven console `_ the service's **Overview** page: -:: +.. code:: psql "postgres://avnadmin:[AVNADMIN_PWD]@[PG_HOST]:[PG_PORT]/[PG_DB_NAME]?sslmode=require" @@ -38,7 +38,7 @@ To enable the ``dblink`` extension on an Aiven for PostgreSQL service: * Create the ``dblink`` extension -:: +.. code:: CREATE EXTENSION dblink; @@ -49,20 +49,20 @@ To create a foreign data wrapper using the ``dblink_fwd`` you need to perform th * Connect to the database with the ``avnadmin`` user. The following shows how to do it with ``psql``, the service URI can be found in the `Aiven console `_ the service's **Overview** page: -:: +.. code:: psql "postgres://avnadmin:[AVNADMIN_PWD]@[PG_HOST]:[PG_PORT]/[PG_DB_NAME]?sslmode=require" * Create a user ``user1`` that will be access the ``dblink`` -:: +.. code:: CREATE USER user1 PASSWORD 'secret1' * Create a remote server definition (named ``pg_remote``) using ``dblink_fdw`` and the target PostgreSQL connection details -:: +.. code:: CREATE SERVER pg_remote FOREIGN DATA WRAPPER dblink_fdw @@ -74,7 +74,7 @@ To create a foreign data wrapper using the ``dblink_fwd`` you need to perform th * Create a user mapping for the ``user1`` to automatically authenticate as the ``TARGET_PG_USER`` when using the ``dblink`` -:: +.. code:: CREATE USER MAPPING FOR user1 SERVER pg_remote @@ -85,7 +85,7 @@ To create a foreign data wrapper using the ``dblink_fwd`` you need to perform th * Enable ``user1`` to use the remote PostgreSQL connection ``pg_remote`` -:: +.. code:: GRANT USAGE ON FOREIGN SERVER pg_remote TO user1; @@ -98,13 +98,13 @@ To query a foreign data wrapper you must be a database user having the necessary * Establish the ``dblink`` connection to the remote target -:: +.. code:: SELECT dblink_connect('my_new_conn', 'pg_remote'); * Execute the query passing the foreign server definition as parameter -:: +.. code:: SELECT * FROM dblink('pg_remote','SELECT item_id FROM inventory') AS target_inventory(target_item_id int); diff --git a/docs/products/postgresql/howto/use-pg-cron-extension.rst b/docs/products/postgresql/howto/use-pg-cron-extension.rst index caea1e34f0..43f0cae71e 100644 --- a/docs/products/postgresql/howto/use-pg-cron-extension.rst +++ b/docs/products/postgresql/howto/use-pg-cron-extension.rst @@ -8,7 +8,7 @@ Use the PostgreSQL® ``pg_cron`` extension The schedule uses the standard cron syntax, where an asterisk (*) signifies "execute at every time interval", and a specific number indicates "execute exclusively at this specific time": -:: +.. code:: ┌───────────── min (0 - 59) │ ┌────────────── hour (0 - 23) @@ -26,17 +26,16 @@ Enable ``pg_cron`` for specific users To use the ``pg_cron`` extension: 1. Connect to the database as ``avnadmin`` user and make sure to use the ``defaultdb`` database -:: - CREATE EXTENSION pg_cron; - -2. 
As a optional step, you can grant usage permission to regular users -:: - - GRANT USAGE ON SCHEMA cron TO janedoe; + .. code:: + CREATE EXTENSION pg_cron; +2. As a optional step, you can grant usage permission to regular users + .. code:: + + GRANT USAGE ON SCHEMA cron TO janedoe; Setup the cron job ------------------ @@ -45,7 +44,7 @@ List all the jobs ``````````````````` To view the full list of existing jobs, you can run the following query: -:: +.. code:: postgres=> SELECT * FROM cron.job; jobid | schedule | command | nodename | nodeport | database | username | active | jobname @@ -59,7 +58,7 @@ Schedule a job ``````````````` You can schedule a new job with the following command. In this example, the job is set to vacuum daily at 10:00am (GMT): -:: +.. code:: ###Vacuum every day at 10:00am (GMT) SELECT cron.schedule('nightly-vacuum', '0 10 * * *', 'VACUUM'); @@ -71,14 +70,14 @@ To stop scheduling a job, you have two options: 1. By using the ``jobname``: -:: +.. code:: ###Stop scheduling jobs using jobname SELECT cron.unschedule('nightly-vacuum' ); 2. By using the ``jobid``: -:: +.. code:: ###Stop scheduling jobs using jobid SELECT cron.unschedule(1); @@ -88,7 +87,7 @@ View completed jobs ``````````````````````` To view a list of all completed job runs, you can use the following query: -:: +.. code:: select * from cron.job_run_details order by start_time desc limit 5; +------------------------------------------------------------------------------------------------------------------------------------------------------------------+ diff --git a/docs/products/postgresql/howto/use-pg-repack-extension.rst b/docs/products/postgresql/howto/use-pg-repack-extension.rst index d9528710ef..d358f0c657 100644 --- a/docs/products/postgresql/howto/use-pg-repack-extension.rst +++ b/docs/products/postgresql/howto/use-pg-repack-extension.rst @@ -31,14 +31,16 @@ Use ``pg_repack`` extension To use the ``pg_repack`` extension: 1. Connect to the database as ``avnadmin`` user, and run the following command to create the extension: -:: - CREATE EXTENSION pg_repack; + .. code:: + + CREATE EXTENSION pg_repack; 2. Run the ``pg_repack`` command on the table to reorganize it. -:: - pg_repack -k -U avnadmin -h -p -d -t + .. code:: + + pg_repack -k -U avnadmin -h -p -d -t .. note:: - Using ``-k`` skips the superuser checks in the client. This setting is useful when using pg_repack on platforms that support running it as non-superusers. diff --git a/docs/products/redis/get-started.rst b/docs/products/redis/get-started.rst index 2ee3aa7410..4e119a3441 100644 --- a/docs/products/redis/get-started.rst +++ b/docs/products/redis/get-started.rst @@ -20,7 +20,7 @@ Create a Redis®* service using the Aiven CLI 1. Determine the service plan, cloud provider, and region you want to use for your Redis®* service. 2. Run the following command to create a Redis®* service named demo-Redis: -:: + .. code:: avn service create demo-Redis \ --service-type redis \ diff --git a/docs/products/redis/howto/configure-acl-permissions.rst b/docs/products/redis/howto/configure-acl-permissions.rst index 27a737ce9b..74a43d6f67 100644 --- a/docs/products/redis/howto/configure-acl-permissions.rst +++ b/docs/products/redis/howto/configure-acl-permissions.rst @@ -63,13 +63,13 @@ To create a user and configure ACLs using the Aiven CLI, follow these steps: 2. Create a user named ``mynewuser`` with read-only access to the ``mykeys.*`` keys using the following command: - :: + .. 
code:: avn service user-create --project myproject myservicename --username mynewuser --redis-acl-keys 'mykeys.*' --redis-acl-commands '+get' --redis-acl-categories '' 3. Confirm the ACL is applied by connecting to the service using the new username and password: - :: + .. code:: redis-cli --user mynewuser --pass ... --tls -h myservice-myproject.aivencloud.com -p 12719 diff --git a/docs/products/redis/howto/connect-go.rst b/docs/products/redis/howto/connect-go.rst index 328566595b..e731b1fb51 100644 --- a/docs/products/redis/howto/connect-go.rst +++ b/docs/products/redis/howto/connect-go.rst @@ -1,7 +1,7 @@ Connect with Go --------------- -This example connects to Redis®* service from Go, making use of the ``go-redis/redis`` library. +This example connects to Redis® service from Go, making use of the ``go-redis/redis`` library. Variables ''''''''' @@ -30,10 +30,14 @@ Create a new file named ``main.go``, add the following content and replace the p This code creates a key named ``key`` with the value ``hello world`` and no expiration time. Then, it gets the key back from Redis and prints its value. -Run the code:: +Run the code: - go run main.go +.. code:: + + go run main.go -If the script runs successfully, the outputs should be:: +If the script runs successfully, the outputs should be: - The value of key is: hello world +.. code:: + + The value of key is: hello world diff --git a/docs/products/redis/howto/connect-java.rst b/docs/products/redis/howto/connect-java.rst index 5c087f6b5f..9501878186 100644 --- a/docs/products/redis/howto/connect-java.rst +++ b/docs/products/redis/howto/connect-java.rst @@ -35,11 +35,15 @@ Create a new file named ``RedisExample.java``: This code creates a key named ``key`` with the value ``hello world`` and no expiration time. Then, it gets the key back from Redis and prints its value. -Replace the placeholder with the **Redis URI** and compile and run the code:: +Replace the placeholder with the **Redis URI** and compile and run the code: - javac -cp lib/*:. RedisExample.java && java -cp lib/*:. RedisExample REDIS_URI +.. code:: + javac -cp lib/*:. RedisExample.java && java -cp lib/*:. RedisExample REDIS_URI -If the command runs successfully, the outputs should be:: - The value of key is: hello world +If the command runs successfully, the outputs should be: + +.. code:: + + The value of key is: hello world diff --git a/docs/products/redis/howto/connect-node.rst b/docs/products/redis/howto/connect-node.rst index f48c41a79b..0f0a89e6b0 100644 --- a/docs/products/redis/howto/connect-node.rst +++ b/docs/products/redis/howto/connect-node.rst @@ -17,9 +17,11 @@ Variable Description Pre-requisites '''''''''''''' -Install the ``ioredis`` library:: +Install the ``ioredis`` library: - npm install --save ioredis +.. code:: + + npm install --save ioredis Code '''' @@ -30,10 +32,14 @@ Create a new file named ``index.js``, add the following content and replace the This code creates a key named ``key`` with the value ``hello world`` and no expiration time. Then, it gets the key back from Redis and prints its value. -Run the code:: +Run the code: - node index.js +.. code:: + + node index.js -If the script runs successfully, the outputs should be:: +If the script runs successfully, the outputs should be: - The value of key is: hello world +.. 
code:: + + The value of key is: hello world diff --git a/docs/products/redis/howto/connect-php.rst b/docs/products/redis/howto/connect-php.rst index f7a65c9fce..127fdd6701 100644 --- a/docs/products/redis/howto/connect-php.rst +++ b/docs/products/redis/howto/connect-php.rst @@ -17,9 +17,11 @@ Variable Description Pre-requisites '''''''''''''' -Install the ``predis`` library:: +Install the ``predis`` library: - composer require predis/predis +.. code:: + + composer require predis/predis Code '''' @@ -30,10 +32,14 @@ Create a new file named ``index.php``, add the following content and replace the This code creates a key named ``key`` with the value ``hello world`` and no expiration time. Then, it gets the key back from Redis and prints its value. -Run the code:: +Run the code: - php index.php +.. code:: + + php index.php -If the script runs successfully, the outputs should be:: +If the script runs successfully, the outputs should be: - The value of key is: hello world +.. code:: + + The value of key is: hello world diff --git a/docs/products/redis/howto/connect-python.rst b/docs/products/redis/howto/connect-python.rst index 4614c5486b..b2158e08bb 100644 --- a/docs/products/redis/howto/connect-python.rst +++ b/docs/products/redis/howto/connect-python.rst @@ -17,9 +17,11 @@ Variable Description Pre-requisites '''''''''''''' -Install the ``redis-py`` library:: +Install the ``redis-py`` library: - pip install redis +.. code:: + + pip install redis Code '''' @@ -30,14 +32,18 @@ Create a new file named ``main.py``, add the following content and replace the p This code creates a key named ``key`` with the value ``hello world`` and no expiration time. Then, it gets the key back from Redis and prints its value. -Run the code:: +Run the code: - python main.py +.. code:: + + python main.py .. note:: - Note that on some systems you will need to use `python3` to get Python3 rather than the previous Python2 + Note that on some systems you will need to use ``python3`` to get Python3 rather than the previous Python2 -If the script runs successfully, the outputs should be:: +If the script runs successfully, the outputs should be: - The value of key is: hello world +.. code:: + + The value of key is: hello world diff --git a/docs/products/redis/howto/connect-redis-cli.rst b/docs/products/redis/howto/connect-redis-cli.rst index f4963fbe11..1ac0f470e0 100644 --- a/docs/products/redis/howto/connect-redis-cli.rst +++ b/docs/products/redis/howto/connect-redis-cli.rst @@ -27,7 +27,7 @@ Code Execute the following from a terminal window: -:: +.. code:: redis-cli -u REDIS_URI diff --git a/docs/products/redis/howto/manage-ssl-connectivity.rst b/docs/products/redis/howto/manage-ssl-connectivity.rst index 7f842a0e51..72f7626568 100644 --- a/docs/products/redis/howto/manage-ssl-connectivity.rst +++ b/docs/products/redis/howto/manage-ssl-connectivity.rst @@ -31,7 +31,8 @@ Set up ``stunnel`` process If you want to keep SSL settings on database side, but hide it from the client side, you can set up a ``stunnel`` process on the client to handle encryption. You can use the following ``stunnel`` configuration, for example ``stunnel.conf``, to set up a ``stunnel`` process. -:: + +.. 
code:: client = yes foreground = yes diff --git a/docs/products/redis/howto/migrate-aiven-redis.rst b/docs/products/redis/howto/migrate-aiven-redis.rst index 092ba5c77d..854f664cac 100644 --- a/docs/products/redis/howto/migrate-aiven-redis.rst +++ b/docs/products/redis/howto/migrate-aiven-redis.rst @@ -31,8 +31,10 @@ Create a service and perform the migration 1. Check the Aiven configuration options and Redis connection details: - - for Aiven configuration options, type:: + - For Aiven configuration options, type: + .. code:: + avn service types -v ... @@ -51,7 +53,9 @@ Create a service and perform the migration User name for authentication with the server where to migrate data from => -c migration.username= - - for the VPC information, type:: + - for the VPC information, type: + + .. code:: avn vpc list --project test @@ -59,47 +63,57 @@ Create a service and perform the migration ==================================== ============= 40ddf681-0e89-4bce-bd89-25e246047731 aws-eu-west-1 - .. Note:: + .. Note:: Here are your required values for the hostname, port and password of the source Redis service, as well as the VPD ID and cloud name. -2. Create the Aiven for Redis service (if you don't have one yet), and migrate:: - - avn service create --project test -t redis -p hobbyist --cloud aws-eu-west-1 --project-vpc-id 40ddf681-0e89-4bce-bd89-25e246047731 -c migration.host="master.jappja-redis.kdrxxz.euw1.cache.amazonaws.com" -c migration.port=6379 -c migration.password= redis +2. Create the Aiven for Redis service (if you don't have one yet), and migrate: + + .. code:: + + avn service create --project test -t redis -p hobbyist --cloud aws-eu-west-1 --project-vpc-id 40ddf681-0e89-4bce-bd89-25e246047731 -c migration.host="master.jappja-redis.kdrxxz.euw1.cache.amazonaws.com" -c migration.port=6379 -c migration.password= redis -.. Tip:: + .. Tip:: - If the source Redis server is publicly accessible, the project-vpc-id and cloud parameters are not needed. + If the source Redis server is publicly accessible, the project-vpc-id and cloud parameters are not needed. -3. Check the migration status:: +3. Check the migration status: + + .. code:: + + avn service migration-status --project test redis - avn service migration-status --project test redis + STATUS METHOD ERROR + ====== ====== ===== + done scan null - STATUS METHOD ERROR - ====== ====== ===== - done scan null + .. Note:: -.. Note:: - - Status can be one of ``done``, ``failed`` or ``running``. In case of failure, the error contains the error message:: - - avn service migration-status --project test redis + Status can be one of ``done``, ``failed`` or ``running``. In case of failure, the error contains the error message: + + .. code:: + + avn service migration-status --project test redis - STATUS METHOD ERROR - ====== ====== ================ - failed scan invalid password + STATUS METHOD ERROR + ====== ====== ================ + failed scan invalid password Migrate to an existing Aiven for Redis service ---------------------------------------------------- -Migrate to an existing Aiven for Redis service by updating the service configuration:: +Migrate to an existing Aiven for Redis service by updating the service configuration: + +.. 
code:: - avn service update --project test -c migration.host="master.jappja-redis.kdrxxz.euw1.cache.amazonaws.com" -c migration.port=6379 -c migration.password= redis + avn service update --project test -c migration.host="master.jappja-redis.kdrxxz.euw1.cache.amazonaws.com" -c migration.port=6379 -c migration.password= redis Remove migration from configuration --------------------------------------------- -Migration is one-time operation - once the status is ``done``, the migration cannot be restarted. If you need to run migration again, you should first remove it from the configuration, and then configure it again:: +Migration is one-time operation - once the status is ``done``, the migration cannot be restarted. If you need to run migration again, you should first remove it from the configuration, and then configure it again: - avn service update --project test --remove-option migration redis +.. code:: + + avn service update --project test --remove-option migration redis diff --git a/docs/products/redis/howto/warning-overcommit_memory.rst b/docs/products/redis/howto/warning-overcommit_memory.rst index ddd10cc434..bd4ceb7875 100644 --- a/docs/products/redis/howto/warning-overcommit_memory.rst +++ b/docs/products/redis/howto/warning-overcommit_memory.rst @@ -1,8 +1,10 @@ Handle warning ``overcommit_memory`` ==================================== -When starting a Redis®* service on `Aiven console `_, you may notice on **Logs** the following **warning** ``overcommit_memory``:: +When starting a Redis®* service on `Aiven console `_, you may notice on **Logs** the following **warning** ``overcommit_memory``: - # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. +.. code:: + + # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. This warning can be safely ignored as Aiven for Redis®* ensures that the available memory will never drop low enough to hit this particular failure case. diff --git a/docs/tools/cli.rst b/docs/tools/cli.rst index cc8602a26f..72eab308df 100644 --- a/docs/tools/cli.rst +++ b/docs/tools/cli.rst @@ -13,9 +13,11 @@ The ``avn`` client is an ideal way to use Aiven's services in a scriptable way. Install ''''''' -The ``avn`` utility is a Python package, so you can install using ``pip``:: +The ``avn`` utility is a Python package, so you can install using ``pip``: - pip install aiven-client +.. code:: + + pip install aiven-client Check your install by running ``avn`` and looking for usage output. @@ -23,16 +25,21 @@ Check your install by running ``avn`` and looking for usage output. Authenticate '''''''''''' -There are two options for authenticating. The first is to use your username, and then enter your password when prompted:: +There are two options for authenticating. The first is to use your username, and then enter your password when prompted: - avn user login +.. code:: + + avn user login -For security reasons, it is recommended to use an access token, especially if you use SSO. You can use a command like:: +For security reasons, it is recommended to use an access token, especially if you use SSO. You can use a command like: - avn user login --token +.. 
code:: + + avn user login --token .. tip:: - To learn how to create an authentication token refer to :doc:`../platform/howto/create_authentication_token` + + To learn how to create an authentication token refer to :doc:`../platform/howto/create_authentication_token` This command will prompt you for a token rather than a password. diff --git a/docs/tools/cli/account.rst b/docs/tools/cli/account.rst index 85160b0f2e..17fd9af1cc 100644 --- a/docs/tools/cli/account.rst +++ b/docs/tools/cli/account.rst @@ -30,13 +30,13 @@ To create a new organizational unit, specify the parent organization using ``--p **Example:** Create an organizational unit for production in an organization with the ID ``123456789123``. -:: +.. code:: avn account create --name "Production" --parent-account-id 123456789123 **Example:** Create a new organization for the billing analytics department. -:: +.. code:: avn account create --name "Billing Analytics" @@ -57,7 +57,7 @@ Deletes an organization or organizational unit. **Example:** Delete the organization with id ``123456789123``. -:: +.. code:: avn account delete 123456789123 @@ -100,6 +100,6 @@ Changes the name of an organization or organizational unit. **Example:** Change the name of organizational unit with the ID ``123456789123`` to ``Billing Analytics Account``. -:: +.. code:: avn account update 123456789123 --name "Billing Analytics Account" \ No newline at end of file diff --git a/docs/tools/cli/account/account-authentication-method.rst b/docs/tools/cli/account/account-authentication-method.rst index feed57a010..9b4d969fda 100644 --- a/docs/tools/cli/account/account-authentication-method.rst +++ b/docs/tools/cli/account/account-authentication-method.rst @@ -33,7 +33,7 @@ Creates a new authentication method. More information about authentication metho **Example:** Create a new ``saml`` authentication method named ``My Authentication Method`` for the account id ``123456789123``. -:: +.. code:: avn account authentication-method create 123456789123 \ --name "My Authentication Method" \ @@ -57,7 +57,7 @@ Deletes an existing authentication method. **Example:** Delete the authentication method with id ``88888888888`` belonging to the account id ``123456789123``. -:: +.. code:: avn account authentication-method delete 123456789123 88888888888 @@ -77,7 +77,7 @@ Lists the existing authentication methods. **Example:** List all the authentication methods belonging to the account id ``123456789123``. -:: +.. code:: avn account authentication-method list 123456789123 @@ -121,6 +121,6 @@ Updates an existing authentication method. **Example:** Disable the authentication method with id ``am2exxxxxxxxx`` for the account id ``123456789123``. -:: +.. code:: avn account authentication-method update 123456789123 am2exxxxxxxxx --disable diff --git a/docs/tools/cli/account/account-team.rst b/docs/tools/cli/account/account-team.rst index fd2cefc77b..da22bcf841 100644 --- a/docs/tools/cli/account/account-team.rst +++ b/docs/tools/cli/account/account-team.rst @@ -27,7 +27,7 @@ Creates a new account team. **Example:** Create a new team named ``clickstream analytics`` for the account id ``123456789123``. -:: +.. code:: avn account team create 123456789123 --team-name "clickstream analytics" @@ -49,7 +49,7 @@ Deletes an existing account team. **Example:** Delete the team with id ``at31d79d311b3`` for the account id ``123456789123``. -:: +.. code:: avn account team delete 123456789123 --team-id at31d79d311b3 @@ -70,7 +70,7 @@ Lists an existing account teams. 
**Example:** List all the teams belonging to the account id ``123456789123``. -:: +.. code:: avn account team list 123456789123 @@ -108,7 +108,7 @@ Attaches an existing account team to a project. **Example:** Attach the team with id ``at3exxxxxxxxx`` belonging to the account ``123456789123`` to the project named ``testing-sandbox`` granting ``operator`` access. -:: +.. code:: avn account team project-attach 123456789123 \ --team-id at3exxxxxxxxx \ @@ -136,7 +136,7 @@ Detaches an existing account team from a project. **Example:** Detach the team with id ``at3exxxxxxxxx`` belonging to the account ``123456789123`` from the project named ``testing-sandbox``. -:: +.. code:: avn account team project-detach 123456789123 \ --team-id at3exxxxxxxxx \ @@ -162,7 +162,7 @@ Invites a new user to an Aiven team. **Example:** Invite the user ``jane.doe@example.com`` to the team id ``at3exxxxxxxxx`` belonging to the account ``123456789123``. -:: +.. code:: avn account team user-invite 123456789123 jane.doe@example.com --team-id at3exxxxxxxxx @@ -186,7 +186,7 @@ Deletes an existing user from an Aiven team. **Example:** Remove the user with id ``x5dxxxxxxxxx`` from the team id ``at3exxxxxxxxx`` belonging to the account ``123456789123``. -:: +.. code:: avn account team user-delete 123456789123 --team-id at3exxxxxxxxx --user-id x5dxxxxxxxxx @@ -208,7 +208,7 @@ Lists the existing users in an Aiven team. **Example:** List all the users in the team id ``at3exxxxxxxxx`` belonging to the account ``123456789123``. -:: +.. code:: avn account team user-list 123456789123 --team-id at3exxxxxxxxx @@ -242,7 +242,7 @@ Lists the users with pending invitation from an Aiven team. Unacknowledged invit **Example:** List all the users with pending invitations for the team id ``at3exxxxxxxxx`` belonging to the account ``123456789123``. -:: +.. code:: avn account team user-list-pending 123456789123 --team-id at3exxxxxxxxx diff --git a/docs/tools/cli/cloud.rst b/docs/tools/cli/cloud.rst index fbf356ae74..3225046810 100644 --- a/docs/tools/cli/cloud.rst +++ b/docs/tools/cli/cloud.rst @@ -27,14 +27,14 @@ Lists cloud regions with related geographical region, latitude and longitude. **Example:** Show the clouds available to the currently selected project. -:: +.. code:: avn cloud list **Example:** Show the clouds available to a named project. -:: +.. code:: avn cloud list --project my-project diff --git a/docs/tools/cli/credits.rst b/docs/tools/cli/credits.rst index d822f9f20d..8779ae7e00 100644 --- a/docs/tools/cli/credits.rst +++ b/docs/tools/cli/credits.rst @@ -28,14 +28,14 @@ Add an Aiven credit code to a project. **Example:** Add a credit code to the currently selected project. -:: +.. code:: avn credits claim "credit-code-123" **Example:** Add a credit code to a named project. -:: +.. code:: avn credits claim "credit-code-123" --project my-project @@ -57,12 +57,12 @@ List the credit codes associated with a project. **Example:** List all credit codes associated with the currently selected project. -:: +.. code:: avn credits list **Example:** List all credit codes associated with a named project. -:: +.. code:: avn credits list --project my-project diff --git a/docs/tools/cli/events.rst b/docs/tools/cli/events.rst index 236fa6ce45..7c20396dac 100644 --- a/docs/tools/cli/events.rst +++ b/docs/tools/cli/events.rst @@ -46,13 +46,13 @@ Lists instance or integration creation, deletion or modification events. **Example:** Show the recent events of the currently selected project. -:: +.. 
code:: avn events **Example:** Show the most recent 10 events of a named project. -:: +.. code:: avn events -n 10 --project my-project diff --git a/docs/tools/cli/mirrormaker.rst b/docs/tools/cli/mirrormaker.rst index fd8ef675ba..36b5fec8c1 100644 --- a/docs/tools/cli/mirrormaker.rst +++ b/docs/tools/cli/mirrormaker.rst @@ -51,7 +51,7 @@ Creates a new Aiven for Apache Kafka® MirrorMaker 2 replication flow. * enable MirrorMaker 2 heartbeats * enable synching of consumer groups offset every ``60`` seconds -:: +.. code:: avn mirrormaker replication-flow create kafka-mm \ --source-cluster kafka-source-alias \ @@ -95,7 +95,7 @@ Deletes an existing Aiven for Apache Kafka® MirrorMaker 2 replication flow. **Example:** In the service ``kafka-mm`` delete the replication flow from an Aiven for Apache Kafka service with integration alias ``kafka-source-alias`` to the service named ``kafka-target-alias``. -:: +.. code:: avn mirrormaker replication-flow delete kafka-mm \ --source-cluster kafka-source-alias \ @@ -122,7 +122,7 @@ Retrieves the configuration details of an existing Aiven for Apache Kafka® Mirr **Example:** In the service ``kafka-mm`` retrieve the details of the replication flow from an Aiven for Apache Kafka service with integration alias ``kafka-source-alias`` to the service named ``kafka-target-alias``. -:: +.. code:: avn mirrormaker replication-flow get kafka-mm \ --source-cluster kafka-source-alias \ @@ -165,7 +165,7 @@ Lists the configuration details for all replication flows defined in an existing **Example:** List the configuration details for all replication flows defined in an existing Aiven for Apache Kafka MirrorMaker 2 named ``kafka-mm``. -:: +.. code:: avn mirrormaker replication-flow list kafka-mm @@ -213,7 +213,7 @@ Updates an existing Aiven for Apache Kafka® MirrorMaker 2 replication flow. **Example:** In the service ``kafka-mm`` update the replication flow from an Aiven for Apache Kafka service with integration alias ``kafka-source-alias`` to a service named ``kafka-target-alias`` with the settings contained in a file named ``replication-flow.json``. -:: +.. code:: avn mirrormaker replication-flow update kafka-mm \ --source-cluster kafka-source-alias \ diff --git a/docs/tools/cli/project.rst b/docs/tools/cli/project.rst index dea117e316..5e533c93d1 100644 --- a/docs/tools/cli/project.rst +++ b/docs/tools/cli/project.rst @@ -298,6 +298,6 @@ SBOM reports are generated per project and can be downloaded as long as the nece **Example:** Get the SBOM report download link for the project ``my-project`` in ``csv`` format: -:: +.. code:: avn project generate-sbom --project my-project --output csv diff --git a/docs/tools/cli/service.rst b/docs/tools/cli/service.rst index a0404fc885..dc30730785 100644 --- a/docs/tools/cli/service.rst +++ b/docs/tools/cli/service.rst @@ -34,7 +34,7 @@ Retrieves the list of backups for a certain service. **Example:** Retrieve the list of backups for the service ``grafana-25c408a5``. -:: +.. code:: avn service backup-list grafana-25c408a5 @@ -66,7 +66,7 @@ Retrieves the project CA that the selected service belongs to. **Example:** Retrieve the CA certificate for the project where the service named ``kafka-doc`` belongs and store it under ``/tmp/ca.pem``. -:: +.. code:: avn service ca get kafka-doc --target-filepath /tmp/ca.pem @@ -88,7 +88,7 @@ Opens the appropriate interactive shell, such as ``psql`` or ``redis-cli``, to t **Example:** Open a new ``psql`` shell connecting to an Aiven for PostgreSQL® service named ``pg-doc``. -:: +.. 
code:: avn service cli pg-doc @@ -153,7 +153,7 @@ Creates a new service. * Kafka Connect enabled * 600 GiB of total storage capacity -:: +.. code:: avn service create kafka-demo \ --service-type kafka \ @@ -179,7 +179,7 @@ Resets the service credentials. More information on user password change is prov **Example:** Reset the credentials of a service named ``kafka-demo``. -:: +.. code:: avn service credentials-reset kafka-demo @@ -200,7 +200,7 @@ List current service connections/queries for an Aiven for PostgreSQL®, Aiven fo **Example:** List the queries running for a service named ``pg-demo``. -:: +.. code:: avn service current-queries pg-demo @@ -246,13 +246,13 @@ Retrieves a single service details. **Example:** Retrieve the ``pg-demo`` service details in the ``'{service_name} {service_uri}'`` format. -:: +.. code:: avn service get pg-demo --format '{service_name} {service_uri}' **Example:** Retrieve the ``pg-demo`` full service details in JSON format. -:: +.. code:: avn service get pg-demo --json @@ -293,7 +293,7 @@ Service keypair commands. The use cases for this command are limited to accessin **Example:** Retrieve the keypair, and save them to the ``/tmp`` directory, for an Aiven for Apache Cassandra® service, called ``test-cass``, that was started in migration mode. -:: +.. code:: avn service keypair get --key-filepath /tmp/keyfile --cert-filepath /tmp/certfile test-cass cassandra_migrate_sstableloader_user @@ -315,7 +315,7 @@ Lists services within an Aiven project. **Example:** Retrieve all the services running in the currently selected project. -:: +.. code:: avn service list @@ -334,7 +334,7 @@ An example of ``service list`` output: **Example:** Retrieve all the services with name ``demo-pg`` running in the project named ``mytestproject``. -:: +.. code:: avn service list demo-pg --project mytestproject @@ -356,7 +356,7 @@ Retrieves the selected service logs. **Example:** Retrieve the logs for the service named ``pg-demo``. -:: +.. code:: avn service logs pg-demo @@ -388,7 +388,7 @@ Starts the service maintenance updates. **Example:** Start the maintenance updates for the service named ``pg-demo``. -:: +.. code:: avn service maintenance-start pg-demo @@ -436,7 +436,7 @@ Retrieves the metrics for a defined service in Google chart compatible format. T **Example:** Retrieve the daily metrics for the service named ``pg-demo``. -:: +.. code:: avn service metrics pg-demo --period day @@ -469,7 +469,7 @@ Lists the service plans available in a selected project for a defined service ty **Example:** List the service plans available for a PostgreSQL® service in the ``google-europe-west3`` region. -:: +.. code:: avn service plans --service-type pg --cloud google-europe-west3 @@ -516,7 +516,7 @@ A description of the retrieved columns for Aiven for PostgreSQL can be found in **Example:** List the queries for an Aiven for PostgreSQL service named ``pg-demo`` including the query blurb, number of calls and both total and mean execution time. -:: +.. code:: avn service queries pg-demo --format '{query},{calls},{total_time},{mean_time}' @@ -538,7 +538,7 @@ Resetting query statistics could be useful to measure database behaviour in a pr **Example:** Reset the queries for a service named ``pg-demo``. -:: +.. code:: avn service queries-reset pg-demo @@ -597,7 +597,7 @@ Create a service task **Example:** Create a migration task to migrate a MySQL database to Aiven to the service ``mysql`` in project ``myproj`` -:: +.. 
code:: avn service task-create --operation migration_check --source-service-uri mysql://user:password@host:port/databasename --project myproj mysql @@ -633,7 +633,7 @@ Get details for a single task for your service **Example:** Check the status of your migration task with id ``e2df7736-66c5-4696-b6c9-d33a0fc4cbed`` for the service named ``mysql`` in the ``myproj`` project -:: +.. code:: avn service task-get --task-id e2df7736-66c5-4696-b6c9-d33a0fc4cbed --project myproj mysql @@ -670,7 +670,7 @@ Permanently deletes a service. **Example:** Terminate the service named ``demo-pg``. -:: +.. code:: avn service terminate demo-pg @@ -696,7 +696,7 @@ Lists the Aiven service types available in a project. **Example:** Retrieve all the services types available in the currently selected project. -:: +.. code:: avn service types @@ -770,7 +770,7 @@ Updates the settings for an Aiven service. **Example:** Update the service named ``demo-pg``, move it to ``azure-germany-north`` region and enable termination protection. -:: +.. code:: avn service update demo-pg \ --cloud azure-germany-north \ @@ -779,14 +779,14 @@ Updates the settings for an Aiven service. **Example:** Update the service named ``big-service`` to scale it down to the ``Business-4`` plan. -:: +.. code:: avn service update big-service \ --plan business-4 **Example:** Update the service named ``secure-database`` to only accept connections from the range ``10.0.1.0/24`` and the IP ``10.25.10.12``. -:: +.. code:: avn service update secure-database \ -c ip_filter=10.0.1.0/24,10.25.10.1/32 @@ -795,7 +795,7 @@ Updates the settings for an Aiven service. **Example:** Update the Kafka version of the service named ``kafka-service``. -:: +.. code:: avn service update \ kafka-service -c kafka_version=X.X @@ -824,7 +824,7 @@ For each service, lists the versions available together with: **Example:** List all service versions. -:: +.. code:: avn service versions @@ -861,7 +861,7 @@ Waits for the service to reach the ``RUNNING`` state **Example:** Wait for the service named ``pg-doc`` to reach the ``RUNNING`` state. -:: +.. code:: avn service wait pg-doc diff --git a/docs/tools/cli/service/acl.rst b/docs/tools/cli/service/acl.rst index 75658fa9f7..9476662973 100644 --- a/docs/tools/cli/service/acl.rst +++ b/docs/tools/cli/service/acl.rst @@ -31,7 +31,7 @@ Adds an Aiven for Apache Kafka® ACL entry. **Example:** Add an ACLs for users with username ending with ``userA`` to ``readwrite`` on topics having name starting with ``topic2020`` in the service ``kafka-doc``. -:: +.. code:: avn service acl-add kafka-doc --username *userA --permission readwrite --topic topic2020* @@ -56,7 +56,7 @@ Deletes an Aiven for Apache Kafka® ACL entry. **Example:** Delete the ACLs with id ``acl3604f96c74a`` on the Aiven for Apache Kafka instance named ``kafka-doc``. -:: +.. code:: avn service acl-delete kafka-doc acl3604f96c74a @@ -76,7 +76,7 @@ Lists Aiven for Apache Kafka® ACL entries. **Example:** List the ACLs defined for a service named ``kafka-doc``. -:: +.. 
code:: avn service acl-list kafka-doc diff --git a/docs/tools/cli/service/connection-info.rst b/docs/tools/cli/service/connection-info.rst index ae7c8519be..69187afd67 100644 --- a/docs/tools/cli/service/connection-info.rst +++ b/docs/tools/cli/service/connection-info.rst @@ -42,7 +42,7 @@ Retrieves the ``kcat`` command necessary to connect to an Aiven for Apache Kafka **Example:** Retrieve the ``kcat`` command to connect to an Aiven for Apache Kafka service named ``demo-kafka`` with SSL authentication (``certificate``), download the certificates necessary for the connection: -:: +.. code:: avn service connection-info kafkacat demo-kafka --write @@ -85,7 +85,7 @@ Retrieves the connection parameters for a certain Aiven for PostgreSQL® service **Example:** Retrieve the connection parameters for an Aiven for PostgreSQL® service named ``demo-pg``: -:: +.. code:: avn service connection-info pg string demo-pg @@ -125,7 +125,7 @@ Retrieves the connection URI for an Aiven for PostgreSQL® service. **Example:** Retrieve the connection URI for an Aiven for PostgreSQL® service named ``demo-pg``: -:: +.. code:: avn service connection-info pg uri demo-pg @@ -164,7 +164,7 @@ Retrieves the ``psql`` command needed to connect to an Aiven for PostgreSQL® se **Example:** Retrieve the ``psql`` command needed to connect to an Aiven for PostgreSQL® service named ``demo-pg``: -:: +.. code:: avn service connection-info psql demo-pg @@ -201,7 +201,7 @@ Retrieves the connection URI needed to connect to an Aiven for Redis®* service. **Example:** Retrieve the connection URI needed to connect to an Aiven for Redis® service named ``demo-redis``: -:: +.. code:: avn service connection-info redis uri demo-redis diff --git a/docs/tools/cli/service/connection-pool.rst b/docs/tools/cli/service/connection-pool.rst index 897689ce81..5e968f7c35 100644 --- a/docs/tools/cli/service/connection-pool.rst +++ b/docs/tools/cli/service/connection-pool.rst @@ -37,7 +37,7 @@ Creates a new :doc:`PgBouncer connection pool `, doesn't need an integration. - **Example**: to create an integration between an Aiven for Apache Flink service named ``flink-democli`` and an Aiven for Apache Kafka service named ``demo-kafka`` you can use the following command:: - - avn service integration-create \ - --integration-type flink \ - --dest-service flink-democli \ - --source-service demo-kafka + **Example**: to create an integration between an Aiven for Apache Flink service named ``flink-democli`` and an Aiven for Apache Kafka service named ``demo-kafka`` you can use the following command: + + .. code:: + + avn service integration-create \ + --integration-type flink \ + --dest-service flink-democli \ + --source-service demo-kafka All the available command integration options can be found in the :ref:`dedicated document ` @@ -417,7 +424,7 @@ The ``application_version_properties`` parameter should contain the following co **Example:** Validates the Aiven for Flink application version for the application-id ``986b2d5f-7eda-480c-bcb3-0f903a866222``. -:: +.. code:: avn service flink validate-application-version flink-democli \ --project my-project \ @@ -499,7 +506,7 @@ Retrieves information about a specific version of an Aiven for Flink® applicati * Application version id: ``7a1c6266-64da-4f6f-a8b0-75207f997c8d`` -:: +.. 
code:: avn service flink get-application-version flink-democli \ --project my-project \ @@ -533,7 +540,7 @@ Deletes a version of the Aiven for Flink® application in a specified project an * Application id: ``986b2d5f-7eda-480c-bcb3-0f903a866222`` * Application version id: ``7a1c6266-64da-4f6f-a8b0-75207f997c8d`` -:: +.. code:: avn service flink delete-application-version flink-democli \ --project my-project \ @@ -560,7 +567,7 @@ Lists all the Aiven for Flink® application deployments in a specified project a **Example:** Lists all the Aiven for Flink application deployments for application-id ``f171af72-fdf0-442c-947c-7f6a0efa83ad`` for the service ``flink-democli``, in the project ``my-project``. -:: +.. code:: avn service flink list-application-deployments flink-democli \ --project my-project \ @@ -589,7 +596,7 @@ Retrieves information about an Aiven for Flink® application deployment in a spe **Example:** Retrieves the details of the Aiven for Flink application deployment for the application-id ``f171af72-fdf0-442c-947c-7f6a0efa83ad``, deployment-id ``bee0b5cb-01e7-49e6-bddb-a750caed4229`` for the service ``flink-democli``, in the project ``my-project``. -:: +.. code:: avn service flink get-application-deployment flink-democli \ --project my-project \ @@ -637,7 +644,7 @@ The ``deployment_properties`` parameter should contain the following common prop **Example:** Create a new Aiven for Flink application deployment for the application id ``986b2d5f-7eda-480c-bcb3-0f903a866222``. -:: +.. code:: avn service flink create-application-deployment flink-democli \ --project my-project \ @@ -666,7 +673,7 @@ Deletes an Aiven for Flink® application deployment in a specified project and s **Example:** Deletes the Aiven for Flink application deployment with application-id ``f171af72-fdf0-442c-947c-7f6a0efa83ad`` and deployment-id ``6d5e2c03-2235-44a5-ab8f-c544a4de04ef``. -:: +.. code:: avn service flink delete-application-deployment flink-democli \ --project my-project \ @@ -696,7 +703,7 @@ Stops a running Aiven for Flink® application deployment in a specified project **Example:** Stops the Aiven for Flink application deployment with application-id ``f171af72-fdf0-442c-947c-7f6a0efa83ad`` and deployment-id ``6d5e2c03-2235-44a5-ab8f-c544a4de04ef``. -:: +.. code:: avn service flink stop-application-deployment flink-democli \ --project my-project \ @@ -725,7 +732,7 @@ Cancels an Aiven for Flink® application deployment in a specified project and s **Example:** Cancels the Aiven for Flink application deployment with application-id ``f171af72-fdf0-442c-947c-7f6a0efa83ad`` and deployment-id ``6d5e2c03-2235-44a5-ab8f-c544a4de04ef``. -:: +.. code:: avn service flink cancel-application-deployments flink-democli \ --project my-project \ diff --git a/docs/tools/cli/service/integration.rst b/docs/tools/cli/service/integration.rst index ff0656d5fa..1f1b138d43 100644 --- a/docs/tools/cli/service/integration.rst +++ b/docs/tools/cli/service/integration.rst @@ -48,7 +48,7 @@ Creates a new service integration. **Example:** Create a new ``kafka_logs`` service integration to send the logs of the service named ``demo-pg`` to an Aiven for Kafka service named ``demo-kafka`` in the topic ``test_log``. -:: +.. code:: avn service integration-create \ --integration-type kafka_logs \ @@ -72,7 +72,7 @@ Deletes a service integration. **Example:** Delete the integration with id ``8e752fa9-a0c1-4332-892b-f1757390d53f``. -:: +.. 
code:: avn service integration-delete 8e752fa9-a0c1-4332-892b-f1757390d53f @@ -100,7 +100,7 @@ Creates an external service integration endpoint. **Example:** Create an external Apache Kafka® endpoint named ``demo-ext-kafka``. -:: +.. code:: avn service integration-endpoint-create --endpoint-name demo-ext-kafka \ --endpoint-type external_kafka \ @@ -108,7 +108,7 @@ Creates an external service integration endpoint. **Example:** Create an external Loggly endpoint named ``Loggly-ext``. -:: +.. code:: avn service integration-endpoint-create \ --endpoint-name Loggly-ext \ @@ -137,7 +137,7 @@ Deletes a service integration endpoint. **Example:** Delete the endpoint with ID ``97590813-4a58-4c0c-91fd-eef0f074873b``. -:: +.. code:: avn service integration-endpoint-delete 97590813-4a58-4c0c-91fd-eef0f074873b @@ -150,7 +150,7 @@ Lists all service integration endpoints available in a selected project. **Example:** Lists all service integration endpoints available in the selected project. -:: +.. code:: avn service integration-endpoint-list @@ -173,7 +173,7 @@ Lists all available integration endpoint types for given project. **Example:** Lists all service integration endpoint types available in the selected project. -:: +.. code:: avn service integration-endpoint-types-list @@ -215,7 +215,7 @@ Updates a service integration endpoint. **Example:** Update an external Apache Kafka® endpoint with id ``821e0144-1503-42db-aa9f-b4aa34c4af6b``. -:: +.. code:: avn service integration-endpoint-update 821e0144-1503-42db-aa9f-b4aa34c4af6b \ --user-config-json '{"bootstrap_servers":"servertestABC:123","security_protocol":"PLAINTEXT"}' @@ -238,7 +238,7 @@ Lists the integrations defined for a selected service. **Example:** List all integrations for the service named ``demo-pg``. -:: +.. code:: avn service integration-list demo-pg @@ -263,7 +263,7 @@ Lists all available integration types for given project. **Example:** List all integration types for the currently selected project. -:: +.. code:: avn service integration-types-list @@ -307,7 +307,7 @@ Updates an existing service integration. **Example:** Update the service integration with ID ``8e752fa9-a0c1-4332-892b-f1757390d53f`` changing the Aiven for Kafka topic storing the logs to ``test_pg_log``. -:: +.. code:: avn service integration-update 8e752fa9-a0c1-4332-892b-f1757390d53f \ -c 'kafka_topic=test_pg_log' diff --git a/docs/tools/cli/service/m3.rst b/docs/tools/cli/service/m3.rst index 2b713bc363..c111d694d1 100644 --- a/docs/tools/cli/service/m3.rst +++ b/docs/tools/cli/service/m3.rst @@ -50,7 +50,7 @@ Adds a new :doc:`Aiven for M3 namespace +3. Remove the resource from the control of Terraform: -.. tip:: - Use the ``-dry-run`` flag to preview the changes without applying them. + .. code:: + + terraform state rm + + .. tip:: + + Use the ``-dry-run`` flag to preview the changes without applying them. + +4. Add the resource back to Terraform by importing it as a new resource: -4. Add the resource back to Terraform by importing it as a new resource:: + .. code:: + + terraform import project_name/service_name/db_name - terraform import project_name/service_name/db_name +5. Check that the import is going to run as you expect: -5. Check that the import is going to run as you expect:: + .. code:: - terraform plan + terraform plan -6. Apply the new configuration:: +6. Apply the new configuration: - terraform apply + .. 
code:: + + terraform apply diff --git a/docs/tools/terraform/howto/upgrade-provider-v1-v2.rst b/docs/tools/terraform/howto/upgrade-provider-v1-v2.rst index 3a76ce58b1..e78ca23b4e 100644 --- a/docs/tools/terraform/howto/upgrade-provider-v1-v2.rst +++ b/docs/tools/terraform/howto/upgrade-provider-v1-v2.rst @@ -119,44 +119,56 @@ To safely make this change you will: - Import already existing service to the Terraform state. 1. To change from the old ``aiven_service`` to the new ``aiven_kafka`` -resource, the resource type should be changed, and the old ``service_type`` -field removed. Any references to ``aiven_service.kafka.*`` should be updated to instead read ``aiven_kafka.kafka.*`` instead. Here's an example showing the update in action:: + resource, the resource type should be changed, and the old ``service_type`` + field removed. Any references to ``aiven_service.kafka.*`` should be updated to instead read ``aiven_kafka.kafka.*`` instead. Here's an example showing the update in action: + + .. code:: + + - resource "aiven_service" "kafka" { + - service_type = "kafka" + + resource "aiven_kafka" "kafka" { + ... + } + resource "aiven_service_user" "kafka_user" { + project = var.aiven_project_name + - service_name = aiven_service.kafka.service_name + + service_name = aiven_kafka.kafka.service_name + username = var.kafka_user_name + } - - resource "aiven_service" "kafka" { - - service_type = "kafka" - + resource "aiven_kafka" "kafka" { - ... - } - resource "aiven_service_user" "kafka_user" { - project = var.aiven_project_name - - service_name = aiven_service.kafka.service_name - + service_name = aiven_kafka.kafka.service_name - username = var.kafka_user_name - } +2. Check the current state of the world: + .. code:: -2. Check the current state of the world:: + terraform state list | grep kf - terraform state list | grep kf +3. Remove the service from the control of Terraform, and write a backup of the state into your local directory: -3. Remove the service from the control of Terraform, and write a backup of the state into your local directory:: + .. code:: - terraform state rm -backup=./ aiven_service.kafka + terraform state rm -backup=./ aiven_service.kafka -.. tip:: - Use the ``-dry-run`` flag to see this change before it is actually made + .. tip:: -4. Add the service back to Terraform by importing it as a new service with the new service type:: + Use the ``-dry-run`` flag to see this change before it is actually made - terraform import aiven_kafka.kafka demo-project/existing-kafka +4. Add the service back to Terraform by importing it as a new service with the new service type: -5. Check that the import is going to run as you expect:: + .. code:: + + terraform import aiven_kafka.kafka demo-project/existing-kafka + +5. Check that the import is going to run as you expect: + + .. code:: + + terraform plan - terraform plan +6. Apply the new configuration: -6. Finally, go ahead and apply the new configuration:: + .. code:: - terraform apply + terraform apply Further reading ''''''''''''''' diff --git a/docs/tools/terraform/howto/upgrade-provider-v2-v3.rst b/docs/tools/terraform/howto/upgrade-provider-v2-v3.rst index e5d47ae979..4a7a24089a 100644 --- a/docs/tools/terraform/howto/upgrade-provider-v2-v3.rst +++ b/docs/tools/terraform/howto/upgrade-provider-v2-v3.rst @@ -62,50 +62,64 @@ To safely make this change you will: 1. To change from the old ``aiven_vpc_peering_connection`` to the new ``aiven_azure_vpc_peering_connection`` resource, the resource type should be changed. 
Any references to ``aiven_vpc_peering_connection.foo.*`` should be updated to instead read ``aiven_azure_vpc_peering_connection.foo.*`` instead. -Here's an example showing the update in action:: - - - resource "aiven_vpc_peering_connection" "foo" { - vpc_id = data.aiven_project_vpc.vpc.id - - peer_cloud_account = "Azure subscription ID" - - peer_vpc = "Azure virtual network name of the peered VPC" - peer_azure_app_id = "Azure app registration id in UUID4 form" - peer_azure_tenant_id = "Azure tenant id in UUID4 form" - peer_resource_group = "Azure resource group name of the peered VPC" - } +Here's an example showing the update in action: - + resource "aiven_azure_vpc_peering_connection" "foo" { - vpc_id = data.aiven_project_vpc.vpc.id - + azure_subscription_id = "Azure subscription ID" - + vnet_name = "Azure virtual network name of the peered VPC" - peer_azure_app_id = "Azure app registration id in UUID4 form" - peer_azure_tenant_id = "Azure tenant id in UUID4 form" - peer_resource_group = "Azure resource group name of the peered VPC" - } + .. code:: + + - resource "aiven_vpc_peering_connection" "foo" { + vpc_id = data.aiven_project_vpc.vpc.id + - peer_cloud_account = "Azure subscription ID" + - peer_vpc = "Azure virtual network name of the peered VPC" + peer_azure_app_id = "Azure app registration id in UUID4 form" + peer_azure_tenant_id = "Azure tenant id in UUID4 form" + peer_resource_group = "Azure resource group name of the peered VPC" + } + + resource "aiven_azure_vpc_peering_connection" "foo" { + vpc_id = data.aiven_project_vpc.vpc.id + + azure_subscription_id = "Azure subscription ID" + + vnet_name = "Azure virtual network name of the peered VPC" + peer_azure_app_id = "Azure app registration id in UUID4 form" + peer_azure_tenant_id = "Azure tenant id in UUID4 form" + peer_resource_group = "Azure resource group name of the peered VPC" + } -2. Check the current state of the world:: - terraform state list | grep azure +2. Check the current state of the world: -3. Remove the resource from the control of Terraform:: + .. code:: - terraform state rm aiven_vpc_peering_connection.foo + terraform state list | grep azure -.. tip:: - Use the ``-dry-run`` flag to see this change before it is actually made +3. Remove the resource from the control of Terraform: + + .. code:: + + terraform state rm aiven_vpc_peering_connection.foo -4. Add the resource back to Terraform by importing it as a new resource with the new type:: + .. tip:: - terraform import aiven_azure_vpc_peering_connection.foo project_name/vpc_id/azure_subscription_id/vnet_name + Use the ``-dry-run`` flag to see this change before it is actually made -5. Check that the import is going to run as you expect:: +4. Add the resource back to Terraform by importing it as a new resource with the new type: - terraform plan + .. code:: -6. Finally, go ahead and apply the new configuration:: + terraform import aiven_azure_vpc_peering_connection.foo project_name/vpc_id/azure_subscription_id/vnet_name - terraform apply +5. Check that the import is going to run as you expect: -.. Note:: - You can follow a similar approach to update ``aiven_database`` and ``aiven_service_user`` resources, - which have been deprecated in v3 of the provider. + .. code:: + + terraform plan + +6. Apply the new configuration: + + .. code:: + + terraform apply + + .. Note:: + + You can follow a similar approach to update ``aiven_database`` and ``aiven_service_user`` resources, + which have been deprecated in v3 of the provider. 
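+
+      For instance, a minimal sketch of that same remove-and-import pattern applied to an
+      ``aiven_database`` resource; the resource name ``mydb``, the ``<new_resource_type>``
+      placeholder and the import ID format below are illustrative placeholders rather than
+      exact values from the provider documentation:
+
+      .. code::
+
+         terraform state rm aiven_database.mydb
+         terraform import <new_resource_type>.mydb project_name/service_name/db_name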
diff --git a/docs/tools/terraform/howto/vpc-peering-aws.rst b/docs/tools/terraform/howto/vpc-peering-aws.rst index 6cb84a70d4..675bf43b47 100644 --- a/docs/tools/terraform/howto/vpc-peering-aws.rst +++ b/docs/tools/terraform/howto/vpc-peering-aws.rst @@ -12,9 +12,9 @@ Prerequisites: * Create an :doc:`Aiven authentication token `. -* Install the AWS CLI https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html. +* `Install the AWS CLI `_. -* Configure the AWS CLI https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html. +* `Configure the AWS CLI `_. Set up the Terraform variables: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/docs/tools/terraform/howto/vpc-peering-gcp.rst b/docs/tools/terraform/howto/vpc-peering-gcp.rst index e76e59bbc9..12a20d9ead 100644 --- a/docs/tools/terraform/howto/vpc-peering-gcp.rst +++ b/docs/tools/terraform/howto/vpc-peering-gcp.rst @@ -12,7 +12,7 @@ Prerequisites: * Create an :doc:`Aiven authentication token `. -* Install the Google Cloud SDK from https://cloud.google.com/sdk/docs/install. +* `Install the Google Cloud SDK `_. * Authenticate using the following command diff --git a/docs/tutorials/anomaly-detection.rst b/docs/tutorials/anomaly-detection.rst index e2dcb2e4bd..77bf5b9c23 100644 --- a/docs/tutorials/anomaly-detection.rst +++ b/docs/tutorials/anomaly-detection.rst @@ -265,19 +265,19 @@ It's time to start streaming the fake IoT data that you'll later process with wi #. Run the following command to build the Docker image: - :: + .. code:: docker build -t fake-data-producer-for-apache-kafka-docker . #. Run the following command to run the Docker image: - :: + .. code:: docker run fake-data-producer-for-apache-kafka-docker You should now see the above command pushing IoT sensor reading events to the ``cpu_load_stats_real`` topic in your Apache Kafka® service: - :: + .. code:: {"hostname": "dopey", "cpu": "cpu4", "usage": 98.3335306302198, "occurred_at": 1633956789277} {"hostname": "sleepy", "cpu": "cpu2", "usage": 87.28240549074823, "occurred_at": 1633956783483} @@ -503,9 +503,11 @@ You can create the thresholds table in the ``demo-postgresql`` service with the 1. In the `Aiven Console `_, open the Aiven for PostgreSQL service ``demo-postgresql``. 2. In the **Overview** tab locate the **Service URI** parameter and copy the value. -3. Connect via ``psql`` to ``demo postgresql`` with the following terminal command, replacing the ```` placeholder with the **Service URI** string copied in the step above:: - - psql "" +3. Connect via ``psql`` to ``demo postgresql`` with the following terminal command, replacing the ```` placeholder with the **Service URI** string copied in the step above: + + .. code:: + + psql "" 4. Create the ``cpu_thresholds`` table and populate the values with the following code: @@ -514,13 +516,13 @@ You can create the thresholds table in the ``demo-postgresql`` service with the 5. Enter the following command to check that the threshold values are correctly populated: - :: + .. code:: SELECT * FROM cpu_thresholds; The output shows you the content of the table: - :: + .. code:: hostname | allowed_top ---------+------------