From 444e203fd471c9d59efd320cf42fdb236965c18e Mon Sep 17 00:00:00 2001
From: byashimov
Date: Thu, 21 Nov 2024 12:47:01 +0000
Subject: [PATCH] deploy: c94c0d288ccd648ecab2143ca19cab748a794570

---
 404.html                                      |   23 +-
 api-reference/cassandra.html                  |   27 +-
 api-reference/clickhouse.html                 |   23 +-
 api-reference/clickhousedatabase.html         |   23 +-
 api-reference/clickhousegrant.html            |   23 +-
 api-reference/clickhouserole.html             |   23 +-
 api-reference/clickhouseuser.html             |   23 +-
 api-reference/connectionpool.html             |   23 +-
 api-reference/database.html                   |   29 +-
 api-reference/examples/flink.yaml             |   29 +
 .../serviceintegration.autoscaler.yaml        |   29 +
 ...serviceintegrationendpoint.autoscaler.yaml |   17 +
 api-reference/flink.html                      | 2271 +++++++++++++++++
 api-reference/grafana.html                    |   53 +-
 api-reference/index.html                      |   23 +-
 api-reference/kafka.html                      |   51 +-
 api-reference/kafkaacl.html                   |   23 +-
 api-reference/kafkaconnect.html               |   39 +-
 api-reference/kafkaconnector.html             |   23 +-
 api-reference/kafkaschema.html                |   23 +-
 api-reference/kafkaschemaregistryacl.html     |   23 +-
 api-reference/kafkatopic.html                 |   23 +-
 api-reference/mysql.html                      |   27 +-
 api-reference/opensearch.html                 |  453 +++-
 api-reference/postgresql.html                 |   37 +-
 api-reference/project.html                    |   23 +-
 api-reference/projectvpc.html                 |   23 +-
 api-reference/redis.html                      |   27 +-
 api-reference/serviceintegration.html         |  339 +--
 api-reference/serviceintegrationendpoint.html |  158 +-
 api-reference/serviceuser.html                |   23 +-
 authentication.html                           |   23 +-
 changelog.html                                |   95 +-
 contributing/developer-guide.html             |   23 +-
 contributing/index.html                       |   23 +-
 contributing/resource-generation.html         |   23 +-
 index.html                                    |   23 +-
 installation/helm.html                        |   23 +-
 installation/kubectl.html                     |   23 +-
 installation/prerequisites.html               |   23 +-
 installation/uninstalling.html                |   23 +-
 resources/cassandra.html                      |   23 +-
 resources/clickhouse.html                     |   23 +-
 resources/kafka/connect.html                  |   23 +-
 resources/kafka/index.html                    |   23 +-
 resources/kafka/schema.html                   |   23 +-
 resources/mysql.html                          |   23 +-
 resources/opensearch.html                     |   23 +-
 resources/postgresql.html                     |   23 +-
 resources/project-vpc.html                    |   23 +-
 resources/project.html                        |   23 +-
 resources/redis.html                          |   23 +-
 resources/service-integrations.html           |   23 +-
 search/search_index.js                        |    2 +-
 search/search_index.json                      |    2 +-
 sitemap.xml                                   |  102 +-
 sitemap.xml.gz                                |  Bin 557 -> 563 bytes
 troubleshooting.html                          |   23 +-
 58 files changed, 4333 insertions(+), 328 deletions(-)
 create mode 100644 api-reference/examples/flink.yaml
 create mode 100644 api-reference/examples/serviceintegration.autoscaler.yaml
 create mode 100644 api-reference/examples/serviceintegrationendpoint.autoscaler.yaml
 create mode 100644 api-reference/flink.html

diff --git a/404.html b/404.html
index 3bcbe35a..bc22c56c 100644
--- a/404.html
+++ b/404.html
@@ -12,7 +12,7 @@
 -
 +
@@ -1254,6 +1254,27 @@
 +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • diff --git a/api-reference/cassandra.html b/api-reference/cassandra.html index 5d5e538f..2caa06fd 100644 --- a/api-reference/cassandra.html +++ b/api-reference/cassandra.html @@ -18,7 +18,7 @@ - + @@ -1431,6 +1431,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • @@ -2126,11 +2147,11 @@

userConfig
Cassandra specific user configuration options.

    Optional

    auth_google

    Appears on spec.userConfig.

    @@ -2413,7 +2434,7 @@

external_image_storage
access_key (string, Pattern: ^[A-Z0-9]+$, MaxLength: 4096). S3 access key. Requires permissions to the S3 bucket for the s3:PutObject and s3:PutObjectAcl actions.

  • bucket_url (string, MaxLength: 2048). Bucket URL for S3.
  • -
  • provider (string, Enum: s3). Provider type.
  • +
  • provider (string, Enum: s3). External image store provider.
  • secret_key (string, Pattern: ^[A-Za-z0-9/+=]+$, MaxLength: 4096). S3 secret key.
  • ip_filter

    @@ -2462,7 +2483,7 @@

smtp_server
from_name (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 128). Name used in outgoing emails, defaults to Grafana.
  • password (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 255). Password for SMTP authentication.
  • skip_verify (boolean). Skip verifying server certificate. Defaults to false.
  • -
  • starttls_policy (string, Enum: OpportunisticStartTLS, MandatoryStartTLS, NoStartTLS). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.
  • +
  • starttls_policy (string, Enum: MandatoryStartTLS, NoStartTLS, OpportunisticStartTLS). Either OpportunisticStartTLS, MandatoryStartTLS or NoStartTLS. Default is OpportunisticStartTLS.
  • username (string, Pattern: ^[^\x00-\x1F]+$, MaxLength: 255). Username for SMTP authentication.
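Put together, the smtp_server options above map onto a Grafana resource roughly as follows. This is an illustrative sketch: all names and values are placeholders, and the host, port, and from_address fields are assumed required fields that are not shown in the truncated list above.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    smtp_server:
      host: smtp.example.com             # assumed required field
      port: 587                          # assumed required field
      from_address: grafana@example.com  # assumed required field
      username: alerts-sender
      password: change-me
      starttls_policy: MandatoryStartTLS # OpportunisticStartTLS (default), MandatoryStartTLS, or NoStartTLS
      skip_verify: false
```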
  • @@ -2503,7 +2524,7 @@

    smtp_server - + diff --git a/api-reference/index.html b/api-reference/index.html index fdc3961c..78636928 100644 --- a/api-reference/index.html +++ b/api-reference/index.html @@ -18,7 +18,7 @@ - + @@ -1282,6 +1282,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • diff --git a/api-reference/kafka.html b/api-reference/kafka.html index ef833b9f..fef9af72 100644 --- a/api-reference/kafka.html +++ b/api-reference/kafka.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • @@ -2398,7 +2419,7 @@

userConfig
kafka_rest_authorization (boolean). Enable authorization in Kafka-REST service.

  • kafka_rest_config (object). Kafka REST configuration. See below for nested schema.
  • kafka_sasl_mechanisms (object). Kafka SASL mechanisms. See below for nested schema.
  • -
  • kafka_version (string, Enum: 3.5, 3.6, 3.7, 3.8). Kafka major version.
  • +
  • kafka_version (string, Enum: 3.7, 3.8). Kafka major version.
  • letsencrypt_sasl_privatelink (boolean). Use Letsencrypt CA for Kafka SASL via Privatelink.
  • private_access (object). Allow access to selected service ports from private networks. See below for nested schema.
  • privatelink_access (object). Allow access to selected service components through Privatelink. See below for nested schema.
  • @@ -2434,7 +2455,7 @@

kafka
auto_create_topics_enable (boolean). Enable auto-creation of topics. (Default: true). -
  • compression_type (string, Enum: gzip, snappy, lz4, zstd, uncompressed, producer). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts uncompressed which is equivalent to no compression; and producer which means retain the original compression codec set by the producer.(Default: producer).
  • +
• compression_type (string, Enum: gzip, lz4, producer, snappy, uncompressed, zstd). Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts uncompressed which is equivalent to no compression; and producer which means retain the original compression codec set by the producer. (Default: producer).
  • connections_max_idle_ms (integer, Minimum: 1000, Maximum: 3600000). Idle connections timeout: the server socket processor threads close the connections that idle for longer than this. (Default: 600000 ms (10 minutes)).
  • default_replication_factor (integer, Minimum: 1, Maximum: 10). Replication factor for auto-created topics (Default: 3).
  • group_initial_rebalance_delay_ms (integer, Minimum: 0, Maximum: 300000). The amount of time, in milliseconds, the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. The default value for this is 3 seconds. During development and testing it might be desirable to set this to 0 in order to not delay test execution time. (Default: 3000 ms (3 seconds)).
  • @@ -2444,7 +2465,7 @@

kafka
log_cleaner_max_compaction_lag_ms (integer, Minimum: 30000). The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted. (Default: 9223372036854775807 ms (Long.MAX_VALUE)).
  • log_cleaner_min_cleanable_ratio (number, Minimum: 0.2, Maximum: 0.9). Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option. (Default: 0.5).
  • log_cleaner_min_compaction_lag_ms (integer, Minimum: 0). The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. (Default: 0 ms).
  • -
  • log_cleanup_policy (string, Enum: delete, compact, compact,delete). The default cleanup policy for segments beyond the retention window (Default: delete).
  • +
  • log_cleanup_policy (string, Enum: compact, compact,delete, delete). The default cleanup policy for segments beyond the retention window (Default: delete).
  • log_flush_interval_messages (integer, Minimum: 1). The number of messages accumulated on a log partition before messages are flushed to disk (Default: 9223372036854775807 (Long.MAX_VALUE)).
  • log_flush_interval_ms (integer, Minimum: 0). The maximum time in ms that a message in any topic is kept in memory (page-cache) before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used (Default: null).
  • log_index_interval_bytes (integer, Minimum: 0, Maximum: 104857600). The interval with which Kafka adds an entry to the offset index (Default: 4096 bytes (4 kibibytes)).
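Several of the broker options above combine on a Kafka resource like this. A sketch with placeholder service, project, and plan names; the values are illustrative, not recommendations.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-2
  userConfig:
    kafka_version: "3.8"
    kafka:
      auto_create_topics_enable: false
      compression_type: zstd               # one of gzip, lz4, producer, snappy, uncompressed, zstd
      log_cleanup_policy: compact          # compact, "compact,delete", or delete
      default_replication_factor: 3
      group_initial_rebalance_delay_ms: 0  # useful in tests to skip the 3 s default delay
```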
  • @@ -2493,10 +2514,10 @@

kafka_connect_config
connector_client_config_override_policy (string, Enum: None, All). Defines what client configurations can be overridden by the connector. Default is None. +
  • connector_client_config_override_policy (string, Enum: All, None). Defines what client configurations can be overridden by the connector. Default is None.
  • consumer_auto_offset_reset (string, Enum: earliest, latest). What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
• consumer_fetch_max_bytes (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.
  • -
  • consumer_isolation_level (string, Enum: read_uncommitted, read_committed). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
  • +
  • consumer_isolation_level (string, Enum: read_committed, read_uncommitted). Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
• consumer_max_partition_fetch_bytes (integer, Minimum: 1048576, Maximum: 104857600). Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.
  • consumer_max_poll_interval_ms (integer, Minimum: 1, Maximum: 2147483647). The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
  • consumer_max_poll_records (integer, Minimum: 1, Maximum: 10000). The maximum number of records returned in a single call to poll() (defaults to 500).
  • @@ -2504,7 +2525,7 @@

kafka_connect_config
offset_flush_timeout_ms (integer, Minimum: 1, Maximum: 2147483647). Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
  • producer_batch_size (integer, Minimum: 0, Maximum: 5242880). This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
  • producer_buffer_memory (integer, Minimum: 5242880, Maximum: 134217728). The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
  • -
  • producer_compression_type (string, Enum: gzip, snappy, lz4, zstd, none). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
  • +
  • producer_compression_type (string, Enum: gzip, lz4, none, snappy, zstd). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
• producer_linger_ms (integer, Minimum: 0, Maximum: 5000). This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
  • producer_max_request_size (integer, Minimum: 131072, Maximum: 67108864). This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
  • scheduled_rebalance_max_delay_ms (integer, Minimum: 0, Maximum: 600000). The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
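As a sketch, overriding a few of the kafka_connect_config options above on a Kafka service could look like the following. All names are placeholders; the kafka_connect boolean toggle is an assumption taken from Aiven's Kafka user config and is not shown in the list above.

```yaml
apiVersion: aiven.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: business-4
  userConfig:
    kafka_connect: true  # assumed toggle for running Connect on the Kafka service
    kafka_connect_config:
      connector_client_config_override_policy: All
      consumer_isolation_level: read_committed  # read_uncommitted is the default
      producer_compression_type: lz4
      producer_linger_ms: 100
      offset_flush_timeout_ms: 10000
```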
  • @@ -2512,19 +2533,19 @@

kafka_connect_config
kafka_connect_secret_providers

    Appears on spec.userConfig.

    -

    SecretProvider.

    +

Configure external secret providers in order to reference external secrets in connector configuration. Currently, HashiCorp Vault and AWS Secrets Manager are supported.

    Required

    Optional

    aws

    Appears on spec.userConfig.kafka_connect_secret_providers.

    -

    AWS config for Secret Provider.

    +

    AWS secret provider configuration.

    Required

    @@ -2376,7 +2679,7 @@

userConfig
additional_backup_regions (array of strings, MaxItems: 1). Additional Cloud Regions for Backup Replication.
  • azure_migration (object). Azure migration settings. See below for nested schema.
  • custom_domain (string, MaxLength: 255). Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.
  • -
  • disable_replication_factor_adjustment (boolean). DEPRECATED: Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated at least to two nodes. Note: Due to potential data loss in case of losing a service node, this setting can no longer be activated.
  • +
  • disable_replication_factor_adjustment (boolean). Disable automatic replication factor adjustment for multi-node services. By default, Aiven ensures all indexes are replicated at least to two nodes. Note: Due to potential data loss in case of losing a service node, this setting can not be activated unless specifically allowed for the project.
  • gcs_migration (object). Google Cloud Storage migration settings. See below for nested schema.
  • index_patterns (array of objects, MaxItems: 512). Index patterns. See below for nested schema.
  • index_rollup (object). Index rollup settings. See below for nested schema.
  • @@ -2404,9 +2707,10 @@

azure_migration
Azure migration settings.

    Required

    Optional

    @@ -2414,8 +2718,9 @@

azure_migration
chunk_size (string, Pattern: ^[^\r\n]*$). Big files can be broken down into chunks during snapshotting if needed. Should be the same as for the 3rd party repository.
• compress (boolean). When set to true, metadata files are stored in compressed format.
  • endpoint_suffix (string, Pattern: ^[^\r\n]*$). Defines the DNS suffix for Azure Storage endpoints.
  • -
  • indices (string). A comma-delimited list of indices to restore from the snapshot. Multi-index syntax is supported. By default, a restore operation includes all data streams and indices in the snapshot. If this argument is provided, the restore operation only includes the data streams and indices that you specify.
  • +
  • include_aliases (boolean). Whether to restore aliases alongside their associated indexes. Default is true.
  • key (string, Pattern: ^[^\r\n]*$). Azure account secret key. One of key or sas_token should be specified.
  • +
  • restore_global_state (boolean). If true, restore the cluster state. Defaults to false.
  • sas_token (string, Pattern: ^[^\r\n]*$). A shared access signatures (SAS) token. One of key or sas_token should be specified.
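A hypothetical azure_migration block using the optional fields above. The account, container, and base_path fields are assumed from Aiven's snapshot-migration documentation (the Required list is not shown above), and every value is a placeholder.

```yaml
apiVersion: aiven.io/v1alpha1
kind: OpenSearch
metadata:
  name: my-opensearch
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    azure_migration:
      account: mystorageaccount      # assumed required field
      container: my-container        # assumed required field
      base_path: snapshots/os        # assumed required field
      key: azure-account-secret-key  # one of key or sas_token
      include_aliases: true          # default is true
      restore_global_state: false    # default is false
```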
  • gcs_migration

    @@ -2426,13 +2731,15 @@

gcs_migration
base_path (string, Pattern: ^[^\r\n]*$). The path to the repository data within its container. The value of this setting should not start or end with a /.
• bucket (string, Pattern: ^[^\r\n]*$). Google Cloud Storage bucket name.
  • credentials (string, Pattern: ^[^\r\n]*$). Google Cloud Storage credentials file content.
  • +
  • indices (string). A comma-delimited list of indices to restore from the snapshot. Multi-index syntax is supported.
  • snapshot_name (string, Pattern: ^[^\r\n]*$). The snapshot name to restore from.
  • Optional

    index_patterns

    Appears on spec.userConfig.

    @@ -2535,7 +2842,10 @@

opensearch
plugins_alerting_filter_by_backend_roles (boolean). Enable or disable filtering of alerting by backend roles. Requires Security plugin. Defaults to false.
  • reindex_remote_whitelist (array of strings, MaxItems: 32). Whitelisted addresses for reindexing. Changing this value will cause all OpenSearch instances to restart.
  • script_max_compilations_rate (string, Pattern: ^[^\r\n]*$, MaxLength: 1024). Script compilation circuit breaker limits the number of inline script compilations within a period of time. Default is use-context.
  • +
  • search.insights.top_queries (object). See below for nested schema.
  • +
  • search_backpressure (object). Search Backpressure Settings. See below for nested schema.
  • search_max_buckets (integer, Minimum: 1, Maximum: 1000000). Maximum number of aggregation buckets allowed in a single response. OpenSearch default value is used when this is not defined.
  • +
  • shard_indexing_pressure (object). Shard indexing back pressure settings. See below for nested schema.
  • thread_pool_analyze_queue_size (integer, Minimum: 10, Maximum: 2000). Size for the thread pool queue. See documentation for exact details.
• thread_pool_analyze_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
• thread_pool_force_merge_size (integer, Minimum: 1, Maximum: 128). Size for the thread pool. See documentation for exact details. Note that this may have a maximum value depending on CPU count; the value is automatically lowered if set higher than the maximum.
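A few of the opensearch tuning options above on a single resource, as an illustrative sketch with placeholder names and example values:

```yaml
apiVersion: aiven.io/v1alpha1
kind: OpenSearch
metadata:
  name: my-opensearch
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    opensearch:
      search_max_buckets: 10000
      thread_pool_analyze_size: 2  # may be lowered automatically depending on CPU count
      reindex_remote_whitelist:    # changing this restarts all OpenSearch instances
        - source-cluster.example.com:9200
```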
  • @@ -2560,7 +2870,7 @@

    spec.userConfig.opensearch.auth_failure_listeners.

    Optional

    saml

    diff --git a/api-reference/postgresql.html b/api-reference/postgresql.html index af88be7e..88253691 100644 --- a/api-reference/postgresql.html +++ b/api-reference/postgresql.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • @@ -2260,7 +2281,7 @@

userConfig
pg_read_replica (boolean). Should the service which is being forked be a read replica (deprecated, use read_replica service integration instead).

  • pg_service_to_fork_from (string, Immutable, Pattern: ^[a-z][-a-z0-9]{0,63}$|^$, MaxLength: 64). Name of the PG Service from which to fork (deprecated, use service_to_fork_from). This has effect only when a new service is being created.
• pg_stat_monitor_enable (boolean). Enable the pg_stat_monitor extension. Enabling this extension will cause the cluster to be restarted. When this extension is enabled, pg_stat_statements results for utility commands are unreliable.
  • -
  • pg_version (string, Enum: 12, 13, 14, 15, 16). PostgreSQL major version.
  • +
  • pg_version (string, Enum: 13, 14, 15, 16). PostgreSQL major version.
  • pgaudit (object). Deprecated. System-wide settings for the pgaudit extension. See below for nested schema.
  • pgbouncer (object). PGBouncer connection pooling settings. See below for nested schema.
  • pglookout (object). System-wide settings for pglookout. See below for nested schema.
  • @@ -2273,7 +2294,7 @@

userConfig
service_to_fork_from (string, Immutable, Pattern: ^[a-z][-a-z0-9]{0,63}$|^$, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
  • shared_buffers_percentage (number, Minimum: 20, Maximum: 60). Percentage of total RAM that the database server uses for shared memory buffers. Valid range is 20-60 (float), which corresponds to 20% - 60%. This setting adjusts the shared_buffers configuration value.
  • static_ips (boolean). Use static public IP addresses.
  • -
  • synchronous_replication (string, Enum: quorum, off). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
  • +
  • synchronous_replication (string, Enum: off, quorum). Synchronous replication type. Note that the service plan also needs to support synchronous replication.
  • timescaledb (object). System-wide settings for the timescaledb extension. See below for nested schema.
  • variant (string, Enum: aiven, timescale). Variant of the PostgreSQL service, may affect the features that are exposed by default.
  • work_mem (integer, Minimum: 1, Maximum: 1024). Sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files, in MB. Default is 1MB + 0.075% of total RAM (up to 32MB).
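The top-level userConfig options above can be combined on a PostgreSQL resource like this. An illustrative sketch: service, project, and plan names are placeholders, and the values are examples rather than recommendations.

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    pg_version: "16"
    synchronous_replication: quorum  # the service plan must support synchronous replication
    shared_buffers_percentage: 25    # valid range 20-60 (% of total RAM)
    work_mem: 4                      # MB per query operation before spilling to disk
```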
  • @@ -2330,8 +2351,8 @@

pg
idle_in_transaction_session_timeout (integer, Minimum: 0, Maximum: 604800000). Time out sessions with open transactions after this number of milliseconds.
  • jit (boolean). Controls system-wide use of Just-in-Time Compilation (JIT).
  • log_autovacuum_min_duration (integer, Minimum: -1, Maximum: 2147483647). Causes each action executed by autovacuum to be logged if it ran for at least the specified number of milliseconds. Setting this to zero logs all autovacuum actions. Minus-one (the default) disables logging autovacuum actions.
  • -
  • log_error_verbosity (string, Enum: TERSE, DEFAULT, VERBOSE). Controls the amount of detail written in the server log for each message that is logged.
  • -
  • log_line_prefix (string, Enum: 'pid=%p,user=%u,db=%d,app=%a,client=%h ', '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ', '%m [%p] %q[user=%u,db=%d,app=%a] ', 'pid=%p,user=%u,db=%d,app=%a,client=%h,txid=%x,qid=%Q '). Choose from one of the available log formats.
  • +
  • log_error_verbosity (string, Enum: DEFAULT, TERSE, VERBOSE). Controls the amount of detail written in the server log for each message that is logged.
  • +
  • log_line_prefix (string, Enum: '%m [%p] %q[user=%u,db=%d,app=%a] ', '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h ', 'pid=%p,user=%u,db=%d,app=%a,client=%h ', 'pid=%p,user=%u,db=%d,app=%a,client=%h,txid=%x,qid=%Q '). Choose from one of the available log formats.
  • log_min_duration_statement (integer, Minimum: -1, Maximum: 86400000). Log statements that take more than this number of milliseconds to run, -1 disables.
  • log_temp_files (integer, Minimum: -1, Maximum: 2147483647). Log statements for each temporary file created larger than this number of kilobytes, -1 disables.
  • max_files_per_process (integer, Minimum: 1000, Maximum: 4096). PostgreSQL maximum number of files that can be open per process.
  • @@ -2352,12 +2373,12 @@

pg
pg_partman_bgw.role (string, Pattern: ^[_A-Za-z0-9][-._A-Za-z0-9]{0,63}$, MaxLength: 64). Controls which role to use for pg_partman's scheduled background tasks.
  • pg_stat_monitor.pgsm_enable_query_plan (boolean). Enables or disables query plan monitoring.
  • pg_stat_monitor.pgsm_max_buckets (integer, Minimum: 1, Maximum: 10). Sets the maximum number of buckets.
  • -
  • pg_stat_statements.track (string, Enum: all, top, none). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
  • +
  • pg_stat_statements.track (string, Enum: all, none, top). Controls which statements are counted. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top.
  • temp_file_limit (integer, Minimum: -1, Maximum: 2147483647). PostgreSQL temporary file limit in KiB, -1 for unlimited.
  • timezone (string, Pattern: ^[\w/]*$, MaxLength: 64). PostgreSQL service timezone.
  • track_activity_query_size (integer, Minimum: 1024, Maximum: 10240). Specifies the number of bytes reserved to track the currently executing command for each active session.
  • track_commit_timestamp (string, Enum: off, on). Record commit time of transactions.
  • -
  • track_functions (string, Enum: all, pl, none). Enables tracking of function call counts and time used.
  • +
  • track_functions (string, Enum: all, none, pl). Enables tracking of function call counts and time used.
  • track_io_timing (string, Enum: off, on). Enables timing of database I/O calls. This parameter is off by default, because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms.
  • wal_sender_timeout (integer). Terminate replication connections that are inactive for longer than this amount of time, in milliseconds. Setting this value to zero disables the timeout.
  • wal_writer_delay (integer, Minimum: 10, Maximum: 200). WAL flush interval in milliseconds. Note that setting this value to lower than the default 200ms may negatively impact performance.
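The nested pg options above sit under spec.userConfig.pg, roughly as follows. A hypothetical sketch with placeholder names; note the quoting of enum values that YAML would otherwise parse as booleans.

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    pg:
      jit: true
      log_min_duration_statement: 1000  # log statements slower than 1 s; -1 disables
      log_error_verbosity: DEFAULT      # DEFAULT, TERSE, or VERBOSE
      track_functions: pl               # all, none, or pl
      track_io_timing: "on"             # quoted so YAML keeps the literal string "on"
      pg_stat_statements.track: top     # all, none, or top
```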
  • @@ -2400,7 +2421,7 @@

pgbouncer
autodb_idle_timeout (integer, Minimum: 0, Maximum: 86400). If the automatically created database pools have been unused this many seconds, they are freed. If 0 then timeout is disabled. [seconds].
  • autodb_max_db_connections (integer, Minimum: 0, Maximum: 2147483647). Do not allow more than this many server connections per database (regardless of user). Setting it to 0 means unlimited.
  • -
  • autodb_pool_mode (string, Enum: session, transaction, statement). PGBouncer pool mode.
  • +
  • autodb_pool_mode (string, Enum: session, statement, transaction). PGBouncer pool mode.
  • autodb_pool_size (integer, Minimum: 0, Maximum: 10000). If non-zero then create automatically a pool of that size per user when a pool doesn't exist.
  • ignore_startup_parameters (array of strings, MaxItems: 32). List of parameters to ignore when given in startup packet.
• max_prepared_statements (integer, Minimum: 0, Maximum: 3000). When max_prepared_statements is set to a non-zero value, PgBouncer tracks protocol-level commands related to named prepared statements sent by the client in transaction and statement pooling modes. Setting it to 0 disables prepared statements. max_prepared_statements defaults to 100, and its maximum is 3000.
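The pgbouncer options above, gathered into one illustrative spec.userConfig.pgbouncer block (placeholder names, example values):

```yaml
apiVersion: aiven.io/v1alpha1
kind: PostgreSQL
metadata:
  name: my-pg
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    pgbouncer:
      autodb_pool_mode: transaction  # session, statement, or transaction
      autodb_pool_size: 20
      autodb_idle_timeout: 3600      # seconds; 0 disables the timeout
      max_prepared_statements: 200   # 0 disables prepared-statement tracking
```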
  • diff --git a/api-reference/project.html b/api-reference/project.html index 6973d47f..82f8f3b6 100644 --- a/api-reference/project.html +++ b/api-reference/project.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • diff --git a/api-reference/projectvpc.html b/api-reference/projectvpc.html index 22ce3eb8..ccb1990e 100644 --- a/api-reference/projectvpc.html +++ b/api-reference/projectvpc.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • diff --git a/api-reference/redis.html b/api-reference/redis.html index 2cb97c2a..68e8f808 100644 --- a/api-reference/redis.html +++ b/api-reference/redis.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • @@ -2149,13 +2170,13 @@

userConfig
redis_io_threads (integer, Minimum: 1, Maximum: 32). Set Redis IO thread count. Changing this will cause a restart of the Redis service.

  • redis_lfu_decay_time (integer, Minimum: 1, Maximum: 120). LFU maxmemory-policy counter decay time in minutes.
  • redis_lfu_log_factor (integer, Minimum: 0, Maximum: 100). Counter logarithm factor for volatile-lfu and allkeys-lfu maxmemory-policies.
  • -
  • redis_maxmemory_policy (string, Enum: noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random, volatile-ttl, volatile-lfu, allkeys-lfu). Redis maxmemory-policy.
  • +
  • redis_maxmemory_policy (string, Enum: allkeys-lfu, allkeys-lru, allkeys-random, noeviction, volatile-lfu, volatile-lru, volatile-random, volatile-ttl). Redis maxmemory-policy.
  • redis_notify_keyspace_events (string, Pattern: ^[KEg\$lshzxentdmA]*$, MaxLength: 32). Set notify-keyspace-events option.
  • redis_number_of_databases (integer, Minimum: 1, Maximum: 128). Set number of Redis databases. Changing this will cause a restart of the Redis service.
• redis_persistence (string, Enum: off, rdb). When persistence is rdb, Redis does RDB dumps every 10 minutes if any key is changed. Also RDB dumps are done according to the backup schedule for backup purposes. When persistence is off, no RDB dumps or backups are done, so data can be lost at any moment if the service is restarted for any reason, or if the service is powered off. Also, the service can't be forked.
  • redis_pubsub_client_output_buffer_limit (integer, Minimum: 32, Maximum: 512). Set output buffer limit for pub / sub clients in MB. The value is the hard limit, the soft limit is 1/4 of the hard limit. When setting the limit, be mindful of the available memory in the selected service plan.
  • redis_ssl (boolean). Require SSL to access Redis.
  • -
  • redis_timeout (integer, Minimum: 0, Maximum: 31536000). Redis idle connection timeout in seconds.
  • +
  • redis_timeout (integer, Minimum: 0, Maximum: 2073600). Redis idle connection timeout in seconds.
  • redis_version (string, Enum: 7.0). Redis major version.
  • service_log (boolean). Store logs for the service so that they are available in the HTTP API and console.
  • service_to_fork_from (string, Immutable, Pattern: ^[a-z][-a-z0-9]{0,63}$|^$, MaxLength: 64). Name of another service to fork from. This has effect only when a new service is being created.
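As an illustrative sketch, the Redis options above combine on a resource like this (placeholder names; values are examples, not recommendations):

```yaml
apiVersion: aiven.io/v1alpha1
kind: Redis
metadata:
  name: my-redis
spec:
  authSecretRef:
    name: aiven-token
    key: token
  project: aiven-project-name
  cloudName: google-europe-west1
  plan: startup-4
  userConfig:
    redis_version: "7.0"
    redis_maxmemory_policy: allkeys-lru
    redis_persistence: rdb  # off disables RDB dumps and backups
    redis_timeout: 300      # idle connection timeout, seconds
    redis_ssl: true
```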
  • diff --git a/api-reference/serviceintegration.html b/api-reference/serviceintegration.html index aef35987..87f29bcc 100644 --- a/api-reference/serviceintegration.html +++ b/api-reference/serviceintegration.html @@ -18,7 +18,7 @@ - + @@ -1272,6 +1272,27 @@ +
  • + + + + + Flink + + + + +
  • + + + + + + + + + +
  • @@ -2296,7 +2317,7 @@

    ServiceIntegration

    Usage examples

    -clickhouse_postgresql +autoscaler
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegration
     metadata:
    @@ -2307,52 +2328,29 @@ 

    Usage examples key: token project: aiven-project-name - integrationType: clickhouse_postgresql + integrationType: autoscaler sourceServiceName: my-pg - destinationServiceName: my-clickhouse - - clickhousePostgresql: - databases: - - database: defaultdb - schema: public - ---- - -apiVersion: aiven.io/v1alpha1 -kind: Clickhouse -metadata: - name: my-clickhouse -spec: - authSecretRef: - name: aiven-token - key: token - - project: aiven-project-name - cloudName: google-europe-west1 - plan: startup-16 - maintenanceWindowDow: friday - maintenanceWindowTime: 23:00:00 - ---- - -apiVersion: aiven.io/v1alpha1 -kind: PostgreSQL -metadata: - name: my-pg -spec: - authSecretRef: - name: aiven-token - key: token - - project: aiven-project-name - cloudName: google-europe-west1 - plan: startup-4 - maintenanceWindowDow: friday - maintenanceWindowTime: 23:00:00 + # Look up autoscaler integration endpoint ID via Console + destinationEndpointId: my-destination-endpoint-id + +--- + +apiVersion: aiven.io/v1alpha1 +kind: PostgreSQL +metadata: + name: my-pg +spec: + authSecretRef: + name: aiven-token + key: token + + project: aiven-project-name + cloudName: google-europe-west1 + plan: startup-4

-datadog
+clickhouse_postgresql
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegration
     metadata:
    @@ -2363,34 +2361,52 @@ 

Usage examples
   key: token
   project: aiven-project-name
-  integrationType: datadog
+  integrationType: clickhouse_postgresql
   sourceServiceName: my-pg
-  destinationEndpointId: destination-endpoint-id
+  destinationServiceName: my-clickhouse
-  datadog:
-    datadog_dbm_enabled: True
-    datadog_tags:
-      - tag: env
-        comment: test
-
----
-
-apiVersion: aiven.io/v1alpha1
-kind: PostgreSQL
-metadata:
-  name: my-pg
-spec:
-  authSecretRef:
-    name: aiven-token
-    key: token
-
-  project: aiven-project-name
-  cloudName: google-europe-west1
-  plan: startup-4
+  clickhousePostgresql:
+    databases:
+      - database: defaultdb
+        schema: public
+
+---
+
+apiVersion: aiven.io/v1alpha1
+kind: Clickhouse
+metadata:
+  name: my-clickhouse
+spec:
+  authSecretRef:
+    name: aiven-token
+    key: token
+
+  project: aiven-project-name
+  cloudName: google-europe-west1
+  plan: startup-16
+  maintenanceWindowDow: friday
+  maintenanceWindowTime: 23:00:00
+
+---
+
+apiVersion: aiven.io/v1alpha1
+kind: PostgreSQL
+metadata:
+  name: my-pg
+spec:
+  authSecretRef:
+    name: aiven-token
+    key: token
+
+  project: aiven-project-name
+  cloudName: google-europe-west1
+  plan: startup-4
+  maintenanceWindowDow: friday
+  maintenanceWindowTime: 23:00:00

-kafka_connect
+datadog
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegration
     metadata:
    @@ -2401,22 +2417,22 @@ 

Usage examples
   key: token
   project: aiven-project-name
-  integrationType: kafka_connect
-  sourceServiceName: my-kafka
-  destinationServiceName: my-kafka-connect
+  integrationType: datadog
+  sourceServiceName: my-pg
+  destinationEndpointId: destination-endpoint-id
-  kafkaConnect:
-    kafka_connect:
-      group_id: connect
-      status_storage_topic: __connect_status
-      offset_storage_topic: __connect_offsets
+  datadog:
+    datadog_dbm_enabled: True
+    datadog_tags:
+      - tag: env
+        comment: test

 ---

 apiVersion: aiven.io/v1alpha1
-kind: Kafka
+kind: PostgreSQL
 metadata:
-  name: my-kafka
+  name: my-pg
 spec:
   authSecretRef:
     name: aiven-token
@@ -2424,32 +2440,11 @@

Usage examples
   project: aiven-project-name
   cloudName: google-europe-west1
-  plan: business-4
-
----
-
-apiVersion: aiven.io/v1alpha1
-kind: KafkaConnect
-metadata:
-  name: my-kafka-connect
-spec:
-  authSecretRef:
-    name: aiven-token
-    key: token
-
-  project: aiven-project-name
-  cloudName: google-europe-west1
-  plan: business-4
-
-  userConfig:
-    kafka_connect:
-      consumer_isolation_level: read_committed
-      public_access:
-        kafka_connect: true
+  plan: startup-4

-kafka_logs
+kafka_connect
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegration
     metadata:
    @@ -2460,43 +2455,102 @@ 

Usage examples
   key: token
   project: aiven-project-name
-  integrationType: kafka_logs
+  integrationType: kafka_connect
   sourceServiceName: my-kafka
-  destinationServiceName: my-kafka
+  destinationServiceName: my-kafka-connect
-  kafkaLogs:
-    kafka_topic: my-kafka-topic
-
----
-
-apiVersion: aiven.io/v1alpha1
-kind: Kafka
-metadata:
-  name: my-kafka
-spec:
-  authSecretRef:
-    name: aiven-token
-    key: token
-
-  project: aiven-project-name
-  cloudName: google-europe-west1
-  plan: business-4
-
----
-
-apiVersion: aiven.io/v1alpha1
-kind: KafkaTopic
-metadata:
-  name: my-kafka-topic
-spec:
-  authSecretRef:
-    name: aiven-token
-    key: token
-
-  project: aiven-project-name
-  serviceName: my-kafka
-  replication: 2
-  partitions: 1
+  kafkaConnect:
+    kafka_connect:
+      group_id: connect
+      status_storage_topic: __connect_status
+      offset_storage_topic: __connect_offsets
+
+---
+
+apiVersion: aiven.io/v1alpha1
+kind: Kafka
+metadata:
+  name: my-kafka
+spec:
+  authSecretRef:
+    name: aiven-token
+    key: token
+
+  project: aiven-project-name
+  cloudName: google-europe-west1
+  plan: business-4
+
+---
+
+apiVersion: aiven.io/v1alpha1
+kind: KafkaConnect
+metadata:
+  name: my-kafka-connect
+spec:
+  authSecretRef:
+    name: aiven-token
+    key: token
+
+  project: aiven-project-name
+  cloudName: google-europe-west1
+  plan: business-4
+
+  userConfig:
+    kafka_connect:
+      consumer_isolation_level: read_committed
+      public_access:
+        kafka_connect: true

    +
    +
+kafka_logs
    apiVersion: aiven.io/v1alpha1
    +kind: ServiceIntegration
    +metadata:
    +  name: my-service-integration
    +spec:
    +  authSecretRef:
    +    name: aiven-token
    +    key: token
    +
    +  project: aiven-project-name
    +  integrationType: kafka_logs
    +  sourceServiceName: my-kafka
    +  destinationServiceName: my-kafka
    +
    +  kafkaLogs:
    +    kafka_topic: my-kafka-topic
    +
    +---
    +
    +apiVersion: aiven.io/v1alpha1
    +kind: Kafka
    +metadata:
    +  name: my-kafka
    +spec:
    +  authSecretRef:
    +    name: aiven-token
    +    key: token
    +
    +  project: aiven-project-name
    +  cloudName: google-europe-west1
    +  plan: business-4
    +
    +---
    +
    +apiVersion: aiven.io/v1alpha1
    +kind: KafkaTopic
    +metadata:
    +  name: my-kafka-topic
    +spec:
    +  authSecretRef:
    +    name: aiven-token
    +    key: token
    +
    +  project: aiven-project-name
    +  serviceName: my-kafka
    +  replication: 2
    +  partitions: 1
     
    @@ -2504,14 +2558,14 @@

Usage examples — the resources referenced in these examples must be created first.

    Apply the resource with:

kubectl apply -f example.yaml
     

    Verify the newly created ServiceIntegration:

kubectl get serviceintegrations my-service-integration
     

The output is similar to the following:

-Name                      Project               Type                     Source Service Name    Destination Service Name    
-my-service-integration    aiven-project-name    clickhouse_postgresql    my-pg                  my-clickhouse               
+Name                      Project               Type          Source Service Name    Destination Endpoint ID       
+my-service-integration    aiven-project-name    autoscaler    my-pg                  my-destination-endpoint-id    
     

    ServiceIntegration

    ServiceIntegration is the Schema for the serviceintegrations API.

    @@ -2533,6 +2587,7 @@

spec

    Optional

    • authSecretRef (object). Authentication reference to Aiven token in a secret. See below for nested schema.
+• autoscaler (object). Autoscaler specific user configuration options.
    • clickhouseKafka (object). Clickhouse Kafka configuration values. See below for nested schema.
    • clickhousePostgresql (object). Clickhouse PostgreSQL configuration values. See below for nested schema.
    • datadog (object). Datadog specific user configuration options. See below for nested schema.
@@ -2570,14 +2625,14 @@

tables

• columns (array of objects, MaxItems: 100). Table columns. See below for nested schema.
-• data_format (string, Enum: Avro, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, TSKV, TSV, TabSeparated, RawBLOB, AvroConfluent, Parquet). Message data format.
+• data_format (string, Enum: Avro, AvroConfluent, CSV, JSONAsString, JSONCompactEachRow, JSONCompactStringsEachRow, JSONEachRow, JSONStringsEachRow, MsgPack, Parquet, RawBLOB, TSKV, TSV, TabSeparated). Message data format.
    • group_name (string, MinLength: 1, MaxLength: 249). Kafka consumers group.
    • name (string, MinLength: 1, MaxLength: 40). Name of the table.
    • topics (array of objects, MaxItems: 100). Kafka topics. See below for nested schema.

    Optional

-• auto_offset_reset (string, Enum: smallest, earliest, beginning, largest, latest, end). Action to take when there is no initial offset in offset store or the desired offset is out of range.
+• auto_offset_reset (string, Enum: beginning, earliest, end, largest, latest, smallest). Action to take when there is no initial offset in offset store or the desired offset is out of range.
    • date_time_input_format (string, Enum: basic, best_effort, best_effort_us). Method to read DateTime from text input formats.
    • handle_error_mode (string, Enum: default, stream). How to handle errors for Kafka engine.
    • max_block_size (integer, Minimum: 0, Maximum: 1000000000). Number of row collected by poll(s) for flushing data from Kafka.
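The table schema above can be sketched as a clickhouseKafka block on a ServiceIntegration. This is a hedged example: the integration type name `clickhouse_kafka` and the nested field names under `topics` and `columns` (`name`, `type`) are assumptions not confirmed by this excerpt, and all service names are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegration
metadata:
  name: my-clickhouse-kafka
spec:
  project: aiven-project-name
  integrationType: clickhouse_kafka   # assumption: type name for this config
  sourceServiceName: my-kafka
  destinationServiceName: my-clickhouse

  clickhouseKafka:
    tables:
      - name: kafka_events            # MaxLength: 40
        group_name: clickhouse-ingest # Kafka consumers group
        data_format: JSONEachRow
        auto_offset_reset: earliest
        topics:
          - name: my-kafka-topic      # assumption: topic objects take a name
        columns:
          - name: id                  # assumption: column objects take name/type
            type: UInt64
```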
@@ -2734,7 +2789,7 @@

kafka_mirrormaker

• consumer_max_poll_records (integer, Minimum: 100, Maximum: 20000). Set consumer max.poll.records. The default is 500.
    • producer_batch_size (integer, Minimum: 0, Maximum: 5242880). The batch size in bytes producer will attempt to collect before publishing to broker.
    • producer_buffer_memory (integer, Minimum: 5242880, Maximum: 134217728). The amount of bytes producer can use for buffering data before publishing to broker.
    • -
    • producer_compression_type (string, Enum: gzip, snappy, lz4, zstd, none). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
    • +
    • producer_compression_type (string, Enum: gzip, lz4, none, snappy, zstd). Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
    • producer_linger_ms (integer, Minimum: 0, Maximum: 5000). The linger time (ms) for waiting new data to arrive for publishing.
    • producer_max_request_size (integer, Minimum: 0, Maximum: 268435456). The maximum request size in bytes.
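As a rough illustration of the producer settings above, a MirrorMaker user config might look like the following sketch. The `kafkaMirrormaker`/`kafka_mirrormaker` nesting is assumed by analogy with the kafkaConnect example; all values are placeholders chosen within the documented ranges.

```yaml
# Fragment of a ServiceIntegration spec (hedged sketch)
kafkaMirrormaker:
  kafka_mirrormaker:
    consumer_max_poll_records: 500    # 100–20000, default 500
    producer_batch_size: 1048576      # bytes collected before publishing (max 5242880)
    producer_buffer_memory: 33554432  # producer buffer, 5242880–134217728 bytes
    producer_compression_type: zstd   # gzip, lz4, none, snappy, zstd
    producer_linger_ms: 100           # 0–5000 ms
    producer_max_request_size: 67108864
```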
diff --git a/api-reference/serviceintegrationendpoint.html b/api-reference/serviceintegrationendpoint.html
index 29d10455..880f70fa 100644
--- a/api-reference/serviceintegrationendpoint.html
+++ b/api-reference/serviceintegrationendpoint.html

    ServiceIntegrationEndpoint

    Usage examples

-external_postgresql
+autoscaler
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegrationEndpoint
     metadata:
    @@ -2067,19 +2136,17 @@ 

Usage examples
   key: token
   project: aiven-project-name
-  endpointName: my-external-postgresql
-  endpointType: external_postgresql
+  endpointName: my-autoscaler
+  endpointType: autoscaler
-  externalPostgresql:
-    username: username
-    password: password
-    host: example.example
-    port: 5432
-    ssl_mode: require
+  autoscaler:
+    autoscaling:
+      - type: autoscale_disk
+        cap_gb: 100

-external_schema_registry
+external_postgresql
    apiVersion: aiven.io/v1alpha1
     kind: ServiceIntegrationEndpoint
     metadata:
    @@ -2090,14 +2157,37 @@ 

Usage examples
   key: token
   project: aiven-project-name
-  endpointName: my-external-schema-registry
-  endpointType: external_schema_registry
+  endpointName: my-external-postgresql
+  endpointType: external_postgresql
-  externalSchemaRegistry:
-    url: https://schema-registry.example.com:8081
-    authentication: basic
-    basic_auth_username: username
-    basic_auth_password: password
+  externalPostgresql:
+    username: username
+    password: password
+    host: example.example
+    port: 5432
+    ssl_mode: require

    +
    +
+external_schema_registry
    apiVersion: aiven.io/v1alpha1
    +kind: ServiceIntegrationEndpoint
    +metadata:
    +  name: my-service-integration-endpoint
    +spec:
    +  authSecretRef:
    +    name: aiven-token
    +    key: token
    +
    +  project: aiven-project-name
    +  endpointName: my-external-schema-registry
    +  endpointType: external_schema_registry
    +
    +  externalSchemaRegistry:
    +    url: https://schema-registry.example.com:8081
    +    authentication: basic
    +    basic_auth_username: username
    +    basic_auth_password: password
     
    @@ -2105,14 +2195,14 @@

Usage examples — the resources referenced in these examples must be created first.

    Apply the resource with:

kubectl apply -f example.yaml
     

    Verify the newly created ServiceIntegrationEndpoint:

kubectl get serviceintegrationendpoints my-service-integration-endpoint
     

The output is similar to the following:

-Name                               Project               Endpoint Name             Endpoint Type          ID      
-my-service-integration-endpoint    aiven-project-name    my-external-postgresql    external_postgresql    <id>    
+Name                               Project               Endpoint Name    Endpoint Type    ID      
+my-service-integration-endpoint    aiven-project-name    my-autoscaler    autoscaler       <id>    
     

    ServiceIntegrationEndpoint

    ServiceIntegrationEndpoint is the Schema for the serviceintegrationendpoints API.

    @@ -2134,6 +2224,7 @@

spec

    Optional

+autoscaler
+
+Appears on spec.
+
+Autoscaler configuration values.
+
+Required
+
+• autoscaling (array of objects, MaxItems: 64). Configure autoscaling thresholds for a service. See below for nested schema.
+
+autoscaling
+
+Appears on spec.autoscaler.
+
+Autoscaling properties for a service.
+
+Required
+
+• cap_gb (integer, Minimum: 50, Maximum: 10000). The maximum total disk size (in GB) to allow the autoscaler to scale up to.
+• type (string, Enum: autoscale_disk). Type of autoscale event.

    datadog

    Appears on spec.

    Datadog configuration values.

    @@ -2171,7 +2277,7 @@

• kafka_consumer_check_instances (integer, Minimum: 1, Maximum: 100). Number of separate instances to fetch kafka consumer statistics with.

  • kafka_consumer_stats_timeout (integer, Minimum: 2, Maximum: 300). Number of seconds that datadog will wait to get consumer statistics from brokers.
  • max_partition_contexts (integer, Minimum: 200, Maximum: 200000). Maximum number of partition contexts to send.
-• site (string, Enum: datadoghq.com, datadoghq.eu, us3.datadoghq.com, us5.datadoghq.com, ddog-gov.com, ap1.datadoghq.com). Datadog intake site. Defaults to datadoghq.com.
+• site (string, Enum: ap1.datadoghq.com, datadoghq.com, datadoghq.eu, ddog-gov.com, us3.datadoghq.com, us5.datadoghq.com). Datadog intake site. Defaults to datadoghq.com.
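A hedged sketch of a datadog ServiceIntegrationEndpoint using the fields above. The `datadog_api_key` field is an assumption (a credential field not listed in this excerpt), and all names and values are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegrationEndpoint
metadata:
  name: my-datadog-endpoint
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name
  endpointName: my-datadog
  endpointType: datadog

  datadog:
    datadog_api_key: my-datadog-api-key  # assumption: required credential field
    site: datadoghq.eu                   # Datadog intake site
    max_partition_contexts: 200          # 200–200000
```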
datadog_tags

    Appears on spec.datadog.

    @@ -2244,7 +2350,7 @@

externalKafka

• bootstrap_servers (string, MinLength: 3, MaxLength: 256). Bootstrap servers.

-• security_protocol (string, Enum: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL). Security protocol.
+• security_protocol (string, Enum: PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SSL). Security protocol.
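The externalKafka fields above might be combined as in the following sketch. The endpoint type name `external_kafka` is assumed by analogy with the other endpoint examples, and the broker addresses are placeholders.

```yaml
apiVersion: aiven.io/v1alpha1
kind: ServiceIntegrationEndpoint
metadata:
  name: my-external-kafka
spec:
  authSecretRef:
    name: aiven-token
    key: token

  project: aiven-project-name
  endpointName: my-external-kafka
  endpointType: external_kafka  # assumption: endpoint type for externalKafka

  externalKafka:
    # Comma-separated broker list (3–256 characters)
    bootstrap_servers: broker1.example.com:9092,broker2.example.com:9092
    security_protocol: SASL_SSL
```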
Optional