
refactor: move the Deployments under the User Guide (#1185)
nicecui authored Sep 20, 2024
1 parent 20f0303 commit 0bb4dd9
Showing 70 changed files with 159 additions and 94 deletions.
2 changes: 1 addition & 1 deletion docs/db-cloud-shared/clients/otlp-integration.md
@@ -7,7 +7,7 @@ To send OpenTelemetry Metrics to GreptimeDB through OpenTelemetry SDK libraries,
* URL: `https://<host>/v1/otlp/v1/metrics`
* Headers:
* `X-Greptime-DB-Name`: `<dbname>`
* `Authorization`: `Basic` authentication, which is a Base64 encoded string of `<username>:<password>`. For more information, please refer to [Authentication](https://docs.greptime.com/user-guide/operations/authentication) and [HTTP API](https://docs.greptime.com/user-guide/protocols/http#authentication)
* `Authorization`: `Basic` authentication, which is a Base64 encoded string of `<username>:<password>`. For more information, please refer to [Authentication](https://docs.greptime.com/user-guide/deployments/authentication) and [HTTP API](https://docs.greptime.com/user-guide/protocols/http#authentication)

The request uses binary protobuf to encode the payload, so you need to use packages that support `HTTP/protobuf`. For example, in Node.js, you can use [`exporter-trace-otlp-proto`](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-proto); in Go, you can use [`go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp); in Java, you can use [`io.opentelemetry:opentelemetry-exporter-otlp`](https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-exporter-otlp); and in Python, you can use [`opentelemetry-exporter-otlp-proto-http`](https://pypi.org/project/opentelemetry-exporter-otlp-proto-http/).
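The `Authorization` value above can be sanity-checked by building it yourself; a minimal sketch using only the Python standard library (the `username`/`password` credentials are placeholders):

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Return the `Authorization` header value: `Basic ` followed by
    the Base64 encoding of `<username>:<password>`."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials for illustration only.
print(basic_auth_header("username", "password"))
# Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```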

2 changes: 1 addition & 1 deletion docs/getting-started/installation/greptimedb-cluster.md
@@ -35,7 +35,7 @@ By default, the data will be stored in `/tmp/greptimedb-cluster-docker-compose`.

## Deploy the GreptimeDB cluster in Kubernetes

Please refer to [Deploy on Kubernetes](/user-guide/operations/deploy-on-kubernetes/overview.md).
Please refer to [Deploy on Kubernetes](/user-guide/deployments/deploy-on-kubernetes/overview.md).

## Next Steps

4 changes: 2 additions & 2 deletions docs/getting-started/installation/greptimedb-standalone.md
@@ -1,6 +1,6 @@
# GreptimeDB Standalone

We use the simplest configuration for you to get started. For a comprehensive list of configurations available in GreptimeDB, see the [configuration documentation](/user-guide/operations/configuration.md).
We use the simplest configuration for you to get started. For a comprehensive list of configurations available in GreptimeDB, see the [configuration documentation](/user-guide/deployments/configuration.md).

## Binary

@@ -120,7 +120,7 @@ docker run -p 0.0.0.0:4000-4003:4000-4003 \
</TabItem>
</Tabs>

You can also refer to the [Configuration](/user-guide/operations/configuration.md) document to modify the bind address in the configuration file.
You can also refer to the [Configuration](/user-guide/deployments/configuration.md) document to modify the bind address in the configuration file.

## Next Steps

2 changes: 1 addition & 1 deletion docs/getting-started/quick-start.md
@@ -12,7 +12,7 @@ In this quick start document, we use SQL for simplicity.
If your GreptimeDB instance is running on `127.0.0.1` with the MySQL client default port `4002` or the PostgreSQL client default port `4003`,
you can connect to GreptimeDB using the following commands.

By default, GreptimeDB does not have [authentication](/user-guide/operations/authentication.md) enabled.
By default, GreptimeDB does not have [authentication](/user-guide/deployments/authentication.md) enabled.
You can connect to the database without providing a username and password in this section.

```shell
2 changes: 1 addition & 1 deletion docs/greptimecloud/integrations/otlp.md
@@ -9,7 +9,7 @@ To send OpenTelemetry Metrics to GreptimeDB through OpenTelemetry SDK libraries,
* URL: `https://<host>/v1/otlp/v1/metrics`
* Headers:
* `X-Greptime-DB-Name`: `<dbname>`
* `Authorization`: `Basic` authentication, which is a Base64 encoded string of `<username>:<password>`. For more information, please refer to [Authentication](https://docs.greptime.com/user-guide/operations/authentication) and [HTTP API](https://docs.greptime.com/user-guide/protocols/http#authentication)
* `Authorization`: `Basic` authentication, which is a Base64 encoded string of `<username>:<password>`. For more information, please refer to [Authentication](https://docs.greptime.com/user-guide/deployments/authentication) and [HTTP API](https://docs.greptime.com/user-guide/protocols/http#authentication)

The request uses binary protobuf to encode the payload, so you need to use packages that support `HTTP/protobuf`. For example, in Node.js, you can use [`exporter-trace-otlp-proto`](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-proto); in Go, you can use [`go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp); in Java, you can use [`io.opentelemetry:opentelemetry-exporter-otlp`](https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-exporter-otlp); and in Python, you can use [`opentelemetry-exporter-otlp-proto-http`](https://pypi.org/project/opentelemetry-exporter-otlp-proto-http/).

4 changes: 2 additions & 2 deletions docs/reference/command-lines.md
@@ -105,7 +105,7 @@ greptime frontend start --help
- `--tls-cert-path <TLS_CERT_PATH>`: The TLS public key file path;
- `--tls-key-path <TLS_KEY_PATH>`: The TLS private key file path;
- `--tls-mode <TLS_MODE>`: TLS Mode;
- `--user-provider <USER_PROVIDER>`: You can refer to [authentication](/user-guide/operations/authentication.md);
- `--user-provider <USER_PROVIDER>`: You can refer to [authentication](/user-guide/deployments/authentication.md);

### Flownode subcommand options

@@ -149,7 +149,7 @@ Starts GreptimeDB in standalone mode with customized configurations:
greptime --log-dir=/tmp/greptimedb/logs --log-level=info standalone start -c config/standalone.example.toml
```

The `standalone.example.toml` configuration file comes from the `config` directory of the [GreptimeDB](https://github.com/GreptimeTeam/greptimedb/) repository. You can find more example configuration files there. The `-c` option specifies the configuration file; for more information, check [Configuration](../user-guide/operations/configuration.md).
The `standalone.example.toml` configuration file comes from the `config` directory of the [GreptimeDB](https://github.com/GreptimeTeam/greptimedb/) repository. You can find more example configuration files there. The `-c` option specifies the configuration file; for more information, check [Configuration](../user-guide/deployments/configuration.md).

To start GreptimeDB in distributed mode, you need to start each component separately. The following commands show how to start each component with customized configurations or command line arguments.

2 changes: 1 addition & 1 deletion docs/reference/sql/create.md
@@ -92,7 +92,7 @@ Users can add table options by using `WITH`. The valid options contain the following:
| Option | Description | Value |
| ------------------- | --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ttl` | The storage time of the table data | String value, such as `'60m'`, `'1h'` for one hour, `'14d'` for 14 days etc. Supported time units are: `s` / `m` / `h` / `d` |
| `storage` | The name of the table storage engine provider | String value, such as `S3`, `Gcs`, etc. It must be configured in `[[storage.providers]]`, see [configuration](/user-guide/operations/configuration.md#storage-engine-provider). |
| `storage` | The name of the table storage engine provider | String value, such as `S3`, `Gcs`, etc. It must be configured in `[[storage.providers]]`, see [configuration](/user-guide/deployments/configuration.md#storage-engine-provider). |
| `compaction.type` | Compaction strategy of the table | String value. Only `twcs` is allowed. |
| `compaction.twcs.max_active_window_files` | Max num of files that can be kept in active writing time window | String value, such as '8'. Only available when `compaction.type` is `twcs`. You can refer to this [document](https://cassandra.apache.org/doc/latest/cassandra/managing/operating/compaction/twcs.html) to learn more about the `twcs` compaction strategy. |
| `compaction.twcs.max_inactive_window_files` | Max num of files that can be kept in inactive time window. | String value, such as '1'. Only available when `compaction.type` is `twcs`. |
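Several of the options in the table above can be combined in a single statement; a hypothetical sketch (the table name, columns, and option values are illustrative, not from this commit):

```sql
CREATE TABLE IF NOT EXISTS monitor (
  host STRING,
  ts TIMESTAMP TIME INDEX,
  cpu DOUBLE,
  PRIMARY KEY (host)
) WITH (
  ttl = '14d',
  'compaction.type' = 'twcs',
  'compaction.twcs.max_active_window_files' = '8',
  'compaction.twcs.max_inactive_window_files' = '1'
);
```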
2 changes: 1 addition & 1 deletion docs/user-guide/cluster.md
@@ -2,7 +2,7 @@

## Create a cluster

Please refer to [Kubernetes](./operations/deploy-on-kubernetes/overview.md) to get the information about creating a Kubernetes cluster.
Please refer to [Kubernetes](./deployments/deploy-on-kubernetes/overview.md) to get the information about creating a Kubernetes cluster.

## Distributed Read/Write

2 changes: 1 addition & 1 deletion docs/user-guide/concepts/features-that-you-concern.md
@@ -42,7 +42,7 @@ Since 0.8, GreptimeDB added a new function called `Flow`, which is used for continuous aggregation.
## Can I store data in object storage in the cloud?

Yes, GreptimeDB's data access layer is based on [OpenDAL](https://github.com/apache/incubator-opendal), which supports most kinds of object storage services.
The data can be stored in cost-effective cloud storage services such as AWS S3 or Azure Blob Storage; please refer to the storage configuration guide [here](./../operations/configuration.md#storage-options).
The data can be stored in cost-effective cloud storage services such as AWS S3 or Azure Blob Storage; please refer to the storage configuration guide [here](./../deployments/configuration.md#storage-options).

GreptimeDB also offers a fully-managed cloud service [GreptimeCloud](https://greptime.com/product/cloud) to help you manage data in the cloud.

4 changes: 2 additions & 2 deletions docs/user-guide/concepts/storage-location.md
@@ -24,14 +24,14 @@ The storage file structure of GreptimeDB includes the following:
```

- `metadata`: The internal metadata directory that keeps catalog, database and table info, procedure states, etc. In cluster mode, this directory does not exist, because all those states including region route info are saved in `Metasrv`.
- `data`: The files in data directory store time series data and index files of GreptimeDB. To customize this path, please refer to [Storage option](../operations/configuration.md#storage-options). The directory is organized in a two-level structure of catalog and schema.
- `data`: The files in data directory store time series data and index files of GreptimeDB. To customize this path, please refer to [Storage option](../deployments/configuration.md#storage-options). The directory is organized in a two-level structure of catalog and schema.
- `logs`: The log files contain all the logs of operations in GreptimeDB.
- `wal`: The wal directory contains the write-ahead log files.
- `index_intermediate`: the temporary intermediate data while indexing.

## Cloud storage

The `data` directory in the file structure can be stored in cloud storage. Please refer to [Storage option](../operations/configuration.md#storage-options) for more details.
The `data` directory in the file structure can be stored in cloud storage. Please refer to [Storage option](../deployments/configuration.md#storage-options) for more details.

Please note that only storing the data directory in object storage is not sufficient to ensure data reliability and disaster recovery. The `wal` and `metadata` also need to be considered for disaster recovery. Please refer to the [disaster recovery documentation](/user-guide/operations/disaster-recovery/overview.md).

@@ -386,9 +386,9 @@ default_ratio = 1.0
- `enable_otlp_tracing`: whether to turn on tracing, not turned on by default.
- `otlp_endpoint`: Export the target endpoint of tracing using gRPC-based OTLP protocol, the default value is `localhost:4317`.
- `append_stdout`: Whether to append logs to stdout. Defaults to `true`.
- `tracing_sample_ratio`: This field configures the sampling rate of tracing. For how to use `tracing_sample_ratio`, please refer to [How to configure tracing sampling rate](./tracing.md#guide-how-to-configure-tracing-sampling-rate).
- `tracing_sample_ratio`: This field configures the sampling rate of tracing. For how to use `tracing_sample_ratio`, please refer to [How to configure tracing sampling rate](/user-guide/operations/tracing.md#guide-how-to-configure-tracing-sampling-rate).

For how to use distributed tracing, please refer to [Tracing](./tracing.md#tutorial-use-jaeger-to-trace-greptimedb)
For how to use distributed tracing, please refer to [Tracing](/user-guide/operations/tracing.md#tutorial-use-jaeger-to-trace-greptimedb)
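Taken together, the options listed above form part of the logging configuration; a hedged sketch of a `[logging]` fragment (the section layout is an assumption and the values are the defaults described above):

```toml
[logging]
# Export tracing via the gRPC-based OTLP protocol; not turned on by default.
enable_otlp_tracing = false
otlp_endpoint = "localhost:4317"
# Whether to append logs to stdout.
append_stdout = true

# Sampling rate of tracing.
[logging.tracing_sample_ratio]
default_ratio = 1.0
```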

### Region engine options

@@ -23,7 +23,8 @@ helm search repo greptimedb-operator
```

You should see output similar to this:
```

```shell
NAME CHART VERSION APP VERSION DESCRIPTION
greptime/greptimedb-operator 0.2.3 0.1.0-alpha.29 The greptimedb-operator Helm chart for Kubernetes.
```
28 changes: 28 additions & 0 deletions docs/user-guide/deployments/overview.md
@@ -0,0 +1,28 @@
# Overview

## Configuration

Before deploying GreptimeDB, you need to [configure the server](configuration.md) to meet your requirements. This includes setting up protocol options, storage options, and more.

## Authentication

By default, GreptimeDB does not have authentication enabled. Learn how to [configure authentication](authentication.md) for your deployment manually.

## Deploy on Kubernetes

The step-by-step instructions for [deploying GreptimeDB on a Kubernetes cluster](./deploy-on-kubernetes/overview.md).

## Run on Android

Learn how to [run GreptimeDB on Android devices](run-on-android.md).

## Capacity plan

Understand how to [plan for capacity](/user-guide/operations/capacity-plan.md) to ensure your GreptimeDB deployment can handle your workload.

## GreptimeCloud

Instead of managing your own GreptimeDB cluster,
you can use [GreptimeCloud](https://greptime.cloud) to manage GreptimeDB instances, monitor metrics, and set up alerts.
GreptimeCloud is a cloud service powered by fully-managed serverless GreptimeDB, providing a scalable and efficient solution for time-series data platforms and Prometheus backends.
For more information, see the [GreptimeCloud documentation](/greptimecloud/overview.md).
2 changes: 1 addition & 1 deletion docs/user-guide/ingest-data/for-iot/grpc-sdks/template.md
@@ -16,7 +16,7 @@ For more information, refer to [Automatic Schema Generation](/user-guide/ingest-

## Connect to database

If you have set the [`--user-provider` configuration](/user-guide/operations/authentication.md) when starting GreptimeDB,
If you have set the [`--user-provider` configuration](/user-guide/deployments/authentication.md) when starting GreptimeDB,
you will need to provide a username and password to connect to GreptimeDB.
The following example shows how to set the username and password when using the library to connect to GreptimeDB.

@@ -73,7 +73,7 @@ curl -i -XPOST "http://localhost:4000/v1/influxdb/write?db=public" \
#### Authentication

GreptimeDB is compatible with InfluxDB's line protocol authentication format, both V1 and V2.
If you have [configured authentication](/user-guide/operations/authentication.md) in GreptimeDB, you need to provide the username and password in the HTTP request.
If you have [configured authentication](/user-guide/deployments/authentication.md) in GreptimeDB, you need to provide the username and password in the HTTP request.

<Tabs>

4 changes: 2 additions & 2 deletions docs/user-guide/ingest-data/for-observerbility/prometheus.md
@@ -24,9 +24,9 @@ remote_read:
# password: greptime_pwd
```

- The host and port in the URL represent the GreptimeDB server. In this example, the server is running on `localhost:4000`. You can replace it with your own server address. For the HTTP protocol configuration in GreptimeDB, please refer to the [protocol options](/user-guide/operations/configuration.md#protocol-options).
- The host and port in the URL represent the GreptimeDB server. In this example, the server is running on `localhost:4000`. You can replace it with your own server address. For the HTTP protocol configuration in GreptimeDB, please refer to the [protocol options](/user-guide/deployments/configuration.md#protocol-options).
- The `db` parameter in the URL represents the database to which we want to write data. It is optional. By default, the database is set to `public`.
- `basic_auth` is the authentication configuration. Fill in the username and password if GreptimeDB authentication is enabled. Please refer to the [authentication document](/user-guide/operations/authentication.md).
- `basic_auth` is the authentication configuration. Fill in the username and password if GreptimeDB authentication is enabled. Please refer to the [authentication document](/user-guide/deployments/authentication.md).
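For context, the `remote_write` counterpart of the configuration discussed above might look like the following sketch; the `/v1/prometheus/write` path and the `greptime_user`/`greptime_pwd` credentials are assumptions to verify against your deployment:

```yaml
remote_write:
  - url: http://localhost:4000/v1/prometheus/write?db=public
    basic_auth:
      username: greptime_user
      password: greptime_pwd
```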

## Data Model

2 changes: 1 addition & 1 deletion docs/user-guide/ingest-data/for-observerbility/vector.md
@@ -27,7 +27,7 @@ password = "<password>"
```

GreptimeDB uses gRPC to communicate with Vector, so the default port for the Vector sink is `4001`.
If you have changed the default gRPC port when starting GreptimeDB with [custom configurations](/user-guide/operations/configuration.md#configuration-file), use your own port instead.
If you have changed the default gRPC port when starting GreptimeDB with [custom configurations](/user-guide/deployments/configuration.md#configuration-file), use your own port instead.
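A fuller sketch of the sink configuration discussed here, assuming Vector's `greptimedb` sink type; the sink/source IDs and the angle-bracket placeholders are illustrative:

```toml
[sinks.my_sink_id]
type = "greptimedb"
inputs = ["my_source_id"]
endpoint = "<host>:4001"   # default gRPC port
dbname = "<dbname>"
username = "<username>"
password = "<password>"
```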

</div>

2 changes: 1 addition & 1 deletion docs/user-guide/integrations/metabase.md
@@ -23,6 +23,6 @@ information.
- Use Greptime's Postgres protocol port `4003` as port. If you changed the
defaults, use your own settings.
- Username and password are optional if you didn't enable
[authentication](/user-guide/operations/authentication.md).
[authentication](/user-guide/deployments/authentication.md).
- Use `public` as default *Database name*. When using a GreptimeCloud instance,
use the database name from your instance.
2 changes: 1 addition & 1 deletion docs/user-guide/integrations/superset.md
@@ -47,7 +47,7 @@ greptimedb://<username>:<password>@<host>:<port>/<database>
```

- Ignore `<username>:<password>@` if you don't have
[authentication](/user-guide/operations/authentication.md) enabled.
[authentication](/user-guide/deployments/authentication.md) enabled.
- Use `4003` for default port (this extension uses Postgres protocol).
- Use `public` as default `database`. When using a GreptimeCloud instance, use the
database name from your instance.
6 changes: 3 additions & 3 deletions docs/user-guide/operations/admin.md
@@ -5,9 +5,9 @@ This document addresses strategies and practices used in the operation of GreptimeDB.
## Database/Cluster management

* [Installation](/getting-started/installation/overview.md) for GreptimeDB and the [g-t-control](/reference/gtctl.md) command line tool.
* For database configuration, please read the [Configuration](./configuration.md) reference.
* [Monitoring](./monitoring.md) and [Tracing](./tracing.md) for GreptimeDB.
* GreptimeDB [Disaster Recovery](./disaster-recovery/overview.md).
* For database configuration, please read the [Configuration](/user-guide/deployments/configuration.md) reference.
* [Monitoring](/user-guide/operations/monitoring.md) and [Tracing](/user-guide/operations/tracing.md) for GreptimeDB.
* GreptimeDB [Disaster Recovery](/user-guide/operations/disaster-recovery/overview.md).

### Runtime information

4 changes: 2 additions & 2 deletions docs/user-guide/operations/capacity-plan.md
@@ -13,7 +13,7 @@ there are several key considerations:
- Data retention policy
- Hardware costs

To monitor the various metrics of GreptimeDB, please refer to [Monitoring](./monitoring.md).
To monitor the various metrics of GreptimeDB, please refer to [Monitoring](/user-guide/operations/monitoring.md).

## CPU

@@ -42,7 +42,7 @@ This allows GreptimeDB to store large amounts of data in a significantly smaller

Data can be stored either in a local file system or in cloud storage, such as AWS S3.
For more information on storage options,
please refer to the [storage configuration](./configuration.md#storage-options) documentation.
please refer to the [storage configuration](/user-guide/deployments/configuration.md#storage-options) documentation.

Cloud storage is highly recommended for data storage due to its simplicity in managing storage.
With cloud storage, only about 200GB of local storage space is needed for query-related caches and Write-Ahead Log (WAL).
2 changes: 1 addition & 1 deletion docs/user-guide/operations/monitoring.md
@@ -46,7 +46,7 @@ the `docker run` command.

You can also save metrics to GreptimeDB itself for convenient querying and analysis using SQL statements.
This section provides some configuration examples.
For more details about configuration, please refer to the [Monitor metrics options](./configuration.md#monitor-metrics-options).
For more details about configuration, please refer to the [Monitor metrics options](/user-guide/deployments/configuration.md#monitor-metrics-options).

### Standalone

