refactor: refactor GreptimeDB k8s deployment part #1187

Merged (8 commits, Oct 8, 2024)

Changes from all commits
# Deploy GreptimeDB Cluster

This guide walks you through deploying a GreptimeDB cluster in a Kubernetes environment. Make sure the [GreptimeDB Operator](./manage-greptimedb-operator/deploy-greptimedb-operator.md) is installed on your cluster before proceeding. We'll cover everything from setting up an etcd cluster (optional if you already have one) to configuring options and connecting to the database.

## Prerequisites
- [Helm](https://helm.sh/docs/intro/install/) (use a version appropriate for your Kubernetes API version; see the repository setup below)
- [GreptimeDB Operator](./manage-greptimedb-operator/deploy-greptimedb-operator.md) (already installed on your Kubernetes cluster)
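
If the GreptimeDB Helm repository has not been added yet, add it first so that the `greptime/greptimedb-cluster` chart referenced below can be resolved:

```bash
helm repo add greptime https://greptimeteam.github.io/helm-charts/
helm repo update
```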

## Create an etcd cluster

To install the etcd cluster, run the following `helm install` command. This command also creates a dedicated `etcd-cluster` namespace for the installation.

```shell
helm install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
  --set replicaCount=3 \
  --set auth.rbac.create=false \
  --set auth.rbac.token.enabled=false \
  --create-namespace \
  -n etcd-cluster
```

### Verify the etcd cluster installation
Once installed, verify the status of the etcd cluster by listing the pods in the `etcd-cluster` namespace:

```bash
kubectl get pods -n etcd-cluster
```

You should see output similar to this:
```bash
NAME     READY   STATUS    RESTARTS   AGE
etcd-0   1/1     Running   0          80s
etcd-1   1/1     Running   0          80s
etcd-2   1/1     Running   0          80s
```
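
As an extra health check, you can query etcd directly from inside one of the pods; this is a sketch that assumes the `etcdctl` client bundled in the Bitnami image:

```bash
# Ask the etcd member in pod etcd-0 whether its endpoint is healthy
kubectl -n etcd-cluster exec etcd-0 -- etcdctl endpoint health
```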

## Deploy a minimal GreptimeDB cluster

Next, deploy a minimal GreptimeDB cluster. The deployment depends on an etcd cluster for coordination. If you already have one running, you can use its endpoint. If you followed the steps above, use `etcd.etcd-cluster.svc.cluster.local:2379` as the etcd endpoint.

Run this command to install the GreptimeDB cluster:
```shell
helm install greptimedb greptime/greptimedb-cluster \
  --set meta.etcdEndpoints=etcd.etcd-cluster.svc.cluster.local:2379 \
  --create-namespace \
  -n greptimedb-cluster
```

### Verify the GreptimeDB cluster installation
Check that all pods are running properly in the `greptimedb-cluster` namespace:

```bash
kubectl get pods -n greptimedb-cluster
```

You should see output similar to this:
```bash
NAME                                   READY   STATUS    RESTARTS   AGE
greptimedb-datanode-0                  1/1     Running   0          30s
greptimedb-datanode-2                  1/1     Running   0          30s
greptimedb-datanode-1                  1/1     Running   0          30s
greptimedb-frontend-7476dcf45f-tw7mx   1/1     Running   0          16s
greptimedb-meta-689cb867cd-cwsr5       1/1     Running   0          31s
```
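
You can also inspect the `GreptimeDBCluster` custom resource that the chart creates; a quick sketch, assuming the operator's CRD is installed as described in the prerequisites:

```bash
# List the GreptimeDBCluster resources managed by the GreptimeDB Operator
kubectl -n greptimedb-cluster get greptimedbclusters
```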

## Advanced configuration options

### Use object storage as backend storage
To store data on object storage (the example below uses Amazon S3), add the following configuration to your Helm command:

```bash
helm install greptimedb \
  --set meta.etcdEndpoints=etcd.etcd-cluster.svc.cluster.local:2379 \
  --set objectStorage.s3.bucket=<your-bucket> \
  --set objectStorage.s3.region=<region-of-bucket> \
  --set objectStorage.s3.root=<root-directory-of-data> \
  --set objectStorage.credentials.accessKeyId=<your-access-key-id> \
  --set objectStorage.credentials.secretAccessKey=<your-secret-access-key> \
  greptime/greptimedb-cluster \
  --create-namespace \
  -n greptimedb-cluster
```
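
If you prefer not to put the S3 credentials on the command line, the same settings can live in a small values file; this is a sketch in which the keys simply mirror the `--set` flags above and `s3-values.yaml` is a file name of your choosing:

```yaml
# s3-values.yaml
objectStorage:
  s3:
    bucket: <your-bucket>
    region: <region-of-bucket>
    root: <root-directory-of-data>
  credentials:
    accessKeyId: <your-access-key-id>
    secretAccessKey: <your-secret-access-key>
```

Pass it to `helm install` with `--values s3-values.yaml` in place of the individual `--set` flags.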

### Use Remote WAL and enable Region Failover
To enable Remote WAL and region failover, use the following configuration. You'll need a running Kafka cluster; its endpoint will look like `kafka.kafka-cluster.svc.cluster.local:9092`:

```bash
helm install greptimedb \
  --set meta.etcdEndpoints=etcd.etcd-cluster.svc.cluster.local:2379 \
  --set meta.enableRegionFailover=true \
  --set objectStorage.s3.bucket=<your-bucket> \
  --set objectStorage.s3.region=<region-of-bucket> \
  --set objectStorage.s3.root=<root-directory-of-data> \
  --set objectStorage.credentials.accessKeyId=<your-access-key-id> \
  --set objectStorage.credentials.secretAccessKey=<your-secret-access-key> \
  --set remoteWal.enable=true \
  --set remoteWal.kafka.brokerEndpoints[0]=kafka.kafka-cluster.svc.cluster.local:9092 \
  greptime/greptimedb-cluster \
  --create-namespace \
  -n greptimedb-cluster
```

### Static authentication
To enable static authentication for the GreptimeDB cluster, configure user credentials during installation with the `auth` settings. For example:

```bash
helm install greptimedb \
  --set meta.etcdEndpoints=etcd.etcd-cluster.svc.cluster.local:2379 \
  --set auth.enabled=true \
  --set auth.users[0].username=admin \
  --set auth.users[0].password=admin \
  greptime/greptimedb-cluster \
  --create-namespace \
  -n greptimedb-cluster
```
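
After forwarding the frontend ports (see the Connect to the cluster section below), you can verify the credentials with any MySQL-compatible client, for example:

```bash
# Port 4002 is the MySQL-compatible port; admin/admin matches the auth.users values set above
mysql -h 127.0.0.1 -P 4002 -u admin -padmin
```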

### Resource requests and limits
To control resource allocation (CPU and memory), modify the Helm installation command as follows:

```shell
helm install greptimedb \
  --set meta.etcdEndpoints=etcd.etcd-cluster.svc.cluster.local:2379 \
  --set meta.podTemplate.main.resources.requests.cpu=<cpu-resource> \
  --set meta.podTemplate.main.resources.requests.memory=<mem-resource> \
  --set datanode.podTemplate.main.resources.requests.cpu=<cpu-resource> \
  --set datanode.podTemplate.main.resources.requests.memory=<mem-resource> \
  --set frontend.podTemplate.main.resources.requests.cpu=<cpu-resource> \
  --set frontend.podTemplate.main.resources.requests.memory=<mem-resource> \
  greptime/greptimedb-cluster \
  --create-namespace \
  -n greptimedb-cluster
```

### Complex configurations with values.yaml

For more intricate configurations, first download the default `values.yaml` file and modify it locally.

```bash
curl -sLo values.yaml https://raw.githubusercontent.com/GreptimeTeam/helm-charts/main/charts/greptimedb-cluster/values.yaml
```

You can configure each component by specifying custom settings in the `configData` fields. For more details, refer to the [configuration](../configuration.md) documentation.

Below is an example of how to modify the settings in `values.yaml`. It sets the metasrv's selector type to `round_robin`, adjusts the datanode's Mito engine by setting `global_write_buffer_size` to `4GB`, and configures the frontend's meta client `ddl_timeout` to `60s`:

```yaml
meta:
  configData: |-
    selector = "round_robin"
datanode:
  configData: |-
    [[region_engine]]
    [region_engine.mito]
    global_write_buffer_size = "4GB"
frontend:
  configData: |-
    [meta_client]
    ddl_timeout = "60s"
```

Then, deploy using the modified values file:

```bash
helm install greptimedb greptime/greptimedb-cluster --values values.yaml
```
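
If you adjust `values.yaml` later, the same file can be re-applied to the running release with a standard `helm upgrade`, sketched below (add `-n <namespace>` if you installed the release into a specific namespace):

```bash
helm upgrade greptimedb greptime/greptimedb-cluster --values values.yaml
```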

For a comprehensive list of configurable values via Helm,
please refer to the [configuration](../configuration.md) documentation and the chart's [value configuration](https://github.com/GreptimeTeam/helm-charts/blob/main/charts/greptimedb-cluster/README.md#values).

## Connect to the cluster

After the installation is complete, use `kubectl port-forward` to expose the service ports so that you can connect to the cluster locally:

```shell
# You can use the MySQL or PostgreSQL client to connect to the cluster, for example: 'mysql -h 127.0.0.1 -P 4002'.
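# The forwarding command below is a sketch, assuming the chart exposes a frontend Service named
# 'greptimedb-frontend' and that 4000-4003 are the frontend's HTTP, gRPC, MySQL, and PostgreSQL ports.
kubectl -n greptimedb-cluster port-forward svc/greptimedb-frontend 4000:4000 4001:4001 4002:4002 4003:4003
```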
`docs/user-guide/deployments/deploy-on-kubernetes/overview.md` (1 addition, 14 deletions)

This guide provides a walkthrough on how to deploy a GreptimeDB cluster on Kubernetes.

- [Helm v3](https://helm.sh/docs/intro/install/): A package manager for Kubernetes.

## Components

The deployment on Kubernetes involves the following components:

To deploy GreptimeDB on Kubernetes, follow these steps:

- [Deploy GreptimeDB Operator](./manage-greptimedb-operator/deploy-greptimedb-operator.md): This section guides you on installing the GreptimeDB Operator.
- [Deploy GreptimeDB Cluster](deploy-greptimedb-cluster.md): This section provides instructions on how to deploy etcd cluster and GreptimeDB cluster on Kubernetes.
- [Destroy Cluster](destroy-cluster.md): This section describes how to uninstall the GreptimeDB Operator and the GreptimeDB cluster.