
Camunda Terraform Red Hat OpenShift on AWS Modules


This module automates the creation of a ROSA HCP cluster with an opinionated configuration targeting Camunda 8 on AWS using Terraform.

⚠️ Warning: This project is not intended for production use but rather for demonstration purposes only. There are no guarantees or warranties provided.

For more detailed usage and configuration options, please refer to the module's inputs and outputs documentation below.

Requirements

To manage the specific versions of the tools used in this project, we use:

  • asdf version manager (see installation).
  • just as a command runner
    • install it using asdf: asdf plugin add just && asdf install just

Then install all the tooling listed in the .tool-versions file of this project using just:

just install-tooling

# list available recipes
just --list
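
The .tool-versions file pins every tool used by this project. As a quick sanity check, you can print the pins and confirm that asdf resolves them; a minimal sketch:

# show the pinned tools and their versions
cat .tool-versions

# confirm asdf resolves each pinned tool in this directory
asdf current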

Getting started: Create a ROSA HCP cluster

This guide is based on the AWS tutorial: https://aws.amazon.com/blogs/containers/build-rosa-clusters-with-terraform/

I. Enable ROSA in AWS Marketplace

  1. Log in to AWS
  2. Check whether the ELB service-linked role exists:
# To check if the role exists for your account, run this command in your terminal:
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"

# If the role doesn't exist, create it by running the following command:
aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
  3. Log in to the Red Hat Hybrid Cloud Console
  4. Generate an offline token by clicking "Load Token", then log in with the rosa CLI:
export RHCS_TOKEN=yourToken
rosa login --token="$RHCS_TOKEN"

rosa whoami

rosa verify quota --region="$AWS_REGION"

# this may fail due to org policy
rosa verify permissions --region="$AWS_REGION"

rosa create account-roles --mode auto
  5. Enable HCP ROSA on the AWS Marketplace:
    • Navigate to the ROSA console: https://console.aws.amazon.com/rosa
    • Choose Get started.
    • On the Verify ROSA prerequisites page, select I agree to share my contact information with Red Hat.
    • Choose Enable ROSA.

Please note that only a single AWS account, the one used for service billing, can be associated with a Red Hat account.
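
Before enabling ROSA, you can double-check which AWS account your credentials and the rosa CLI are bound to; a quick sketch:

# AWS account used by your current credentials
aws sts get-caller-identity --query "Account" --output text

# Red Hat and AWS account details known to the rosa CLI
rosa whoami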

II. Create the cluster

Terraform

To use this module with Terraform, follow these steps:

  1. Create a Terraform provider configuration file (e.g., provider.tf) and copy the content of [modules/fixtures/backend.tf](modules/fixtures/backend.tf). Ensure that RHCS_TOKEN is set as an environment variable with the token previously loaded from the console (a sketch of such a file follows this list).
  2. Create a Terraform configuration file (e.g., main.tf).
  3. Include the ROSA HCP module in your configuration file.
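
As a starting point for step 1, here is a minimal sketch of such a provider file, loosely modeled on modules/fixtures/backend.tf; the bucket, key, and region are placeholders, and the rhcs provider reads RHCS_TOKEN from the environment:

# create provider.tf with an S3 state backend and the Red Hat Cloud Services provider
cat > provider.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket" # placeholder, use your own state bucket
    key    = "rosa-hcp/terraform.tfstate"
    region = "us-west-2"
  }

  required_providers {
    rhcs = {
      source = "terraform-redhat/rhcs"
    }
  }
}

# authentication is picked up from the RHCS_TOKEN environment variable
provider "rhcs" {}
EOF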

Here's an example configuration:

module "rosa_hcp" {
  source = "github.com/camunda/camunda-tf-rosa.git//modules/rosa-hcp?ref=main"

  cluster_name           = "my-ocp-cluster"
  htpasswd_password      = "your_password"
  openshift_version      = "4.15.11"
  replicas               = "2"
}

For more details, refer to the Terraform module ROSA HCP README.

  4. Initialize Terraform by running:

    terraform init
  5. Review the execution plan with:

    terraform plan
  6. Apply the configuration to create the resources:

    terraform apply
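
Once the apply completes, the outputs can be read back with terraform output; for example, assuming the configuration exposes the cluster_id shown in section III:

terraform output -raw cluster_id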

GitHub Actions

You can automate the deployment and deletion of the ROSA HCP cluster using GitHub Actions. Below are examples of GitHub Actions workflows for deploying and deleting the cluster.

Deploy ROSA HCP Cluster

Create a file in your repository's .github/workflows directory, for example deploy-rosa-hcp.yml, with the following content:

name: Deploy ROSA HCP Cluster

on:
  pull_request:

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # assumed values, adjust to your AWS setup
      AWS_PROFILE: "my-profile"
      AWS_REGION: "us-west-2"
    steps:
      - name: Add profile credentials to ~/.aws/credentials
        run: |
          aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY }} --profile ${{ env.AWS_PROFILE }}
          aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_KEY }} --profile ${{ env.AWS_PROFILE }}
          aws configure set region ${{ env.AWS_REGION }} --profile ${{ env.AWS_PROFILE }}

      - name: Deploy ROSA HCP Cluster
        uses: camunda/camunda-tf-rosa/.github/actions/rosa-create-cluster@main
        id: create_cluster
        timeout-minutes: 125 # cluster creation can take up to 45 minutes
        with:
          rh-token: ${{ secrets.RH_OPENSHIFT_TOKEN }}
          cluster-name: "my-ocp-cluster"
          admin-username: "kube-admin"
          admin-password: ${{ secrets.CI_OPENSHIFT_MAIN_PASSWORD }}
          aws-region: "us-west-2"
          s3-backend-bucket: ${{ secrets.TF_S3_BUCKET }}

      - name: Use your created cluster
        shell: bash
        run: |
          oc new-project "myns"
          oc whoami
          oc get pods

For more details, refer to the Deploy ROSA HCP Cluster Action README.
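
The workflow above assumes several repository secrets (RH_OPENSHIFT_TOKEN, CI_OPENSHIFT_MAIN_PASSWORD, TF_S3_BUCKET, AWS_ACCESS_KEY, AWS_SECRET_KEY). As a sketch, they can be created with the GitHub CLI; the values below are placeholders:

gh secret set RH_OPENSHIFT_TOKEN --body "yourRedHatOfflineToken"
gh secret set CI_OPENSHIFT_MAIN_PASSWORD --body "yourAdminPassword"
gh secret set TF_S3_BUCKET --body "my-tf-state-bucket"
gh secret set AWS_ACCESS_KEY --body "yourAccessKeyId"
gh secret set AWS_SECRET_KEY --body "yourSecretAccessKey"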

Delete ROSA HCP Cluster

Create another file in your repository's .github/workflows directory, for example delete-rosa-hcp.yml, with the following content:

name: Delete ROSA HCP Cluster

on:
  workflow_dispatch:

jobs:
  delete:
    runs-on: ubuntu-latest
    steps:
      - name: Delete ROSA HCP Cluster
        uses: camunda/camunda-tf-rosa/.github/actions/rosa-delete-cluster@main
        timeout-minutes: 125 # cluster deletion can take some time
        with:
          rh-token: ${{ secrets.RH_OPENSHIFT_TOKEN }}
          cluster-name: "my-ocp-cluster"
          aws-region: "us-west-2"
          s3-backend-bucket: ${{ secrets.TF_S3_BUCKET }}

For more details, refer to the Delete ROSA HCP Cluster Action README.

III. Retrieve cluster information

  1. In the output, you will find the created cluster ID:
cluster_id = "2b3sq2r4geb7b6htaibb4uqk9qc9c3fa"
  2. Describe the cluster:
export CLUSTER_ID="2b3sq2r4geb7b6htaibb4uqk9qc9c3fa"

rosa describe cluster --output=json -c $CLUSTER_ID
  3. Generate the kubeconfig:
export NAMESPACE="myns" # Kubernetes namespace names must be lowercase
export CLUSTER_NAME="my-ocp-cluster"
# admin credentials configured at cluster creation
export ADMIN_USER="kube-admin"
export ADMIN_PASS="yourPassword"

export SERVER_API=$(rosa describe cluster --output=json -c "$CLUSTER_ID" | jq -r '.api.url')
oc login --username "$ADMIN_USER" --password "$ADMIN_PASS" --server="$SERVER_API"

kubectl config rename-context "$(oc config current-context)" "$CLUSTER_NAME"
kubectl config use-context "$CLUSTER_NAME"

# create a new project
oc new-project "$NAMESPACE"
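
With the context in place, a few quick commands confirm that the cluster is reachable (assuming the login above succeeded):

oc whoami --show-server
kubectl get nodes
oc get pods -n "$NAMESPACE"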

Support

Please note that the modules have been tested with Terraform at the version specified in the .tool-versions file of this project.
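
To compare your local Terraform with the pinned version, for example:

# version pinned by this project
grep terraform .tool-versions

# version actually in use
terraform version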
