[IBCDPE-938] Deploy Signoz (OTEL Visualization) to kubernetes cluster #35

Merged · 86 commits · Nov 21, 2024

Commits
0f93061
Deploy signoz
BryanFauble Aug 29, 2024
73bb182
Deploy signoz
BryanFauble Aug 29, 2024
4e3c472
Update signoz readme
BryanFauble Sep 11, 2024
6b457dd
Disable a handful of items not needed
BryanFauble Sep 12, 2024
5b694bb
Try out kong-ingress
BryanFauble Sep 30, 2024
2f0636a
Correct chart name
BryanFauble Sep 30, 2024
a002873
Update values
BryanFauble Sep 30, 2024
b3768c5
Deploy oauth2 plugin
BryanFauble Sep 30, 2024
d525af3
Correct issuer
BryanFauble Sep 30, 2024
81a3adc
Correct indent
BryanFauble Sep 30, 2024
8c89bc6
Correct revision target
BryanFauble Sep 30, 2024
23ef64e
Deploy out cert-manager
BryanFauble Sep 30, 2024
ef8bd3f
Add oauth2 plugin
BryanFauble Sep 30, 2024
502276b
update plugin
BryanFauble Sep 30, 2024
7a864e7
set openid-connect
BryanFauble Sep 30, 2024
c0e2a4c
Try out envoy gateway
BryanFauble Sep 30, 2024
6bf8506
Disable service monitor
BryanFauble Sep 30, 2024
8b8be2e
Set argocd docker registry oci
BryanFauble Sep 30, 2024
4bf72d9
Point to local argo-cd
BryanFauble Sep 30, 2024
4983fb2
Deploy dex idp
BryanFauble Sep 30, 2024
81f24ef
Correct envoy-gateway chart repo
BryanFauble Sep 30, 2024
e4f3fbb
Set google connector
BryanFauble Sep 30, 2024
75a3063
Set storage
BryanFauble Sep 30, 2024
875010b
Set issuer
BryanFauble Sep 30, 2024
41e9147
Deploy DB for dex
BryanFauble Sep 30, 2024
8814411
Deploy DB operator
BryanFauble Sep 30, 2024
0eaafed
Set dex to use postgres
BryanFauble Sep 30, 2024
6c82806
Disable ssl
BryanFauble Sep 30, 2024
74c83f3
ssl
BryanFauble Sep 30, 2024
d4fae1f
Enable cert-manager gateway api support
BryanFauble Oct 1, 2024
1dfa08a
Deploy out ingress
BryanFauble Oct 1, 2024
8dfedba
Try out on-demand node lifecycle
BryanFauble Oct 1, 2024
be902b3
Correct path
BryanFauble Oct 1, 2024
ac6a1e3
Include gateway class
BryanFauble Oct 1, 2024
bf53696
Add some notes
BryanFauble Oct 1, 2024
19190c1
Set scaling back
BryanFauble Oct 1, 2024
fd8fe4f
Run 1 replica but on-demand
BryanFauble Oct 1, 2024
2f6bae7
Remove todo comment
BryanFauble Oct 1, 2024
5ed270d
Point to correct revision
BryanFauble Oct 1, 2024
a61bdd0
Correct comment
BryanFauble Oct 1, 2024
bec8d9d
Add to readme
BryanFauble Oct 1, 2024
5b0aa64
Leave at 2 az deployment
BryanFauble Oct 1, 2024
aefa2e1
Update modules/cluster-ingress/README.md
BryanFauble Oct 1, 2024
58b2de4
Update readme
BryanFauble Oct 2, 2024
e26e7e4
Set param
BryanFauble Oct 4, 2024
a501f32
no multiple sources
BryanFauble Oct 4, 2024
f284709
Set
BryanFauble Oct 4, 2024
5aa954b
Note that the admin password is randomized
BryanFauble Oct 4, 2024
eebfca1
Update modules/signoz/README.md
BryanFauble Oct 4, 2024
835de37
Enable replication for schema migrator
BryanFauble Oct 8, 2024
e8f989f
Set back to single replica for DB init
BryanFauble Oct 8, 2024
5ac8424
Bump replica back to 2
BryanFauble Oct 8, 2024
5947065
Envoy Gateway Minimum TLS (#36)
BryanFauble Oct 15, 2024
e457ef1
Merge branch 'main' into signoz-testing
BryanFauble Oct 17, 2024
139dd6a
Shrink VPC size and create subnets specifically for worker nodes that…
BryanFauble Oct 17, 2024
f3f7647
Add back var
BryanFauble Oct 17, 2024
1dab275
Correct cidr block
BryanFauble Oct 17, 2024
63a54ad
Update cidr blocks
BryanFauble Oct 17, 2024
d4c79d7
Correct node lengths
BryanFauble Oct 18, 2024
204b2ff
Correct array slicing
BryanFauble Oct 18, 2024
34e27cb
Correct indexing
BryanFauble Oct 18, 2024
373b800
Update default eks cluster version
BryanFauble Oct 18, 2024
1b6170e
Shrink EKS control plane subnet range
BryanFauble Oct 21, 2024
3482837
Set range back
BryanFauble Oct 21, 2024
5c5654f
[IBCDPE-1095] Setup TLS/Auth0 for cluster ingress with telemetry dat…
BryanFauble Nov 5, 2024
f314bde
Remove py file
BryanFauble Nov 5, 2024
d4bf895
Update readme note
BryanFauble Nov 5, 2024
ae8eacb
Remove comments about moving to provider
BryanFauble Nov 5, 2024
fc53860
Upgrade helmchart for signoz (#46)
BryanFauble Nov 6, 2024
a295e48
Deploy SES module only when emails are provided
BryanFauble Nov 6, 2024
1db7b42
Correct output conditional
BryanFauble Nov 6, 2024
7f652ec
Create moved blocks for resources
BryanFauble Nov 6, 2024
24bc617
Correct moved blocks (Bad AI)
BryanFauble Nov 6, 2024
b8509df
Remove moved blocks as they're not needed
BryanFauble Nov 6, 2024
fe1e37b
Conditionally deploy auth0 spacelift stack
BryanFauble Nov 6, 2024
4a4da49
Don't autodeploy admin stack and do deploy auth0 for dev
BryanFauble Nov 6, 2024
5774102
Point to specific resource instance
BryanFauble Nov 6, 2024
77f2c22
Move conditional check to for_each loop
BryanFauble Nov 6, 2024
767ac82
Try list instead of map
BryanFauble Nov 6, 2024
908201b
Try `tomap` conversion
BryanFauble Nov 6, 2024
afb6f6e
Try handling dependency with depends_on
BryanFauble Nov 6, 2024
2cfcaee
Add if check within for_each loop
BryanFauble Nov 6, 2024
436908f
Remove unused moved blocks
BryanFauble Nov 6, 2024
501b1d3
[IBCDPE-1095] Use scope based authorization on telemetry upload route…
BryanFauble Nov 19, 2024
74f33bf
[SCHEMATIC-138] SigNoz cold storage and backups (#47)
BryanFauble Nov 21, 2024
dbe7f70
Correction to namespace of where ingress resources are deployed
BryanFauble Nov 21, 2024
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,4 +1,5 @@
*.tfstate*
.terraform
terraform.tfvars
settings.json
settings.json
temporary_files*
43 changes: 34 additions & 9 deletions README.md
@@ -10,24 +10,32 @@ This repo is used to deploy an EKS cluster to AWS. CI/CD is managed through Spac
│ └── policies: Rego policies that can be attached to 0..* spacelift stacks
├── dev: Development/sandbox environment
│ ├── spacelift: Terraform scripts to manage spacelift resources
│ │ └── dpe-sandbox: Spacelift specific resources to manage the CI/CD pipeline
│ │ └── dpe-k8s/dpe-sandbox: Spacelift specific resources to manage the CI/CD pipeline
│ └── stacks: The deployable cloud resources
│ ├── dpe-auth0: Stack used to provision and setup auth0 IDP (Identity Provider) settings
│ ├── dpe-sandbox-k8s: K8s + supporting AWS resources
│ └── dpe-sandbox-k8s-deployments: Resources deployed inside of a K8s cluster
└── modules: Templatized collections of terraform resources that are used in a stack
├── apache-airflow: K8s deployment for apache airflow
│ └── templates: Resources used during deployment of airflow
├── argo-cd: K8s deployment for Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes.
│ └── templates: Resources used during deployment of this helm chart
├── trivy-operator: K8s deployment for trivy, along with a few supporting charts for security scanning
│ └── templates: Resources used during deployment of these helm charts
├── victoria-metrics: K8s deployment for victoria metrics, a promethus like tool for cluster metric collection
│ └── templates: Resources used during deployment of these helm charts
├── cert-manager: Handles provisioning TLS certificates for the cluster
├── envoy-gateway: API Gateway for the cluster securing and providing secure traffic into the cluster
├── postgres-cloud-native: Used to provision a postgres instance
├── postgres-cloud-native-operator: Operator that manages the lifecycle of postgres instances on the cluster
├── demo-network-policies: K8s deployment for a demo showcasing how to use network policies
├── demo-pod-level-security-groups-strict: K8s deployment for a demo showcasing how to use pod level security groups in strict mode
├── sage-aws-eks: Sage specific EKS cluster for AWS
├── sage-aws-eks-addons: Sets up additional resources that need to be installed post creation of the EKS cluster
├── sage-aws-k8s-node-autoscaler: K8s node autoscaler using spotinst ocean
└── sage-aws-vpc: Sage specific VPC for AWS
├── sage-aws-ses: AWS SES (Simple email service) setup
├── sage-aws-vpc: Sage specific VPC for AWS
├── signoz: SigNoz provides APM, logs, traces, metrics, exceptions, & alerts in a single tool
├── trivy-operator: K8s deployment for trivy, along with a few supporting charts for security scanning
│ └── templates: Resources used during deployment of these helm charts
├── victoria-metrics: K8s deployment for victoria metrics, a promethus like tool for cluster metric collection
│ └── templates: Resources used during deployment of these helm charts
```

This root `main.tf` contains all the "Things" that are going to be deployed.
@@ -164,7 +172,7 @@ allow us to review for any security advisories.

### Deploying an application to the kubernetes cluster
Deployment of applications to the kubernetes cluster is handled through the combination
of terraform (.tf) scripts, spacelift (CICD tool), and ArgoCd (Declarative definitions
of terraform (.tf) scripts, spacelift (CICD tool), and ArgoCd or Flux CD (Declarative definitions
for applications).

To start of the deployment journey the first step is to create a new terraform module
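As a rough illustration of that first step — not code from this repository — a new module under `modules/<app-name>/` that wraps a Helm chart might look something like the sketch below; the release name, chart repository, and version are placeholders.

```hcl
# Hypothetical sketch only: a module wrapping a Helm chart, following the
# modules/<name>/templates convention described in the tree above.
resource "helm_release" "app" {
  name             = "my-app"                           # placeholder release name
  repository       = "https://example.github.io/charts" # placeholder chart repository
  chart            = "my-app"
  version          = "1.0.0"
  namespace        = "my-app"
  create_namespace = true

  # Per-module values file, mirroring the templates/ directories used by other modules
  values = [file("${path.module}/templates/values.yaml")]
}
```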
@@ -283,10 +291,27 @@ This document describes the abbreviated process below:
"iam:*PolicyVersion",
"iam:*OpenIDConnectProvider",
"iam:*InstanceProfile",
"iam:ListPolicyVersions"
"iam:ListPolicyVersions",
"iam:ListGroupsForUser",
"iam:ListAttachedUserPolicies"
],
"Resource": "*"
}
},
{
"Effect": "Allow",
"Action": [
"iam:CreateUser",
"iam:AttachUserPolicy",
"iam:ListPolicies",
"iam:TagUser",
"iam:GetUser",
"iam:DeleteUser",
"iam:CreateAccessKey",
"iam:ListAccessKeys",
"iam:DeleteAccessKeys"
],
"Resource": "arn:aws:iam::{{AWS ACCOUNT ID}}:user/smtp_user"
}
Comment on lines +300 to +314

Contributor Author: Required for the IaC to set up the SMTP user. Is this acceptable, or should this user be created manually?

Reply: My two cents: this is acceptable since it's scoped to a single user resource.

]
}
```
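For context on the review comment above about the `smtp_user` permissions, here is a minimal sketch — assumed resource names, not the repository's actual SES module — of the kind of Terraform that needs those IAM actions when it provisions the SMTP user and its credentials.

```hcl
# Hypothetical sketch of why the policy grants iam:CreateUser, iam:TagUser,
# iam:CreateAccessKey, etc.: an SES module that provisions the SMTP user in code.
resource "aws_iam_user" "smtp_user" {
  name = "smtp_user"
  tags = { Purpose = "SES SMTP credentials" }
}

resource "aws_iam_access_key" "smtp_user" {
  user = aws_iam_user.smtp_user.name
}

data "aws_iam_policy_document" "ses_send" {
  statement {
    actions   = ["ses:SendRawEmail"]
    resources = ["*"]
  }
}

resource "aws_iam_user_policy" "smtp_send" {
  name   = "ses-send"
  user   = aws_iam_user.smtp_user.name
  policy = data.aws_iam_policy_document.ses_send.json
}

# The SMTP password is derived from the access key's SES SMTP password attribute.
output "smtp_password" {
  value     = aws_iam_access_key.smtp_user.ses_smtp_password_v4
  sensitive = true
}
```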
89 changes: 65 additions & 24 deletions deployments/main.tf
@@ -31,24 +31,55 @@ module "dpe-sandbox-spacelift-development" {
k8s_stack_deployments_name = "DPE DEV Kubernetes Deployments"
k8s_stack_deployments_project_root = "deployments/stacks/dpe-k8s-deployments"

auth0_stack_name = "DPE DEV Auth0"
auth0_stack_project_root = "deployments/stacks/dpe-auth0"
auth0_domain = "dev-sage-dpe.us.auth0.com"
auth0_clients = [
{
name = "bfauble - automation"
description = "App for testing signoz"
app_type = "non_interactive"
scopes = ["write:telemetry"]
},
{
name = "schematic - Github Actions"
description = "Client for Github Actions to export telemetry data"
app_type = "non_interactive"
scopes = ["write:telemetry"]
},
{
name = "schematic - Dev"
description = "Client for schematic deployed to AWS DEV to export telemetry data"
app_type = "non_interactive"
scopes = ["write:telemetry"]
},
]
auth0_identifier = "https://dev.sagedpe.org"

aws_account_id = "631692904429"
region = "us-east-1"

cluster_name = "dpe-k8-sandbox"
vpc_name = "dpe-sandbox"

vpc_cidr_block = "10.51.0.0/16"
# public_subnet_cidrs = ["10.51.1.0/24", "10.51.2.0/24", "10.51.3.0/24"]
# private_subnet_cidrs = ["10.51.4.0/24", "10.51.5.0/24", "10.51.6.0/24"]
# azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
# For now, we are only using one public and one private subnet. This is due to how
# EBS can only be mounted to a single AZ. We will need to revisit this if we want to
# allow usage of EFS ($$$$), or add some kind of EBS volume replication.
# Note: EKS requires at least two subnets in different AZs. However, we are only using
# a single subnet for node deployment.
public_subnet_cidrs = ["10.51.1.0/24", "10.51.2.0/24"]
private_subnet_cidrs = ["10.51.4.0/24", "10.51.5.0/24"]
azs = ["us-east-1a", "us-east-1b"]
vpc_cidr_block = "10.52.16.0/20"
# A public subnet is required for each AZ in which the worker nodes are deployed
public_subnet_cidrs = ["10.52.16.0/24", "10.52.17.0/24", "10.52.19.0/24"]
private_subnet_cidrs_eks_control_plane = ["10.52.18.0/28", "10.52.18.16/28"]
azs_eks_control_plane = ["us-east-1a", "us-east-1b"]

private_subnet_cidrs_eks_worker_nodes = ["10.52.28.0/22", "10.52.24.0/22", "10.52.20.0/22"]
azs_eks_worker_nodes = ["us-east-1c", "us-east-1b", "us-east-1a"]
Comment on lines +65 to +72

Contributor Author: This shrinks the VPC size. The order of the AZs also means:

  1. EKS Control plane is deployed to 2 AZs
  2. Public subnets are deployed to all 3 AZs
  3. Private subnets are deployed to all 3 AZs

Reviewer: This is fine as-is, but there are 6 availability zones in us-east-1 if you wanted to spread them out more.

Contributor Author: That is something I saw mentioned online when starting to tune available spot instances, since each AZ gets their own pool. I hadn't seen any issues with our current setup, but will keep it in mind that we are able to expand out to 6 AZs if we ever need to.
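As a small illustration (not part of this diff), the same dev address plan could be derived from the /20 block with `cidrsubnet()` instead of hard-coded strings:

```hcl
# Hypothetical alternative: derive the dev subnets above from the /20 block.
locals {
  vpc_cidr_block = "10.52.16.0/20"

  # /24 public subnets -> 10.52.16.0/24, 10.52.17.0/24, 10.52.19.0/24
  public_subnet_cidrs = [for i in [0, 1, 3] : cidrsubnet(local.vpc_cidr_block, 4, i)]

  # /28 EKS control plane subnets -> 10.52.18.0/28, 10.52.18.16/28
  private_subnet_cidrs_eks_control_plane = [for i in [32, 33] : cidrsubnet(local.vpc_cidr_block, 8, i)]

  # /22 worker node subnets -> 10.52.28.0/22, 10.52.24.0/22, 10.52.20.0/22
  private_subnet_cidrs_eks_worker_nodes = [for i in [3, 2, 1] : cidrsubnet(local.vpc_cidr_block, 2, i)]
}
```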


enable_cluster_ingress = true
enable_otel_ingress = true
Comment on lines +74 to +75

Contributor Author: enable_cluster_ingress controls if envoy gateway is going to be deployed to the cluster. enable_otel_ingress controls whether or not a route to submit telemetry data is going to be created.
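A minimal sketch, with assumed module paths and inputs, of how boolean flags like these typically gate whether a component gets created at all:

```hcl
# Hypothetical wiring (not the PR's actual code): count drops the module to zero
# instances when the flag is false, so nothing is rendered or deployed.
variable "enable_cluster_ingress" {
  description = "Deploy Envoy Gateway and the cluster ingress resources when true"
  type        = bool
  default     = false
}

module "envoy_gateway" {
  count  = var.enable_cluster_ingress ? 1 : 0
  source = "./modules/envoy-gateway" # assumed path
}
```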

ssl_hostname = "dev.sagedpe.org"
auth0_jwks_uri = "https://dev-sage-dpe.us.auth0.com/.well-known/jwks.json"
Comment on lines +76 to +77

Contributor Author: Used for SSL termination at the gateway and to validate incoming requests on routes that require a JWT auth header be provided.
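For the JWT side of this, Envoy Gateway can validate tokens against the Auth0 JWKS endpoint via a SecurityPolicy attached to the protected route. A rough sketch expressed as a Terraform `kubernetes_manifest` follows; the resource names are placeholders and the exact SecurityPolicy schema varies by Envoy Gateway version, so treat it as an assumption rather than this PR's implementation.

```hcl
# Hypothetical sketch: attach JWT validation to the telemetry route using the
# auth0_jwks_uri above. Names and the CRD schema are assumptions, not this PR's code.
resource "kubernetes_manifest" "otel_jwt_auth" {
  manifest = {
    apiVersion = "gateway.envoyproxy.io/v1alpha1"
    kind       = "SecurityPolicy"
    metadata = {
      name      = "otel-ingress-jwt" # placeholder
      namespace = "envoy-gateway"    # placeholder
    }
    spec = {
      targetRef = {
        group = "gateway.networking.k8s.io"
        kind  = "HTTPRoute"
        name  = "otel-ingress" # placeholder route
      }
      jwt = {
        providers = [
          {
            name = "auth0"
            remoteJWKS = {
              uri = "https://dev-sage-dpe.us.auth0.com/.well-known/jwks.json"
            }
          }
        ]
      }
    }
  }
}
```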

deploy_auth0 = true

ses_email_identities = ["aws-dpe-dev@sagebase.org"]
# Defines the email address that will be used as the sender of the email alerts
smtp_from = "aws-dpe-dev@sagebase.org"
}

module "dpe-sandbox-spacelift-production" {
@@ -69,22 +100,32 @@ module "dpe-sandbox-spacelift-production" {
k8s_stack_deployments_name = "DPE Kubernetes Deployments"
k8s_stack_deployments_project_root = "deployments/stacks/dpe-k8s-deployments"

auth0_stack_name = "DPE Auth0"
auth0_stack_project_root = "deployments/stacks/dpe-auth0"
auth0_domain = ""
auth0_clients = []
auth0_identifier = ""

aws_account_id = "766808016710"
region = "us-east-1"

cluster_name = "dpe-k8"
vpc_name = "dpe-k8"

vpc_cidr_block = "10.52.0.0/16"
# public_subnet_cidrs = ["10.52.1.0/24", "10.52.2.0/24", "10.52.3.0/24"]
# private_subnet_cidrs = ["10.52.4.0/24", "10.52.5.0/24", "10.52.6.0/24"]
# azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
# For now, we are only using one public and one private subnet. This is due to how
# EBS can only be mounted to a single AZ. We will need to revisit this if we want to
# allow usage of EFS ($$$$), or add some kind of EBS volume replication.
# Note: EKS requires at least two subnets in different AZs. However, we are only using
# a single subnet for node deployment.
public_subnet_cidrs = ["10.52.1.0/24", "10.52.2.0/24"]
private_subnet_cidrs = ["10.52.4.0/24", "10.52.5.0/24"]
azs = ["us-east-1a", "us-east-1b"]
vpc_cidr_block = "10.52.0.0/20"
# A public subnet is required for each AZ in which the worker nodes are deployed
public_subnet_cidrs = ["10.52.0.0/24", "10.52.1.0/24", "10.52.3.0/24"]
private_subnet_cidrs_eks_control_plane = ["10.52.2.0/28", "10.52.2.16/28"]
azs_eks_control_plane = ["us-east-1a", "us-east-1b"]

private_subnet_cidrs_eks_worker_nodes = ["10.52.12.0/22", "10.52.8.0/22", "10.52.4.0/22"]
azs_eks_worker_nodes = ["us-east-1c", "us-east-1b", "us-east-1a"]

enable_cluster_ingress = false
enable_otel_ingress = false
ssl_hostname = ""
auth0_jwks_uri = ""
deploy_auth0 = false

ses_email_identities = []
}
121 changes: 77 additions & 44 deletions deployments/spacelift/dpe-k8s/main.tf
@@ -1,32 +1,51 @@
locals {
k8s_stack_environment_variables = {
aws_account_id = var.aws_account_id
region = var.region
pod_security_group_enforcing_mode = var.pod_security_group_enforcing_mode
cluster_name = var.cluster_name
vpc_name = var.vpc_name
vpc_cidr_block = var.vpc_cidr_block
public_subnet_cidrs = var.public_subnet_cidrs
private_subnet_cidrs = var.private_subnet_cidrs
azs = var.azs
aws_account_id = var.aws_account_id
region = var.region
pod_security_group_enforcing_mode = var.pod_security_group_enforcing_mode
cluster_name = var.cluster_name
vpc_name = var.vpc_name
vpc_cidr_block = var.vpc_cidr_block
public_subnet_cidrs = var.public_subnet_cidrs
private_subnet_cidrs_eks_control_plane = var.private_subnet_cidrs_eks_control_plane
private_subnet_cidrs_eks_worker_nodes = var.private_subnet_cidrs_eks_worker_nodes
azs_eks_control_plane = var.azs_eks_control_plane
azs_eks_worker_nodes = var.azs_eks_worker_nodes
ses_email_identities = var.ses_email_identities
}

k8s_stack_deployments_variables = {
spotinst_account = var.spotinst_account
vpc_cidr_block = var.vpc_cidr_block
spotinst_account = var.spotinst_account
vpc_cidr_block = var.vpc_cidr_block
cluster_name = var.cluster_name
auto_deploy = var.auto_deploy
auto_prune = var.auto_prune
git_revision = var.git_branch
aws_account_id = var.aws_account_id
enable_cluster_ingress = var.enable_cluster_ingress
enable_otel_ingress = var.enable_otel_ingress
ssl_hostname = var.ssl_hostname
auth0_jwks_uri = var.auth0_jwks_uri
smtp_from = var.smtp_from
auth0_identifier = var.auth0_identifier
}

auth0_stack_variables = {
cluster_name = var.cluster_name
auto_deploy = var.auto_deploy
auto_prune = var.auto_prune
git_revision = var.git_branch
aws_account_id = var.aws_account_id
auth0_domain = var.auth0_domain
auth0_clients = var.auth0_clients
auth0_identifier = var.auth0_identifier
}

# Variables to be passed from the k8s stack to the deployments stack
k8s_stack_to_deployment_variables = {
vpc_id = "TF_VAR_vpc_id"
private_subnet_ids = "TF_VAR_private_subnet_ids"
node_security_group_id = "TF_VAR_node_security_group_id"
pod_to_node_dns_sg_id = "TF_VAR_pod_to_node_dns_sg_id"
vpc_id = "TF_VAR_vpc_id"
private_subnet_ids_eks_worker_nodes = "TF_VAR_private_subnet_ids_eks_worker_nodes"
node_security_group_id = "TF_VAR_node_security_group_id"
pod_to_node_dns_sg_id = "TF_VAR_pod_to_node_dns_sg_id"
smtp_user = "TF_VAR_smtp_user"
smtp_password = "TF_VAR_smtp_password"
cluster_oidc_provider_arn = "TF_VAR_cluster_oidc_provider_arn"
}
}

@@ -160,31 +179,6 @@ resource "spacelift_stack_dependency_reference" "cluster-name" {
# stack_id = spacelift_stack.k8s-stack.id
# }

resource "spacelift_stack_destructor" "k8s-stack-deployments-destructor" {
depends_on = [
spacelift_stack.k8s-stack,
spacelift_aws_integration_attachment.k8s-deployments-aws-integration-attachment,
spacelift_context_attachment.k8s-kubeconfig-hooks,
spacelift_stack_dependency_reference.cluster-name,
spacelift_stack_dependency_reference.region-name,
spacelift_environment_variable.k8s-stack-deployments-environment-variables
]

stack_id = spacelift_stack.k8s-stack-deployments.id
}

resource "spacelift_stack_destructor" "k8s-stack-destructor" {
depends_on = [
spacelift_aws_integration_attachment.k8s-aws-integration-attachment,
spacelift_context_attachment.k8s-kubeconfig-hooks,
spacelift_stack_dependency_reference.cluster-name,
spacelift_stack_dependency_reference.region-name,
spacelift_environment_variable.k8s-stack-environment-variables
]

stack_id = spacelift_stack.k8s-stack.id
}

resource "spacelift_aws_integration_attachment" "k8s-aws-integration-attachment" {
integration_id = var.aws_integration_id
stack_id = spacelift_stack.k8s-stack.id
@@ -199,3 +193,42 @@ resource "spacelift_aws_integration_attachment" "k8s-deployments-aws-integration
read = true
write = true
}

resource "spacelift_stack" "auth0" {
count = var.deploy_auth0 ? 1 : 0
github_enterprise {
namespace = "Sage-Bionetworks-Workflows"
id = "sage-bionetworks-workflows-gh"
}

depends_on = [
spacelift_space.dpe-space
]

administrative = false
autodeploy = var.auto_deploy
branch = var.git_branch
description = "Stack used to create and manage Auth0 for authentication"
name = var.auth0_stack_name
project_root = var.auth0_stack_project_root
repository = "eks-stack"
terraform_version = var.opentofu_version
terraform_workflow_tool = "OPEN_TOFU"
space_id = spacelift_space.dpe-space.id
additional_project_globs = [
"deployments/"
]
}

resource "spacelift_environment_variable" "auth0-stack-environment-variables" {
depends_on = [
spacelift_stack.auth0
]

for_each = { for k, v in local.auth0_stack_variables : k => v if var.deploy_auth0 }

stack_id = spacelift_stack.auth0[0].id
name = "TF_VAR_${each.key}"
value = try(tostring(each.value), jsonencode(each.value))
write_only = false
}