[IBCDPE-1007] Templatizing items out to a root level #15
Conversation
…oviders themselves
}
}
}
resource "kubectl_manifest" "vmservicescrape" {
Swapping over to this resolves a race condition. The VMServiceScrape kind depends on a CRD (Custom Resource Definition) already being installed on the cluster, so the terraform plan stage was failing with:
│ Error: API did not recognize GroupVersionKind from manifest (CRD may not be installed)
│
│ with module.trivy-operator.kubernetes_manifest.vmservicescrape,
│ on .terraform/modules/trivy-operator/main.tf line 20, in resource "kubernetes_manifest" "vmservicescrape":
│ 20: resource "kubernetes_manifest" "vmservicescrape" {
│
│ no matches for kind "VMServiceScrape" in group
│ "operator.victoriametrics.com"
The suggestion was to try a different provider (https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/kubectl_manifest) to get around this issue. It allows the resources to be created in the expected order.
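For context, here is a minimal sketch of what the swapped resource might look like, assuming the VMServiceScrape targets the trivy-operator service; the apiVersion, metadata name, selector labels, and port below are illustrative guesses, not the PR's actual manifest:

resource "kubectl_manifest" "vmservicescrape" {
  # The kubectl provider does not need the CRD's schema at plan time,
  # so planning succeeds even before the VictoriaMetrics operator CRDs exist.
  yaml_body = <<-YAML
    apiVersion: operator.victoriametrics.com/v1beta1   # assumed CRD version
    kind: VMServiceScrape
    metadata:
      name: trivy-operator                             # hypothetical name
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: trivy-operator       # hypothetical label
      endpoints:
        - port: metrics                                # hypothetical port name
  YAML
}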
Interesting that this resource has you pass a whole YAML configuration to the yaml_body arg instead of something more similar to how kubernetes_manifest is configured.
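For comparison, a sketch (under the same illustrative assumptions as above) of how the equivalent kubernetes_manifest resource is configured: it takes an HCL object via its manifest argument rather than raw YAML, and resolves that object against the cluster's API schema at plan time, which is what triggered the error above:

resource "kubernetes_manifest" "vmservicescrape" {
  # The hashicorp/kubernetes provider validates this object against the
  # cluster's API schema during plan, so the CRD must already be installed.
  manifest = {
    apiVersion = "operator.victoriametrics.com/v1beta1"  # assumed CRD version
    kind       = "VMServiceScrape"
    metadata   = { name = "trivy-operator" }             # illustrative name
    spec = {
      selector  = { matchLabels = { "app.kubernetes.io/name" = "trivy-operator" } }
      endpoints = [{ port = "metrics" }]
    }
  }
}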
@@ -16,7 +16,7 @@ victoria-metrics-operator:
   cleanupImage:
     repository: bitnami/kubectl
     # use image tag that matches k8s API version by default
-    # tag: 1.29.6
+    tag: 1.30.2
Is pinning this tag better than using the default as was done before?
There was a bug I had reported around this: a '+' sign was getting included in the image tag, which prevented the cleanup job from running. I updated the Helm chart to a newer version, so we no longer need to set this tag manually. I will remove it.
Problem:
Solution: changes to main.tf and deployments/main.tf
Testing: