```
Error: Unable to continue with install: ServiceAccount "zabbix" in namespace "monitoring" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "zabbix"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "monitoring"
helm.go:84: [debug] ServiceAccount "zabbix" in namespace "monitoring" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "zabbix"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "monitoring"
Unable to continue with install
```
Maybe the approach itself was the problem: trying to have both ArgoCD and the Helm CLI manage the same installation.
Could it be that you previously had a manual installation (`helm install ...`) in the same namespace, and after uninstalling it you did not remove all the (probably remaining) manifests? I am not sure whether Helm removes ServiceAccounts on uninstall by default.
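If a leftover ServiceAccount is indeed the cause, one common workaround (just a sketch, assuming the object really should belong to the `zabbix` release in `monitoring`) is to add the ownership metadata Helm checks for, so it can adopt the existing resource instead of refusing the install:

```shell
# Sketch: add the annotations/label Helm 3 uses to track ownership,
# so the pre-existing ServiceAccount can be adopted by the release.
kubectl -n monitoring annotate serviceaccount zabbix \
  meta.helm.sh/release-name=zabbix \
  meta.helm.sh/release-namespace=monitoring --overwrite
kubectl -n monitoring label serviceaccount zabbix \
  app.kubernetes.io/managed-by=Helm --overwrite
```

These are exactly the keys named in the error message above; after setting them, the `helm upgrade --install` should no longer reject the ServiceAccount.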
Let's have a look at this in more detail..... in the next year :-)
Yes... 2024 finished :-)
But I recreated the scenario today.
I created a new Kubernetes cluster using kind, installed Zabbix using ArgoCD, and then tried to apply a new config using the Helm CLI.
Again, maybe this scenario is unrealistic in a production environment. The focus of this issue is the value of the labels; we can ignore my installation scenario.
Happy new year, my friend, and thanks a lot for your help with this project.
OK, I see. We are using old label names from a very early version of this Helm chart and have never updated or fixed them.
I can implement the labelling the way it is done when creating a brand-new Helm chart with the `helm create` command: using `Release.Service` for the `app.kubernetes.io/managed-by` label, defining all base-level labels in `_helpers.tmpl`, and using them in the manifests.
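For illustration, the `helm create` scaffolding defines common labels roughly like this (a sketch; the template names `zabbix.labels`, `zabbix.chart`, and `zabbix.selectorLabels` are assumptions modelled on the generated chart, not the chart's current code):

```yaml
{{/* _helpers.tmpl: common labels, in the style generated by `helm create` */}}
{{- define "zabbix.labels" -}}
helm.sh/chart: {{ include "zabbix.chart" . }}
{{ include "zabbix.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

Each manifest would then pull these in with `labels: {{- include "zabbix.labels" . | nindent 4 }}`, so `managed-by` is always `Helm` when installed via the Helm CLI and no manifest hardcodes the service name there.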
I will include this change in the to-be-published pull request I am working on (postgres Access...), as that one will be a breaking change anyway. Or do you think it would be better to create an additional PR for this?
Describe the bug
@fibbs I think we are using some labels incorrectly...
We set the label `app.kubernetes.io/managed-by` to the service name, but it must be `Helm`.
For the service name, we can use `app.kubernetes.io/part-of` or `app.kubernetes.io/component` instead.
Reference: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
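Following that reference, the corrected metadata could look like this (a sketch; the `part-of` and `component` values shown are illustrative assumptions, not values taken from the chart):

```yaml
metadata:
  labels:
    app.kubernetes.io/name: zabbix
    app.kubernetes.io/managed-by: Helm      # the tool managing the resource
    app.kubernetes.io/part-of: zabbix       # the higher-level application
    app.kubernetes.io/component: server     # the role within the application
```

The key point is that `managed-by` identifies the tool (Helm, ArgoCD, ...), while `part-of` and `component` are the right places for application and service names.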
What happened:
I installed Zabbix using ArgoCD as described here: #140 (comment)
After this I tried to change some config with the Helm CLI using the command:

```shell
helm upgrade --install zabbix --dependency-update --create-namespace -f zabbix_values.yaml ~/git/my_git/helm-zabbix/charts/zabbix/ -n monitoring --debug --dry-run
```
But I received the error:
Maybe what I did was wrong, wanting ArgoCD and the Helm CLI to maintain the same installation.
But I found this doc: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ and understood that we are using the label incorrectly.
What you expected to happen:
The label `app.kubernetes.io/managed-by` must be `Helm`.
We can use `app.kubernetes.io/part-of` or `app.kubernetes.io/component` for the service name.