fix(controller): ignore category value delete error #470
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
##             main     #470   +/-   ##
=======================================
  Coverage   31.14%   31.14%
=======================================
  Files          14       14
  Lines        1477     1477
=======================================
  Hits          460      460
  Misses       1017     1017
☔ View full report in Codecov by Sentry.
LGTM. Although in the future, we should rethink the approach to default categories altogether.
Force-pushed fce5054 to b53b9d3 (Compare)
Force-pushed b53b9d3 to f1de4e2 (Compare)
The CVE causing "Build, Lint, and Test / build-container" to fail is fixed in #472
Force-pushed 575432d to e0f5a70 (Compare)
Force-pushed 384e87a to 825df94 (Compare)
Deleting only the 2nd cluster failed, but with a different issue:

Nutanix categories Create and delete 2 clusters with same name with default cluster categories and should succeed [capx-feature-test, categories, same-name-clusters]
/Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/categories_test.go:179
STEP: Creating a namespace for hosting the "cluster-categories" test spec @ 08/13/24 11:40:45.992
INFO: Creating namespace cluster-categories-ah639c
INFO: Creating event watcher for namespace "cluster-categories-ah639c"
STEP: Creating a workload cluster 1 @ 08/13/24 11:40:46.007
INFO: Creating the workload cluster with name "cluster-categories-vyrr41" using the "(default)" template (Kubernetes v1.30.0, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster cluster-categories-vyrr41 --infrastructure (default) --kubernetes-version v1.30.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Creating the workload cluster with name "cluster-categories-vyrr41" from the provided yaml
INFO: Applying the cluster template yaml of cluster cluster-categories-ah639c/cluster-categories-vyrr41
Running kubectl apply --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind1553999536 -f -
stdout:
configmap/cluster-categories-vyrr41-pc-trusted-ca-bundle created
configmap/nutanix-ccm created
configmap/cni-cluster-categories-vyrr41-crs-cni created
secret/cluster-categories-vyrr41 created
secret/nutanix-ccm-secret created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
clusterresourceset.addons.cluster.x-k8s.io/cluster-categories-vyrr41-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-categories-vyrr41-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-categories-vyrr41 created
machinedeployment.cluster.x-k8s.io/cluster-categories-vyrr41-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-categories-vyrr41-mhc created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-categories-vyrr41-kcp created nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-categories-vyrr41 created nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-categories-vyrr41-mt-0 created INFO: Waiting for the cluster infrastructure of cluster cluster-categories-ah639c/cluster-categories-vyrr41 to be provisioned STEP: Waiting for cluster to enter the provisioned phase @ 08/13/24 11:40:47.502 INFO: Waiting for control plane of cluster cluster-categories-ah639c/cluster-categories-vyrr41 to be initialized INFO: Waiting for the first control plane machine managed by cluster-categories-ah639c/cluster-categories-vyrr41-kcp to be provisioned STEP: Waiting for one control plane node to exist @ 08/13/24 11:40:57.53 INFO: Waiting for control plane of cluster cluster-categories-ah639c/cluster-categories-vyrr41 to be ready INFO: Waiting for control plane cluster-categories-ah639c/cluster-categories-vyrr41-kcp to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready @ 08/13/24 11:42:27.676 STEP: Checking all the control plane machines are in the expected failure domains @ 08/13/24 11:42:27.711 INFO: Waiting for the machine deployments of cluster cluster-categories-ah639c/cluster-categories-vyrr41 to be provisioned STEP: Waiting for the workload nodes to exist @ 08/13/24 11:42:27.764 STEP: Checking all the machines controlled by cluster-categories-vyrr41-wmd are in the "" failure domain @ 08/13/24 11:42:27.785 INFO: Waiting for the machine pools of cluster cluster-categories-ah639c/cluster-categories-vyrr41 to be provisioned STEP: Checking cluster category condition is true @ 08/13/24 11:42:27.811 STEP: Checking if a category was created @ 08/13/24 11:42:27.825 STEP: Checking if there are VMs assigned to this category @ 08/13/24 11:42:28.162 STEP: Setting different Control plane 
endpoint IP for 2nd cluster @ 08/13/24 11:42:30.87 STEP: configure env for 2nd cluster @ 08/13/24 11:42:30.87 STEP: Creating a namespace for hosting the "cluster-categories-2" test spec @ 08/13/24 11:42:30.87 INFO: Creating namespace cluster-categories-2-fij9lt INFO: Creating event watcher for namespace "cluster-categories-2-fij9lt" STEP: Creating a workload cluster 2 with same name @ 08/13/24 11:42:30.904 INFO: Creating the workload cluster with name "cluster-categories-vyrr41" using the "(default)" template (Kubernetes v1.30.0, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster cluster-categories-vyrr41 --infrastructure (default) --kubernetes-version v1.30.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Creating the workload cluster with name "cluster-categories-vyrr41" from the provided yaml INFO: Applying the cluster template yaml of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 Running kubectl apply --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind1553999536 -f - stdout: configmap/cluster-categories-vyrr41-pc-trusted-ca-bundle created configmap/nutanix-ccm created configmap/cni-cluster-categories-vyrr41-crs-cni created secret/cluster-categories-vyrr41 created secret/nutanix-ccm-secret created clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created clusterresourceset.addons.cluster.x-k8s.io/cluster-categories-vyrr41-crs-cni created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-categories-vyrr41-kcfg-0 created cluster.cluster.x-k8s.io/cluster-categories-vyrr41 created machinedeployment.cluster.x-k8s.io/cluster-categories-vyrr41-wmd created machinehealthcheck.cluster.x-k8s.io/cluster-categories-vyrr41-mhc created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-categories-vyrr41-kcp created nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-categories-vyrr41 created 
nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-categories-vyrr41-mt-0 created INFO: Waiting for the cluster infrastructure of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be provisioned STEP: Waiting for cluster to enter the provisioned phase @ 08/13/24 11:42:32.915 INFO: Waiting for control plane of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be initialized INFO: Waiting for the first control plane machine managed by cluster-categories-2-fij9lt/cluster-categories-vyrr41-kcp to be provisioned STEP: Waiting for one control plane node to exist @ 08/13/24 11:43:12.977 INFO: Waiting for control plane of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be ready INFO: Waiting for control plane cluster-categories-2-fij9lt/cluster-categories-vyrr41-kcp to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready @ 08/13/24 11:44:33.144 STEP: Checking all the control plane machines are in the expected failure domains @ 08/13/24 11:45:03.261 INFO: Waiting for the machine deployments of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be provisioned STEP: Waiting for the workload nodes to exist @ 08/13/24 11:45:03.284 STEP: Checking all the machines controlled by cluster-categories-vyrr41-wmd are in the "" failure domain @ 08/13/24 11:45:23.333 INFO: Waiting for the machine pools of cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be provisioned STEP: Checking cluster category condition is true @ 08/13/24 11:45:23.348 STEP: Checking if a category was created @ 08/13/24 11:45:23.358 STEP: Checking if there are VMs assigned to this category @ 08/13/24 11:45:23.689 STEP: Delete 2nd cluster and namespace2 @ 08/13/24 11:45:25.343 STEP: Deleting cluster cluster-categories-2-fij9lt/cluster-categories-vyrr41 @ 08/13/24 11:45:25.345 INFO: Waiting for the Cluster object to be deleted STEP: Waiting for cluster 
cluster-categories-2-fij9lt/cluster-categories-vyrr41 to be deleted @ 08/13/24 11:45:25.365 INFO: Check for all the Cluster API resources being deleted [FAILED] in [It] - /Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.4/framework/cluster_helpers.go:243 @ 08/13/24 11:49:45.454 STEP: Dumping logs from the "cluster-categories-vyrr41" workload cluster @ 08/13/24 11:49:45.457 Failed to get logs for Machine cluster-categories-vyrr41-kcp-kgmt2, Cluster cluster-categories-ah639c/cluster-categories-vyrr41: [error creating container exec: Error response from daemon: No such container: cluster-categories-vyrr41-kcp-kgmt2, : error creating container exec: Error response from daemon: No such container: cluster-categories-vyrr41-kcp-kgmt2] Failed to get logs for Machine cluster-categories-vyrr41-wmd-brlqf-qt6rk, Cluster cluster-categories-ah639c/cluster-categories-vyrr41: [error creating container exec: Error response from daemon: No such container: cluster-categories-vyrr41-wmd-brlqf-qt6rk, : error creating container exec: Error response from daemon: No such container: cluster-categories-vyrr41-wmd-brlqf-qt6rk] Failed to get infrastructure logs for Cluster cluster-categories-ah639c/cluster-categories-vyrr41: failed to inspect container "cluster-categories-vyrr41-lb": Error response from daemon: No such container: cluster-categories-vyrr41-lb STEP: Dumping all the Cluster API resources in the "cluster-categories-ah639c" namespace @ 08/13/24 11:49:45.606 • [FAILED] [539.842 seconds] Nutanix categories [It] Create and delete 2 clusters with same name with default cluster categories and should succeed [capx-feature-test, categories, same-name-clusters] /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/categories_test.go:179 [FAILED] Timed out after 180.000s. 
There are still Cluster API resources in the "cluster-categories-2-fij9lt" namespace Expected <[]*unstructured.Unstructured | len:2, cap:2>: [ { Object: { "apiVersion": "addons.cluster.x-k8s.io/v1beta1", "kind": "ClusterResourceSet", "metadata": { "annotations": { "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"addons.cluster.x-k8s.io/v1beta1\",\"kind\":\"ClusterResourceSet\",\"metadata\":{\"annotations\":{},\"name\":\"cluster-categories-vyrr41-crs-cni\",\"namespace\":\"cluster-categories-2-fij9lt\"},\"spec\":{\"clusterSelector\":{\"matchLabels\":{\"cni\":\"cluster-categories-vyrr41-crs-cni\"}},\"resources\":[{\"kind\":\"ConfigMap\",\"name\":\"cni-cluster-categories-vyrr41-crs-cni\"}],\"strategy\":\"ApplyOnce\"}}\n", }, "creationTimestamp": "2024-08-13T18:42:32Z", "finalizers": <[]interface {} | len:1, cap:1>[ "addons.cluster.x-k8s.io", ], "name": "cluster-categories-vyrr41-crs-cni", "namespace": "cluster-categories-2-fij9lt", "uid": "15002714-8d14-4a66-bf1b-f1e90e9a0ecf", "generation": 1, "managedFields": <[]interface {} | len:3, cap:4>[ { "time": "2024-08-13T18:42:32Z", "apiVersion": "addons.cluster.x-k8s.io/v1beta1", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:strategy": {}, ".": {}, "f:clusterSelector": {}, "f:resources": {}, }, "f:metadata": { "f:annotations": { ".": {}, "f:kubectl.kubernetes.io/last-applied-configuration": {}, }, }, }, "manager": "kubectl-client-side-apply", "operation": "Update", }, { "apiVersion": "addons.cluster.x-k8s.io/v1beta1", "fieldsType": "FieldsV1", "fieldsV1": { "f:metadata": { "f:finalizers": { "v:\"addons.cluster.x-k8s.io\"": {}, ".": {}, }, }, }, "manager": "manager", "operation": "Update", "time": "2024-08-13T18:42:32Z", }, { "time": "2024-08-13T18:44:48Z", "apiVersion": "addons.cluster.x-... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output to be empty In [It] at: /Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.4/framework/cluster_helpers.go:243 @ 08/13/24 11:49:45.454 Full Stack Trace sigs.k8s.io/cluster-api/test/framework.DeleteClusterAndWait({0x117b8e40, 0xc0006ab770}, {{0x117cdb80?, 0xc0016ac3f0?}, 0xc00222c820?}, {0xc00204cb40, 0x2, 0x2}) /Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.4/framework/cluster_helpers.go:243 +0x567 github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e.testHelper.deleteClusterAndWait({0xc000b7b100?, 0x0?}, {0x117b8e40, 0xc0006ab770}, {0x10c7088a, 0x12}, {0x117cb418?, 0xc00206cc60?}, 0xc00222c820, 0xc000b7b100) /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/test_helpers.go:479 +0xb1 github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e.init.func5.6.11() /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/categories_test.go:278 +0xbb github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e.init.func5.6() /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/categories_test.go:277 +0x765 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118 STEP: Dumping logs from the bootstrap cluster @ 08/13/24 11:49:45.846 STEP: Tearing down the management cluster @ 08/13/24 11:49:46.905 [SynchronizedAfterSuite] PASSED [1.059 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo [ReportAfterSuite] PASSED [0.013 seconds] ------------------------------ Summarizing 1 Failure: [FAIL] Nutanix categories [It] Create and 
delete 2 clusters with same name with default cluster categories and should succeed [capx-feature-test, categories, same-name-clusters]
/Users/deepak.muley/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.7.4/framework/cluster_helpers.go:243
Ran 1 of 102 Specs in 686.908 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 101 Skipped
--- FAIL: TestE2E (686.93s)
FAIL
Ginkgo ran 1 suite in 11m47.250120437s
Test Suite Failed
make[1]: *** [Makefile:354: test-e2e] Error 1
make[1]: Leaving directory '/Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'
make: *** [Makefile:421: test-e2e-calico] Error 2

This shows that we still have the following objects present:

kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n cluster-categories-2-fij9lt
configmap/cni-cluster-categories-vyrr41-crs-cni
configmap/kube-root-ca.crt
configmap/nutanix-ccm
event/cluster-categories-vyrr41-kcp-9mb9x.17eb5df4000c08db
event/cluster-categories-vyrr41-kcp-9mb9x.17eb5e083d06a47b
event/cluster-categories-vyrr41-kcp-9mb9x.17eb5e0845b8991e
event/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e01a62ed897
event/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e125d6594a4
event/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e1c50991ea5
event/cluster-categories-vyrr41-wmd-6jl7t.17eb5dec9aeaf3b5
event/cluster-categories-vyrr41-wmd.17eb5dec7d9189db
event/cluster-categories-vyrr41-wmd.17eb5dec964bed1f
event/cluster-categories-vyrr41.17eb5dec7784f174
event/cluster-categories-vyrr41.17eb5df3904b52a9
event/cluster-categories-vyrr41.17eb5df3905e5c4a
event/cluster-categories-vyrr41.17eb5e0dc8699dfa
event/cluster-categories-vyrr41.17eb5e14b1fe375a
event/cluster-categories-vyrr41.17eb5e274200e750
secret/nutanix-ccm-secret
serviceaccount/default
clusterresourceset.addons.cluster.x-k8s.io/cluster-categories-vyrr41-crs-cni
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs
event.events.k8s.io/cluster-categories-vyrr41-kcp-9mb9x.17eb5df4000c08db
event.events.k8s.io/cluster-categories-vyrr41-kcp-9mb9x.17eb5e083d06a47b
event.events.k8s.io/cluster-categories-vyrr41-kcp-9mb9x.17eb5e0845b8991e
event.events.k8s.io/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e01a62ed897
event.events.k8s.io/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e125d6594a4
event.events.k8s.io/cluster-categories-vyrr41-wmd-6jl7t-pkg6j.17eb5e1c50991ea5
event.events.k8s.io/cluster-categories-vyrr41-wmd-6jl7t.17eb5dec9aeaf3b5
event.events.k8s.io/cluster-categories-vyrr41-wmd.17eb5dec7d9189db
event.events.k8s.io/cluster-categories-vyrr41-wmd.17eb5dec964bed1f
event.events.k8s.io/cluster-categories-vyrr41.17eb5dec7784f174
event.events.k8s.io/cluster-categories-vyrr41.17eb5df3904b52a9
event.events.k8s.io/cluster-categories-vyrr41.17eb5df3905e5c4a
event.events.k8s.io/cluster-categories-vyrr41.17eb5e0dc8699dfa
event.events.k8s.io/cluster-categories-vyrr41.17eb5e14b1fe375a
event.events.k8s.io/cluster-categories-vyrr41.17eb5e274200e750
Force-pushed 8461c56 to 1912073 (Compare)
With DeleteAllClustersAndWait, the test passes:

Nutanix categories Create and delete 2 clusters with same name with default cluster categories and should succeed [capx-feature-test, categories, same-name-clusters]
/Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/categories_test.go:179
STEP: Creating a namespace for hosting the "cluster-categories" test spec @ 08/13/24 12:18:23.924
INFO: Creating namespace cluster-categories-73xue6
INFO: Creating event watcher for namespace "cluster-categories-73xue6"
STEP: Creating a workload cluster 1 @ 08/13/24 12:18:23.942
INFO: Creating the workload cluster with name "cluster-categories-xei72j" using the "(default)" template (Kubernetes v1.30.0, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster cluster-categories-xei72j --infrastructure (default) --kubernetes-version v1.30.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default)
INFO: Creating the workload cluster with name "cluster-categories-xei72j" from the provided yaml
INFO: Applying the cluster template yaml of cluster cluster-categories-73xue6/cluster-categories-xei72j
Running kubectl apply --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind2791819926 -f -
stdout:
configmap/cluster-categories-xei72j-pc-trusted-ca-bundle created
configmap/nutanix-ccm created
configmap/cni-cluster-categories-xei72j-crs-cni created
secret/cluster-categories-xei72j created
secret/nutanix-ccm-secret created
clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created
clusterresourceset.addons.cluster.x-k8s.io/cluster-categories-xei72j-crs-cni created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-categories-xei72j-kcfg-0 created
cluster.cluster.x-k8s.io/cluster-categories-xei72j created
machinedeployment.cluster.x-k8s.io/cluster-categories-xei72j-wmd created
machinehealthcheck.cluster.x-k8s.io/cluster-categories-xei72j-mhc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-categories-xei72j-kcp created nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-categories-xei72j created nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-categories-xei72j-mt-0 created INFO: Waiting for the cluster infrastructure of cluster cluster-categories-73xue6/cluster-categories-xei72j to be provisioned STEP: Waiting for cluster to enter the provisioned phase @ 08/13/24 12:18:26.792 INFO: Waiting for control plane of cluster cluster-categories-73xue6/cluster-categories-xei72j to be initialized INFO: Waiting for the first control plane machine managed by cluster-categories-73xue6/cluster-categories-xei72j-kcp to be provisioned STEP: Waiting for one control plane node to exist @ 08/13/24 12:18:36.813 INFO: Waiting for control plane of cluster cluster-categories-73xue6/cluster-categories-xei72j to be ready INFO: Waiting for control plane cluster-categories-73xue6/cluster-categories-xei72j-kcp to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready @ 08/13/24 12:20:26.932 STEP: Checking all the control plane machines are in the expected failure domains @ 08/13/24 12:20:26.949 INFO: Waiting for the machine deployments of cluster cluster-categories-73xue6/cluster-categories-xei72j to be provisioned STEP: Waiting for the workload nodes to exist @ 08/13/24 12:20:26.974 STEP: Checking all the machines controlled by cluster-categories-xei72j-wmd are in the "" failure domain @ 08/13/24 12:20:26.996 INFO: Waiting for the machine pools of cluster cluster-categories-73xue6/cluster-categories-xei72j to be provisioned STEP: Checking cluster category condition is true @ 08/13/24 12:20:27.022 STEP: Checking if a category was created @ 08/13/24 12:20:27.034 STEP: Checking if there are VMs assigned to this category @ 08/13/24 12:20:27.409 STEP: Setting different Control plane endpoint IP for 2nd cluster @ 08/13/24 12:20:29.229 STEP: configure env for 
2nd cluster @ 08/13/24 12:20:29.229 STEP: Creating a namespace for hosting the "cluster-categories-2" test spec @ 08/13/24 12:20:29.229 INFO: Creating namespace cluster-categories-2-t9qv7b INFO: Creating event watcher for namespace "cluster-categories-2-t9qv7b" STEP: Creating a workload cluster 2 with same name @ 08/13/24 12:20:29.242 INFO: Creating the workload cluster with name "cluster-categories-xei72j" using the "(default)" template (Kubernetes v1.30.0, 1 control-plane machines, 1 worker machines) INFO: Getting the cluster template yaml INFO: clusterctl config cluster cluster-categories-xei72j --infrastructure (default) --kubernetes-version v1.30.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor (default) INFO: Creating the workload cluster with name "cluster-categories-xei72j" from the provided yaml INFO: Applying the cluster template yaml of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j Running kubectl apply --kubeconfig /var/folders/l3/8dxq0z892n992ffkmb33003r0000gn/T/e2e-kind2791819926 -f - stdout: configmap/cluster-categories-xei72j-pc-trusted-ca-bundle created configmap/nutanix-ccm created configmap/cni-cluster-categories-xei72j-crs-cni created secret/cluster-categories-xei72j created secret/nutanix-ccm-secret created clusterresourceset.addons.cluster.x-k8s.io/nutanix-ccm-crs created clusterresourceset.addons.cluster.x-k8s.io/cluster-categories-xei72j-crs-cni created kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-categories-xei72j-kcfg-0 created cluster.cluster.x-k8s.io/cluster-categories-xei72j created machinedeployment.cluster.x-k8s.io/cluster-categories-xei72j-wmd created machinehealthcheck.cluster.x-k8s.io/cluster-categories-xei72j-mhc created kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-categories-xei72j-kcp created nutanixcluster.infrastructure.cluster.x-k8s.io/cluster-categories-xei72j created nutanixmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-categories-xei72j-mt-0 created INFO: 
Waiting for the cluster infrastructure of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j to be provisioned STEP: Waiting for cluster to enter the provisioned phase @ 08/13/24 12:20:30.735 INFO: Waiting for control plane of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j to be initialized INFO: Waiting for the first control plane machine managed by cluster-categories-2-t9qv7b/cluster-categories-xei72j-kcp to be provisioned STEP: Waiting for one control plane node to exist @ 08/13/24 12:21:10.767 INFO: Waiting for control plane of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j to be ready INFO: Waiting for control plane cluster-categories-2-t9qv7b/cluster-categories-xei72j-kcp to be ready (implies underlying nodes to be ready as well) STEP: Waiting for the control plane to be ready @ 08/13/24 12:23:10.862 STEP: Checking all the control plane machines are in the expected failure domains @ 08/13/24 12:23:10.871 INFO: Waiting for the machine deployments of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j to be provisioned STEP: Waiting for the workload nodes to exist @ 08/13/24 12:23:10.883 STEP: Checking all the machines controlled by cluster-categories-xei72j-wmd are in the "" failure domain @ 08/13/24 12:23:50.945 INFO: Waiting for the machine pools of cluster cluster-categories-2-t9qv7b/cluster-categories-xei72j to be provisioned STEP: Checking cluster category condition is true @ 08/13/24 12:23:50.97 STEP: Checking if a category was created @ 08/13/24 12:23:50.979 STEP: Checking if there are VMs assigned to this category @ 08/13/24 12:23:51.39 STEP: unconfigure 2nd clusters @ 08/13/24 12:23:53.235 STEP: Set original Control plane endpoint IP @ 08/13/24 12:23:53.235 STEP: Dumping logs from the "cluster-categories-xei72j" workload cluster @ 08/13/24 12:23:53.236 Failed to get logs for Machine cluster-categories-xei72j-kcp-5cx5g, Cluster cluster-categories-73xue6/cluster-categories-xei72j: [error creating 
container exec: Error response from daemon: No such container: cluster-categories-xei72j-kcp-5cx5g, : error creating container exec: Error response from daemon: No such container: cluster-categories-xei72j-kcp-5cx5g] Failed to get logs for Machine cluster-categories-xei72j-wmd-ckg9b-clnhh, Cluster cluster-categories-73xue6/cluster-categories-xei72j: [error creating container exec: Error response from daemon: No such container: cluster-categories-xei72j-wmd-ckg9b-clnhh, : error creating container exec: Error response from daemon: No such container: cluster-categories-xei72j-wmd-ckg9b-clnhh] Failed to get infrastructure logs for Cluster cluster-categories-73xue6/cluster-categories-xei72j: failed to inspect container "cluster-categories-xei72j-lb": Error response from daemon: No such container: cluster-categories-xei72j-lb STEP: Dumping all the Cluster API resources in the "cluster-categories-73xue6" namespace @ 08/13/24 12:23:53.332 STEP: Deleting cluster cluster-categories-73xue6/cluster-categories-xei72j @ 08/13/24 12:23:53.657 STEP: Deleting cluster cluster-categories-73xue6/cluster-categories-xei72j @ 08/13/24 12:23:53.665 INFO: Waiting for the Cluster cluster-categories-73xue6/cluster-categories-xei72j to be deleted STEP: Waiting for cluster cluster-categories-73xue6/cluster-categories-xei72j to be deleted @ 08/13/24 12:23:53.68 STEP: Deleting namespace used for hosting the "cluster-categories" test spec @ 08/13/24 12:25:54.023 INFO: Deleting namespace cluster-categories-73xue6 • [449.877 seconds] ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] /Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118 STEP: Dumping logs from the bootstrap cluster @ 08/13/24 12:25:54.043 STEP: Tearing down the management cluster @ 08/13/24 12:25:54.727 INFO: Error getting pod 
capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager-659b5fb778-m75fg, container manager: Get "https://127.0.0.1:61171/api/v1/namespaces/capi-kubeadm-bootstrap-system/pods/capi-kubeadm-bootstrap-controller-manager-659b5fb778-m75fg": dial tcp 127.0.0.1:61171: connect: connection refused - error from a previous attempt: EOF INFO: Error getting pod capx-system/capx-controller-manager-5799bf7876-55nw9, container kube-rbac-proxy: Get "https://127.0.0.1:61171/api/v1/namespaces/capx-system/pods/capx-controller-manager-5799bf7876-55nw9": dial tcp 127.0.0.1:61171: connect: connection refused - error from a previous attempt: EOF INFO: Error getting pod capx-system/capx-controller-manager-5799bf7876-55nw9, container manager: Get "https://127.0.0.1:61171/api/v1/namespaces/capx-system/pods/capx-controller-manager-5799bf7876-55nw9": dial tcp 127.0.0.1:61171: connect: connection refused - error from a previous attempt: EOF INFO: Error getting pod capi-system/capi-controller-manager-6f69847fd8-vmq4z, container manager: Get "https://127.0.0.1:61171/api/v1/namespaces/capi-system/pods/capi-controller-manager-6f69847fd8-vmq4z": dial tcp 127.0.0.1:61171: connect: connection refused - error from a previous attempt: EOF INFO: Error getting pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-696879d594-hztz8, container manager: Get "https://127.0.0.1:61171/api/v1/namespaces/capi-kubeadm-control-plane-system/pods/capi-kubeadm-control-plane-controller-manager-696879d594-hztz8": dial tcp 127.0.0.1:61171: connect: connection refused - error from a previous attempt: EOF [SynchronizedAfterSuite] PASSED [3.767 seconds] ------------------------------ [ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo [ReportAfterSuite] PASSED [0.029 seconds] ------------------------------ Ran 1 of 102 Specs in 585.225 seconds SUCCESS! 
-- 1 Passed | 0 Failed | 0 Pending | 101 Skipped
PASS
Ginkgo ran 1 suite in 10m0.831275189s
Test Suite Passed
make[1]: Leaving directory '/Users/deepak.muley/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'
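The situation these tests exercise can be thought of as simple reference counting: two clusters created with the same name share one category value, so the value can only be deleted once the last tagged VM is gone. The sketch below is purely illustrative (the `categoryRegistry` type and its methods are made up for this example, not CAPX or Prism code):

```go
package main

import (
	"errors"
	"fmt"
)

// errInUse models the error returned when a category value still has
// entities assigned to it.
var errInUse = errors.New("category value still has members")

// categoryRegistry models Prism Central category values and the number
// of VMs still tagged with each (illustrative only).
type categoryRegistry struct {
	members map[string]int // value -> count of VMs still tagged
}

// deleteValue removes a category value, but only when no VMs reference it.
func (r *categoryRegistry) deleteValue(value string) error {
	if r.members[value] > 0 {
		return fmt.Errorf("%s: %w", value, errInUse)
	}
	delete(r.members, value)
	return nil
}

func main() {
	// Two clusters with the same name share one category value, tagging
	// one VM each.
	reg := &categoryRegistry{members: map[string]int{"cluster-a": 2}}

	// Deleting the 2nd cluster untags its VM, then tries to delete the
	// value -- this fails because the 1st cluster's VM is still tagged.
	reg.members["cluster-a"]--
	fmt.Println(reg.deleteValue("cluster-a"))

	// Only after the 1st cluster is also gone can the value be deleted.
	reg.members["cluster-a"]--
	fmt.Println(reg.deleteValue("cluster-a"))
}
```

This is why deleting only the 2nd cluster cannot also remove the shared category value without breaking the 1st cluster's tagging.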
Force-pushed 1912073 to 66261fe (Compare)
Also ran the following:

make test-e2e-calico LABEL_FILTERS="categories"
...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.015 seconds]
------------------------------
Ran 4 of 102 Specs in 1134.464 seconds
SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 98 Skipped
PASS
We need to fix the e2e test.
Force-pushed eff4046 to 6491af4 (Compare)
Force-pushed 6491af4 to bd3988a (Compare)
Added a basic test which creates 2 clusters with the same name and deletes the last one.
Force-pushed bd3988a to f7cb2fc (Compare)
What this PR does / why we need it:
It was observed that if we create 2 clusters with the same name, a single category key:value is used to tag the VMs. When we delete one of the clusters, deleting the category key:value fails because other resources are still tagged with it. This PR ignores that error for now, until we come up with a better strategy.
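The fix can be sketched as best-effort category cleanup in the controller's delete path. The snippet below is a simplified, hypothetical version (the function names, the `inUse` flag, and the "in use" error text are assumptions for illustration, not the provider's actual API): an "in use" failure is logged and swallowed so cluster deletion can proceed, while any other error still aborts reconciliation.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// errCategoryInUse stands in for the error Prism Central returns when a
// category value still has entities assigned to it (hypothetical message).
var errCategoryInUse = errors.New("category value is in use by existing entities")

// deleteCategoryValue stands in for the real Prism API call; the inUse
// flag simulates another cluster's VMs still carrying the tag.
func deleteCategoryValue(key, value string, inUse bool) error {
	if inUse {
		return fmt.Errorf("delete %s:%s: %w", key, value, errCategoryInUse)
	}
	return nil
}

// cleanupCategories ignores "in use" delete errors so that deleting one
// of two same-name clusters does not block reconciliation; any other
// error is returned to the caller.
func cleanupCategories(key, value string, inUse bool) error {
	if err := deleteCategoryValue(key, value, inUse); err != nil {
		if strings.Contains(err.Error(), "in use") {
			fmt.Printf("ignoring category delete error: %v\n", err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// A second cluster with the same name still tags VMs with this value,
	// so the delete fails -- but cluster deletion proceeds anyway.
	if err := cleanupCategories("kubernetes-cluster", "cluster-categories-vyrr41", true); err != nil {
		panic(err)
	}
	fmt.Println("cluster deletion proceeds")
}
```

The trade-off, as noted in the review, is that orphaned category values can accumulate until a better ownership strategy for default categories exists.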
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes ncn-101935
How Has This Been Tested?:
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration and test output.
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
Release note: