
fix(bug): Remove FIQL based filters from list calls #485

Merged: 2 commits merged into main from jira/NCN-102972-alternative on Oct 2, 2024

Conversation

@thunderboltsid (Contributor) commented Oct 2, 2024

FIQL filtering has been deprecated by the APIs for some time now, so we should stop using it altogether. Previously we did two-pass filtering (a FIQL server-side filter followed by a client-side filter); this change removes the first pass, so all filtering is now done by what was previously the second-pass client-side filter.
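To illustrate the idea, here is a minimal sketch of the single-pass, client-side approach: list everything, then match names with a plain string comparison. The `Subnet` type and `filterSubnetsByName` helper are hypothetical stand-ins for the provider's real types from the Nutanix client SDK, not the actual code in this PR.

```go
package main

import "fmt"

// Subnet stands in for the Prism API's subnet object; the real provider
// uses types from the Nutanix client SDK, so this struct is illustrative.
type Subnet struct {
	Name string
	UUID string
}

// filterSubnetsByName does an exact, client-side name match. Unlike a
// FIQL expression, it needs no escaping, so names containing characters
// such as '|', '(' or ')' are handled correctly.
func filterSubnetsByName(subnets []Subnet, name string) []Subnet {
	var out []Subnet
	for _, s := range subnets {
		if s.Name == name {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	subnets := []Subnet{
		{Name: "vlan173|ncn|(dev)|sandbox", UUID: "uuid-1"},
		{Name: "default", UUID: "uuid-2"},
	}
	matches := filterSubnetsByName(subnets, "vlan173|ncn|(dev)|sandbox")
	fmt.Println(len(matches), matches[0].UUID) // → 1 uuid-1
}
```

The trade-off is that the client now fetches the full (paginated) list instead of letting the server narrow it, but it removes the deprecated FIQL dependency and the escaping pitfalls that came with it.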

How has this been tested?
QuickStart test with a problematic subnet name

  • Create a new subnet with escapable characters in the name: vlan173|ncn|(dev)|sandbox
  • Create a new cluster with a machine deployment pointing to that subnet
  • Observe the cluster come up successfully
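For context on why a name like the one above is problematic for the removed first pass: FIQL reserves characters such as `;`, `,` and parentheses for its grammar, so values containing them must be escaped before being interpolated into a filter string. The sketch below (a hypothetical `naiveFIQL` builder, not the provider's real query code) shows how naive interpolation embeds grouping characters straight into the query, which a strict FIQL parser can misread or reject.

```go
package main

import "fmt"

// naiveFIQL interpolates a raw value into a FIQL comparison without any
// escaping. For values containing FIQL grammar characters (parentheses,
// ';', ','), the resulting query string is ambiguous. Illustrative only.
func naiveFIQL(field, value string) string {
	return fmt.Sprintf("%s==%s", field, value)
}

func main() {
	// The parentheses in the subnet name land unescaped in the query.
	fmt.Println(naiveFIQL("name", "vlan173|ncn|(dev)|sandbox"))
	// → name==vlan173|ncn|(dev)|sandbox
}
```

Client-side exact matching sidesteps this entirely, since no query string is built from user-controlled names.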
$ LABEL_FILTERS="quickstart && clusterclass" make test-e2e-calico
CNI="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml" GIT_COMMIT="7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323" make test-e2e
make[1]: Entering directory '/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'
echo "Git commit hash: 7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323"
Git commit hash: 7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
KO_DOCKER_REPO=ko.local GOFLAGS="-ldflags=-X=main.gitCommitHash=7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323" ko build -B --platform=linux/amd64 -t e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323 .
2024/10/02 16:01:25 Using base cgr.dev/chainguard/static:latest@sha256:d2a76860057c1260ea5dc8ae4e18beff5ccfb1b67004295c9ab6951833e93de7 for github.com/nutanix-cloud-native/cluster-api-provider-nutanix
2024/10/02 16:01:26 Building github.com/nutanix-cloud-native/cluster-api-provider-nutanix for linux/amd64
2024/10/02 16:01:30 Loading ko.local/cluster-api-provider-nutanix:27c8174cb92bb1fc9399c2cfc00acc60460ad55f08b6cab8e3181e993a0c551e
2024/10/02 16:01:33 Loaded ko.local/cluster-api-provider-nutanix:27c8174cb92bb1fc9399c2cfc00acc60460ad55f08b6cab8e3181e993a0c551e
2024/10/02 16:01:33 Adding tag e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
2024/10/02 16:01:33 Added tag e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
ko.local/cluster-api-provider-nutanix:27c8174cb92bb1fc9399c2cfc00acc60460ad55f08b6cab8e3181e993a0c551e
docker tag ko.local/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323 harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-secret.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nutanix-cluster.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-additional-categories.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-no-nmt.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-project.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-upgrades.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-md-remediation.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-remediation.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-kcp-scale-in.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi3 --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-csi3.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-failure-domains --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-failure-domains.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-clusterclass --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/clusterclass-nutanix-quick-start.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1beta1/cluster-template-topology.yaml
kustomize build /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1.5.2/cluster-template --load-restrictor LoadRestrictionsNone > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/infrastructure-nutanix/v1.5.2/cluster-template.yaml
kustomize build templates/base > templates/cluster-template.yaml
kustomize build templates/csi > templates/cluster-template-csi.yaml
kustomize build templates/csi3 > templates/cluster-template-csi3.yaml
kustomize build templates/clusterclass > templates/cluster-template-clusterclass.yaml
kustomize build templates/topology > templates/cluster-template-topology.yaml
echo "Image tag for E2E test is e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323"
Image tag for E2E test is e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
LOCAL_PROVIDER_VERSION=v1.5.99 \
	MANAGER_IMAGE=harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323 \
	envsubst < /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml > /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp
docker tag ko.local/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323 harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
docker push harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323
The push refers to repository [harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix]
1d25d48e5a62: Pushed
ffe56a1c5f38: Layer already exists
e9b47466a4e4: Layer already exists
e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323: digest: sha256:d730598b49ff53ba64cf6d7d831a2c1fc758d972331f6aeebdd43ab580271ddc size: 946
mkdir -p /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts
NUTANIX_LOG_LEVEL=debug ginkgo -v \
	--trace \
	--tags=e2e \
	--label-filter="!only-for-validation && quickstart && clusterclass" \
	--skip="" \
	--focus="" \
	--nodes=1 \
	--no-color=false \
	--output-dir="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
	--junit-report="junit.e2e_suite.1.xml" \
	--timeout="24h" \
	 \
	./test/e2e -- \
	-e2e.artifacts-folder="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" \
	-e2e.config="/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp" \
	-e2e.skip-resource-cleanup=false \
	-e2e.use-existing-cluster=false
Running Suite: capx-e2e - /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e
========================================================================================================================
Random Seed: 1727877712

Will run 1 of 102 specs
------------------------------
[SynchronizedBeforeSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:63
  STEP: Initializing a runtime.Scheme with all the GVK relevant for this test @ 10/02/24 16:01:57.739
  STEP: Loading the e2e test configuration from "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/config/nutanix.yaml.tmp" @ 10/02/24 16:01:57.739
  STEP: Creating a clusterctl local repository into "/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts" @ 10/02/24 16:01:57.74
  STEP: Reading the ClusterResourceSet manifest /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/data/cni/calico/calico.yaml @ 10/02/24 16:01:57.74
  STEP: Setting up the bootstrap cluster @ 10/02/24 16:02:03.759
  STEP: Creating the bootstrap cluster @ 10/02/24 16:02:03.759
  INFO: Creating a kind cluster with name "test-c0n360"
Creating cluster "test-c0n360" ...
 • Ensuring node image (kindest/node:v1.31.0) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 • Preparing nodes 📦   ...
 ✓ Preparing nodes 📦
 • Writing configuration 📜  ...
 ✓ Writing configuration 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
 • Installing CNI 🔌  ...
 ✓ Installing CNI 🔌
 • Installing StorageClass 💾  ...
 ✓ Installing StorageClass 💾
  INFO: The kubeconfig file for the kind cluster is /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind1902920900
  INFO: Loading image: "harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323"
  INFO: Image harbor.eng.nutanix.com/ncn-ci/cluster-api-provider-nutanix:e2e-7fd8c3bb8dedc105ce2fe9cb0bcd00c500a10323 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.7.6"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.7.6 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.6"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.7.6 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.6"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.7.6 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/cluster-api-controller:v1.8.3"
  INFO: Image registry.k8s.io/cluster-api/cluster-api-controller:v1.8.3 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.8.3"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-bootstrap-controller:v1.8.3 is present in local container image cache
  INFO: Loading image: "registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.8.3"
  INFO: Image registry.k8s.io/cluster-api/kubeadm-control-plane-controller:v1.8.3 is present in local container image cache
  STEP: Initializing the bootstrap cluster @ 10/02/24 16:02:23.527
  INFO: clusterctl init --config /Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/_artifacts/repository/clusterctl-config.yaml --kubeconfig /var/folders/g3/lb827bt96z10xz_m_c2xn93w0000gp/T/e2e-kind1902920900 --wait-providers --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure nutanix
  INFO: Waiting for provider controllers to be running
  STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available @ 10/02/24 16:03:00.265
  INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-67dd7486c5-jzhb9, container manager
  STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available @ 10/02/24 16:03:00.391
  INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-8599fdf449-69cft, container manager
  STEP: Waiting for deployment capi-system/capi-controller-manager to be available @ 10/02/24 16:03:00.401
  INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-694d45cc55-d98j7, container manager
  STEP: Waiting for deployment capx-system/capx-controller-manager to be available @ 10/02/24 16:03:00.411
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5bcff5b8c5-647fh, container kube-rbac-proxy
  INFO: Creating log watcher for controller capx-system/capx-controller-manager, pod capx-controller-manager-5bcff5b8c5-647fh, container manager
[SynchronizedBeforeSuite] PASSED [62.945 seconds]
------------------------------
SSSSSSSSS
------------------------------
When following the Cluster API quick-start with ClusterClass Should create a workload cluster [quickstart, clusterclass, capx-feature-test]
/Users/sid.shukla/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.8.1/e2e/quick_start.go:106
  STEP: Creating a namespace for hosting the "quick-start" test spec @ 10/02/24 16:03:00.685
  INFO: Creating namespace quick-start-cc3hpu
  INFO: Creating event watcher for namespace "quick-start-cc3hpu"
  STEP: Creating a workload cluster @ 10/02/24 16:03:00.697
  INFO: Creating the workload cluster with name "quick-start-lc9gox" using the "topology" template (Kubernetes v1.31.0, 1 control-plane machines, 1 worker machines)
  INFO: Getting the cluster template yaml
  INFO: clusterctl config cluster quick-start-lc9gox --infrastructure (default) --kubernetes-version v1.31.0 --control-plane-machine-count 1 --worker-machine-count 1 --flavor topology
  INFO: Creating the workload cluster with name "quick-start-lc9gox" from the provided yaml
  INFO: Applying the cluster template yaml of cluster quick-start-cc3hpu/quick-start-lc9gox
  INFO: Waiting for the cluster infrastructure of cluster quick-start-cc3hpu/quick-start-lc9gox to be provisioned
  STEP: Waiting for cluster to enter the provisioned phase @ 10/02/24 16:03:01.539
  INFO: Waiting for control plane of cluster quick-start-cc3hpu/quick-start-lc9gox to be initialized
  INFO: Waiting for the first control plane machine managed by quick-start-cc3hpu/quick-start-lc9gox-9p9r6 to be provisioned
  STEP: Waiting for one control plane node to exist @ 10/02/24 16:03:11.571
  INFO: Waiting for control plane of cluster quick-start-cc3hpu/quick-start-lc9gox to be ready
  INFO: Waiting for control plane quick-start-cc3hpu/quick-start-lc9gox-9p9r6 to be ready (implies underlying nodes to be ready as well)
  STEP: Waiting for the control plane to be ready @ 10/02/24 16:05:01.675
  STEP: Checking all the control plane machines are in the expected failure domains @ 10/02/24 16:05:01.684
  INFO: Waiting for the machine deployments of cluster quick-start-cc3hpu/quick-start-lc9gox to be provisioned
  STEP: Waiting for the workload nodes to exist @ 10/02/24 16:05:01.699
  STEP: Checking all the machines controlled by quick-start-lc9gox-md-0-8hf6q are in the "<None>" failure domain @ 10/02/24 16:05:01.705
  INFO: Waiting for the machine pools of cluster quick-start-cc3hpu/quick-start-lc9gox to be provisioned
  INFO: Calling PostMachinesProvisioned for cluster quick-start-cc3hpu/quick-start-lc9gox
  STEP: PASSED! @ 10/02/24 16:05:01.715
  STEP: Dumping logs from the "quick-start-lc9gox" workload cluster @ 10/02/24 16:05:01.715
Failed to get logs for Machine quick-start-lc9gox-9p9r6-74gpd, Cluster quick-start-cc3hpu/quick-start-lc9gox: [: error creating container exec: Error response from daemon: No such container: quick-start-lc9gox-9p9r6-74gpd, error creating container exec: Error response from daemon: No such container: quick-start-lc9gox-9p9r6-74gpd]
Failed to get logs for Machine quick-start-lc9gox-md-0-8hf6q-gz8xf-fzgfh, Cluster quick-start-cc3hpu/quick-start-lc9gox: [error creating container exec: Error response from daemon: No such container: quick-start-lc9gox-md-0-8hf6q-gz8xf-fzgfh, : error creating container exec: Error response from daemon: No such container: quick-start-lc9gox-md-0-8hf6q-gz8xf-fzgfh]
Failed to get infrastructure logs for Cluster quick-start-cc3hpu/quick-start-lc9gox: failed to inspect container "quick-start-lc9gox-lb": Error response from daemon: No such container: quick-start-lc9gox-lb
  STEP: Dumping all the Cluster API resources in the "quick-start-cc3hpu" namespace @ 10/02/24 16:05:01.747
  STEP: Dumping Pods and Nodes of Cluster quick-start-cc3hpu/quick-start-lc9gox @ 10/02/24 16:05:01.86
  STEP: Deleting cluster quick-start-cc3hpu/quick-start-lc9gox @ 10/02/24 16:05:03.468
  STEP: Deleting cluster quick-start-cc3hpu/quick-start-lc9gox @ 10/02/24 16:05:03.48
  INFO: Waiting for the Cluster quick-start-cc3hpu/quick-start-lc9gox to be deleted
  STEP: Waiting for cluster quick-start-cc3hpu/quick-start-lc9gox to be deleted @ 10/02/24 16:05:03.491
  STEP: Deleting namespace used for hosting the "quick-start" test spec @ 10/02/24 16:07:43.638
  INFO: Deleting namespace quick-start-cc3hpu
• [282.963 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix/test/e2e/e2e_suite_test.go:118
  STEP: Dumping logs from the bootstrap cluster @ 10/02/24 16:07:43.649
  STEP: Tearing down the management cluster @ 10/02/24 16:07:43.791
[SynchronizedAfterSuite] PASSED [0.628 seconds]
------------------------------
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.008 seconds]
------------------------------

Ran 1 of 102 Specs in 346.538 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 101 Skipped
PASS

Ginkgo ran 1 suite in 5m52.264835125s
Test Suite Passed
make[1]: Leaving directory '/Users/sid.shukla/go/src/github.com/nutanix-cloud-native/cluster-api-provider-nutanix'


codecov bot commented Oct 2, 2024

Codecov Report

Attention: Patch coverage is 0% with 11 lines in your changes missing coverage. Please review.

Project coverage is 33.11%. Comparing base (78f61d1) to head (7fd8c3b).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
controllers/helpers.go 0.00% 11 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #485      +/-   ##
==========================================
+ Coverage   33.00%   33.11%   +0.10%     
==========================================
  Files          17       17              
  Lines        1809     1803       -6     
==========================================
  Hits          597      597              
+ Misses       1192     1186       -6     
  Partials       20       20              


@thunderboltsid thunderboltsid requested review from adiantum, dkoshkin and jimmidyson and removed request for adiantum October 2, 2024 14:09
@adiantum (Contributor) left a comment:


LGTM, but please add unit tests in a follow-up PR.

@thunderboltsid thunderboltsid merged commit b983688 into main Oct 2, 2024
8 of 11 checks passed
@thunderboltsid thunderboltsid deleted the jira/NCN-102972-alternative branch October 2, 2024 15:34
@thunderboltsid thunderboltsid added the bug Something isn't working label Oct 2, 2024