Merge branch 'w/2.7/improvement/ZENKO-4919' into w/2.8/improvement/ZE…
francoisferrand committed Nov 18, 2024
2 parents 223afff + 9a8c939 commit 80c51a4
Showing 14 changed files with 280 additions and 54 deletions.
13 changes: 8 additions & 5 deletions .github/actions/archive-artifacts/action.yaml
Original file line number Diff line number Diff line change
@@ -74,26 +74,29 @@ runs:
set -exu
KAFKA=$(kubectl get pods -n ${NAMESPACE} -lkafka_cr=${ZENKO_NAME}-base-queue -o jsonpath='{.items[0].metadata.name}')
+KAFKA_PATH="/tmp/artifacts/data/${STAGE}/kafka"
+mkdir -p ${KAFKA_PATH}
kubectl exec -in ${NAMESPACE} ${KAFKA} -c kafka -- \
env KAFKA_OPTS= kafka-topics.sh --bootstrap-server :9092 --list \
-  > /tmp/artifacts/data/${STAGE}/kafka-topics.log
+  > ${KAFKA_PATH}/kafka-topics.log
kubectl exec -in ${NAMESPACE} ${KAFKA} -c kafka -- \
env KAFKA_OPTS= kafka-consumer-groups.sh --bootstrap-server :9092 --list \
-  > /tmp/artifacts/data/${STAGE}/kafka-consumer-groups.log
+  > ${KAFKA_PATH}/kafka-consumer-groups.log
kubectl exec -in ${NAMESPACE} ${KAFKA} -c kafka -- \
env KAFKA_OPTS= kafka-consumer-groups.sh --bootstrap-server :9092 --describe --all-groups \
-  > /tmp/artifacts/data/${STAGE}/kafka-offsets.log
+  > ${KAFKA_PATH}/kafka-offsets.log
KAFKA_SERVICE=$(kubectl get services -n ${NAMESPACE} -lkafka_cr=${ZENKO_NAME}-base-queue -o jsonpath='{.items[0].metadata.name}')
kubectl run -n ${NAMESPACE} kcat --image=edenhill/kcat:1.7.1 --restart=Never --command -- sleep 300
kubectl wait -n ${NAMESPACE} pod kcat --for=condition=ready
-cat /tmp/artifacts/data/${STAGE}/kafka-topics.log | grep -v '^__' | xargs -P 15 -I {} \
+cat ${KAFKA_PATH}/kafka-topics.log | grep -v '^__' | xargs -P 15 -I {} \
sh -c "kubectl exec -i -n ${NAMESPACE} kcat -- \
kcat -L -b ${KAFKA_SERVICE} -t {} -C -o beginning -e -q -J \
-    > /tmp/artifacts/data/${STAGE}/kafka-messages-{}.log"
+    > ${KAFKA_PATH}/kafka-messages-{}.log"
env:
STAGE: ${{ inputs.stage }}
NAMESPACE: ${{ inputs.zenko-namespace }}
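The step above dumps every non-internal topic in parallel with `xargs -P`. A rough standalone sketch of the same shell pattern (the kubectl/kcat call is replaced by an `echo` stub, and the topic names are made up, so this runs anywhere):

```shell
#!/bin/sh
# Fan-out pattern: filter out internal Kafka topics (names starting
# with "__"), then handle each remaining topic in parallel, writing
# one dump file per topic.
set -eu
OUT="${TMPDIR:-/tmp}/kafka-dump-demo"
mkdir -p "$OUT"
printf '%s\n' __consumer_offsets backbeat-replication backbeat-gc \
    > "$OUT/kafka-topics.log"
# -P 4 runs up to four dumps concurrently; {} is the topic name.
grep -v '^__' "$OUT/kafka-topics.log" | xargs -P 4 -I {} \
    sh -c "echo 'messages of {}' > '$OUT/kafka-messages-{}.log'"
```

The `-P 15` in the action is the same idea tuned for a CI runner with more cores.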
5 changes: 3 additions & 2 deletions .github/actions/deploy/action.yaml
@@ -64,9 +64,10 @@ runs:
docker pull ${OPERATOR_IMAGE_NAME}:${OPERATOR_IMAGE_TAG}
kind load docker-image ${OPERATOR_IMAGE_NAME}:${OPERATOR_IMAGE_TAG}
cd ./.github/scripts/end2end
-git clone https://${GIT_ACCESS_TOKEN}@github.com/scality/zenko-operator.git operator
+git init operator
 cd operator
-git checkout ${OPERATOR_IMAGE_TAG}
+git fetch --depth 1 --no-tags https://${GIT_ACCESS_TOKEN}@github.com/scality/zenko-operator.git ${OPERATOR_IMAGE_TAG}
+git checkout FETCH_HEAD
tilt ci
env:
OPERATOR_IMAGE_TAG: ${{ inputs.zkop_tag }}
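This hunk swaps a full `git clone` for a depth-1 fetch of a single ref, avoiding the download of the whole zenko-operator history and tag namespace. A minimal self-contained sketch of the pattern, using a throwaway local repository in place of the real remote:

```shell
#!/bin/sh
set -eu
WORK=$(mktemp -d)
# Throwaway "remote" with a single tagged commit to fetch from.
git init -q "$WORK/origin"
git -C "$WORK/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial"
git -C "$WORK/origin" tag v0.0.1
# The pattern from the action: empty repo, shallow fetch of exactly
# one ref, then detach onto FETCH_HEAD.
git init -q "$WORK/operator"
cd "$WORK/operator"
git fetch -q --depth 1 --no-tags "file://$WORK/origin" v0.0.1
git checkout -q FETCH_HEAD
```

`FETCH_HEAD` records what the last `git fetch` brought in, so no local branch or tag needs to exist for the checkout to work.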
6 changes: 6 additions & 0 deletions .github/scripts/end2end/configs/prometheus.yaml
@@ -36,13 +36,19 @@ spec:
evaluationInterval: 30s
logFormat: logfmt
logLevel: info
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
metalk8s.scality.com/monitor: ""
podMonitorNamespaceSelector: {}
podMonitorSelector:
matchLabels:
metalk8s.scality.com/monitor: ""
probeNamespaceSelector: {}
probeSelector:
matchLabels:
metalk8s.scality.com/monitor: ""
ruleNamespaceSelector: {}
ruleSelector:
matchLabels:
metalk8s.scality.com/monitor: ""
17 changes: 9 additions & 8 deletions .github/scripts/end2end/install-kind-dependencies.sh
@@ -58,14 +58,15 @@ kubectl rollout status -n ingress-nginx deployment/ingress-nginx-controller --ti

# cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/${CERT_MANAGER_VERSION}/cert-manager.yaml --wait
-# kubectl apply --validate=false -f - <<EOF
-# apiVersion: cert-manager.io/v1
-# kind: ClusterIssuer
-# metadata:
-#   name: artesca-root-ca-issuer
-# spec:
-#   selfSigned: {}
-# EOF
+kubectl rollout status -n cert-manager deployment/cert-manager-webhook --timeout=10m
+kubectl apply -f - <<EOF
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: artesca-root-ca-issuer
+spec:
+  selfSigned: {}
+EOF
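The new `rollout status` wait matters because cert-manager resources such as the `ClusterIssuer` are validated by the cert-manager-webhook: applying them before the webhook serves traffic tends to fail with a connection error. Where no single deployment maps cleanly to "ready", a generic retry loop is a common alternative; this is a hedged sketch, with a marker file standing in for a real readiness probe:

```shell
#!/bin/sh
set -eu
# retry_until <max_attempts> <delay_seconds> <cmd...>: run cmd until it
# succeeds, sleeping between attempts; fail after max_attempts tries.
retry_until() {
    attempts=$1; delay=$2; shift 2
    i=1
    until "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            return 1
        fi
        i=$((i + 1))
        sleep "$delay"
    done
}
# Example probe: succeeds only once a background task drops a marker.
MARKER="${TMPDIR:-/tmp}/webhook-ready.$$"
( sleep 1; : > "$MARKER" ) &
retry_until 10 1 test -e "$MARKER"
echo "webhook ready, safe to apply issuers"
```

In the script above, `kubectl rollout status --timeout=10m` plays exactly this role for the webhook deployment.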

# prometheus
# last-applied-configuration can end up larger than 256kB which is too large for an annotation
1 change: 1 addition & 0 deletions .github/scripts/end2end/patch-coredns.sh
@@ -33,6 +33,7 @@ corefile="
rewrite name exact s3.dr.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact sts.dr.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact iam.dr.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
+rewrite name exact prom.dr.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact shell-ui.dr.zenko.local ingress-nginx-controller.ingress-nginx.svc.cluster.local
rewrite name exact website.mywebsite.com ingress-nginx-controller.ingress-nginx.svc.cluster.local
kubernetes cluster.local in-addr.arpa ip6.arpa {
9 changes: 5 additions & 4 deletions .github/workflows/alerts.yaml
@@ -1,10 +1,11 @@
name: Test alerts

on:
-  push:
-    branches-ignore:
-      - 'development/**'
-      - 'q/*'
+  workflow_call:
+    secrets:
+      GIT_ACCESS_TOKEN:
+        description: 'GitHub token'
+        required: true

jobs:
run-alert-tests:
15 changes: 10 additions & 5 deletions .github/workflows/end2end.yaml
@@ -126,6 +126,10 @@ jobs:
- name: Verify monitoring dashboard versions
run: bash ./.github/scripts/check_versions.sh

+  check-alerts:
+    uses: ./.github/workflows/alerts.yaml
+    secrets: inherit

check-workflows:
runs-on: ubuntu-22.04
steps:
@@ -388,7 +392,7 @@ jobs:
cache-to: type=gha,mode=max,scope=end2end-ctst

end2end-http:
-    needs: [build-kafka, build-test-image, check-dashboard-versions]
+    needs: [build-kafka, build-test-image]
runs-on:
- ubuntu
- focal
@@ -437,7 +441,7 @@ jobs:
run: kind delete cluster

end2end-pra:
-    needs: [build-kafka, check-dashboard-versions, lint-and-build-ctst]
+    needs: [build-kafka, lint-and-build-ctst]
runs-on: ubuntu-22.04-16core
env:
GIT_ACCESS_TOKEN: ${{ secrets.GIT_ACCESS_TOKEN }}
@@ -497,7 +501,7 @@ jobs:
run: kind delete cluster

end2end-https:
-    needs: [build-kafka, build-test-image, check-dashboard-versions]
+    needs: [build-kafka, build-test-image]
runs-on:
- ubuntu
- focal
@@ -549,7 +553,7 @@ jobs:
run: kind delete cluster

end2end-sharded:
-    needs: [build-kafka, build-test-image, check-dashboard-versions]
+    needs: [build-kafka, build-test-image]
runs-on:
- ubuntu-22.04-8core
# Enable this for Ring-based tests
@@ -589,7 +593,7 @@ jobs:
run: kind delete cluster

ctst-end2end-sharded:
-    needs: [build-kafka, lint-and-build-ctst, check-dashboard-versions]
+    needs: [build-kafka, lint-and-build-ctst]
runs-on:
- ubuntu-22.04-8core
steps:
@@ -638,6 +642,7 @@ jobs:
write-final-status:
runs-on: ubuntu-latest
needs:
+      - check-alerts
- check-dashboard-versions
- check-workflows
- build-doc
8 changes: 4 additions & 4 deletions solution/deps.yaml
@@ -6,7 +6,7 @@ backbeat:
dashboard: backbeat/backbeat-dashboards
image: backbeat
policy: backbeat/backbeat-policies
-  tag: 8.6.49
+  tag: 8.6.51
envsubst: BACKBEAT_TAG
busybox:
image: busybox
@@ -16,7 +16,7 @@ cloudserver:
sourceRegistry: ghcr.io/scality
dashboard: cloudserver/cloudserver-dashboards
image: cloudserver
-  tag: 8.8.35
+  tag: 8.8.36
envsubst: CLOUDSERVER_TAG
drctl:
sourceRegistry: ghcr.io/scality
@@ -113,7 +113,7 @@ sorbet:
policy: sorbet/sorbet-policies
dashboard: sorbet/sorbet-dashboards
image: sorbet
-  tag: v1.1.12
+  tag: v1.1.13
envsubst: SORBET_TAG
stern: # tail any pod logs with pattern matching
tag: 1.30.0
@@ -136,7 +136,7 @@ vault:
zenko-operator:
sourceRegistry: ghcr.io/scality
image: zenko-operator
-  tag: v1.6.3
+  tag: v1.6.5
envsubst: ZENKO_OPERATOR_TAG
zenko-ui:
sourceRegistry: ghcr.io/scality
35 changes: 30 additions & 5 deletions tests/ctst/common/common.ts
@@ -7,6 +7,8 @@ import assert from 'assert';
import { Admin, Kafka } from 'kafkajs';
import {
createBucketWithConfiguration,
+    putMpuObject,
+    copyObject,
putObject,
runActionAgainstBucket,
getObjectNameWithBackendFlakiness,
@@ -62,7 +64,7 @@ export async function cleanS3Bucket(
}

async function addMultipleObjects(this: Zenko, numberObjects: number,
-    objectName: string, sizeBytes: number, userMD?: string) {
+    objectName: string, sizeBytes: number, userMD?: string, parts?: number) {
let lastResult = null;
for (let i = 1; i <= numberObjects; i++) {
this.resetCommand();
@@ -74,7 +76,9 @@ async function addMultipleObjects(this: Zenko, numberObjects: number,
if (userMD) {
this.addToSaved('userMetadata', userMD);
}
-        lastResult = await putObject(this, objectNameFinal);
+        lastResult = parts === undefined
+            ? await putObject(this, objectNameFinal)
+            : await putMpuObject(this, parts, objectNameFinal);
}
return lastResult;
}
@@ -144,7 +148,20 @@ Given('an existing bucket {string} {string} versioning, {string} ObjectLock {str

Given('{int} objects {string} of size {int} bytes',
async function (this: Zenko, numberObjects: number, objectName: string, sizeBytes: number) {
-        await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes);
+        const result = await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes);
+        assert.ifError(result?.stderr || result?.err);
});

+Given('{int} mpu objects {string} of size {int} bytes',
+    async function (this: Zenko, numberObjects: number, objectName: string, sizeBytes: number) {
+        const result = await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes, undefined, 1);
+        assert.ifError(result?.stderr || result?.err);
+    });
+
+Given('{string} is copied to {string}',
+    async function (this: Zenko, sourceObject: string, destinationObject: string) {
+        const result = await copyObject(this, sourceObject, destinationObject);
+        assert.ifError(result?.stderr || result?.err);
+    });

Given('{int} objects {string} of size {int} bytes on {string} site',
@@ -156,12 +173,20 @@ Given('{int} objects {string} of size {int} bytes on {string} site',
} else {
Identity.useIdentity(IdentityEnum.ACCOUNT, Zenko.sites['source'].accountName);
}
-    await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes);
+    const result = await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes);
+    assert.ifError(result?.stderr || result?.err);
});

Given('{int} objects {string} of size {int} bytes with user metadata {string}',
async function (this: Zenko, numberObjects: number, objectName: string, sizeBytes: number, userMD: string) {
-    await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes, userMD);
+    const result = await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes, userMD);
+    assert.ifError(result?.stderr || result?.err);
});

+Given('{int} mpu objects {string} of size {int} bytes with user metadata {string}',
+    async function (this: Zenko, numberObjects: number, objectName: string, sizeBytes: number, userMD: string) {
+        const result = await addMultipleObjects.call(this, numberObjects, objectName, sizeBytes, userMD);
+        assert.ifError(result?.stderr || result?.err);
+    });

Given('a tag on object {string} with key {string} and value {string}',
83 changes: 82 additions & 1 deletion tests/ctst/features/dmf.feature
@@ -93,6 +93,87 @@ Feature: DMF
| Non versioned | 1 | 100 |
| Suspended | 1 | 100 |

@2.7.0
@PreMerge
@Dmf
@ColdStorage
Scenario Outline: Overwriting of a cold object with mpu
Given a "<versioningConfiguration>" bucket
And a transition workflow to "e2e-cold" location
And <objectCount> objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain <objectCount> objects
Given <objectCount> mpu objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 1 objects

Examples:
| versioningConfiguration | objectCount | objectSize |
| Non versioned | 1 | 100 |
| Suspended | 1 | 100 |

@2.7.0
@PreMerge
@Dmf
@ColdStorage
Scenario Outline: Overwriting of a cold object with copyObject
Given a "<versioningConfiguration>" bucket
And a transition workflow to "e2e-cold" location
And 2 objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And object "obj-2" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 2 objects
When i restore object "obj-1" for 5 days
Then object "obj-1" should be "restored" and have the storage class "e2e-cold"
Given "obj-1" is copied to "obj-2"
Then object "obj-2" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 2 objects

Examples:
| versioningConfiguration | objectSize |
| Non versioned | 100 |
| Suspended | 100 |

@2.7.0
@PreMerge
@Dmf
@ColdStorage
Scenario Outline: Overwriting of a cold object with mpu
Given a "<versioningConfiguration>" bucket
And a transition workflow to "e2e-cold" location
And <objectCount> objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain <objectCount> objects
Given <objectCount> mpu objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 1 objects

Examples:
| versioningConfiguration | objectCount | objectSize |
| Non versioned | 1 | 100 |
| Suspended | 1 | 100 |

@2.7.0
@PreMerge
@Dmf
@ColdStorage
Scenario Outline: Overwriting of a cold object with copyObject
Given a "<versioningConfiguration>" bucket
And a transition workflow to "e2e-cold" location
And 2 objects "obj" of size <objectSize> bytes
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
And object "obj-2" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 2 objects
When i restore object "obj-1" for 5 days
Then object "obj-1" should be "restored" and have the storage class "e2e-cold"
Given "obj-1" is copied to "obj-2"
Then object "obj-2" should be "transitioned" and have the storage class "e2e-cold"
And dmf volume should contain 2 objects

Examples:
| versioningConfiguration | objectSize |
| Non versioned | 100 |
| Suspended | 100 |

@2.7.0
@PreMerge
@@ -125,4 +206,4 @@ Feature: DMF
| versioningConfiguration | objectCount | objectSize | restoreDays |
| Non versioned | 2 | 100 | 15 |
| Versioned | 2 | 100 | 15 |
| Suspended | 2 | 100 | 15 |
| Suspended | 2 | 100 | 15 |
3 changes: 2 additions & 1 deletion tests/ctst/features/pra.feature
@@ -18,7 +18,8 @@ Feature: PRA operations
Given a DR installed
Then the DR source should be in phase "Running"
And the DR sink should be in phase "Running"
-    Then the kafka DR volume exists
+    And the kafka DR volume exists
+    And prometheus should scrap federated metrics from DR sink

# Check that objects are transitioned in the DR site
Given access keys for the replicated account