
Update common #90

Merged
merged 184 commits into from
Aug 1, 2023

Conversation

Collaborator
@day0hero commented Aug 1, 2023

updated common and tests

claudiol and others added 30 commits March 21, 2023 10:06
This change adds an experimental letsencrypt chart that allows a pattern
user/developer to have all routes and the API endpoint use certificates
signed by Let's Encrypt.

At this stage only AWS is supported. The full documentation is contained
in the chart's README.md file.
In the same vein as Industrial Edge commit 57f41dc135f72011d3796fe42d9cbf05d2b82052,
we call kustomize build.

Newer gitops versions dropped the openshift-clients rpm by default, which
contained kubectl. Let's just invoke "kustomize" directly, as the binary
is present in both old and new gitops versions.

Since "kubectl kustomize" builds the set of resources by default, we
switch to the equivalent "kustomize build" invocation.

We also use the same naming conventions used in Industrial Edge while
we're at it.
Tested on MCG with hub and spoke
Just a simple example that reads a helm value and puts it in a configmap
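Such an example chart template typically amounts to a minimal sketch like the following (the value name .Values.exampleValue and the file path are hypothetical, not taken from the actual commit):

```yaml
# templates/example-configmap.yaml (sketch; .Values.exampleValue is hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  # Read a helm value and place it into the ConfigMap's data
  example: {{ .Values.exampleValue | quote }}
```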
Avoid checking those two playbooks; the action seems too limited to
understand where the ansible.cfg is.
This allows argo to continue rolling out the rest of the applications.
Without the health check, the application is stuck in a progressing state
and will not continue, preventing any downstream application from
deploying.
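Argo CD supports custom resource health checks via the argocd-cm ConfigMap; a hedged sketch of that mechanism follows (the resource group/kind "example.com_MyResource" is hypothetical, not necessarily what this commit targets):

```yaml
# argocd-cm ConfigMap data entry (sketch)
resource.customizations.health.example.com_MyResource: |
  hs = {}
  hs.status = "Healthy"
  hs.message = "Reported healthy so Argo CD can keep rolling out downstream apps"
  return hs
```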
Update super-linter image to latest
Add dependabot settings for github actions
beekhof and others added 28 commits July 14, 2023 15:21
Sanely handle cluster pools with no clusters (yet)
We currently have a small inconsistency where we use common/clustergroup
in order to point Argo CD to this chart, but the name inside the chart
is 'pattern-clustergroup'.

This inconsistency is currently irrelevant, but in the future when
migrating to helm charts inside proper helm repos, this becomes
problematic. So let's fix the name to be the same as the folder.

Tested on MCG successfully.
Check if the KUBECONFIG file is inside /tmp
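A minimal sketch of such a check, assuming the deployment scripts copy the kubeconfig under /tmp (the function name is hypothetical):

```shell
# Hypothetical sketch: verify that KUBECONFIG points at a file under /tmp.
kubeconfig_in_tmp() {
  case "${KUBECONFIG:-}" in
    /tmp/*) return 0 ;;   # kubeconfig lives in /tmp, as expected
    *)      return 1 ;;   # anywhere else: fail the check
  esac
}
```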
Currently with the following values snippet:

  managedClusterGroups:
    exampleRegion:
      name: group-one
      acmlabels:
      - name: clusterGroup
        value: group-one
      helmOverrides:
      - name: clusterGroup.isHubCluster
        value: false
      clusterPools:
        exampleAWSPool:
          size: 1
          name: aws-ap-bandini
          openshiftVersion: 4.12.24
          baseDomain: blueprints.rhecoeng.com
          controlPlane:
            count: 1
            platform:
              aws:
                type: m5.2xlarge
          workers:
            count: 0
          platform:
            aws:
              region: ap-southeast-2
          clusters:
          - One

You will get a clusterClaim that is pointing to the wrong Pool:
NAMESPACE                 NAME                       POOL
open-cluster-management   one-group-one              aws-ap-bandini

This is wrong because the clusterPool name will be generated using the
pool name + "-" + the group name:

  {{- $pool := . }}
  {{- $poolName := print .name "-" $group.name }}

But the clusterPoolName inside the clusterName is only using the
"$pool.name" which will make the clusterClaim ineffective as the pool
does not exist.

Switch to using the same poolName that is being used when creating the
clusterPool.
Fix the clusterPoolName in clusterClaims
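Sketched against the template fragment quoted above, the fix looks roughly like this (the field layout is illustrative, not the chart's exact file):

```yaml
{{- $pool := . }}
{{- $poolName := print .name "-" $group.name }}
spec:
  # Before the fix this referenced only the bare pool name, which points
  # at a pool that was never created under that name.
  clusterPoolName: {{ $poolName }}
```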
Let's improve readability by adding some comments to point out which
flow constructs are being ended.
Add some comments to make if/else and loops clearer
Just like we did for the clustergroup chart, let's split the values
file list into a dedicated helper. This time since there are no global
variables we include it with the current context and not with the '$'
context.

Tested with MCG: hub and spoke. Correctly observed all the applications
running on the spoke.
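A hedged sketch of the described pattern (helper and file names are hypothetical; the actual helper lives in the chart's _helpers template):

```yaml
{{- define "clustergroup.app.valuefiles" -}}
- "/values-global.yaml"
- "/values-{{ .name }}.yaml"
{{- end }}

# Included with the current context (.) rather than the root context ($),
# since the helper needs no global values:
valueFiles:
  {{- include "clustergroup.app.valuefiles" . | nindent 2 }}
```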
They changed because we corrected the list indentation (two extra
spaces to the left).
Fix sa/namespace mixup in vault_spokes_init
Also set seccompProfile to null to make things work on OCP 4.10
Tested the ESO upgrade on MCG on both 4.10 and 4.13
If you call the load-iib target you *must* set INDEX_IMAGES, so
let's error out properly if you do not.

Tested as:

        $ unset INDEX_IMAGES
        $ make load-iib
        make -f common/Makefile load-iib
        make[1]: Entering directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'
        No INDEX_IMAGES defined. Bailing out

        $ export INDEX_IMAGES=foo
        make load-iib
        make -f common/Makefile load-iib
        make[1]: Entering directory '/home/michele/Engineering/cloud-patterns/multicloud-gitops'

        PLAY [IIB CI playbook] ***
Error out from load-iib when INDEX_IMAGES is undefined
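The guard described above can be sketched as a shell function (the real check lives in common/Makefile, not in this exact form):

```shell
# Sketch of the load-iib guard: bail out with the message shown in the
# transcript above when INDEX_IMAGES is unset or empty.
check_index_images() {
  if [ -z "${INDEX_IMAGES:-}" ]; then
    echo "No INDEX_IMAGES defined. Bailing out"
    return 1
  fi
}
```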
git-subtree-dir: common
git-subtree-mainline: eb45d81
git-subtree-split: 35e64a1
@mbaldessari mbaldessari merged commit a246609 into main Aug 1, 2023
4 checks passed

6 participants