Update Common #87

Merged
merged 180 commits into validatedpatterns:main
Jul 31, 2023

Conversation

day0hero
Collaborator

Updated the common repository to fix lurking CI issues caused by the seccomp profile that the External Secrets Operator was applying.

claudiol and others added 30 commits March 21, 2023 10:06
This change adds an experimental letsencrypt chart that allows a pattern
user/developer to have all routes and the API endpoint use certificates
signed by Let's Encrypt.

At this stage only AWS is supported. The full documentation is contained
in the chart's README.md file
In the same vein as Industrial Edge 57f41dc135f72011d3796fe42d9cbf05d2b82052,
we call kustomize build.

Newer gitops versions dropped the openshift-clients rpm by default which
contained kubectl. Let's just invoke "kustomize" directly as the binary
is present in both old and new gitops versions

Since "kubectl kustomize" builds the set of resources by default, we
need to switch to "kubectl build" by default

We also use the same naming conventions used in Industrial Edge while
we're at it.
Tested on MCG with hub and spoke
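For illustration, the two invocations below render the same manifests; the second does not depend on kubectl being present in the gitops image (the overlay path is illustrative):

  # "kubectl kustomize" is a wrapper around "kustomize build";
  # calling the kustomize binary directly works in both old and new gitops images.
  kubectl kustomize ./mychart/overlays/prod
  kustomize build ./mychart/overlays/prod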
Just a simple example that reads a helm value and puts it in a configmap
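A minimal sketch of what such a template looks like (the value name is illustrative, not necessarily the one used in the example):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-configmap
  data:
    # Copy a single helm value straight into the configmap
    pattern: {{ .Values.global.pattern | quote }}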
Avoid checking those two playbooks; the action seems to be too limited
to understand where the ansible.cfg is.
This allows Argo to continue rolling out the rest of the applications.
Without the health check the application is stuck in a progressing state
and will not continue, thus preventing any downstream applications from
deploying.
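Upstream Argo CD registers custom health checks as Lua snippets in the argocd-cm ConfigMap; a generic sketch (the group/kind and logic are illustrative, not the check added here):

  resource.customizations.health.example.com_Widget: |
    hs = {}
    hs.status = "Healthy"
    hs.message = "No built-in health check; report Healthy so the rollout proceeds"
    return hs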
Update super-linter image to latest
Add dependabot settings for github actions
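A typical .github/dependabot.yml for keeping GitHub Actions up to date looks like this (the schedule is an assumption, not necessarily what this PR uses):

  version: 2
  updates:
    - package-ecosystem: "github-actions"
      directory: "/"
      schedule:
        interval: "weekly"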
mbaldessari and others added 29 commits July 14, 2023 10:31
If the KUBECONFIG file is somewhere under /tmp or outside of the HOME
folder, bail out explaining why. This has caused a few silly situations
where the user would save the KUBECONFIG file under /tmp. Since
bind-mounting /tmp seems like the wrong thing to do in general, we at
least bail out with a clear error message. To do this we rely on a
bash-specific feature, so let's just switch the script to bash.
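A minimal sketch of such a guard, using the bash pattern matching mentioned above (the real script may differ in detail):

  #!/bin/bash
  # Only $HOME gets bind-mounted into the container, so a KUBECONFIG outside
  # of it would silently be missing inside. Bail out with a clear message.
  if [ -n "$KUBECONFIG" ] && [[ "$KUBECONFIG" != "$HOME"/* ]]; then
      echo "$KUBECONFIG is pointing outside of the HOME folder, this will make it unavailable from the container."
      echo "Please move it somewhere inside your $HOME folder, as that is what gets bind-mounted inside the container"
      exit 1
  fi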

Tested as follows:
export KUBECONFIG=/tmp/kubeconfig
./scripts/pattern-util.sh make help
/tmp/kubeconfig is pointing outside of the HOME folder, this will make it unavailable from the container.
Please move it somewhere inside your /home/michele folder, as that is what gets bind-mounted inside the container

export KUBECONFIG=~/kubeconfig
./scripts/pattern-util.sh make help
Pattern: common

Usage:
  make <target>
...
Sanely handle cluster pools with no clusters (yet)
We currently have a small inconsistency where we use common/clustergroup
in order to point Argo CD to this chart, but the name inside the chart
is 'pattern-clustergroup'.

This inconsistency is currently irrelevant, but in the future when
migrating to helm charts inside proper helm repos, this becomes
problematic. So let's fix the name to be the same as the folder.

Tested on MCG successfully.
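The rename boils down to a one-line change in the chart metadata (sketch; the version shown is illustrative):

  # common/clustergroup/Chart.yaml
  apiVersion: v2
  name: clustergroup   # previously "pattern-clustergroup"; now matches the folder name
  version: 0.0.1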
Check if the KUBECONFIG file is inside /tmp
Currently with the following values snippet:

  managedClusterGroups:
    exampleRegion:
      name: group-one
      acmlabels:
      - name: clusterGroup
        value: group-one
      helmOverrides:
      - name: clusterGroup.isHubCluster
        value: false
      clusterPools:
        exampleAWSPool:
          size: 1
          name: aws-ap-bandini
          openshiftVersion: 4.12.24
          baseDomain: blueprints.rhecoeng.com
          controlPlane:
            count: 1
            platform:
              aws:
                type: m5.2xlarge
          workers:
            count: 0
          platform:
            aws:
              region: ap-southeast-2
          clusters:
          - One

You will get a clusterClaim that is pointing to the wrong Pool:
NAMESPACE                 NAME                       POOL
open-cluster-management   one-group-one              aws-ap-bandini

This is wrong because the clusterPool name will be generated using the
pool name + "-" + group name:

  {{- $pool := . }}
  {{- $poolName := print .name "-" $group.name }}

But the clusterPoolName inside the clusterClaim only uses "$pool.name",
which makes the clusterClaim ineffective as that pool does not exist.

Switch to using the same poolName that is being used when creating the
clusterPool.
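Sketched against the fragment above, the fix makes the claim reference the same derived name (the surrounding template structure is illustrative):

  {{- $pool := . }}
  {{- $poolName := print .name "-" $group.name }}
  apiVersion: hive.openshift.io/v1
  kind: ClusterClaim
  spec:
    # Was $pool.name, which references a pool that was never created
    clusterPoolName: {{ $poolName }}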
Fix the clusterPoolName in clusterClaims
Let's improve readability by adding some comments to point out which
flow constructs are being ended.
Add some comments to make if/else and loops clearer
Just like we did for the clustergroup chart, let's split the values
file list into a dedicated helper. This time, since there are no global
variables, we include it with the current context and not with the '$'
context.

Tested with MCG: hub and spoke. Correctly observed all the applications
running on the spoke.
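In Helm terms the split looks roughly like this; the helper name and file list are hypothetical, not the chart's actual ones:

  {{- define "clustergroup.app.valuefiles" -}}
  - "/values-global.yaml"
  - "/values-{{ .name }}.yaml"
  {{- end }}

  # At the call site we pass the current context (.) rather than the root ($),
  # since the helper needs no global variables:
  valueFiles:
  {{- include "clustergroup.app.valuefiles" . | nindent 2 }}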
They changed because we corrected the list indentation (two extra
spaces to the left).
Fix sa/namespace mixup in vault_spokes_init
Also set seccompProfile to null to make things work on OCP 4.10
Tested the ESO upgrade on MCG on both 4.10 and 4.13
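The override amounts to something like the following in the container's security context (the exact values path is an assumption):

  securityContext:
    # Per the commit message, nulling out the profile that was being
    # applied makes things work on OCP 4.10 while still passing on 4.13.
    seccompProfile: null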
git-subtree-dir: common
git-subtree-mainline: 109efce
git-subtree-split: 15363f6
@day0hero day0hero merged commit fc060b8 into validatedpatterns:main Jul 31, 2023
2 checks passed