Minor doc fixes (RHsyseng#76)
Signed-off-by: Alberto Losada <alosadag@redhat.com>
alosadagrande authored Dec 7, 2023
1 parent 5b47c50 commit 0d87d70
Showing 6 changed files with 18 additions and 14 deletions.
1 change: 1 addition & 0 deletions documentation/modules/ROOT/pages/_attributes.adoc
@@ -52,6 +52,7 @@
:talm-recovery-from-failed-upgrade-doc: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/cnf-talm-for-cluster-upgrades.html#talo-backup-recovery_cnf-topology-aware-lifecycle-manager
:talm-cluster-upgrades-doc: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/cnf-talm-for-cluster-upgrades.html
:talm-upstream-project: https://github.com/openshift-kni/cluster-group-upgrades-operator
+:talm-precachingconfig-doc: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-talm-updating-managed-policies.html#talm-prechache-user-specified-images-concept_ztp-talm
:workload-hints-doc: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/cnf-low-latency-tuning.html#cnf-understanding-workload-hints_cnf-master
:ztp-gitops-docs: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.html#installing-disconnected-rhacm_ztp-preparing-the-hub-cluster
:nto-docs: https://docs.openshift.com/container-platform/4.14/scalability_and_performance/using-node-tuning-operator.html
@@ -2,7 +2,7 @@
include::_attributes.adoc[]
:profile: 5g-ran-lab

-Now that we have deployed our cluster, let's add our automation. Recall that this process includes creating supporting resources, checking our AnsibleJob CRD into a hosted git repository, and deploying our automation Application.
+Now that we have deployed our cluster, let's add our automation. Recall that this process includes creating supporting resources, checking our AnsibleJob CRD into a hosted Git repository, and deploying our automation Application.

First, we will add some supporting OpenShift resources for automation. We will add a namespace `ran-lab-automation` which our automation resources will belong to, along with a Secret configuring ACM's authorization to access the AAP Controller through the controller's auth token.
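As a rough sketch, those supporting resources could be created as follows; the namespace name comes from the paragraph above, while the Secret name, key names, and values are illustrative placeholders rather than the exact ones used later in the lab:

[source,bash]
-----
# Sketch only: the Secret name and keys are placeholders, not the lab's exact ones.
oc create namespace ran-lab-automation

oc -n ran-lab-automation create secret generic aap-controller-token \
  --from-literal=host=https://<AAP_CONTROLLER_URL> \
  --from-literal=token=<AAP_CONTROLLER_AUTH_TOKEN>
-----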

10 changes: 5 additions & 5 deletions documentation/modules/ROOT/pages/crafting-deployments-iac.adoc
@@ -4,12 +4,12 @@ include::_attributes.adoc[]

In RAN environments we will be managing thousands of Single Node OpenShift (SNO) instances, and as such, a scalable and manageable way of defining our infrastructure is required.

-By describing our infrastructure as code (IaC) the git repository holds declarative state of the fleet.
+By describing our infrastructure as code (IaC), the Git repository holds the declarative state of the fleet.

[#introduction-to-siteconfig]
== Introduction to the SiteConfig

-The `SiteConfig` is an abstraction layer on top of the different components that are used to deploy an OpenShift cluster using the `Assisted Service`. For the ones familiar with the `Assisted Service`, you will know that in order to deploy a cluster using this service there are several Kubernetes objects that need to be created like: `ClusterDeployment`, `InfraEnv`, `AgentClusterInstall`, etc.
+The `SiteConfig` is an abstraction layer on top of the different components that are used to deploy an OpenShift cluster using the `Assisted Service`. Those familiar with the `Assisted Service` will know that, in order to deploy a cluster using this service, several Kubernetes objects need to be created, such as `ClusterDeployment`, `InfraEnv`, `AgentClusterInstall`, etc. Notice that the `Assisted Service` is now part of the MultiCluster Engine (MCE) operator.

The SiteConfig simplifies this process by providing a unified structure to describe the cluster deployment configuration in a single place.
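For orientation, an abridged `SiteConfig` skeleton is sketched below. The field names follow the upstream ZTP examples, while every value is a placeholder rather than the configuration used in this lab:

[source,bash]
-----
# Abridged sketch only -- all names and values below are placeholders.
cat <<'EOF' > site-configs/example-sno.yaml
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: example-sno
  namespace: example-sno
spec:
  baseDomain: example.lab
  pullSecretRef:
    name: assisted-deployment-pull-secret
  clusterImageSetNameRef: openshift-4.14
  sshPublicKey: "ssh-rsa AAAA..."
  clusters:
    - clusterName: example-sno
      networkType: OVNKubernetes
      clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
      machineNetwork:
        - cidr: 192.168.125.0/24
      serviceNetwork:
        - 172.30.0.0/16
      nodes:
        - hostName: example-sno-node.example.lab
          bmcAddress: redfish-virtualmedia://<BMC_IP>/redfish/v1/Systems/<SYSTEM_ID>
          bmcCredentialsName:
            name: example-sno-bmc-secret
          bootMACAddress: "AA:BB:CC:DD:EE:FF"
          bootMode: UEFI
EOF
-----

From this single file, the ZTP pipeline renders the underlying objects mentioned above (`ClusterDeployment`, `InfraEnv`, `AgentClusterInstall`, and so on).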

@@ -34,7 +34,7 @@ image::gitea-repository.png[Gitea Repository]
+
3. Click on the "+" next to `Repositories`.
4. Use `ztp-repository` as `Repository Name` and click on `Create Repository`.
-5. You will get redirect to the new repository page.
+5. You will get redirected to the new repository page.
Now that we have a repository ready to be used we will clone it to our workstation.
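A minimal sketch of that clone step, assuming a Gitea instance reachable from the workstation; the host, port, and user are placeholders, and only the repository name comes from the steps above:

[source,bash]
-----
# Placeholders only -- substitute your Gitea host, port, and user.
git clone http://<GITEA_HOST>:<GITEA_PORT>/<GITEA_USER>/ztp-repository.git
cd ztp-repository
-----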

@@ -101,9 +101,9 @@ The details for our baremetal node that we want to provision as SNO2 are the one
[#pre-reqs]
=== Deployment Prerequisites

-Before we start working on the SiteConfig, let's add some information required for the deployment into the git repository.
+Before we start working on the SiteConfig, let's add some information required for the deployment into the Git repository.

-CAUTION: In a production environment you don't want to add sensitive information in plain text in your git repository, for the sake of simplicity for this lab we are adding this information in plain text to the git repo, so you don't have to care about it. This applies to things like pull secrets or bmc credentials.
+CAUTION: In a production environment you don't want to add sensitive information in plain text to your Git repository; for the sake of simplicity in this lab we are adding it in plain text to the Git repo, so you don't have to worry about it. This applies to things like pull secrets or BMC credentials.

1. BMC credentials file.
+
11 changes: 7 additions & 4 deletions documentation/modules/ROOT/pages/lab-environment.adoc
@@ -15,9 +15,9 @@ RHEL 8.X box with access to the Internet. This lab relies on KVM, so you need to
* 200GiB Memory.
* 1 TiB storage.
-IMPORTANT: These instructions have been tested in a RHEL 8.8, we cannot guarantee that other operating systems (even RHEL-based) will work. We won't be providing support out of RHEL 8.
+IMPORTANT: These instructions have been tested on RHEL 8.9; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We won't be providing support outside of RHEL 8.

-These are the steps to install the required packages on a RHEL 8.8 server:
+These are the steps to install the required packages on a RHEL 8.9 server:

[.console-input]
[source,bash,subs="attributes+,+macros"]
@@ -148,6 +148,9 @@ kcli create sushy-service --ssl --port 9000
systemctl enable ksushy --now
-----

+WARNING: A message like `The unit files have no installation config (WantedBy, RequiredBy, Also, Alias settings in the [Install] section, and DefaultInstance for template units).` may be shown in the terminal after creating or starting the service. It is just a warning; please check that the ksushy service has started successfully.
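A quick way to confirm that is sketched below, assuming the service is listening with TLS on port 9000 as configured above (`/redfish/v1/` is the standard Redfish service root):

[source,bash]
-----
# Verify the ksushy service is active and answering Redfish requests.
systemctl is-active ksushy
curl -ks https://127.0.0.1:9000/redfish/v1/ | head
-----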


[#configure-disconnected-registry]
=== Configure Disconnected Registry

@@ -224,12 +227,12 @@ curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab
systemctl enable haproxy --now
-----

-After that you need to add the following entries to your /etc/hosts file:
+After that, you need to add the following entries to your local /etc/hosts file:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
-<HYPERVISOR_REACHABLE_IP> infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab assisted-service-multicluster-engine.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab
+<HYPERVISOR_REACHABLE_IP> infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab assisted-service-multicluster-engine.apps.hub.5g-deployment.lab automation-hub-aap.apps.hub.5g-deployment.lab automation-aap.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab
-----
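Once added, a quick sanity check like the following confirms that the names resolve locally; the hostnames are taken from the entry above and each should return the hypervisor IP you configured:

[source,bash]
-----
# Every name should resolve to your <HYPERVISOR_REACHABLE_IP>.
for host in api.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab automation-aap.apps.hub.5g-deployment.lab api.sno2.5g-deployment.lab; do
  getent hosts "${host}"
done
-----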

[#create-openshift-nodes-vms]
@@ -43,7 +43,7 @@ oc --kubeconfig ~/5g-deployment-lab/hub-kubeconfig login -u admin -p <admin_pass
2. Modify the ZTP GitOps Pipeline configuration to match our environment configuration.
.. Change the repository url for the two ArgoApps:
+
-NOTE: If you're using MacOS and you're getting errors while running `sed -i` commands, make sure you are using `gsed`: `brew install gnu-sed`.
+NOTE: If you're using macOS and you're getting errors while running `sed -i` commands, make sure you are using `gsed`. If you do not have it available, please install it: `brew install gnu-sed`.
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
@@ -131,7 +131,7 @@ EOF

Modify the kustomization.yaml inside the site-policies folder, so it includes this new PGT and eventually will be applied by ArgoCD.

-NOTE: If you're using MacOS and you're getting errors while running `sed -i` commands, make sure you are using `gnu-sed`: `brew install gnu-sed`.
+NOTE: If you're using macOS and you're getting errors while running `sed -i` commands, make sure you are using `gsed`. If you do not have it available, please install it: `brew install gnu-sed`.
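For reference, the resulting kustomization.yaml typically lists each PGT under `generators:`, as in the upstream ZTP examples; in the sketch below the PGT file names are illustrative, not the ones used in this lab:

[source,bash]
-----
cat site-policies/kustomization.yaml
# Illustrative output -- your PGT file names will differ:
#   apiVersion: kustomize.config.k8s.io/v1beta1
#   kind: Kustomization
#   generators:
#     - common-ranGen.yaml
#     - example-upgrade-pgt.yaml
-----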

[.console-input]
[source,bash,subs="attributes+,+macros"]
@@ -187,9 +187,9 @@ Remember that several gigabytes of artifacts need to be downloaded to the spoke

image::timing.png[TALM maintenance window concept]

-Let's apply the PreCachingConfig and the CGU to our hub cluster:
+In OCP 4.14+, there is a new CRD called `PreCachingConfig` that allows us to be more precise about which container images need to be pre-cached for the cluster upgrade. We must apply the {talm-precachingconfig-doc}[PreCachingConfig CR] before, or concurrently with, the CGU to our hub cluster:

-NOTE: You can get the required list for `excludePrecachePattern` for each upgrade by following https://access.redhat.com/articles/7046378[this KCS].
+NOTE: You can obtain a more detailed list for `excludePrecachePattern` for each upgrade by following https://access.redhat.com/articles/7046378[this KCS].
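For orientation, this is roughly the shape of a `PreCachingConfig` CR. It is an abridged, hypothetical sketch based on the linked documentation; every name and value below is a placeholder, not the CR applied in this lab:

[source,bash]
-----
# Abridged, hypothetical example -- name, namespace, and values are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: example-precachingconfig
  namespace: example-namespace
spec:
  excludePrecachePatterns:
    - aws
    - vsphere
  additionalImages:
    - quay.io/example/additional-image:latest
EOF
-----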

[.console-input]
[source,bash,subs="attributes+,+macros"]