Adds clarifications based on jherr's comments
Signed-off-by: Alberto Losada <alosadag@redhat.com>
alosadagrande committed Feb 7, 2024
1 parent b2b5e58 commit ada68fa
Showing 1 changed file with 59 additions and 8 deletions: documentation/modules/ROOT/pages/lab-environment.adoc
[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
systemctl daemon-reload
systemctl enable podman-webcache --now
-----
Verify that the webcache container has started successfully by executing `podman ps`. If it is not running, check the status of the podman-webcache systemd unit.
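Both checks can be wrapped in a small helper if you find yourself repeating them (a sketch; `podman-webcache` is the unit name used above, and the `journalctl` hint is just one way to inspect a failed unit):

```shell
# Report whether a systemd unit is active; on failure, hint at the logs.
check_unit() {
  if systemctl is-active --quiet "$1" 2>/dev/null; then
    echo "$1: active"
  else
    echo "$1: not active - inspect with: journalctl -u $1 -n 20"
  fi
}

check_unit podman-webcache
```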
[#install-ksushytool]
=== Install Ksushy Tool
WARNING: A message like `The unit files have no installation config (WantedBy, RequiredBy, Also, Alias settings in the [Install] section, and DefaultInstance for template units).` may appear in the terminal after creating or starting the service. It is just a warning; check that the ksushy service has started successfully.
Verify that the ksushy service is running. Additionally, you can check that TCP port 9000 is in use by a Python application (ksushy) by executing `netstat -lntp | grep 9000`.
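If `netstat` is not installed, a quick TCP probe with bash built-ins works as well (a sketch; port 9000 and the loopback address are assumptions based on the defaults above):

```shell
# Probe TCP port 9000 on localhost using bash's /dev/tcp (no netstat needed)
port=9000
if timeout 2 bash -c "cat < /dev/null > /dev/tcp/127.0.0.1/${port}" 2>/dev/null; then
  echo "port ${port}: something is listening"
else
  echo "port ${port}: nothing listening - check 'systemctl status ksushy'"
fi
```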
[#configure-disconnected-registry]
=== Configure Disconnected Registry
[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
systemctl daemon-reload
systemctl enable podman-registry --now
cp /opt/registry/certs/registry-cert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust
sleep 10
podman login --authfile auth.json -u admin infra.5g-deployment.lab:8443 -p r3dh4t1!
-----
[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
chown -R 1000:1000 /opt/gitea/
curl -sL https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/{branch}/lab-materials/lab-env-data/gitea/podman-gitea.service -o /etc/systemd/system/podman-gitea.service
systemctl daemon-reload
systemctl enable podman-gitea --now
sleep 20
podman exec --user 1000 gitea /bin/sh -c 'gitea admin user create --username student --password student --email student@5g-deployment.lab --must-change-password=false --admin'
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-deployments-on-ocp-lab.git","uid":1,"repo_name":"5g-ran-deployments-on-ocp-lab"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate
curl -u 'student:student' -H 'Content-Type: application/json' -X POST --data '{"service":"2","clone_addr":"https://github.com/RHsyseng/5g-ran-lab-aap-integration-tools.git","uid":1,"repo_name":"aap-integration-tools"}' http://infra.5g-deployment.lab:3000/api/v1/repos/migrate
-----

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
systemctl enable haproxy --now
-----
Verify that the haproxy systemd unit started successfully. After that, you need to add the following entries to your **laptop's local /etc/hosts file**. These entries let you reach the different services exposed on the lab host. Notice that:
* *HYPERVISOR_REACHABLE_IP* is the IP address of the lab server you are configuring. It must be an IP address you can reach from your laptop, usually the one you use to SSH into the lab server.

For example, your lab server should now have interfaces similar to mine (podman0, virbr0 and 5gdeploymentlab):

```
ip -o a
1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
2: ens1f0 inet 10.19.32.199/26 brd 10.19.32.255 scope global dynamic noprefixroute ens1f0\ valid_lft 13481sec preferred_lft 13481sec
2: ens1f0 inet6 2620:52:0:1343::8d/128 scope global dynamic noprefixroute \ valid_lft 13481sec preferred_lft 13481sec
2: ens1f0 inet6 2620:52:0:1343:e643:4bff:febd:9046/64 scope global dynamic noprefixroute \ valid_lft 2591777sec preferred_lft 604577sec
2: ens1f0 inet6 fe80::e643:4bff:febd:9046/64 scope link noprefixroute \ valid_lft forever preferred_lft forever
6: virbr0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0\ valid_lft forever preferred_lft forever
7: 5gdeploymentlab inet 192.168.125.1/24 brd 192.168.125.255 scope global 5gdeploymentlab\ valid_lft forever preferred_lft forever
8: podman0 inet 10.88.0.1/16 brd 10.88.255.255 scope global podman0\ valid_lft forever preferred_lft forever
8: podman0 inet6 fe80::b85e:c8ff:feb9:e105/64 scope link \ valid_lft forever preferred_lft forever
9: veth0 inet6 fe80::5828:77ff:fe5d:869f/64 scope link \ valid_lft forever preferred_lft forever
```
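To pick *HYPERVISOR_REACHABLE_IP* out of output like the above, a small awk filter over `ip -o a` can help (a sketch; the interface names and addresses below are copied from my lab listing, so on your host you would pipe the live command instead):

```shell
# Extract interface name and global IPv4 address from `ip -o a` style output.
# On the lab host, run:  ip -o a | awk '$3 == "inet" && /scope global/ {split($4, a, "/"); print $2, a[1]}'
ip_output='2: ens1f0 inet 10.19.32.199/26 brd 10.19.32.255 scope global dynamic noprefixroute ens1f0
6: virbr0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
7: 5gdeploymentlab inet 192.168.125.1/24 brd 192.168.125.255 scope global 5gdeploymentlab'

echo "$ip_output" | awk '$3 == "inet" && /scope global/ {split($4, a, "/"); print $2, a[1]}'
# → ens1f0 10.19.32.199
#   virbr0 192.168.122.1
#   5gdeploymentlab 192.168.125.1
```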

Then, obtain the IP you are connecting to via SSH; in my case it is 10.19.32.199. Finally, append this entry to your laptop's local /etc/hosts:

```
10.19.32.199 infra.5g-deployment.lab api.hub.5g-deployment.lab multicloud-console.apps.hub.5g-deployment.lab console-openshift-console.apps.hub.5g-deployment.lab oauth-openshift.apps.hub.5g-deployment.lab openshift-gitops-server-openshift-gitops.apps.hub.5g-deployment.lab assisted-service-multicluster-engine.apps.hub.5g-deployment.lab automation-hub-aap.apps.hub.5g-deployment.lab automation-aap.apps.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab
```
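The append can be made idempotent so reruns do not duplicate the line. A sketch (the demo writes to a scratch file and abbreviates the hostname list; point `HOSTS_FILE` at your real /etc/hosts, set `IP` to your *HYPERVISOR_REACHABLE_IP*, and use the full entry shown above):

```shell
# Idempotent append: only add the lab entry if it is not already present
HOSTS_FILE=/tmp/hosts.demo            # use /etc/hosts on your laptop (needs sudo)
IP=10.19.32.199                       # your HYPERVISOR_REACHABLE_IP
ENTRY="$IP infra.5g-deployment.lab api.hub.5g-deployment.lab api.sno1.5g-deployment.lab api.sno2.5g-deployment.lab"

touch "$HOSTS_FILE"
grep -qF 'infra.5g-deployment.lab' "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep 'infra.5g-deployment.lab' "$HOSTS_FILE"
```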

[#create-openshift-nodes-vms]
=== Create SNO Nodes VMs

Before running the following commands, make sure you have generated an SSH key pair in the default location `~/.ssh/`.

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
ssh-keygen -t rsa -b 2048
-----

That SSH key will allow you to connect to the VMs you are about to create:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:02:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201 -P name=sno1
kcli create vm -P start=False -P uefi_legacy=true -P plan=hub -P memory=24000 -P numcpus=12 -P disks=[200,200] -P nets=['{"name": "5gdeploymentlab", "mac": "aa:aa:aa:aa:03:01"}'] -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0301 -P name=sno2
-----

If you need or want to connect to any of the VMs once they are started, you can do so by executing:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
kcli ssh <VM_name>
-----

IMPORTANT: This step requires a valid OpenShift Pull Secret placed in /root/openshift_pull.json. Notice that you can replace the admin and developer passwords shown below with any others.

NOTE: If you're using macOS and you get errors while running `sed -i` commands, make sure you are using GNU sed, installed with `brew install gnu-sed`.
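The difference matters because BSD/macOS sed requires an argument to `-i` while GNU sed does not. One portable pattern is to try the GNU form first and fall back to the BSD form (a sketch; the file and substitution are just a demo):

```shell
# GNU sed accepts -i with no argument; BSD/macOS sed needs -i '' instead.
f=/tmp/sed-demo.txt
echo 'hello world' > "$f"
sed -i 's/world/lab/' "$f" 2>/dev/null || sed -i '' 's/world/lab/' "$f"
cat "$f"   # → hello lab
```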

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
chmod 400 /root/.ssh/snokey
oc apply -f https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/{branch}/lab-materials/lab-env-data/hub-cluster/sno1-argoapp.yaml
-----

Once the cluster is deployed, you can check the installation status:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
agentclusterinstall.extensions.hive.openshift.io/sno1   sno1      adding-hosts

NAME CLUSTER APPROVED ROLE STAGE
agent.agent-install.openshift.io/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201 sno1 true master Done
-----

The kubeconfig can be gathered as follows:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
oc extract secret/sno1-admin-kubeconfig --to=- -n sno1 > /root/sno1kubeconfig
-----

Now, with the proper credentials you can check the status of the SNO1 cluster:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
oc --kubeconfig /root/sno1kubeconfig get nodes,clusterversion

NAME STATUS ROLES AGE VERSION
-----

Next, let's create a playbook that will configure the automation controller for us:

NOTE: Change the `aap_manifest_file_path` var value to match the path where you stored the manifest on the hypervisor host, and set the `strong_student_password` var to the password for the AAP `student` user.

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
curl -L https://raw.githubusercontent.com/RHsyseng/5g-ran-deployments-on-ocp-lab/{branch}/lab-materials/lab-env-data/aap2/configure-aap.yaml -o /root/configure-aap.yaml
ansible-playbook /root/configure-aap.yaml -e strong_student_password=yourstrongstudentpassword -e aap_manifest_file_path=/path/to/your/manifest -e ansible_python_interpreter=/usr/bin/python3.11
-----

Once the playbook finishes, we should have access to the AAP Controller at https://automation-aap.apps.hub.5g-deployment.lab with the `student` user and the password you configured.
