JINX is an end-to-end automated deployment of OpenShift 4.x UPI using Ansible. JINX will create a bastion used to interface with cluster resources, a registry to serve cluster resources in a restricted fashion, all of the other resources required for an OpenShift 4.x cluster deployment, and MOST IMPORTANTLY the OpenShift 4.x cluster itself!
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
```sh
sudo tee -a /etc/yum.repos.d/epel.repo << EOF
[epel]
name=epel
baseurl=https://mirrors.sonic.net/epel/8/Everything/x86_64/
gpgcheck=0
enabled=1
EOF
```
Ensure a supported version of Python (2.7.x or 3.x) is installed.
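A quick way to confirm which interpreter is on the PATH (a minimal check, not part of the original steps):
```sh
# prints the Python 3 version, falling back to Python 2 if python3 is absent
python3 --version || python --version
```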
sudo dnf install unzip -y
sudo dnf install git -y
git clone https://gitlab.consulting.redhat.com/navy-black-pearl/jinx
cd jinx
On Fedora:
sudo dnf install ansible -y
On RHEL and CentOS:
sudo yum install ansible -y
ansible-galaxy collection install amazon.aws community.aws community.crypto
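Collections install under ~/.ansible/collections/ansible_collections by default, so a quick directory listing can confirm they landed (this path is an assumption and differs if you have customized collections_paths):
```sh
# the amazon namespace should contain aws; the community namespace should contain aws and crypto
ls ~/.ansible/collections/ansible_collections/amazon ~/.ansible/collections/ansible_collections/community
```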
On macOS: follow the Ansible installation documentation.
NOTE: AWS CLI version 1.18.198 is the only version that has been tested
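One way to pin that exact version is installing AWS CLI v1 from pip (an assumption about install method; any supported install of 1.18.198 works):
```sh
# install the tested AWS CLI v1 release for the current user, then confirm the version
pip3 install --user awscli==1.18.198
aws --version
```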
```sh
aws configure
AWS Access Key ID [****************YYYY]: <aws-access-key-id>
AWS Secret Access Key [****************ZZZZ]: <aws-secret-access-key>
Default region name [us-gov-west-1]: us-gov-west-1
Default output format [None]:
```
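As an optional sanity check, confirm the credentials and default region were picked up:
```sh
# prints the account and ARN of the configured credentials
aws sts get-caller-identity
# prints the configured default region (should be us-gov-west-1)
aws configure get region
```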
pip3 install pyOpenSSL --user
pip3 install boto --user
pip3 install botocore boto3 --user
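To verify the Python dependencies import cleanly for the interpreter you will run Ansible with (a minimal check):
```sh
# fails loudly if any required module is missing; prints the installed versions otherwise
python3 -c "import OpenSSL, boto, boto3, botocore; print(boto3.__version__, botocore.__version__)"
```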
- Edit the file located at `inventory/group_vars/all/vars.yml` to include the necessary values, such as the following (a full example follows the table):
Variable | Use | Type | Example Value(s) |
---|---|---|---|
openshift_version | Version of OpenShift to deploy | String | 4.6.15 |
aws_ssh_key | Preferred name for AWS SSH KeyPair | String | rh-dev-blackpearl-us |
cluster_name | Preferred name of OpenShift cluster (and existing VPC name) | String | vpc01 |
cluster_domain | Subdomain for cluster to be hosted on | String | rh.dev.blackpearl.us |
cluster_subnet_ids | Three subnets for cluster resources to be deployed on | Array | - subnet-xxxxxxxxxxxxxxxxx - subnet-xxxxxxxxxxxxxxxxx - subnet-xxxxxxxxxxxxxxxxx |
bastion_subnet_id | Subnet ID for bastion to be deployed to | String | subnet-xxxxxxxxxxxxxxxxx |
vpc_cidr_block | VPC CIDR block for the cluster | String | 10.126.126.0/23 |
vpc_id | AWS ID for VPC for cluster | String | vpc-xxxxxxxxxxxxxxxxx |
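Put together, a vars.yml built from the example values in the table might look like this sketch (the subnet and VPC IDs are placeholders for your environment):
```yaml
# inventory/group_vars/all/vars.yml -- example values only
openshift_version: 4.6.15
aws_ssh_key: rh-dev-blackpearl-us
cluster_name: vpc01
cluster_domain: rh.dev.blackpearl.us
cluster_subnet_ids:
  - subnet-xxxxxxxxxxxxxxxxx
  - subnet-xxxxxxxxxxxxxxxxx
  - subnet-xxxxxxxxxxxxxxxxx
bastion_subnet_id: subnet-xxxxxxxxxxxxxxxxx
vpc_cidr_block: 10.126.126.0/23
vpc_id: vpc-xxxxxxxxxxxxxxxxx
```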
- Create an Ansible vault file at `inventory/group_vars/all/vault.yml` as follows:
  - Create the file:
ansible-vault create inventory/group_vars/all/vault.yml
NOTE: Use the `EDITOR` environment variable to use a different CLI editor than the default, for example:
EDITOR=vim ansible-vault create inventory/group_vars/all/vault.yml
- Once prompted, enter your desired password for the vault file and confirm it.
- Enter the following required variables (using `variable: value` format) and close the file (a full example follows the table):
Variable | Use | Type | Example Value(s) |
---|---|---|---|
quay_pull_secret | The Quay pull secret used to pull the necessary OpenShift images | String | The value located here under step 1, "What you need to get started", in the "Pull secret" section. For example: '{"auths":{"cloud.openshift.com":{"auth":"secret","email":"example@example.com"},"quay.io":{"auth":"secret","email":"example@example.com"},"registry.connect.redhat.com":{"auth":"secret","email":"example@example.com"},"registry.redhat.io":{"auth":"secret","email":"example@example.com"}}}' NOTE: Keep the surrounding single quotes |
aws_access_key_id | AWS access key ID | String | AXXXXXXXXXXXXXXXXXXXX |
aws_secret_access_key | AWS secret access key | String | BXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
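Put together, the contents you enter into the vault (the file ansible-vault opens in your editor) might look like this sketch, using the placeholder values from the table:
```yaml
# inventory/group_vars/all/vault.yml -- encrypted at rest by ansible-vault
quay_pull_secret: '{"auths":{"cloud.openshift.com":{"auth":"secret","email":"example@example.com"},"quay.io":{"auth":"secret","email":"example@example.com"},"registry.connect.redhat.com":{"auth":"secret","email":"example@example.com"},"registry.redhat.io":{"auth":"secret","email":"example@example.com"}}}'
aws_access_key_id: AXXXXXXXXXXXXXXXXXXXX
aws_secret_access_key: BXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```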
- Run the playbook.yml file as follows:
ansible-playbook -i inventory playbook.yml --ask-vault-pass -vvv
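If you would rather not be prompted for the vault password, ansible-playbook's standard --vault-password-file option can replace --ask-vault-pass (the ~/.vault_pass path is just an example):
```sh
# store the vault password in a file readable only by you, then reference it
echo 'your-vault-password' > ~/.vault_pass
chmod 600 ~/.vault_pass
ansible-playbook -i inventory playbook.yml --vault-password-file ~/.vault_pass -vvv
```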
- SSH to the registry box from the bastion using the SSH keys located on both machines under `.ssh/{aws_ssh_key}`.
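For example, from the bastion (the ec2-user login and the registry's private IP are assumptions; adjust for your AMI and environment):
```sh
# the key name matches the aws_ssh_key variable; the remote user depends on the AMI
ssh -i ~/.ssh/{aws_ssh_key} ec2-user@<registry-private-ip>
```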
- Enter the konductor container:
podman exec -it konductor connect
- Wait for the Apps ELB to come up:
watch -d -n 5 -c "oc get svc -n openshift-ingress | awk '/router-default/{print \$4}'"
- Once this command prints an ELB hostname that looks like `internal-xxxxxxxxxx.us-gov-west-1.elb.amazonaws.com`, create a Route53 DNS CNAME record for `*.apps.cluster.domain.com` with this ELB name as the value (a CLI sketch follows).
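If you prefer the AWS CLI to the console for this step, the record can be created with aws route53 change-resource-record-sets; the hosted zone ID, record name, and ELB hostname below are placeholders for your environment:
```sh
# UPSERT creates the wildcard apps record, or updates it if it already exists
aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.apps.cluster.domain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "internal-xxxxxxxxxx.us-gov-west-1.elb.amazonaws.com"}]
      }
    }]
  }'
```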
- To watch the cluster finish coming up:
watch oc get co
- Celebrate because you have a new cluster!
The following have not been tested with the automation yet, so it may help to know about them:
- The playbook may fail if the registry or bastion EC2 instances are in the shutting-down state when it is run
- Two or fewer subnets for the cluster
- AWS CLI version 2 (we are confident this should work)
- Ansible >= 2.10
- A cluster name and VPC name that differ
- Griffin College - Red Hat
- Sean Nelson - Red Hat
- Jonny Rickard - Red Hat
- James Radtke - Red Hat
- Kevin O'Donnell - Red Hat
- Kat Morgan - Red Hat
- Jon Hultz - Red Hat