HCLTech-SSW/openstack-automation-saltstack

Title: openstack-automation-saltstack
Description: This project is owned by the HCL Tech System Software team to support the automated installation of OpenStack using SaltStack.
Owner of the Project: HCL Tech System Software Team
Contributor: HCL Tech System Software Team
Mail To: hcl_ss_oss@hcl.com
Tags: Automation of OpenStack, OpenStack automation using SaltStack, HCL in OpenStack automation, Installation support for OpenStack
Created: 2016 Sep 20
Modified: 2018 Jun 27

openstack-automation-saltstack

Overview of the Project

This open source project supports the automated installation of OpenStack (Liberty, Mitaka and Queens releases) using SaltStack.

The reasons for using SaltStack here are:

  1. It provides an infrastructure management framework that makes the installation task straightforward.

  2. SaltStack maintains a repository of formulas (plain .sls files describing the steps involved in installation/execution). These .sls files contain a definite set of formulas for installing and configuring the different OpenStack packages.

  3. OpenStack publishes a new release every six months; because Salt already provides the formula framework, this project only needs to add or modify the formulas for the ever-growing set of OpenStack components.

The installation completes in a very short span of time.
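As an illustration of what such a formula looks like, here is a minimal hypothetical .sls sketch (not taken from this project) that installs and runs a single service:

```yaml
# install_memcached.sls -- hypothetical minimal Salt state, for illustration only
memcached:
  pkg.installed: []        # install the memcached package via the OS package manager
  service.running:         # ensure the memcached service is running
    - enable: True         # start it at boot
    - require:
      - pkg: memcached     # only start after the package is installed
```

The project's component_root directory holds state files of this general shape for each OpenStack service.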

OpenStack Components which are installed and configured

This project will set up OpenStack in a three-node architecture by installing and configuring the following components:

On Controller node:
1)	Installation and configuration of MariaDB server
2)	Installation and configuration of RabbitMQ server
3)	Installation and configuration of Apache server
4)	Installation and configuration of Memcached
5)	Installation and configuration of Etcd
6)	Installation and configuration of identity service (i.e. Keystone)
7)	Installation and configuration of image service (i.e. Glance)
8)	Installation and configuration of compute service (i.e. Nova)
9)	Installation and configuration of networking service (i.e. Neutron)
10)	Installation and configuration of dashboard service (i.e. Horizon)
11)	Installation and configuration of block storage service (i.e. Cinder)

On Compute node:
1)	Installation and configuration of compute service (i.e. Nova)
2)	Installation and configuration of networking service (i.e. Neutron)

On Block storage node:
1)	Installation and configuration of block storage service (i.e. Cinder)

Environment / Hardware requirement

In order to install OpenStack in a three-node architecture, the following environment/hardware requirements should be met:

1)	Four physical/virtual machines with the Ubuntu 16.04 LTS x86-64 operating system installed.

	a)	Salt-Master machine: the first machine (i.e. Machine-1), which will act as the Salt-Master and will invoke/perform the installation of OpenStack on the other three machines (listed in the next step).
	b)	Salt-Minion machine(s): the other three machines (i.e. Machines 2, 3 and 4), which will act as Salt-Minions and on which OpenStack will be configured using this project.

2)	Internet access should be available on all the machines so that the packages required for the OpenStack installation can be downloaded.

3)	For a smooth installation, a minimum of 4 GB of RAM per machine is preferable.

Steps to Configure this Project

End users should perform the following steps on their workstations to configure/use this project:

  1. Configure Machine-1 (the Salt-Master machine), which is responsible for invoking the installation of OpenStack on the other three machines (the Salt-Minion machines). Follow the steps below:
a)	Install salt-master version 2017.7.5.

	•	Run the following command to import the SaltStack repository key:
		wget -O - https://repo.saltstack.com/apt/ubuntu/16.04/amd64/archive/2017.7.5/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

	•	Save the following line 
		deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/archive/2017.7.5 xenial main
		to /etc/apt/sources.list.d/saltstack.list

	•	Run the following command:
		sudo apt-get update

	•	Run the following command to install salt-master
      		sudo apt-get install salt-master

b)	Clone the project from git to local machine.

c)	Update the salt-master configuration file located at "/etc/salt/master" on the Salt-Master machine so that it holds the following contents:

	pillar_roots:
	  queens:
	    - /openstack_queens/data_root
	file_roots:
	  queens:
	    - /openstack_queens/component_root
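
With this configuration the master looks for state files and pillar data under /openstack_queens, so the cloned project is assumed to sit at a layout along these lines (sketch inferred from the paths above):

```
/openstack_queens/
    component_root/    <- file_roots: the state (.sls) files applied to the minions
    data_root/         <- pillar_roots: pillar data such as openstack_cluster.sls
```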
  2. Configure the Salt-Minion machines on which OpenStack will be installed. Follow the steps below:
a)	Install salt-minion version 2017.7.5 on all three machines/nodes.

	•	Run the following command to import the SaltStack repository key:
		wget -O - https://repo.saltstack.com/apt/ubuntu/16.04/amd64/archive/2017.7.5/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

	•	Save the following line 
		deb http://repo.saltstack.com/apt/ubuntu/16.04/amd64/archive/2017.7.5 xenial main
		to /etc/apt/sources.list.d/saltstack.list
		
	•	Run the following command:
		sudo apt-get update

	•	Run the following command to install salt-minion:
      		sudo apt-get install salt-minion

b)	On every Salt-Minion machine, update the "/etc/hosts" file by adding the IP address of the Salt-Master machine.

c)	On every Salt-Minion machine, set the "master:" field in the "/etc/salt/minion" file to the IP address of the Salt-Master machine.
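
As a sketch, assuming the Salt-Master machine's IP address is 192.168.56.10 (a hypothetical value; substitute your own), the relevant lines on each Salt-Minion machine would look like:

```
# /etc/hosts (fragment) -- add an entry for the Salt-Master machine
192.168.56.10   salt-master

# /etc/salt/minion (fragment) -- point the minion at the master
master: 192.168.56.10
```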
  3. To start the salt-master, execute the following command in a terminal on the Salt-Master machine:
salt-master -l debug
  4. Update the names of all three Salt-Minion machines by executing the following commands on the respective machines:
a)	On Salt-Minion machine for Controller Node:
	echo "controller.queens" > /etc/salt/minion_id

b)	On Salt-Minion machine for Compute Node:
	echo "compute.queens" > /etc/salt/minion_id

c)	On Salt-Minion machine for Block Storage Node:
	echo "blockstorage.queens" > /etc/salt/minion_id

d)	For each Salt-Minion machine (OpenStack node), the same name should also be written to "/etc/hostname".

e)	Reboot all three Salt-Minion machines.

Please note:

The names "controller.queens", "compute.queens" and "blockstorage.queens" mentioned above can be anything of the user's choice; we chose them to make the OpenStack nodes easy to identify.

  5. To start the salt-minion, execute the following command in a terminal on each Salt-Minion machine (OpenStack node):
salt-minion -l debug
  6. Every Salt-Minion machine should be registered with the Salt-Master machine. To register the minions, execute the following commands on the Salt-Master machine:
salt-key -a "controller.queens"
salt-key -a "compute.queens"
salt-key -a "blockstorage.queens"
  7. To verify that the Salt-Minion machines are registered with the master, execute the following command on the Salt-Master machine; all three Salt-Minions should respond with True (shown in green):
salt '*.queens' test.ping
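
A successful registration and ping should look roughly like the following (illustrative session; exact output varies):

```
$ salt-key -L
Accepted Keys:
controller.queens
compute.queens
blockstorage.queens
Denied Keys:
Unaccepted Keys:
Rejected Keys:

$ salt '*.queens' test.ping
controller.queens:
    True
compute.queens:
    True
blockstorage.queens:
    True
```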
  8. Update the file "data_root/openstack_cluster.sls" located on the Salt-Master machine. The fields highlighted in the image below should be provided by the user:

Image1

  9. Verify the following values in "data_root/openstack_cluster_resources.sls"; the file is located on the Salt-Master machine.

Image2

  10. The file "data_root/openstack_access_resources.sls", located on the Salt-Master machine and shown in the image below, contains the values for the parameters that are passed to the commands each service uses to create users, services, endpoints, etc. Before proceeding with the installation, please review and update these values as per your preferences.

Image3

  11. Verify the following values in "data_root/openstack_network_resources.sls"; the file is located on the Salt-Master machine.

Image4

Now Let’s Start the OpenStack Installation

We are done configuring the salt-master and salt-minion machines; now let's start the OpenStack installation. To start it, execute the following command from a terminal on the Salt-Master machine:

salt '*.queens' state.highstate
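
When the run finishes, each minion reports a state summary on the master; a successful run ends with output along these lines (illustrative only, the counts will differ):

```
Summary for controller.queens
-------------
Succeeded: 150
Failed:      0
-------------
Total states run:     150
```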

After a successful installation, the three Salt-Minion machines form an OpenStack cluster environment with the following components installed on the respective nodes:

On Salt-Minion for Controller node: 
1)	Installation and configuration of MariaDB server
2)	Installation and configuration of RabbitMQ server
3)	Installation and configuration of Apache server
4)	Installation and configuration of Memcached
5)	Installation and configuration of Etcd
6)	Installation and configuration of identity service (i.e. Keystone)
7)	Installation and configuration of image service (i.e. Glance)
8)	Installation and configuration of compute service (i.e. Nova)
9)	Installation and configuration of networking service (i.e. Neutron)
10)	Installation and configuration of dashboard service (i.e. Horizon)
11)	Installation and configuration of block storage service (i.e. Cinder)

On Salt-Minion for Compute node: 
1)	Installation and configuration of compute service (i.e. Nova)
2)	Installation and configuration of networking service (i.e. Neutron)

On Salt-Minion for Block storage node:
1)	Installation and configuration of block storage service (i.e. Cinder)

Additionally, the installation makes the following common changes on all three OpenStack nodes:
1)	Updates the hosts file (on the Controller, Compute and Block Storage machines).
2)	Makes network-interface-related changes in the interfaces file (on the Controller and Compute machines).

How to Replicate the OpenStack Deployment (On Need basis)

The configuration described in Steps 8 and 9 creates one set of the OpenStack environment. If more than one replica of the three-node OpenStack environment is needed, the changes highlighted in the images below must be made in the respective files on the Salt-Master machine.

Image5

Image6

With the above changes to the .sls files, a single installation run produces an OpenStack cluster with one Controller node, two Compute nodes and two Block Storage nodes.

Image7

Image8

Troubleshooting Steps

The following are commonly faced problems along with their troubleshooting steps:

Problem 1: During the installation of states, the salt-master sometimes reports the message "Minion did not return."

Resolution:
1. This message occurs mainly when the IP address of the salt-minion changes with respect to the salt-master.
2. To resolve the issue, reconfigure the IP address of the salt-minion and update the hosts file.
3. Execute the test.ping command from the salt-master; it should return True:
salt 'controller.queens' test.ping
4. Then update the openstack_cluster_resources file with the new IP so that, after re-running the states, the minion automatically gets the updated IP entry in its hosts file.

Problem 2: When a salt command is executed from the salt-master, the message "The Salt state.highstate command is already running at pid 9080" sometimes appears on the command prompt.

Resolution:
1. This occurs when a salt-minion was interrupted by the user with Ctrl+C but is in fact still running.
2. In that case, when salt commands are re-executed from the salt-master, this message appears and no state-completion summary is displayed on the salt-master.
3. To avoid this situation, kill the salt-master and salt-minion daemons by executing the following commands:
pkill salt-master   (on the salt-master node)
pkill salt-minion   (on the salt-minion node)
4. Restart the salt-master and salt-minion daemons in debug mode, and wait for the connection to be established:
salt-master -l debug
salt-minion -l debug
5. Once the connection is established, execute the salt command, specifying the minion(s):
salt '*.queens' state.highstate

Problem 3: The salt-minion shows an infinite sequence of jobs for individual states and takes an excessive amount of time to complete.

Resolution:
1. Terminate the salt-minion with Ctrl+C.
2. Check the individual state files for logical errors that cause the salt-minion to start an infinite loop of jobs.
3. After revalidating the state files, restart the salt-minion daemon:
salt-minion -l debug
4. Then start the execution of the salt command from the salt-master, for a particular minion or for all minions.

Problem 4: A compile-time error saying "'NoneType' object is not iterable" appears on the salt-master node.

Resolution:
1. This problem occurs when there is no element in the dict to iterate over.
2. To resolve the error, make sure no .sls entry is commented out under a specific role in the openstack_cluster_resources.sls file.
3. Even if a user wants to run an individual state on an individual minion, at least one state must be specified under each role.
4. The salt command can then be executed for that specific minion node.
5. For example, to install only MariaDB on the controller node, at least one .sls must still be specified under every other node (compute and storage), but the salt command can be executed for the individual node:
salt 'controller.queens' state.highstate

Problem 5: A user wants to install an individual package on an individual minion.

Resolution:
1. Comment out all other states in the openstack_cluster_resources.sls file for that particular minion.
2. Then execute the salt command from the salt-master for the specific node. For example, if only the apache module is needed on the controller node:
2.1. Comment out the other .sls entries in the openstack_cluster_resources.sls file.
2.2. Execute the following command from the salt-master:
salt 'controller.queens' state.highstate

Problem 6: A user wants to install all services on all nodes.

Resolution:
1. Remove all commented lines (i.e. lines beginning with a # sign), if any, from the openstack_cluster_resources.sls file.
2. Execute the following command from the salt-master:
salt '*.queens' state.highstate
3. As the command executes, the progress of the installation can be cross-validated on each salt-minion node by watching the salt-minion daemon.
4. A summary of all the states is displayed on the salt-master after the installation has completed on each minion.

Version Information

This project is based on and tested with the following versions:

  1. SaltStack Master and Minion version 2017.7.5 (Nitrogen)
  2. OpenStack Queens Release
  3. Ubuntu 16.04 LTS x86-64 operating system

Current Limitations

  1. For OpenStack Neutron networking, this project supports only the installation and configuration of [Networking Option 2: Self-service networks](http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-controller-install-option2.html).
  2. The Cinder backup service is not supported by this OpenStack deployment automation project.