sergevs/ansible-openstack

Ansible Playbook: Openstack

An Ansible playbook to deploy OpenStack components to a cluster.

Overview

Initially, the playbook was written primarily as a way to learn OpenStack deployment.

As the project successfully passed the test environment, the goal changed to filling the gap between installing from source and Docker-based deployment, i.e. to create a deployment on bare-metal hosts from the official package repositories, without containers, and thereby eliminate the additional level of complexity they introduce.

In its current state, the playbook is able to deploy a fully functional OpenStack cluster (see below). It is also possible to deploy everything on a single host (e.g. a VM).

You are welcome to read the playbook and to send feedback, pull requests, and suggestions :)

Basic high-availability features are implemented for the controller/infrastructure services:

  • MariaDB galera cluster
  • RabbitMQ cluster
  • Memcached service
  • VIP cluster address managed by keepalived
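As a rough illustration of how the VIP failover works, here is a generic keepalived sketch (not the playbook's actual template; the interface name, VIP address, and password are hypothetical):

```
vrrp_instance VI_1 {
    state BACKUP            # all nodes start as BACKUP; highest priority wins
    interface eth0          # private network interface (hypothetical)
    virtual_router_id 51
    priority 100            # give each controller a different priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret    # hypothetical shared secret
    }
    virtual_ipaddress {
        10.0.0.10/24        # cluster VIP (hypothetical)
    }
}
```

When the node holding the VIP fails, the remaining node with the highest priority takes over the address, so clients keep talking to the same endpoint.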

So if more than one controller node is configured, seamless failover is expected for:

  • keystone
  • glance
  • cinder controller
  • nova controller
  • neutron controller
  • heat
  • horizon

Description

The playbook is able to set up the core services described in the official OpenStack installation guide.

The configuration is very simple: it is only necessary to place the hostname(s) into the controller and compute groups in the hosts file and carefully fill in the required group_vars parameters.

The playbook keeps its configuration files in the roles directories. If you need to add or change any parameter, edit the corresponding configuration file in the roles/service/[files|templates] directory.

Besides the cluster (or single-host) setup, the playbook also generates a cluster manager configuration file located at workdir/services.xml. Please visit the clinit manager home page and see its manual. The rpm package can be downloaded from clinit-1.0-ssv1.el6.noarch.rpm. Once the clinit package is installed, you will be able to stop, start, and check the status of services on the cluster.

Configuration

Service configuration is performed using the hosts and variables files.

Hosts file:

An empty file is supplied with the playbook. Please examine hosts and supply the appropriate host names. You must not remove any existing group; leave a group empty if you don't need the services it configures. The same hostname can be placed into any hosts group. For instance, if you want to set up everything on one host, just put the same hostname into each hosts group. So far, only the controller and compute groups are well tested and supported.
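For example, a two-controller, two-compute layout might look like the following Ansible INI inventory (hostnames are hypothetical; for a single-host test install, put the same hostname in every group):

```
[controller]
os-ctl1.example.com
os-ctl2.example.com

[compute]
os-cmp1.example.com
os-cmp2.example.com
```

Any other groups present in the supplied hosts file should be kept, even if left empty.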

Variables file:

Please examine group_vars/all and supply appropriate configuration for your network environment and disk storage parameters.
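A sketch of the kind of parameters to expect, in YAML (the variable names below are hypothetical, for illustration only; the real names are documented in group_vars/all itself):

```
# Hypothetical examples; consult group_vars/all for the actual variable names.
cluster_vip: 10.0.0.10          # VIP managed by keepalived
private_interface: eth0         # private (management) network interface
provider_interface: eth1        # provider (public) network interface
cinder_volume_device: /dev/sdb  # spare partition for block storage
```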

Usage

Prepare, verify the repository configuration, and perform a basic check:

ansible-playbook -i hosts -t prepare site.yml
ansible-playbook -i hosts -t check site.yml

Deployment:

ansible-playbook -i hosts site.yml

If you have clinit installed, after deployment you can also run:

clinit -S workdir/services.xml status
clinit -S workdir/services.xml tree

Tags used in playbook:

  • package : install rpm packages
  • config : deploy configuration files, useful if you just want to change the configuration on hosts
  • test : run test actions

Most host groups also have a tag with a similar name.
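Putting the tags together, a typical partial run might look like this (the compute tag is an assumption based on the host-group naming convention above):

```
# Re-deploy only configuration files, then run the test actions:
ansible-playbook -i hosts -t config site.yml
ansible-playbook -i hosts -t test site.yml

# Limit a run to one host group via its tag, e.g. compute:
ansible-playbook -i hosts -t compute site.yml
```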

Requirements

  • Ansible >= 2.2.0.0 is required.
  • OpenStack version: liberty, mitaka, ocata, pike - please use the appropriate branch; pike is currently on the master branch.
  • remote_user = root must be configured for Ansible.

Target host(s) requirements

  • At least 8 GB RAM, 4 CPU cores, and a 10 GB HDD (5 GB for the root FS + 5 GB cinder partition) are required for a minimal single-host test installation.
  • OS version: RedHat/CentOS 7.4 with current updates (updates are important).
  • The repositories required for OpenStack have to be properly configured.
  • Passwordless SSH key authentication must be configured for the root account.
  • SELinux must be disabled.
  • requiretty should be switched off in the /etc/sudoers file.
  • 2 interfaces must be present: one for the private and one for the provider (public) network.
  • At least one spare partition must be available for the cinder (block storage) service.

License

MIT