
Nuage OSP Director 13 Integration


Note: This document is deprecated and will be removed starting with the 5.4.2/6.0 release. Please use the Documentation folder under https://github.com/nuagenetworks/nuage-ospdirector/tree/OSPD13 instead.

Introduction

This document outlines the architecture of the project integrating Nuage with OSP Director 13.

Nuage OSP Director 13 Integration

This document will focus on providing the information required to add and configure Nuage.

The OSP Director is an image-based installer. It uses a single image (named overcloud-full.qcow2) that is deployed on the Controller and Compute machines belonging to the overcloud OpenStack cluster. The deployment also pulls the required container images and creates configuration files and database entries for the OpenStack services. Typically, no new software is installed during the deployment phase.

The OSP Director architecture allows partners to create new templates that expose parameters specific to their modules; these templates can then be passed to the openstack overcloud deploy command during the deployment. Additionally, changes to the puppet manifests are required to handle the new values in the Hiera database and act on them to deploy the partner software. ML2 options will be added to the existing Nuage templates.

The OSP Director also allows partners to build their own custom container images and use their images instead of default container images in the overcloud deployment.

Integration of Nuage VSP with OSP Director

For OSP Director 13, multiple changes are required: to the undercloud codebase, to the overcloud-full.qcow2 image, and to the container images. These changes are described below, separated according to whether they are applied by script, by dockerfiles, or manually based on the diff provided.

In the near future, this will be simplified, with container images including Nuage components delivered directly from the Red Hat container registry.

Undercloud codebase changes

Heat and Neutron require some code to be present in tripleo-heat-templates. These changes are not present in OSP Director 13 yet. The patch file adds the missing code to the tripleo-heat-templates.

Overcloud image changes

Nuage VRS and metadata agent configuration files need to be created and populated with the required parameters. This can be achieved by a new puppet module, nuage-puppet-modules, that needs to be included on the overcloud image along with other required Nuage packages.

Since the typical deployment scenario of OSP Director assumes that all the packages are installed on the overcloud-full image, we need to patch the overcloud-full image with the following RPMs:

  • nuage-metadata-agent
  • selinux-policy-nuage
  • nuage-puppet-modules-5.0 (link)
  • nuage-openvswitch (Nuage VRS)
  • nuage-bgp
  • nuage-openstack-neutronclient

Also, we need to uninstall OVS and install VRS:

  • Uninstall OVS
  • Install VRS (nuage-openvswitch)
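For reference, a minimal sketch of the equivalent manual steps; the patching script described later performs this against the overcloud image, so the exact package handling here is an assumption:

# Remove the stock Open vSwitch without removing its dependents
rpm -e --nodeps openvswitch
# Install the Nuage VRS from the configured Nuage repo
yum install -y nuage-openvswitch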

Changes to Docker Images

Since Nuage docker images are not present in the Red Hat registry, the required dockerfiles and instructions are provided to build the Nuage docker images. The following RPMs are required to build them:

  • nuage-openstack-neutron
  • nuage-openstack-neutronclient
  • nuage-openstack-horizon
  • nuage-openstack-heat

Changes to openstack-tripleo-heat-templates

Some of the generic neutron.conf and nova.conf parameters need to be configured in the heat templates, and the metadata agent needs to be configured as well. The values of these parameters depend on the configuration of Nuage VSP. The "Sample Templates" section contains probable values for these parameters in the files neutron-nuage-config.yaml and nova-nuage-config.yaml.

Deployment steps

Deploy undercloud

The undercloud deployment should proceed as per the OSP Director documentation. Complete all of those steps before running the openstack overcloud deploy command.

Note: If you plan on using a remote registry for the overcloud container images, you have to NAT the br-ctlplane CIDR on the undercloud as shown below.

If the undercloud IP is 192.168.24.1, you can directly run the command below to add the iptables rule that NATs the overcloud IPs:

sudo iptables -A POSTROUTING -t nat -s 192.168.24.1/24 -j MASQUERADE
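To confirm the rule is in place, list the NAT table (standard iptables usage):

sudo iptables -t nat -L POSTROUTING -n -v | grep MASQUERADE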

Updating Undercloud codebase

The upstream changes that are not yet in OSP Director 13 need to be applied to the undercloud codebase. These changes are provided in the diff file at link. It contains multiple diff_OSPD13 files with the differences that need to be applied. The steps for applying this patch are provided in the README here.

Modify overcloud-full.qcow2 to include Nuage components

The installation of packages and the removal of OVS can be done via this script.

Since the file required for configuring vlan_transparent is not in the OSP Director codebase, the change can be added to the image using the same script. Copy the directory containing the 13_files and the script at link.

The customer will receive all the RPMs and the script to patch the overcloud-full image with them. The user needs to create a local yum repo that is accessible from the machine the script will run on and add all the RPMs to that repo; a sketch of this step follows below. The steps for modifying overcloud-full.qcow2 are provided in the README here.
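As a minimal sketch of the repo creation step (the paths and repo name here are assumptions), using the standard createrepo tool:

# Install the repo tooling on the machine that hosts the RPMs
sudo yum install -y createrepo
# Collect the Nuage RPMs in one directory and index it
mkdir -p /home/stack/nuage-rpms
cp *.rpm /home/stack/nuage-rpms/
createrepo /home/stack/nuage-rpms
# Point a .repo file at it so the script can install from the repo
cat <<'EOF' | sudo tee /etc/yum.repos.d/nuage-local.repo
[nuage-local]
name=Local Nuage RPMs
baseurl=file:///home/stack/nuage-rpms
enabled=1
gpgcheck=0
EOF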

Building Docker Images

On the undercloud, follow the Red Hat documentation to configure the overcloud to use a registry. Using the registry, generate the /home/stack/templates/overcloud_images.yaml environment file, which contains the container image locations. From this environment file, find the tag that the heat services, horizon, and neutron-server container images point to:

DockerHeatApiCfnImage: registry.access.redhat.com/rhosp13/openstack-heat-api-cfn:<tag>
example:
DockerHeatApiCfnImage: registry.access.redhat.com/rhosp13/openstack-heat-api-cfn:13.0-60.1543534138

DockerHeatApiImage: registry.access.redhat.com/rhosp13/openstack-heat-api:<tag>
example:
DockerHeatApiImage: registry.access.redhat.com/rhosp13/openstack-heat-api:13.0-61.1543534111

DockerHeatEngineImage: registry.access.redhat.com/rhosp13/openstack-heat-engine:<tag>
example:
DockerHeatEngineImage: registry.access.redhat.com/rhosp13/openstack-heat-engine:13.0-59.1543534131

DockerHorizonImage: registry.access.redhat.com/rhosp13/openstack-horizon:<tag>
example:
DockerHorizonImage: registry.access.redhat.com/rhosp13/openstack-horizon:13.0-60.1543534103

DockerNeutronConfigImage: registry.access.redhat.com/rhosp13/openstack-neutron-server:<tag>
example:
DockerNeutronConfigImage: registry.access.redhat.com/rhosp13/openstack-neutron-server:13.0-64.1543534105
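One way to pull these values out of the generated environment file is a simple grep (file path as generated above):

grep -E 'DockerHeat|DockerHorizon|DockerNeutron' /home/stack/templates/overcloud_images.yaml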

On the undercloud, create a directory named Nuage-OSPD-Dockerfiles and copy all dockerfiles and the nuage.repo file from this link into it.

For all dockerfiles in Nuage-OSPD-Dockerfiles, change the tag of the docker base image to point to the same tag as in /home/stack/templates/overcloud_images.yaml. An example is shown below:

FROM <docker-image-name>:<tag>
example:
FROM registry.access.redhat.com/rhosp13/openstack-heat-api-cfn:13.0-60.1543534138
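When there are many dockerfiles, the FROM line can be updated in place with sed; the filename and tag here are illustrative:

sed -i 's|^FROM .*|FROM registry.access.redhat.com/rhosp13/openstack-heat-api-cfn:13.0-60.1543534138|' nuage-heat-api-cfn-dockerfile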

Also, for all dockerfiles, set the LABEL name using the undercloud IP of your setup:

LABEL name="<undercloud-ip>:8787/rhosp13/openstack-nuage-neutron-server"
example:
LABEL name="192.168.24.1:8787/rhosp13/openstack-nuage-neutron-server"

Point the baseurl in nuage.repo to the URL of the Nuage repo that hosts all the required Nuage packages:

baseurl = <baseurl>
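A minimal nuage.repo might look like the following; only baseurl is prescribed here, the remaining fields are standard yum repo settings and the host/path are placeholders:

[nuage]
name=Nuage Packages
baseurl=http://<repo-host>/<path-to-nuage-rpms>
enabled=1
gpgcheck=0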

Run the commands below to build the Nuage docker images. By default, the local registry on the undercloud listens on port 8787; the examples assume an undercloud IP of 192.168.24.1.

#For Nuage Heat Engine
docker build -t <undercloud-ip>:8787/rhosp13/openstack-nuage-heat-engine:<tag> -f nuage-heat-engine-dockerfile .

Example:
docker build -t 192.168.24.1:8787/rhosp13/openstack-nuage-heat-engine:<tag> -f nuage-heat-engine-dockerfile .

#For Nuage Heat API and Heat API Cron because both these services point to the same docker image
docker build -t <undercloud-ip>:8787/rhosp13/openstack-nuage-heat-api:<tag> -f nuage-heat-api-dockerfile .

Example:
docker build -t 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api:<tag> -f nuage-heat-api-dockerfile .

#For Nuage Heat API-CFN
docker build -t <undercloud-ip>:8787/rhosp13/openstack-nuage-heat-api-cfn:<tag> -f nuage-heat-api-cfn-dockerfile .

Example:
docker build -t 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api-cfn:<tag> -f nuage-heat-api-cfn-dockerfile .

#For Nuage Horizon
docker build -t <undercloud-ip>:8787/rhosp13/openstack-nuage-horizon:<tag> -f nuage-horizon-dockerfile .

Example:
docker build -t 192.168.24.1:8787/rhosp13/openstack-nuage-horizon:<tag> -f nuage-horizon-dockerfile .

#For Nuage Neutron
docker build -t <undercloud-ip>:8787/rhosp13/openstack-nuage-neutron-server:<tag> -f nuage-neutron-server-dockerfile .

Example:
docker build -t 192.168.24.1:8787/rhosp13/openstack-nuage-neutron-server:<tag> -f nuage-neutron-server-dockerfile .
  1. During the deployment, configure the overcloud to use the Nuage container images instead of the Red Hat registry images by pushing the built Nuage container images to the local registry (a verification sketch follows after step 2):
docker push 192.168.24.1:8787/rhosp13/openstack-nuage-heat-engine:<tag>
docker push 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api:<tag>
docker push 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api-cfn:<tag>
docker push 192.168.24.1:8787/rhosp13/openstack-nuage-horizon:<tag>
docker push 192.168.24.1:8787/rhosp13/openstack-nuage-neutron-server:<tag>
  2. Change the file /home/stack/templates/overcloud_images.yaml to point the heat, horizon, and neutron docker images, and their config images, to the respective Nuage docker images built in the previous step:
DockerHeatApiCfnConfigImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api-cfn:<tag>
DockerHeatApiCfnImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api-cfn:<tag>
DockerHeatApiConfigImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api:<tag>
DockerHeatApiImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api:<tag>
DockerHeatConfigImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-api:<tag>
DockerHeatEngineImage: 192.168.24.1:8787/rhosp13/openstack-nuage-heat-engine:<tag>
DockerHorizonConfigImage: 192.168.24.1:8787/rhosp13/openstack-nuage-horizon:<tag>
DockerHorizonImage: 192.168.24.1:8787/rhosp13/openstack-nuage-horizon:<tag>
DockerNeutronApiImage: 192.168.24.1:8787/rhosp13/openstack-nuage-neutron-server:<tag>
DockerNeutronConfigImage: 192.168.24.1:8787/rhosp13/openstack-nuage-neutron-server:<tag>
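After pushing the images in step 1, the contents of the local registry can be verified through the standard docker registry v2 API (IP and port as above):

curl -s http://192.168.24.1:8787/v2/_catalog
curl -s http://192.168.24.1:8787/v2/rhosp13/openstack-nuage-neutron-server/tags/list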

Generate CMS ID

For an OpenStack installation, a CMS (Cloud Management System) ID needs to be generated and configured with the Nuage VSD installation. The assumption is that Nuage VSD and Nuage VSC are already running before the overcloud is deployed, and that Nuage VSD is licensed.

Steps to generate it:

  • Copy the folder to a machine that can reach VSD (typically the undercloud node)
  • From the folder, run the following command to generate the CMS ID:
python configure_vsd_cms_id.py --server <vsd-ip-address>:<vsd-port> --serverauth <vsd-username>:<vsd-password> --organization <vsd-organization> --auth_resource /me --serverssl True --base_uri "/nuage/api/<vsp-version>"
Example command:
python configure_vsd_cms_id.py --server 0.0.0.0:0 --serverauth username:password --organization organization --auth_resource /me --serverssl True --base_uri "/nuage/api/v5_0"
  • The CMS ID is displayed on the terminal, and a copy is stored in the file "cms_id.txt" in the same folder.
  • Add the generated cms_id to the neutron-nuage-config.yaml template file as the value of the NeutronNuageCMSId parameter.

Optional Features

ML2 and SRIOV

This feature allows an OpenStack installation to support Single Root I/O Virtualization (SR-IOV)-attached VMs (https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking) with VSP-managed VMs on the same KVM hypervisor cluster. It provides a Nuage ML2 mechanism driver that coexists with the sriovnicswitch mechanism driver.

Neutron ports attached through SR-IOV are configured by the sriovnicswitch mechanism driver. Neutron ports attached to Nuage VSD-managed networks are configured by the Nuage ML2 mechanism driver.

To enable it, the following steps need to be executed:

  1. For "Updating Undercloud codebase" changes, nothing additional is required
  2. For "Modification of overcloud-full image", the script will take care of updating the image. Nothing additional is required
  3. Create a new roles_sriov.yaml file to deploy SR-IOV compute nodes. The command used to create this file is given below
openstack overcloud roles generate Controller Compute ComputeSriov -o roles_sriov.yaml
  4. Discover the tag for the neutron-sriov-agent image using the command below
openstack overcloud container image tag discover --image registry.access.redhat.com/rhosp13/openstack-neutron-sriov-agent:latest --tag-from-label {version}-{release}
  5. Add the SR-IOV image name to overcloud_images.yaml, replacing <tag> with the value obtained in the previous step
  DockerNeutronSriovImage: registry.access.redhat.com/rhosp13/openstack-neutron-sriov-agent:<tag>
  6. Tag the overcloud nodes by following the Red Hat documentation, so the deployment picks the right nodes for the controller and compute roles.

  7. Create a flavor and profile for computesriov

openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 computesriov

openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="computesriov" computesriov
  8. Assign the SR-IOV nodes the computesriov profile
openstack baremetal node set --property capabilities='profile:computesriov,boot_option:local' <node-uuid>
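To confirm the assignment before deploying, the node-to-flavor matching can be checked with the standard tripleoclient command:

openstack overcloud profiles list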
  1. For "Deploy Overcloud", few additional parameters and template files are required.
  • The heat template neutron-nuage-config.yaml needs to include the following additional/modified parameter values:
  NeutronServicePlugins: 'NuagePortAttributes,NuageAPI,NuageL3,trunk,NuageNetTopology'
  NeutronTypeDrivers: "vlan,vxlan,flat"
  NeutronMechanismDrivers: ['nuage','nuage_sriov','sriovnicswitch']
  NeutronFlatNetworks: '*'
  NeutronTunnelIdRanges: "1:1000"
  NeutronNetworkVLANRanges: "physnet1:2:100,physnet2:2:100"
  NeutronVniRanges: "1001:2000"
  • The heat template nova-nuage-config.yaml needs to include the following additional parameter value:
NovaPCIPassthrough: "[{\"devname\":\"eno2\",\"physical_network\":\"physnet1\"},{\"devname\":\"eno3\",\"physical_network\":\"physnet2\"}]"
  • Lastly, the overcloud deployment command needs to include "neutron-sriov.yaml" file. A sample is provided in the "Sample Templates" section.

Linux bonding with VLANs

Edit the network-environment.j2.yaml file in /usr/share/openstack-tripleo-heat-templates/environments/. A sample is provided in the "Sample Templates" section.

Nuage uses the default linux bridge and linux bonds. For this to take effect, the following network file is changed:

/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/role.role.j2.yaml

The changes that are required are:

  1. Remove ovs_bridge and move its members one level up
  2. Change ovs_bond to linux_bond with the right bonding_options (for example, bonding_options: 'mode=active-backup')
  3. Change the interface names under network_config and linux_bond to reflect the interface names of the baremetal machines being used
  4. Add the "device" option to the vlans

Example for /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/role.role.j2.yaml:

========
Original
========
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: nic1
                use_dhcp: false
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop:
                    get_param: EC2MetadataIp
{%- if role.default_route_networks is not defined or 'ControlPlane' in role.default_route_networks %}
                - default: true
                  next_hop:
                    get_param: ControlPlaneDefaultRoute
{%- endif %}
{%- if role.name != 'ComputeOvsDpdk' %}
              - type: ovs_bridge
                name: bridge_name
                dns_servers:
                  get_param: DnsServers
                members:
                - type: ovs_bond
                  name: bond1
                  ovs_options:
                    get_param: BondInterfaceOvsOptions
                  members:
                  - type: interface
                    name: nic2
                    primary: true
                  - type: interface
                    name: nic3
{%- for network in networks if network.enabled|default(true) and network.name in role.networks %}
                - type: vlan
                  vlan_id:
                    get_param: {{network.name}}NetworkVlanID
                  addresses:
                  - ip_netmask:
                      get_param: {{network.name}}IpSubnet
{%- if network.name in role.default_route_networks %}
                  routes:
                  - default: true
                    next_hop:
                      get_param: {{network.name}}InterfaceDefaultRoute
{%- endif %}
{%- endfor %}

==================================
Modified (changes are **marked**)
==================================
resources:
  OsNetConfigImpl:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template:
            get_file: ../../scripts/run-os-net-config.sh
          params:
            $network_config:
              network_config:
              - type: interface
                name: **eno1**
                use_dhcp: false
                addresses:
                - ip_netmask:
                    list_join:
                    - /
                    - - get_param: ControlPlaneIp
                      - get_param: ControlPlaneSubnetCidr
                routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop:
                    get_param: EC2MetadataIp
{%- if role.default_route_networks is not defined or 'ControlPlane' in role.default_route_networks %}
                - default: true
                  next_hop:
                    get_param: ControlPlaneDefaultRoute
{%- endif %}
{%- if role.name != 'ComputeOvsDpdk' %}
              - type: **linux_bond**
                name: bond1
                dns_servers:
                  get_param: DnsServers
                **bonding_options: 'mode=active-backup'**
                members:
                - type: interface
                  name: **eno2**
                  primary: true
                - type: interface
                  name: **eno3**
{%- for network in networks if network.enabled|default(true) and network.name in role.networks %}
              - type: vlan
                **device: bond1**
                vlan_id:
                  get_param: {{network.name}}NetworkVlanID
                addresses:
                - ip_netmask:
                    get_param: {{network.name}}IpSubnet
{%- if network.name in role.default_route_networks %}
                routes:
                - default: true
                  next_hop:
                    get_param: {{network.name}}InterfaceDefaultRoute
{%- endif %}
{%- endfor %}

From OSPD 9 onwards, a verification step was added in which the overcloud nodes ping the gateway to verify connectivity on the external network VLAN; if this verification fails, the deployment fails. This applies to deployments with linux bonding and network isolation. For this verification step, the ExternalInterfaceDefaultRoute IP configured in the template network-environment.yaml must be reachable from the overcloud controller node(s) on the External API VLAN. This gateway can reside on the Undercloud as well; it needs to be tagged with the same VLAN ID as the External API network of the controller.
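If the gateway is hosted on the Undercloud, a tagged VLAN interface can serve that role. A minimal sketch, assuming eno1 carries the External API network on VLAN 10 and 10.0.0.1 is the ExternalInterfaceDefaultRoute (all three values are illustrative assumptions):

# Create a VLAN interface tagged with the External API VLAN ID
sudo ip link add link eno1 name vlan10 type vlan id 10
# Assign the gateway IP configured as ExternalInterfaceDefaultRoute
sudo ip addr add 10.0.0.1/24 dev vlan10
sudo ip link set vlan10 up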

From OSPD 13 onwards, /usr/share/openstack-tripleo-heat-templates/environments/network-environment.j2.yaml gets the Network information for all the networks from /usr/share/openstack-tripleo-heat-templates/network_data.yaml file.

Note: The ExternalInterfaceDefaultRoute IP should have outside reachability, because the overcloud controller uses it as the default route to reach the Red Hat registry and pull the overcloud container images.

Overcloud Deployment commands

For OSP Director, tuskar deployment commands are recommended, but as part of the Nuage integration effort it was found that heat templates provide more options and customization for the overcloud deployment. The templates can be passed via "openstack overcloud deploy" command-line options and can create or update an overcloud deployment.

Non-HA

For a non-HA overcloud deployment, the following command was used for deploying with Nuage:

openstack overcloud deploy --templates -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

For a virtual deployment, add the --libvirt-type parameter:

openstack overcloud deploy --templates --libvirt-type qemu -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

HA

For an HA overcloud deployment, the following command was used for deploying with Nuage:

openstack overcloud deploy --templates -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

For a virtual deployment, add the --libvirt-type parameter:

openstack overcloud deploy --templates --libvirt-type qemu -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

Linux bonding HA with Nuage

For a linux bonding deployment with VLANs, the following command was used for deploying with Nuage:

openstack overcloud deploy --templates -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /home/stack/templates/node-info.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

Including SR-IOV

To include SR-IOV in the overcloud deployment, the following command can be used with the neutron-sriov environment file:

openstack overcloud deploy --templates -r /home/stack/roles_sriov.yaml -e /home/stack/templates/overcloud_images.yaml -e /home/stack/templates/docker-insecure-registry.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/neutron-sriov.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --ntp-server ntp-server

where:
  • neutron-nuage-config.yaml: Controller-specific parameter values
  • nova-nuage-config.yaml: Compute-specific parameter values
  • docker-insecure-registry.yaml: All local registry IPs and ports for the DockerInsecureRegistryAddress parameter
  • node-info.yaml: Specifies the count and flavor for the Controller and Compute nodes
  • network-environment.yaml: Configures additional network environment variables
  • network-isolation.yaml: Enables creation of networks for isolated overcloud traffic
  • net-bond-with-vlans.yaml: Configures an IP address and a pair of bonded NICs on each network
  • roles_sriov.yaml: Enables services required for the Compute Sriov role
  • neutron-sriov.yaml: Neutron SR-IOV specific parameter values

Sample Templates

network-environment.j2.yaml

#This file is an example of an environment file for defining the isolated
#networks and related parameters.
resource_registry:
  # Network Interface templates to use (these files must exist). You can
  # override these by including one of the net-*.yaml environment files,
  # such as net-bond-with-vlans.yaml, or modifying the list here.
{%- for role in roles %}
  # Port assignments for the {{role.name}}
  OS::TripleO::{{role.name}}::Net::SoftwareConfig:
    ../network/config/bond-with-vlans/{{role.deprecated_nic_config_name|default(role.name.lower() ~ ".yaml")}}
{%- endfor %}

parameter_defaults:
  # This section is where deployment-specific configuration is done
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.24.1
  EC2MetadataIp: 192.168.24.1  # Generally the IP of the Undercloud
  # Customize the IP subnets to match the local environment
{%- for network in networks if network.enabled|default(true) %}
{%- if network.ipv6|default(false) %}
  {{network.name}}NetCidr: '{{network.ipv6_subnet}}'
{%- else %}
  {{network.name}}NetCidr: '{{network.ip_subnet}}'
{%- endif %}
{%- endfor %}
  # Customize the VLAN IDs to match the local environment
{%- for network in networks if network.enabled|default(true) %}
{%- if network.vlan is defined %}
  {{network.name}}NetworkVlanID: {{network.vlan}}
{%- endif %}
{%- endfor %}
{%- for network in networks if network.enabled|default(true) %}
{%- if network.name == 'External' %}
  # Leave room if the external network is also used for floating IPs
{%- endif %}
{%- if network.ipv6|default(false) %}
  {{network.name}}AllocationPools: {{network.ipv6_allocation_pools}}
{%- else %}
  {{network.name}}AllocationPools: {{network.allocation_pools}}
{%- endif %}
{%- endfor %}
  # Gateway routers for routable networks
{%- for network in networks if network.enabled|default(true) %}
{%- if network.ipv6|default(false) and network.gateway_ipv6|default(false) %}
  {{network.name}}InterfaceDefaultRoute: '{{network.gateway_ipv6}}'
{%- elif network.gateway_ip|default(false) %}
  {{network.name}}InterfaceDefaultRoute: '{{network.gateway_ip}}'
{%- endif %}
{%- endfor %}
{#- FIXME: These global parameters should be defined in a YAML file, e.g. network_data.yaml. #}
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["135.1.1.111","135.227.146.166"]
  # List of Neutron network types for tenant networks (will be used in order)
  NeutronNetworkType: 'vxlan,vlan'
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: 'vxlan'
  # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000':
  NeutronNetworkVLANRanges: 'datacentre:1:1000'
  # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100"
  # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS active/backup.
  BondInterfaceOvsOptions: "bond_mode=active-backup"

neutron-nuage-config.yaml

# A Heat environment file which can be used to enable a
# a Neutron Nuage backend on the controller, configured via puppet
resource_registry:
  OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None
  OS::TripleO::Services::NeutronMetadataAgent: OS::Heat::None
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  # Override the NeutronCorePlugin to use Nuage
  OS::TripleO::Docker::NeutronMl2PluginBase: OS::TripleO::Services::NeutronCorePluginML2Nuage

parameter_defaults:
  NeutronNuageNetPartitionName: 'Nuage_Partition_13'
  NeutronNuageVSDIp: '192.168.24.118:8443'
  NeutronNuageVSDUsername: 'csproot'
  NeutronNuageVSDPassword: 'csproot'
  NeutronNuageVSDOrganization: 'csp'
  NeutronNuageBaseURIVersion: 'v5_0'
  NeutronNuageCMSId: 'a91a28b8-28de-436b-a665-6d08a9346464'
  UseForwardedFor: true
  NeutronPluginMl2PuppetTags: 'neutron_plugin_ml2,neutron_plugin_nuage'
  NeutronServicePlugins: 'NuagePortAttributes,NuageAPI,NuageL3'
  NeutronDBSyncExtraParams: '--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/nuage/plugin.ini'
  NeutronTypeDrivers: 'vxlan'
  NeutronNetworkType: 'vxlan'
  NeutronMechanismDrivers: 'nuage'
  NeutronPluginExtensions: 'nuage_subnet,nuage_port,port_security'
  NeutronFlatNetworks: '*'
  NeutronTunnelIdRanges: ''
  NeutronNetworkVLANRanges: ''
  NeutronVniRanges: '1001:2000'
  NovaOVSBridge: 'alubr0'
  NeutronMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
  InstanceNameTemplate: 'inst-%08x'
  HeatEnginePluginDirs: ['/usr/lib/python2.7/site-packages/nuage-heat/']
  HorizonCustomizationModule: 'nuage_horizon.customization'
  HorizonVhostExtraParams:
    add_listen: true
    priority: 10
    access_log_format: '%a %l %u %t \"%r\" %>s %b \"%%{}{Referer}i\" \"%%{}{User-Agent}i\"'
    aliases: [{'alias': '%{root_url}/static/nuage', 'path': '/usr/lib/python2.7/site-packages/nuage_horizon/static'}, {'alias': '%{root_url}/static', 'path': '/usr/share/openstack-dashboard/static'}]
    directories: [{'path': '/usr/lib/python2.7/site-packages/nuage_horizon', 'options': ['FollowSymLinks'], 'allow_override': ['None'], 'require': 'all granted'}]

nova-nuage-config.yaml for Virtual Setup

# A Heat environment file which can be used to enable
# Nuage backend on the compute, configured via puppet
resource_registry:
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::TripleO::Services::ComputeNeutronCorePluginNuage

parameter_defaults:
  NuageActiveController: '192.168.24.119'
  NuageStandbyController: '0.0.0.0'
  NovaPCIPassthrough: ""
  NovaOVSBridge: 'alubr0'
  NovaComputeLibvirtType: 'qemu'
  NovaIPv6: True
  NuageMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
  NuageNovaApiEndpoint: 'internalURL'

nova-nuage-config.yaml for KVM Setup

# A Heat environment file which can be used to enable
# Nuage backend on the compute, configured via puppet
resource_registry:
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::TripleO::Services::ComputeNeutronCorePluginNuage

parameter_defaults:
  NuageActiveController: '192.168.24.119'
  NuageStandbyController: '0.0.0.0'
  NovaPCIPassthrough: ""
  NovaOVSBridge: 'alubr0'
  NovaComputeLibvirtType: 'kvm'
  NovaIPv6: True
  NuageMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
  NuageNovaApiEndpoint: 'internalURL'

neutron-sriov.yaml

## A Heat environment that can be used to deploy SR-IOV
resource_registry:
  OS::TripleO::Services::NeutronSriovAgent: /usr/share/openstack-tripleo-heat-templates/docker/services/neutron-sriov-agent.yaml
  OS::TripleO::Services::NeutronSriovHostConfig: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-sriov-host-config.yaml

parameter_defaults:
  # Add PciPassthroughFilter to the scheduler default filters
  NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
  NovaSchedulerAvailableFilters: ['nova.scheduler.filters.all_filters']

  NeutronPhysicalDevMappings: "physnet1:eno2,physnet2:eno3"

  # Number of VFs that needs to be configured for a physical interface
  NeutronSriovNumVFs: "eno2:5,eno3:7"

docker-insecure-registry.yaml for local registry

parameter_defaults:
  DockerInsecureRegistryAddress: ['192.168.24.1:8787']

node-info.yaml for Non-HA

# Compute and Controller count can be set here

parameter_defaults:
  ControllerCount: 1
  ComputeCount: 1

node-info.yaml for HA and Linux-bond HA

# Compute and Controller count can be set here

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 1

node-info.yaml for SR-IOV

parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  # OvercloudComputeSriovFlavor is the flavor to use for Compute Sriov nodes
  OvercloudComputeSriovFlavor: computesriov
  ControllerCount: 1 
  ComputeCount: 1
  # ComputeSriovCount is number of Compute Sriov nodes
  ComputeSriovCount: 1

Parameter details

This section describes the details of the parameters specified in the template files, as well as the configuration files where these parameters are set and used. See the OpenStack Queens user guide install section for more details.

Parameters on the Neutron Controller

The following parameters are mapped to values in /etc/neutron/plugins/nuage/plugin.ini file on the neutron controller

NeutronNuageNetPartitionName
Maps to default_net_partition_name parameter
NeutronNuageVSDIp
Maps to server parameter
NeutronNuageVSDUsername
NeutronNuageVSDPassword
Maps to serverauth as username:password
NeutronNuageVSDOrganization
Maps to organization parameter
NeutronNuageBaseURIVersion
Maps to the version in base_uri as /nuage/api/<version>
NeutronNuageCMSId
Maps to the cms_id parameter
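With the sample values from neutron-nuage-config.yaml above, the resulting plugin.ini would look roughly like this (the section name follows the Nuage plugin's convention; treat the exact layout as illustrative):

[RESTPROXY]
server = 192.168.24.118:8443
serverauth = csproot:csproot
organization = csp
base_uri = /nuage/api/v5_0
default_net_partition_name = Nuage_Partition_13
cms_id = a91a28b8-28de-436b-a665-6d08a9346464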

The following parameters are mapped to values in /etc/neutron/neutron.conf file on the neutron controller

NeutronCorePlugin
Maps to core_plugin parameter in [DEFAULT] section
NeutronServicePlugins
Maps to service_plugins parameter in [DEFAULT] section

The following parameters are mapped to values in /etc/nova/nova.conf file on the neutron controller

UseForwardedFor
Maps to use_forwarded_for parameter in [DEFAULT] section
NeutronMetadataProxySharedSecret
Maps to metadata_proxy_shared_secret parameter in [neutron] section
InstanceNameTemplate
Maps to instance_name_template parameter in [DEFAULT] section

The following parameters are mapped to values in /etc/neutron/plugins/ml2/ml2_conf.ini file on the neutron controller

NeutronNetworkType
Maps to tenant_network_types in [ml2] section
NeutronPluginExtensions
Maps to extension_drivers in [ml2] section
NeutronTypeDrivers
Maps to type_drivers in [ml2] section
NeutronMechanismDrivers
Maps to mechanism_drivers in [ml2] section
NeutronFlatNetworks
Maps to flat_networks parameter in [ml2_type_flat] section
NeutronTunnelIdRanges
Maps to tunnel_id_ranges in [ml2_type_gre] section
NeutronNetworkVLANRanges
Maps to network_vlan_ranges in [ml2_type_vlan] section
NeutronVniRanges
Maps to vni_ranges in [ml2_type_vxlan] section

The following parameters are used for setting/disabling services in undercloud's puppet code

OS::TripleO::Services::NeutronDHCPAgent
OS::TripleO::Services::NeutronL3Agent
OS::TripleO::Services::NeutronMetadataAgent
OS::TripleO::Services::NeutronOVSAgent
These parameters are used to disable the default OpenStack services, as they are not used in a Nuage-integrated OpenStack cluster

The following parameter is used for setting values on the Controller using puppet code

NeutronNuageDBSyncExtraParams
String of extra command line parameters to append to the neutron-db-manage upgrade head command

Parameters on the Nova Compute

The following parameters are mapped to values in /etc/default/openvswitch file on the nova compute

NuageActiveController
Maps to ACTIVE_CONTROLLER parameter
NuageStandbyController
Maps to STANDBY_CONTROLLER parameter

The following parameters are mapped to values in /etc/nova/nova.conf file on the nova compute

NovaOVSBridge
Maps to ovs_bridge parameter in [neutron] section
NovaComputeLibvirtType
Maps to virt_type parameter in [libvirt] section
NovaIPv6
Maps to use_ipv6 in [DEFAULT] section

The following parameters are mapped to values in /etc/default/nuage-metadata-agent file on the nova compute

NuageMetadataProxySharedSecret
Maps to the METADATA_PROXY_SHARED_SECRET parameter. This needs to match the setting on the neutron controller above
NuageNovaApiEndpoint
Maps to NOVA_API_ENDPOINT_TYPE parameter. This needs to correspond to the setting for the Nova API endpoint as configured by OSP Director

Parameter required for docker

DockerInsecureRegistryAddress
The IP address and port of an insecure docker registry that will be configured in /etc/sysconfig/docker.
The value can be multiple addresses separated by commas.
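On the nodes this typically ends up as an --insecure-registry flag in /etc/sysconfig/docker, roughly:

INSECURE_REGISTRY='--insecure-registry 192.168.24.1:8787'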

Appendix

Issues and Resolution

1. In case one or more of the deployed overcloud nodes is stopped

For the node that was shut down, run:

nova start <node_name>, e.g. nova start overcloud-controller-0

Once the node is up, execute the following on the node

pcs cluster start --all
pcs status

If the services do not come up, then try

pcs resource cleanup

2. If the following error is hit while running the patching script:

virt-customize: error: libguestfs error: could not create appliance through 
libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Run the following command before executing the script

export LIBGUESTFS_BACKEND=direct

3. No valid host found error while registering nodes

openstack baremetal import --json instackenv.json
No valid host was found. Reason: No conductor service registered which supports driver pxe_ipmitool. (HTTP 404)

Workaround: Install the python package python-dracclient and restart the ironic-conductor service. Then try the command again:

sudo yum install -y python-dracclient
exit (go to root user)
systemctl restart openstack-ironic-conductor
su - stack (switch to stack user)
source stackrc (source stackrc)

4. ironic node-list shows an Instance UUID even after deleting the stack

[stack@instack ~]$ heat stack-list
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
+----+------------+--------------+---------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+----+------------+--------------+---------------+--------------+
+----+------------+--------------+---------------+--------------+
[stack@instack ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[stack@instack ~]$ ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| 9e57d620-3ec5-4b5e-96b1-bf56cce43411 | None | 1b7a6e50-3c15-4228-85d4-1f666a200ad5 | power off   | available          | False       |
| 88b73085-1c8e-4b6d-bd0b-b876060e2e81 | None | 31196811-ee42-4df7-b8e2-6c83a716f5d9 | power off   | available          | False       |
| d3ac9b50-bfe4-435b-a6f8-05545cd4a629 | None | 2b962287-6e1f-4f75-8991-46b3fa01e942 | power off   | available          | False       |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+

Workaround: Manually remove instance_uuid reference

ironic node-update <node_uuid> remove instance_uuid
E.g. ironic node-update 9e57d620-3ec5-4b5e-96b1-bf56cce43411 remove instance_uuid