Nuage OSP Director 10 integration with ML2
This document outlines the architecture of the project integrating ML2 with Nuage as the mechanism driver on OSP Director 10.
It focuses on the information required to add and configure ML2 and Nuage.
The OSP Director is an image based installer. It uses a single image (named overcloud-full.qcow2) that is deployed on the Controller and Compute machines belonging to the overcloud OpenStack cluster. This image contains all the packages that are needed during the deployment. The deployment only creates the configuration files and databases required by the different services and starts the services in the correct order. Typically, there is no new software installation during the deployment phase. The packages/files required by ML2 will be added to this image as well.
The OSP Director architecture allows partners to create new templates to expose parameters specific to their modules; these templates can then be passed to the openstack overcloud deploy command during the deployment.
Additionally, changes to the puppet manifests are required to handle the new values in the Hiera database and act on them to deploy the partner software. ML2 options will be added to the existing Nuage templates.
For OSP Director 10.0, multiple changes are required, both to the overcloud-full.qcow2 image and to the undercloud codebase. The following changes include the code and configuration modifications required to integrate Nuage with OSP Director 10.0.
Starting with OSP Director 10.0, firewall rules are added by default. For releases Newton and later, tripleo-heat-templates were therefore modified to add firewall rules for the VXLAN and metadata agents; the changes to the puppet files that enable the Nuage-specific code are at this review. ID: https://review.openstack.org/#/c/462286/. This change is not in OSP Director 10.0 yet.
Additionally, OpenStack Newton supports composable services. To differentiate between Nuage as the neutron core plugin and Nuage as a mechanism driver when ML2 is the core plugin, the Nuage mechanism driver is added as a separate composable service in tripleo-heat-templates at this review. ID: https://review.openstack.org/#/c/492245/. The Nuage nova compute components were also moved to composable services, removing the extraconfig parameters and including everything in the nova composable services at this review. ID: https://review.openstack.org/#/c/503136/. These changes are not in OSP Director 10.0 either.
The changes required to create and modify the plugin.ini file for Nuage are upstreamed to puppet-neutron at this review, which adds new code in the manifests/plugins directory with the associated tests and custom resources. ID: https://review.openstack.org/#/c/483047/. Further puppet-neutron changes are required to include the plugin file as part of the neutron-server service; these are upstreamed at this review. ID: https://review.openstack.org/#/c/494011/. These changes are not in OSP Director 10.0 yet.
Also, to support the addition of Nuage as a mechanism driver, further changes to puppet-tripleo are required for OpenStack Newton; the new code enabling Nuage as a mechanism driver with ML2 is at this review. ID: https://review.openstack.org/#/c/481751/. This change is not in OSP Director 10.0 yet.
Additionally, Nuage VRS and metadata agent configuration files need to be created and populated with the required parameters. This can be achieved by a new puppet module, nuage-puppet-modules, that needs to be included on the overcloud image along with other required Nuage packages.
Since the typical deployment scenario of OSP Director assumes that all the packages are installed on the overcloud-full image, we need to patch the overcloud-full image with the following RPMs:
- nuage-openstack-neutron
- nuage-openstack-neutronclient
- nuage-nova-extensions
- nuage-metadata-agent
- selinux-policy-nuage
- nuage-puppet-modules-5.0 ( link )
- nuage-openvswitch (Nuage VRS)
Also, OVS needs to be uninstalled from the image and Nuage VRS installed in its place.
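The package swap inside the image can be sketched as virt-customize invocations. These command strings are illustrative assumptions; the provided patching script performs the equivalent steps:

```python
# Sketch: build the virt-customize commands that would swap OVS for Nuage VRS
# inside overcloud-full.qcow2. The exact commands are an assumption; the
# supplied patching script performs the equivalent steps.
image = "overcloud-full.qcow2"

commands = [
    # Remove the stock Open vSwitch package from the image
    ["virt-customize", "-a", image, "--run-command", "yum remove -y openvswitch"],
    # Install Nuage VRS (nuage-openvswitch) from the local repo
    ["virt-customize", "-a", image, "--run-command", "yum install -y nuage-openvswitch"],
]

for cmd in commands:
    print(" ".join(cmd))
```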
Some of the generic neutron.conf and nova.conf parameters need to be configured in the heat templates, and the Nuage VRS and metadata agent need to be configured as well. The values of these parameters depend on the configuration of Nuage VSP. The "Sample Templates" section contains representative values for these parameters in the files neutron-nuage-config.yaml and nova-nuage-config.yaml.
The undercloud deployment should proceed as per the OSP Director documentation. Follow all the steps up to the openstack overcloud deploy command. Before executing that command, the following needs to be taken care of.
If optional features such as the heat extension, horizon extension, or Linux bonding with network isolation are required, refer to the Optional Features section before proceeding.
The upstream changes that are not yet in OSP Director 10.0 need to be applied to the undercloud codebase. These changes are provided as diff files at this link, which contains multiple diff_OSPD10 files with differences across multiple openstack-tripleo-heat-templates versions that need to be applied. The steps for applying the patch are provided in the README here
The installation of packages and un-installation of OVS can be done via this script.
Since the files required to configure plugin.ini, neutron.conf and ml2_conf.ini are not in OSP Director yet, the changes can be added to the image using the same script. For the next release this code will be upstreamed.
The user will receive all the RPMs and the script to patch the overcloud-full image with the RPMs. The user needs to create a local repo that is accessible from the machine the script will run on, and add all the RPMs to that repo, including the nuage-puppet-modules-5.0 RPM ( link ). The machine also needs libguestfs-tools installed. The script needs the directory containing the 10_files at this link. Copy these files and execute the script.
The script syntax is:
python nuage_overcloud_full_patch_w_ml2.py --RhelUserName=<value> --RhelPassword='<value>' --RepoName=Nuage --RepoBaseUrl=http://IP/reponame --RhelPool=<value> --ImageName='<value>' --Version=10
This script takes the following input parameters:
RhelUserName: User name for the RHEL subscription
RhelPassword: Password for the RHEL subscription
RhelPool: RHEL Pool to subscribe to for base packages
RepoName: Name for the local repo hosting the Nuage RPMs
RepoBaseUrl: Base URL for the repo hosting the Nuage RPMs
ImageName: Name of the qcow2 image (overcloud-full.qcow2 for example)
Version: OSP-Director version (10)
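The invocation above can also be assembled programmatically; a minimal sketch with placeholder values (substitute your own subscription details, repo URL, and pool ID):

```python
# Assemble the patching-script command line from the parameters above.
# All values here are placeholders for illustration only.
params = {
    "RhelUserName": "user",
    "RhelPassword": "secret",
    "RepoName": "Nuage",
    "RepoBaseUrl": "http://192.0.2.1/nuage",
    "RhelPool": "1234567890",
    "ImageName": "overcloud-full.qcow2",
    "Version": "10",
}

cmd = ["python", "nuage_overcloud_full_patch_w_ml2.py"]
cmd += ["--{}={}".format(key, value) for key, value in params.items()]
print(" ".join(cmd))
```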
For an OpenStack installation, a CMS (Cloud Management System) ID needs to be generated and configured on the Nuage VSD installation. The assumption is that Nuage VSD and Nuage VSC are already running before the overcloud is deployed.
Steps to generate it:
- Copy the folder to a machine that can reach VSD (typically the undercloud node)
- From the folder run the following command to generate CMS_ID
Usage:
python configure_vsd_cms_id.py --server <vsd-ip-address>:<vsd-port> --serverauth <vsd-username>:<vsd-password> --organization <vsd-organization> --auth_resource /me --serverssl True --base_uri "/nuage/api/<vsp-version>"
Variables:
vsd-ip-address and vsd-port: the VSD load-balancer VIP or the IP address of a single VSD node, and the VSD port to send the request to
vsd-username & vsd-password: Username & password (typically belongs to CSP Root group and CSP CMS group)
vsd-organization: VSD organization (typically CSP corresponding to the CMS group)
vsp-version: Supported values are "v4_0" or "v5_0", depending on the Nuage VSP version used
Example command:
vsd-ip-address:vsd-port = 192.0.2.100:8443
vsd-username:vsd-password = ospdnuage:nuage123
vsd-organization = csp
vsp-version = v5_0
python configure_vsd_cms_id.py --server 192.0.2.100:8443 --serverauth ospdnuage:nuage123 --organization csp --auth_resource /me --serverssl True --base_uri "/nuage/api/v5_0"
Example output
CMS ID generated by Nuage VSD: 7614f5d6-7ca9-44b8-b7e7-294955c7db52
CMS ID has also been stored in auto-generated cms_id.txt file
- The CMS ID displayed on the terminal ("7614f5d6-7ca9-44b8-b7e7-294955c7db52" in this example) needs to be added to the neutron-nuage-config.yaml template file as the value of the NeutronNuageCMSId parameter
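Copying the generated ID into the template can be scripted; a minimal sketch, assuming cms_id.txt contains only the ID and the template already has a NeutronNuageCMSId line (sample files are created here so the sketch is self-contained):

```python
import os
import re
import tempfile

# Sketch: read the auto-generated cms_id.txt and rewrite the
# NeutronNuageCMSId line in neutron-nuage-config.yaml.
# Sample files are created in a temp dir for illustration.
workdir = tempfile.mkdtemp()
cms_file = os.path.join(workdir, "cms_id.txt")
template = os.path.join(workdir, "neutron-nuage-config.yaml")

with open(cms_file, "w") as f:
    f.write("7614f5d6-7ca9-44b8-b7e7-294955c7db52\n")
with open(template, "w") as f:
    f.write("parameter_defaults:\n  NeutronNuageCMSId: ''\n")

cms_id = open(cms_file).read().strip()
text = open(template).read()
text = re.sub(r"NeutronNuageCMSId: '.*'",
              "NeutronNuageCMSId: '{}'".format(cms_id),
              text)
with open(template, "w") as f:
    f.write(text)

print(text)
```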
You can choose to enable the following features.
This feature exposes Neutron extensions for Nuage VSP as Heat Resources. This allows automation of Nuage-specific capabilities through HOT templates and Heat Stacks. To enable it, the following steps need to be executed:
- For "Updating Undercloud codebase" changes, nothing additional is required
- For "Modification of overcloud-full image", the following nuage-heat RPM needs to be added in addition to the RPMs mentioned earlier:
nuage-openstack-heat
- Lastly for "Deploy Overcloud", the heat template neutron-nuage-config.yaml needs to include the following parameter value:
HeatEnginePluginDirs: '/usr/lib/python2.7/site-packages/nuage-heat/'
This feature provides Nuage extensions through Horizon with the following functionality:
Net partition (admin only)
Host and bridge ports attach to subnet (network)
Manage gateways
To enable it, the following steps need to be executed:
- For "Updating Undercloud codebase" changes, nothing additional is required
- For "Modification of overcloud-full image", the following nuage-horizon RPM needs to be added in addition to the RPMs mentioned earlier:
nuage-openstack-horizon
- Lastly for "Deploy Overcloud", the heat template neutron-nuage-config.yaml needs to include the following parameter values:
HorizonCustomizationModule: 'nuage_horizon.customization'
HorizonVhostExtraParams:
add_listen: false
priority: 10
access_log_format: '%a %l %u %t \"%r\" %>s %b \"%%{}{Referer}i\" \"%%{}{User-Agent}i\"'
aliases: [{'alias': '%{root_url}/static/nuage', 'path': '/usr/lib/python2.7/site-packages/nuage_horizon/static'}, {'alias': '%{root_url}/static', 'path': '/usr/share/openstack-dashboard/static'}]
directories: [{'path': '/usr/lib/python2.7/site-packages/nuage_horizon', 'options': ['FollowSymLinks'], 'allow_override': ['None'], 'require': 'all granted'}]
Border Gateway Protocol Provider Edge to Customer Edge (BGP PE-CE) support enables advertisement and learning of subnets for L3 overlay domain VPorts connected to external CEs that are BGP speakers. To enable it, the following steps need to be executed:
- For "Updating Undercloud codebase" changes, nothing additional is required
- For "Modification of overcloud-full image", the following nuage-bgp RPM needs to be added in addition to the RPMs mentioned under NUAGE_VRS_PACKAGE in the image patching script.
nuage-bgp
- Lastly for "Deploy Overcloud", nothing additional is required.
This feature allows an OpenStack installation to support Single Root I/O Virtualization (SR-IOV)-attached VMs (https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking) with VSP-managed VMs on the same KVM hypervisor cluster. It provides a Nuage ML2 mechanism driver that coexists with the sriovnicswitch mechanism driver.
Neutron ports attached through SR-IOV are configured by the sriovnicswitch mechanism driver. Neutron ports attached to Nuage VSD-managed networks are configured by the Nuage ML2 mechanism driver.
To enable it, the following steps need to be executed:
- For "Updating Undercloud codebase" changes, nothing additional is required
- For "Modification of overcloud-full image", the script will take care of updating the image. Nothing additional is required
- Lastly for "Deploy Overcloud", a few additional parameters and template files are required.
- The heat template neutron-nuage-config.yaml needs to include the following additional/modified parameter values:
NeutronServicePlugins: 'NuagePortAttributes,NuageAPI,NuageL3,trunk,NuageNetTopology'
NeutronTypeDrivers: "vlan,vxlan,flat"
NeutronMechanismDrivers: ['nuage','nuage_sriov','sriovnicswitch']
NeutronFlatNetworks: '*'
NeutronTunnelIdRanges: "1:1000"
NeutronNetworkVLANRanges: "physnet1:2:100,physnet2:2:100"
NeutronVniRanges: "1001:2000"
NovaPatchConfigMonkeyPatch: true
NovaPatchConfigMonkeyPatchModules: 'nova.network.neutronv2.api:nuage_nova_extensions.nova.network.neutronv2.api.decorator'
- The heat template nova-nuage-config.yaml needs to include the following additional parameter value:
NovaPCIPassthrough: "[{\"devname\":\"eno2\",\"physical_network\":\"physnet1\"},{\"devname\":\"eno3\",\"physical_network\":\"physnet2\"}]"
- Lastly, the overcloud deployment command needs to include the "neutron-sriov.yaml" file. A sample is provided in the "Sample Templates" section.
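The NovaPCIPassthrough value above is an escaped JSON list; once unescaped, it parses to one mapping per physical interface, as this small sketch shows:

```python
import json

# NovaPCIPassthrough, as set in nova-nuage-config.yaml, is a JSON string
# mapping PCI device names to physical networks.
value = ('[{"devname":"eno2","physical_network":"physnet1"},'
         '{"devname":"eno3","physical_network":"physnet2"}]')

mappings = json.loads(value)
for m in mappings:
    print(m["devname"], "->", m["physical_network"])
```

Each entry tells nova which SR-IOV-capable interface serves which physical network referenced in NeutronNetworkVLANRanges.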
Add the network-environment.yaml file to /usr/share/openstack-tripleo-heat-templates/environments/. A sample is provided in the "Sample Templates" section.
Nuage uses the default Linux bridge and Linux bonds. For this to take effect, the following network files are changed:
/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/controller.yaml
and
/usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans/compute.yaml
The changes that are required are:
- Remove ovs_bridge and move its members one level up
- Change ovs_bond to linux_bond with the right bonding_options (For example, bonding_options: 'mode=active-backup')
- Change the interface names under network_config and linux_bond to reflect the interface names of the baremetal machines that are being used.
Example
=========
Original
=========
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: interface
name: nic1
use_dhcp: false
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
-
type: ovs_bridge
name: {get_input: bridge_name}
dns_servers: {get_param: DnsServers}
members:
-
type: ovs_bond
name: bond1
ovs_options: {get_param: BondInterfaceOvsOptions}
members:
-
type: interface
name: nic2
primary: true
-
type: interface
name: nic3
-
type: vlan
device: bond1
vlan_id: {get_param: ExternalNetworkVlanID}
addresses:
-
ip_netmask: {get_param: ExternalIpSubnet}
routes:
-
default: true
next_hop: {get_param: ExternalInterfaceDefaultRoute}
-
type: vlan
device: bond1
vlan_id: {get_param: InternalApiNetworkVlanID}
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: StorageNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: StorageMgmtNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageMgmtIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
==================================
Modified (changes are **marked**)
==================================
properties:
group: os-apply-config
config:
os_net_config:
network_config:
-
type: interface
name: **eno1**
use_dhcp: false
addresses:
-
ip_netmask:
list_join:
- '/'
- - {get_param: ControlPlaneIp}
- {get_param: ControlPlaneSubnetCidr}
routes:
-
ip_netmask: 169.254.169.254/32
next_hop: {get_param: EC2MetadataIp}
-
type: **linux_bond**
name: bond1
**bonding_options: 'mode=active-backup'**
members:
-
type: interface
name: **eno2**
primary: true
-
type: interface
name: **eno3**
-
type: vlan
device: bond1
vlan_id: {get_param: ExternalNetworkVlanID}
addresses:
-
ip_netmask: {get_param: ExternalIpSubnet}
routes:
-
default: true
next_hop: {get_param: ExternalInterfaceDefaultRoute}
-
type: vlan
device: bond1
vlan_id: {get_param: InternalApiNetworkVlanID}
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: StorageNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: StorageMgmtNetworkVlanID}
addresses:
-
ip_netmask: {get_param: StorageMgmtIpSubnet}
-
type: vlan
device: bond1
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
From OSPD 9 onwards, a verification step was added in which the overcloud nodes ping the gateway to verify connectivity on the external network VLAN. If this verification fails, the deployment fails.
This is the case in deployments with Linux bonding and network isolation.
For this verification step, the ExternalInterfaceDefaultRoute IP configured in the network-environment.yaml template must be reachable from the overcloud controller node(s) on the External API VLAN. The gateway can reside on the undercloud as well; it needs to be tagged with the same VLAN ID as the External API network of the controller.
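A quick sanity check before deploying is that the configured gateway actually lies inside the external network CIDR; a sketch using the sample values from network-environment.yaml (substitute your own):

```python
import ipaddress

# Sample values from the network-environment.yaml template;
# replace with your deployment's values.
external_net_cidr = "10.0.0.0/24"   # ExternalNetCidr
external_gateway = "10.0.0.1"       # ExternalInterfaceDefaultRoute

net = ipaddress.ip_network(external_net_cidr)
gw = ipaddress.ip_address(external_gateway)

# The ping verification can only succeed if the gateway is on this network
# (reachability on the External API VLAN must still be confirmed separately).
print(gw in net)  # True when the gateway lies inside ExternalNetCidr
```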
For OSP Director, tuskar deployment commands are recommended, but as part of the Nuage integration effort it was found that heat templates provide more options and customization for overcloud deployment. The templates can be passed as "openstack overcloud deploy" command-line options and can create or update an overcloud deployment.
For a non-HA overcloud deployment, the following command can be used for deploying with Nuage:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml --control-scale 1 --compute-scale 1 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0
For a virtual deployment, the --libvirt-type parameter needs to be added:
openstack overcloud deploy --templates --libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml --control-scale 1 --compute-scale 1 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0
where:
neutron-nuage-config.yaml: Controller specific parameter values
nova-nuage-config.yaml: Compute specific parameter values
For customized templates, use <path-to-template> instead of /usr/share/openstack-tripleo-heat-templates/environments/
For an HA deployment, the following command was used for deploying with Nuage:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml --control-scale 2 --compute-scale 2 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0 --ntp-server ntp.zam.alcatel-lucent.com
For a virtual deployment, the --libvirt-type parameter needs to be added:
openstack overcloud deploy --templates --libvirt-type qemu -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml --control-scale 2 --compute-scale 2 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0 --ntp-server ntp.zam.alcatel-lucent.com
where:
neutron-nuage-config.yaml: Controller specific parameter values
nova-nuage-config.yaml: Compute specific parameter values
For customized templates, use <path-to-template> instead of /usr/share/openstack-tripleo-heat-templates/environments/
For a Linux bonding deployment with VLANs, the following command was used for deploying with Nuage:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml --control-scale 1 --compute-scale 1 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0
where:
network-environment.yaml: Configures additional network environment variables
network-isolation.yaml: Enables creation of networks for isolated overcloud traffic
net-bond-with-vlans.yaml: Configures an IP address and a pair of bonded nics on each network
neutron-nuage-config.yaml: Controller specific parameter values
nova-nuage-config.yaml: Compute specific parameter values
For customized templates, use <path-to-template> instead of /usr/share/openstack-tripleo-heat-templates/environments/
For a Linux bonding deployment with VLANs in an HA configuration, the following command was used for deploying with Nuage:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml --control-scale 2 --compute-scale 2 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0 --ntp-server pool.ntp.org
where:
network-environment.yaml: Configures additional network environment variables
network-isolation.yaml: Enables creation of networks for isolated overcloud traffic
net-bond-with-vlans.yaml: Configures an IP address and a pair of bonded nics on each network
neutron-nuage-config.yaml: Controller specific parameter values
nova-nuage-config.yaml: Compute specific parameter values
For customized templates, use <path-to-template> instead of /usr/share/openstack-tripleo-heat-templates/environments/
For including SR-IOV in the overcloud deployment, the following command can be used to include the neutron-sriov environment file:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/nova-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-nuage-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml --control-scale 2 --compute-scale 2 --ceph-storage-scale 0 --block-storage-scale 0 --swift-storage-scale 0
where:
neutron-nuage-config.yaml: Controller specific parameter values
nova-nuage-config.yaml: Compute specific parameter values
neutron-sriov.yaml: Neutron SRIOV specific parameter values
For customized templates, use <path-to-template> instead of /usr/share/openstack-tripleo-heat-templates/environments/
#This file is an example of an environment file for defining the isolated
#networks and related parameters.
resource_registry:
# Network Interface templates to use (these files must exist)
OS::TripleO::BlockStorage::Net::SoftwareConfig:
../network/config/bond-with-vlans/cinder-storage.yaml
OS::TripleO::Compute::Net::SoftwareConfig:
../network/config/bond-with-vlans/compute.yaml
OS::TripleO::Controller::Net::SoftwareConfig:
../network/config/bond-with-vlans/controller.yaml
OS::TripleO::ObjectStorage::Net::SoftwareConfig:
../network/config/bond-with-vlans/swift-storage.yaml
OS::TripleO::CephStorage::Net::SoftwareConfig:
../network/config/bond-with-vlans/ceph-storage.yaml
parameter_defaults:
# This section is where deployment-specific configuration is done
# CIDR subnet mask length for provisioning network
ControlPlaneSubnetCidr: '24'
# Gateway router for the provisioning network (or Undercloud IP)
ControlPlaneDefaultRoute: 192.0.2.1
EC2MetadataIp: 192.0.2.1 # Generally the IP of the Undercloud
# Customize the IP subnets to match the local environment
InternalApiNetCidr: 172.17.0.0/24
StorageNetCidr: 172.18.0.0/24
StorageMgmtNetCidr: 172.19.0.0/24
TenantNetCidr: 172.16.0.0/24
ExternalNetCidr: 10.0.0.0/24
# Customize the VLAN IDs to match the local environment
InternalApiNetworkVlanID: 20
StorageNetworkVlanID: 30
StorageMgmtNetworkVlanID: 40
TenantNetworkVlanID: 50
ExternalNetworkVlanID: 10
# Customize the IP ranges on each network to use for static IPs and VIPs
InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
# Leave room if the external network is also used for floating IPs
ExternalAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.50'}]
# Gateway router for the external network
ExternalInterfaceDefaultRoute: 10.0.0.1
# Uncomment if using the Management Network (see network-management.yaml)
# ManagementNetCidr: 10.0.1.0/24
  # ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]
# Use either this parameter or ControlPlaneDefaultRoute in the NIC templates
# ManagementInterfaceDefaultRoute: 10.0.1.1
# Define the DNS servers (maximum 2) for the overcloud nodes
DnsServers: ["8.8.8.8","8.8.4.4"]
# Set to empty string to enable multiple external networks or VLANs
NeutronExternalNetworkBridge: "''"
# The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
NeutronTunnelTypes: 'vxlan'
# Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100"
BondInterfaceOvsOptions: "bond_mode=active-backup"
For customized templates, change ../network/config/bond-with-vlans/ to <path-to-template>
# A Heat environment file which can be used to enable a
# a Neutron Nuage backend on the controller, configured via puppet
resource_registry:
OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None
OS::TripleO::Services::NeutronL3Agent: OS::Heat::None
OS::TripleO::Services::NeutronMetadataAgent: OS::Heat::None
OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
# Override the NeutronCorePlugin to use Nuage
OS::TripleO::Services::NeutronCorePlugin: OS::TripleO::Services::NeutronCorePluginML2Nuage
parameter_defaults:
NeutronNuageNetPartitionName: 'Nuage_Partition'
NeutronNuageVSDIp: '192.0.2.190:8443'
NeutronNuageVSDUsername: 'csproot'
NeutronNuageVSDPassword: 'csproot'
NeutronNuageVSDOrganization: 'csp'
NeutronNuageBaseURIVersion: 'v5_0'
NeutronNuageCMSId: '34093615-9b5b-4b7d-aba9-c5cde3e70796'
UseForwardedFor: true
NeutronServicePlugins: 'NuagePortAttributes,NuageAPI,NuageL3'
NeutronDBSyncExtraParams: '--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/nuage/plugin.ini'
NeutronTypeDrivers: "vxlan"
NeutronNetworkType: 'vxlan'
NeutronMechanismDrivers: "nuage"
NeutronPluginExtensions: "nuage_subnet,nuage_port,port_security"
NeutronVniRanges: "1:1000"
NovaOVSBridge: 'alubr0'
NeutronMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
InstanceNameTemplate: 'inst-%08x'
# A Heat environment file which can be used to enable
# Nuage backend on the compute, configured via puppet
resource_registry:
OS::TripleO::Services::ComputeNeutronCorePlugin: OS::TripleO::Services::ComputeNeutronCorePluginNuage
parameter_defaults:
NuageActiveController: '192.0.2.191'
NuageStandbyController: '0.0.0.0'
NovaOVSBridge: 'alubr0'
NovaComputeLibvirtType: 'qemu'
NovaIPv6: True
NuageMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
NuageNovaApiEndpoint: 'internalURL'
NuageBridgeMTU: '9000'
# A Heat environment file which can be used to enable
# Nuage backend on the compute, configured via puppet
resource_registry:
OS::TripleO::Services::ComputeNeutronCorePlugin: OS::TripleO::Services::ComputeNeutronCorePluginNuage
parameter_defaults:
NuageActiveController: '192.0.2.191'
NuageStandbyController: '0.0.0.0'
NovaOVSBridge: 'alubr0'
NovaComputeLibvirtType: 'kvm'
NovaIPv6: True
NuageMetadataProxySharedSecret: 'NuageNetworksSharedSecret'
NuageNovaApiEndpoint: 'internalURL'
NuageBridgeMTU: '9000'
## A Heat environment that can be used to deploy SR-IOV
resource_registry:
OS::TripleO::Services::NeutronSriovAgent: /usr/share/openstack-tripleo-heat-templates/puppet/services/neutron-sriov-agent.yaml
parameter_defaults:
# Add PciPassthroughFilter to the scheduler default filters
NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','RamFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter']
NovaSchedulerAvailableFilters: ['nova.scheduler.filters.all_filters']
# Provide the vendorid:productid of the VFs
NeutronSupportedPCIVendorDevs: ['8086:154c','8086:10ca','8086:1520']
NeutronPhysicalDevMappings: "datacentre:eno2,physnet2:eno3"
# Number of VFs that needs to be configured for a physical interface
NeutronSriovNumVFs: "eno2:5,eno3:7"
For customized templates, change /usr/share/openstack-tripleo-heat-templates/puppet/services/ to <path-to-template>
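The NeutronSriovNumVFs and NeutronPhysicalDevMappings strings above use the same colon/comma convention; this illustrative sketch shows how they break down (the parsing helper itself is hypothetical, not part of the templates):

```python
# Parse the colon/comma-delimited SR-IOV parameter strings from
# neutron-sriov.yaml into dictionaries (illustrative only).
num_vfs = "eno2:5,eno3:7"                       # NeutronSriovNumVFs
dev_mappings = "datacentre:eno2,physnet2:eno3"  # NeutronPhysicalDevMappings

# interface -> number of virtual functions to configure
vfs = {iface: int(count)
       for iface, count in (item.split(":") for item in num_vfs.split(","))}
# physical network -> interface serving it
physnets = dict(item.split(":") for item in dev_mappings.split(","))

print(vfs)       # {'eno2': 5, 'eno3': 7}
print(physnets)  # {'datacentre': 'eno2', 'physnet2': 'eno3'}
```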
This section describes the parameters specified in the template files, along with the configuration files where these parameters are set and used. See the OpenStack Newton user guide install section for more details.
The following parameters are mapped to values in /etc/neutron/plugins/nuage/plugin.ini file on the neutron controller
NeutronNuageNetPartitionName
Maps to default_net_partition_name parameter
NeutronNuageVSDIp
Maps to server parameter
NeutronNuageVSDUsername
NeutronNuageVSDPassword
Maps to serverauth as username:password
NeutronNuageVSDOrganization
Maps to organization parameter
NeutronNuageBaseURIVersion
Maps to the version in base_uri as /nuage/api/<version>
NeutronNuageCMSId
Maps to the cms_id parameter
The following parameters are mapped to values in /etc/neutron/neutron.conf file on the neutron controller
NeutronCorePlugin
Maps to core_plugin parameter in [DEFAULT] section
NeutronServicePlugins
Maps to service_plugins parameter in [DEFAULT] section
The following parameters are mapped to values in /etc/nova/nova.conf file on the neutron controller
UseForwardedFor
Maps to use_forwarded_for parameter in [DEFAULT] section
NeutronMetadataProxySharedSecret
Maps to metadata_proxy_shared_secret parameter in [neutron] section
InstanceNameTemplate
Maps to instance_name_template parameter in [DEFAULT] section
The following parameters are mapped to values in /etc/neutron/plugins/ml2/ml2_conf.ini file on the neutron controller
NeutronNetworkType
Maps to tenant_network_types in [ml2] section
NeutronPluginExtensions
Maps to extension_drivers in [ml2] section
NeutronTypeDrivers
Maps to type_drivers in [ml2] section
NeutronMechanismDrivers
Maps to mechanism_drivers in [ml2] section
NeutronFlatNetworks
Maps to flat_networks parameter in [ml2_type_flat] section
NeutronTunnelIdRanges
Maps to tunnel_id_ranges in [ml2_type_gre] section
NeutronNetworkVLANRanges
Maps to network_vlan_ranges in [ml2_type_vlan] section
NeutronVniRanges
Maps to vni_ranges in [ml2_type_vxlan] section
NeutronNuagePluginsML2FirewallDriver
Maps to firewall_driver in [securitygroup] section
The following parameters are used for setting/disabling services in the undercloud's puppet code
OS::TripleO::Services::NeutronEnableDHCPAgent
OS::TripleO::Services::NeutronEnableL3Agent
OS::TripleO::Services::NeutronEnableMetadataAgent
OS::TripleO::Services::NeutronEnableOVSAgent
These parameters are used to disable the default OpenStack services, as these are not used in a Nuage-integrated OpenStack cluster
The following parameter is used for setting values on the Controller using puppet code
NeutronNuageDBSyncExtraParams
String of extra command line parameters to append to the neutron-db-manage upgrade head command
The following parameters are mapped to values in /etc/default/openvswitch file on the nova compute
NuageActiveController
Maps to ACTIVE_CONTROLLER parameter
NuageStandbyController
Maps to STANDBY_CONTROLLER parameter
The following parameters are mapped to values in /etc/nova/nova.conf file on the nova compute
NovaOVSBridge
Maps to ovs_bridge parameter in [neutron] section
NovaComputeLibvirtType
Maps to virt_type parameter in [libvirt] section
NovaIPv6
Maps to use_ipv6 in [DEFAULT] section
The following parameters are mapped to values in /etc/default/nuage-metadata-agent file on the nova compute
NuageMetadataProxySharedSecret
Maps to METADATA_PROXY_SHARED_SECRET parameter. This needs to match the setting on the neutron controller above
NuageNovaApiEndpoint
Maps to NOVA_API_ENDPOINT_TYPE parameter. This needs to correspond to the setting for the Nova API endpoint as configured by OSP Director
Then, for the node that was shut down:
nova start <node_name> (for example, overcloud-controller-0)
Once the node is up, execute the following on the node:
pcs cluster start --all
pcs status
If the services do not come up, then try
pcs resource cleanup
virt-customize: error: libguestfs error: could not create appliance through
libvirt.
Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct
Run the following command before executing the script
export LIBGUESTFS_BACKEND=direct
openstack baremetal import --json instackenv.json
No valid host was found. Reason: No conductor service registered which supports driver pxe_ipmitool. (HTTP 404)
Workaround: Install the python-dracclient package and restart the ironic-conductor service, then try the command again
sudo yum install -y python-dracclient
exit (go to root user)
systemctl restart openstack-ironic-conductor
su - stack (switch to stack user)
source stackrc (source stackrc)
[stack@instack ~]$ heat stack-list
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
+----+------------+--------------+---------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+----+------------+--------------+---------------+--------------+
+----+------------+--------------+---------------+--------------+
[stack@instack ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[stack@instack ~]$ ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| 9e57d620-3ec5-4b5e-96b1-bf56cce43411 | None | 1b7a6e50-3c15-4228-85d4-1f666a200ad5 | power off | available | False |
| 88b73085-1c8e-4b6d-bd0b-b876060e2e81 | None | 31196811-ee42-4df7-b8e2-6c83a716f5d9 | power off | available | False |
| d3ac9b50-bfe4-435b-a6f8-05545cd4a629 | None | 2b962287-6e1f-4f75-8991-46b3fa01e942 | power off | available | False |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
Workaround: Manually remove instance_uuid reference
ironic node-update <node_uuid> remove instance_uuid
E.g. ironic node-update 9e57d620-3ec5-4b5e-96b1-bf56cce43411 remove instance_uuid