6.5. Triple-O Deployment Architecture

Apex is based on the OpenStack Triple-O project as distributed by the RDO Project. It is important to understand the basics of a Triple-O deployment to help make decisions that will assist in successfully deploying OPNFV.

Triple-O stands for OpenStack On OpenStack. This means that OpenStack will be used to install OpenStack. The target OPNFV deployment is an OpenStack cloud with NFV features built-in that will be deployed by a smaller all-in-one deployment of OpenStack. In this deployment methodology there are two OpenStack installations. They are referred to as the undercloud and the overcloud. The undercloud is used to deploy the overcloud.

The undercloud is the all-in-one installation of OpenStack that includes baremetal provisioning capability. The undercloud will be deployed as a virtual machine on a Jump Host.

The overcloud is OPNFV. Configuration will be passed into the undercloud, and the undercloud will use OpenStack’s orchestration component, Heat, to execute a deployment that provisions the target OPNFV nodes.

6.6. Apex High Availability Architecture

6.6.1. Undercloud

The undercloud is not highly available. End users do not depend on the undercloud; it is used only for management purposes.

6.6.2. Overcloud

Apex will deploy three control nodes in an HA deployment. Each of these nodes will run the following services:

  • Stateless OpenStack services
  • MariaDB / Galera
  • RabbitMQ
  • OpenDaylight
  • HA Proxy
  • Pacemaker & VIPs
  • Ceph Monitors and OSDs
Stateless OpenStack services
All running stateless OpenStack services are load balanced by HA Proxy. Pacemaker monitors the services and ensures that they are running.
Stateful OpenStack services
All running stateful OpenStack services are load balanced by HA Proxy. They are monitored by Pacemaker in an active/passive failover configuration.
MariaDB / Galera
The MariaDB database is replicated across the control nodes using Galera. Pacemaker is responsible for the proper start up of the Galera cluster. HA Proxy provides an active/passive failover methodology for connections to the database.
RabbitMQ
The message bus is managed by Pacemaker to ensure proper start up and establishment of clustering across cluster members.
OpenDaylight
OpenDaylight is currently installed on all three control nodes and started as an HA cluster unless otherwise noted for that scenario. OpenDaylight’s database, known as MD-SAL, breaks up pieces of the database into “shards”. Each shard will have its own election take place, which will determine which OpenDaylight node is the leader for that shard. The other OpenDaylight nodes in the cluster will be in standby. Every Open vSwitch node connects to every OpenDaylight node to enable HA.
HA Proxy
HA Proxy is monitored by Pacemaker to ensure it is running across all nodes and available to balance connections.
Pacemaker & VIPs
Pacemaker has relationships and constraints set up to ensure proper service start up order and that the Virtual IPs associated with specific services are running on the proper host.
Ceph Monitors & OSDs
The Ceph monitors run on each of the control nodes. Each control node also has a Ceph OSD running on it. By default the OSDs use an autogenerated virtual disk as their target device. A non-autogenerated device can be specified in the deploy file.
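The active/passive database failover described above can be pictured as an HA Proxy listener. The fragment below is illustrative only, not the exact configuration Apex renders: addresses, server names, and health-check options will differ in a real deployment. The key idea is that all three control nodes are listed, but two are marked as backup, so database connections fail over to a standby rather than being balanced across nodes.

```
# Illustrative haproxy.cfg fragment (hypothetical addresses and names).
# Only controller-0 actively serves connections; the "backup" servers
# take over if its health check fails.
listen mysql
  bind 192.0.2.10:3306
  option tcpka
  server overcloud-controller-0 192.0.2.11:3306 check inter 1s
  server overcloud-controller-1 192.0.2.12:3306 backup check inter 1s
  server overcloud-controller-2 192.0.2.13:3306 backup check inter 1s
```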

VM migration is configured, and VMs can be evacuated as needed, or as invoked by tools such as Heat as part of a monitored stack deployment in the overcloud.

6.7. OPNFV Scenario Architecture

OPNFV distinguishes different types of SDN controllers, deployment options, and features into “scenarios”. These scenarios are universal across all OPNFV installers, although not every installer supports every scenario.

The standard naming convention for a scenario is: <VIM platform>-<SDN type>-<feature>-<ha/noha>

The only supported VIM type is “OS” (OpenStack), while the SDN type can be any supported SDN controller. “feature” includes things like ovs_dpdk, sfc, etc. “ha” or “noha” determines whether the deployment will be highly available. If “ha” is used, at least 3 control nodes are required.
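As a concrete illustration of the naming convention, the snippet below splits a scenario name into its four fields. The parsing helper itself is hypothetical and not part of the Apex tooling; it only demonstrates how the name decomposes.

```shell
# Split an OPNFV scenario name into its four fields.
# This is an illustration of the naming convention only,
# not part of Apex.
scenario="os-odl-ovs_dpdk-ha"

vim=$(echo "$scenario" | cut -d- -f1)       # VIM platform: "os"
sdn=$(echo "$scenario" | cut -d- -f2)       # SDN type: "odl"
feature=$(echo "$scenario" | cut -d- -f3)   # feature: "ovs_dpdk"
ha=$(echo "$scenario" | cut -d- -f4)        # "ha" or "noha"

echo "VIM=$vim SDN=$sdn FEATURE=$feature HA=$ha"
```

Note that underscores inside a field (as in ovs_dpdk) are not separators; only the hyphens delimit fields.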

6.8. OPNFV Scenarios in Apex

Apex provides pre-built scenario files in /etc/opnfv-apex from which a user can select to deploy the desired scenario. Simply pass the desired file to the installer with the -d deploy-settings option. Read further in the Apex documentation to learn more about invoking the deploy command. Below is a quick reference matrix for OPNFV scenarios supported in Apex. Please refer to the respective OPNFV Docs documentation for each scenario for a full scenario description, and to the release notes for information about known issues per scenario. The following scenarios correspond to a supported <Scenario>.yaml deploy settings file:

Scenario                      Owner         Supported
os-nosdn-nofeature-ha         Apex          Yes
os-nosdn-nofeature-noha       Apex          Yes
os-nosdn-bar-ha               Barometer     No
os-nosdn-bar-noha             Barometer     No
os-nosdn-calipso-noha         Calipso       Yes
os-nosdn-ovs_dpdk-ha          Apex          No
os-nosdn-ovs_dpdk-noha        Apex          No
os-nosdn-fdio-ha              FDS           No
os-nosdn-fdio-noha            FDS           No
os-nosdn-kvm_ovs_dpdk-ha      KVM for NFV   No
os-nosdn-kvm_ovs_dpdk-noha    KVM for NFV   No
os-nosdn-performance-ha       Apex          No
os-odl-nofeature-ha           Apex          Yes
os-odl-nofeature-noha         Apex          Yes
os-odl-ovs_dpdk-ha            Apex          No
os-odl-ovs_dpdk-noha          Apex          No
os-odl-bgpvpn-ha              SDNVPN        Yes
os-odl-bgpvpn-noha            SDNVPN        Yes
os-odl-sriov-ha               Apex          No
os-odl-sriov-noha             Apex          No
os-odl-l2gw-ha                Apex          No
os-odl-l2gw-noha              Apex          No
os-odl-sfc-ha                 SFC           Yes
os-odl-sfc-noha               SFC           Yes
os-odl-gluon-noha             Gluon         No
os-odl-csit-noha              Apex          No
os-odl-fdio-ha                FDS           No
os-odl-fdio-noha              FDS           No
os-odl-fdio_dvr-ha            FDS           No
os-odl-fdio_dvr-noha          FDS           No
os-onos-nofeature-ha          ONOSFW        No
os-onos-sfc-ha                ONOSFW        No
os-ovn-nofeature-noha         Apex          No
os-ovn-nofeature-ha           Apex          Yes
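To tie the matrix back to the deploy command, the snippet below builds the deploy-settings path for a chosen scenario and prints an example invocation. The command is echoed rather than executed here, and the network_settings.yaml path is illustrative; consult the Apex installation documentation for the full set of options for your environment.

```shell
# Build the deploy-settings path for a chosen scenario and print an
# example invocation. The command is echoed, not executed; the
# network_settings.yaml path is illustrative.
scenario="os-odl-sfc-ha"
deploy_file="/etc/opnfv-apex/${scenario}.yaml"

echo "sudo opnfv-deploy -d $deploy_file -n /etc/opnfv-apex/network_settings.yaml"
```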