2. OPNFV Airship Installation Instruction

2.1. Abstract

This document describes how to deploy an NFVi with the Airship Installer.

2.1.1. Version history

Date         Ver.    Author     Comment
2020-01-21   0.1.0   James Gu   First draft based on Wiki content
2020-09-29   0.2.0   James Gu   Airship 1.8 update

2.2. Introduction

This document provides concepts and procedures for deploying an NFVi with Airship Installer in a hardware infrastructure.

This document includes the following content:

  • Introduction to the upstream tool set used by the Airship Installer, for example, Airship Project, OpenStack Helm, Treasuremap, and so on.

  • Instructions for preparing a site manifest in declarative YAML, including hardware profile and software stack, according to the hardware infrastructure and software component model specified in the NFVi reference model and reference architecture.

  • Instructions for customizing the settings in the site manifest.

  • Instructions for running the deployment script.

  • Instructions for setting up a CI/CD pipeline for automating deployment and testing.

Intel POD 17 is used to deploy the reference NFVi; therefore, the examples in this document are based on the hardware profile of Intel POD 17. Instructions are either referenced (in the upstream documents) or provided (in this document) so that the reader can adapt the hardware profile and/or software stack settings accordingly.

2.2.1. Airship

Airship is a collection of loosely coupled and interoperable open source tools that declaratively automate cloud provisioning.

Airship is a robust delivery mechanism for organizations that want to embrace containers as the new unit of infrastructure delivery at scale. Starting from raw bare metal infrastructure, Airship manages the full lifecycle of data center infrastructure to deliver a production-grade Kubernetes cluster with Helm-deployed artifacts, including OpenStack-Helm. Airship allows operators to manage their infrastructure deployments and lifecycle through declarative YAML documents that describe an Airship environment.

For more information, see https://www.airshipit.org.

2.2.2. OpenStack Helm

OpenStack-Helm is a set of Helm charts that enable deployment, maintenance, and upgrading of loosely coupled OpenStack services and their dependencies individually or as part of complex environments. For more information, see https://wiki.openstack.org/wiki/Openstack-helm.

2.2.3. Treasuremap

Treasuremap is a deployment reference as well as a CI/CD project for Airship.

Airship site deployments use the treasuremap repository as a global manifest set (YAML configuration documents) that is then overridden with site-specific configuration details (networking, disk layout, and so on).

For more information, see https://airship-treasuremap.readthedocs.io.

2.3. Manifests

Airship is a declarative way of automating the deployment of a site. Therefore, all the deployment details are defined in the manifests.

The manifests are divided into three layers: global, type, and site. They are hierarchical and meant as overrides from one layer to another. This means that global is baseline for all sites, type is a subset of common overrides for a number of sites with common configuration patterns (such as similar hardware, specific feature settings, and so on), and finally the site is the last layer of site-specific overrides and configuration (such as specific IP addresses, hostnames, and so on). See Deckhand documentation for more details on layering.

The global and type manifests can be used as is unless major differences from the reference deployment are required. Such differences may warrant introducing a new type, or even contributing changes back to the global manifests.

The site manifests are specific to each site and must be customized for each new deployment. The following sections describe how these documents are organized and where to customize them.
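As a quick orientation, each layer corresponds to a directory in the manifest repositories. The listing below is only a sketch, assuming the treasuremap repository and the OPNFV Airship site repository ({cloned_airship_repo_location}) have been cloned locally:

$ ls treasuremap/global                                # baseline manifests shared by every site
$ ls treasuremap/type                                  # common overrides for a family of similar sites
$ ls {cloned_airship_repo_location}/site/intel-pod17   # site-specific overrides for the reference site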

2.3.1. Global

Global manifests, defined in Airship Treasuremap, contain base configurations common to all sites. The versions of all Helm charts and Docker images, for example, are specified in versions.yaml.
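To check which version of a chart or image a deployment will use, versions.yaml can be located and searched directly. The snippet below is only a sketch, assuming treasuremap has been cloned locally; neutron is just an example search term:

$ find treasuremap/global -name versions.yaml
$ grep -n 'neutron' $(find treasuremap/global -name versions.yaml)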

2.3.2. Type

The type cntt will eventually support specifications published by the CNTT community. See CNTT type.

2.3.3. Site

The site documents reside under the site folder. While the folder already contains some sites, and will contain more in the future, the intel-pod17 site shall be considered the Airship OPNFV reference site. See more at POD17 manifests.

The site-definition.yaml ties the site together with the specific type and global manifests:

data:
  site_type: cntt

  repositories:
    global:
      revision: v1.8
      url: https://opendev.org/airship/treasuremap.git

2.4. Prerequisites

Airship installation requires access to external repositories. The deployed services use two virtual IPs, which are defined in site/intel-pod17/networks/common-addresses.yaml.
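The two virtual IPs can be looked up directly in that file; a minimal check, assuming the repository has been cloned and that the relevant key names contain "vip":

$ grep -n -i 'vip' {cloned_airship_repo_location}/site/intel-pod17/networks/common-addresses.yaml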

If an existing DNS service is to be used, add the Airship required DNS entries following Register DNS names.

In Intel POD 17, a CoreDNS service has been installed on the jump host using the following example procedure:

$ sudo -i
$ mkdir /root/coredns
$ cp {cloned_airship_repo_location}/site/intel-pod17/tools/files/Corefile-intel-pod17 /root/coredns
$ cp {cloned_airship_repo_location}/site/intel-pod17/tools/files/intel-pod17.db /root/coredns
$ docker run -d --name coredns --restart=always --volume=/root/coredns/:/root/coredns -p 53:53/udp coredns/coredns -conf /root/coredns/Corefile-intel-pod17

Update the external DNS server IP addresses and the Airship MAAS and Ingress virtual IPs accordingly in the Corefile-intel-pod17 and intel-pod17.db files.
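To confirm that the CoreDNS container answers queries for the records just added, a quick check along the following lines can be used; the placeholders stand for the jump host IP and any name defined in intel-pod17.db:

$ dig @{jump_host_ip} {airship_ingress_fqdn} +short
$ docker logs coredns | tail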

Set up and install the genesis node following the Genesis node section in the Airship Site Authoring and Deployment Guide.

On the genesis node, ensure that virtualization is enabled in the BIOS, that PXE is set as the first boot device, and that the correct NIC is selected for PXE boot.
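Once the node has booted, a quick sanity check that the virtualization extensions are visible to the kernel is to count the vmx (Intel) or svm (AMD) CPU flags; a non-zero result is expected:

$ grep -Ec '(vmx|svm)' /proc/cpuinfo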

The last step is to configure the Hugepages settings for OVS-DPDK by adding the following line in /etc/default/grub on the genesis node:

GRUB_CMDLINE_LINUX="hugepagesz=1G hugepages=12 transparent_hugepage=never default_hugepagesz=1G dpdk-socket-mem=4096,4096 iommu=pt intel_iommu=on amd_iommu=on cgroup_disable=hugetlb console=ttyS1,115200n8"

Reboot the genesis node after the GRUB settings change.
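On an Ubuntu-based genesis node (assumed here), the new kernel parameters take effect only after the GRUB configuration is regenerated and the node is rebooted; a minimal sequence is:

$ sudo update-grub                 # regenerate the GRUB configuration from /etc/default/grub
$ sudo reboot
$ grep Huge /proc/meminfo          # after the node is back up, confirm the 1G hugepages were allocated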

Note that the Hugepages configuration values should match the ones in treasuremap/type/cruiserlite/profiles/host/cp-intel-s2600wt.yaml.

2.5. Deployment

Because Airship is tooling that declaratively automates site deployment, the automation required on the installer side is light. See deploy.sh.

You will need to export environment variables that correspond to the new site (keystone URL, node IPs, and so on). See the beginning of the deploy script for details on the required variables.

Once the prerequisites are met and the manifests are created, you are ready to execute deploy.sh, which supports the Shipyard actions deploy_site and update_site:

$ tools/deploy.sh
  Usage: deploy.sh <deploy_site|update_site>
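For example, a first installation uses the deploy_site action, while later manifest changes are applied with update_site (after exporting the site-specific variables described above):

$ tools/deploy.sh deploy_site     # initial deployment of the site
$ tools/deploy.sh update_site     # push manifest updates to an already deployed site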

2.6. CI/CD

TODO: Describe pipelines and approach https://build.opnfv.org/ci/view/airship/

2.7. OpenStack

The treasuremap repository contains a wrapper script, tools/openstack, for running the OpenStack client. The wrapper uses the Heat image, which already has the OpenStack client installed.

Clone the latest treasuremap code:

$ git clone https://github.com/airshipit/treasuremap.git

Set up the needed environment variables, and execute the script as the OpenStack CLI:

$ export OSH_KEYSTONE_URL='http://identity-nc.intel-pod17.opnfv.org/v3'
$ export OS_REGION_NAME=intel-pod17
$ treasuremap/tools/openstack image list
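Other OpenStack CLI subcommands can be passed through the wrapper in the same way, for example:

$ treasuremap/tools/openstack server list
$ treasuremap/tools/openstack network list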