
Intel® ONP Server Reference Architecture Solutions Guide

Intel® Open Network Platform Server Reference Architecture (Release 1.3)
NFV/SDN Solutions with Intel® Open Network Platform Server


Revision History

Revision | Date | Comments
1.3 | February 23, 2015 | Updated document for release 1.3 of Intel® Open Network Platform Server 1.3.
1.2 | December 15, 2014 | Document prepared for release 1.2 of Intel® Open Network Platform Server 1.2.
1.1.1 | October 29, 2014 | Changed two links to the following: https://01.org/sites/default/files/page/vbng-scripts.tgz and https://01.org/sites/default/files/page/qat_patches_netkeyshim.zip
1.1 | September 18, 2014 | Minor edits throughout document.
1.0 | August 21, 2014 | Initial document for release 1.1 of Intel® Open Network Platform Server.


Contents

1.0 Audience and Purpose
2.0 Summary
    2.1 Network Services Examples
        2.1.1 Suricata (Next Generation IDS/IPS Engine)
        2.1.2 vBNG (Broadband Network Gateway)
3.0 Hardware Components
4.0 Software Versions
    4.1 Obtaining Software Ingredients
5.0 Installation and Configuration Guide
    5.1 Instructions Common to Compute and Controller Nodes
        5.1.1 BIOS Settings
        5.1.2 Operating System Installation and Configuration
    5.2 Controller Node Setup
        5.2.1 OpenStack (Juno)
    5.3 Compute Node Setup
        5.3.1 Host Configuration
    5.4 Virtual Network Functions
        5.4.1 Installing and Configuring vIPS
        5.4.2 Installing and Configuring the vBNG
        5.4.3 Configuring the Network for Sink and Source VMs
6.0 Testing the Setup
    6.1 Preparing with OpenStack
        6.1.1 Deploying Virtual Machines
        6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
    6.2 Using OpenDaylight
        6.2.1 Preparing the OpenDaylight Controller
Appendix A Additional OpenDaylight Information
    A.1 Create VMs Using the DevStack Horizon GUI
Appendix B Configuring the Proxy
Appendix C Glossary
Appendix D References


1.0 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intel® Open Network Platform Server Reference Architecture using open source software. Software ingredients include the following:

• DevStack

• OpenStack

• OpenDaylight

• Data Plane Development Kit (DPDK)

• Regular Open vSwitch

• Open vSwitch with DPDK-netdev

• Fedora

This document provides a guide for integration and performance characterization using the Intel® Open Network Platform Server (Intel ONP Server). Content includes high-level architecture, setup and configuration procedures, integration learnings, and a set of baseline performance data. This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Networking (SDN) solutions.

Ingredient versions, integration procedures, configuration parameters, and test methodologies all influence performance. The performance data provided here does not represent the best possible performance, but rather provides a baseline of what is possible using "out-of-box" open source software ingredients.

The purpose of documenting configurations is not to imply any preferred methods. Providing a baseline configuration of well-tested procedures, however, can help to achieve optimal system performance when developing an NFV/SDN solution.


2.0 Summary

The Intel ONP Server uses open source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform.

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFV/SDN solutions using the Intel® Open Network Platform ingredients.

Platform hardware is based on an Intel® Xeon® DP server with the following:

• Dual Intel® Xeon® Processor Series E5-2600 v3

• Intel® XL710 4x10 GbE adapter

The host operating system is Fedora 21 with QEMU-KVM virtualization technology. Software ingredients include Data Plane Development Kit (DPDK), Open vSwitch, Open vSwitch with DPDK-netdev, OpenStack, and OpenDaylight.

Figure 2-1. Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the orchestrator and controller (management and control plane) and the compute node (data plane) run on different server nodes.

Note: Many variations of this setup can be deployed.

The test cases described in this document are designed to illustrate functionality using the specified ingredients, configurations, and test methodology. A simple network topology was used, as shown in Figure 2-2.

Test cases are designed to:

• Verify communication between controller and compute nodes

• Validate basic controller functionality

Figure 2-2. Generic Setup with Controller and Two Compute Nodes


2.1 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS Engine)

Suricata is a high-performance network IDS, IPS, and network security monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS); it routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM). This network function is included as an example of a workload that can be virtualized on the Intel ONP Server.

Additional information on the performance characterization of this vBNG implementation can be found at:

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 5.4.2 or Appendix B for more information on running the BNG as an appliance.


3.0 Hardware Components

Table 3-1. Hardware Ingredients (Grizzly Pass)

Platform: Intel® Server Board 2U, 8x3.5" SATA, 2x750 W, 2xHS rails (Intel R2308GZ4GC). Grizzly Pass Xeon DP server (2 CPU sockets); 240 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB240G401, DC S3500 Series.

Processors: Intel® Xeon® Processor Series E5-2680 v2, LGA2011, 2.8 GHz, 25 MB cache, 115 W, 10 cores. Ivy Bridge Socket-R (EP), 10 cores, 2.8 GHz, 115 W, 2.5 MB per-core LLC, 8.0 GT/s QPI, DDR3-1867, HT, Turbo. Long product availability.

Cores: 10 physical cores per CPU; 20 hyper-threaded cores per CPU, for 40 total cores.

Memory: 8 GB 1600 Reg ECC 1.5 V DDR3, Kingston KVR16R11S4/8I (Romley); 64 GB RAM (8x 8 GB).

NICs: 2x Intel® 82599 10 GbE Controller (code-named Niantic); Intel® Ethernet Controller XL710 4x10 GbE (code-named Fortville). NICs are on socket zero (3 PCIe slots available on socket 0).

BIOS: SE5C600.86B.02.01.0002.082220131453, release date 08/22/2013, BIOS revision 4.6. Intel® Virtualization Technology for Directed I/O (Intel® VT-d) and hyper-threading enabled.

Table 3-2. Hardware Ingredients (Wildcat Pass)

Platform: Intel® Server Board S2600WTT, 1100 W power supply. Wildcat Pass Xeon DP server (2 CPU sockets); 120 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4. Supports SR-IOV.

Processors: Dual Intel® Xeon® Processor Series E5-2697 v3, 2.6 GHz, 35 MB cache, 145 W, 14 cores. (Formerly code-named Haswell.) 14 cores, 2.60 GHz, 145 W, 35 MB total LLC, 9.6 GT/s QPI, DDR4-1600/1866/2133.

Cores: 14 physical cores per CPU; 28 hyper-threaded cores per CPU, for 56 total cores.

Memory: 8 GB DDR4 RDIMM Crucial CT8G4RFS423; 64 GB RAM (8x 8 GB).

NICs (XL710): Intel® Ethernet Controller XL710 4x10 GbE (code-named Fortville), tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850 nm SFP+ modules. NICs are on socket zero.

BIOS: GRNDSDP1.86B.0038.R01.1409040644, release date 09/04/2014. Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; hyper-threading enabled, but disabled for benchmark testing.

Quick Assist Technology: Intel® Communications Chipset 8950 (Coleto Creek), Walnut Hill PCIe card, 1x Coleto Creek; supports SR-IOV.


Table 3-2 (continued). Hardware Ingredients (Wildcat Pass)

Platform: Intel® Server Board S2600WTT, 1100 W power supply. Wildcat Pass Xeon DP server (2 CPU sockets); 120 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4.

Processors: Dual Intel® Xeon® Processor Series E5-2699 v3, 2.3 GHz, 45 MB cache, 145 W, 18 cores. (Formerly code-named Haswell.) 18 cores, 2.3 GHz, 145 W, 45 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133.

Cores: 18 physical cores per CPU; 36 hyper-threaded cores per CPU, for 72 total cores.

Memory: 8 GB DDR4 RDIMM Crucial CT8G4RFS423; 64 GB RAM (8x 8 GB).

NICs (XL710): Intel® Ethernet Controller XL710 4x10 GbE (code-named Fortville). NICs are on socket zero.

BIOS: SE5C610.86B.01.01.005. Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; hyper-threading enabled, but disabled for benchmark testing.

Quick Assist Technology: Intel® Communications Chipset 8950 (Coleto Creek), Walnut Hill PCIe card, 1x Coleto Creek; supports SR-IOV.


4.0 Software Versions

Table 4-1. Software Versions

Software Component | Function | Version / Configuration
Fedora 21 x86_64 | Host OS | 3.17.8-300.fc21.x86_64
Fedora 20 x86_64 | Host OS, only for the controller in the OpenDaylight/OpenStack integration | Used due to software incompatibilities affecting this integration
Real-Time Kernel | Targeted at Telco environments that are sensitive to low latency | Real-Time Kernel v3.14.31-rt28
QEMU-KVM | Virtualization technology | qemu-kvm 2.1.2-7.fc21.x86_64
Data Plane Development Kit (DPDK) | Network stack bypass and libraries for packet processing; includes user-space poll mode drivers | 1.7.1
Open vSwitch | vSwitch | Controller: OpenvSwitch 2.3.1-git3282e51 (OVS); Compute: OpenvSwitch 2.3.90 (OVS); for OVS with DPDK-netdev on the compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935
OpenStack | SDN orchestrator | Juno release + Intel patches (https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip)
DevStack | Tool for OpenStack deployment | https://github.com/openstack-dev/devstack.git, commit id 3be5e02cf873289b814da87a0ea35c3dad21765b
OpenDaylight | SDN controller | Helium-SR1
Suricata | IPS application | Suricata v2.0.2


4.1 Obtaining Software Ingredients

Table 4-2. Software Ingredients

Fedora 21: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso (standard Fedora 21 ISO image).

Fedora 20: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso (standard Fedora 20 ISO image).

Real-Time Kernel: https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Data Plane Development Kit (DPDK), including poll mode drivers and sample apps (bundled): http://dpdk.org/git/dpdk (all sub-components in one zip file).

OpenvSwitch: Controller: OpenvSwitch 2.3.1-git3282e51 (OVS); Compute: OpenvSwitch 2.3.90 (OVS); for OVS with DPDK-netdev on the compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935.

OpenStack: Juno release, to be deployed using DevStack (see the following row).

DevStack, with patches for DevStack and Nova (two patches downloaded as one zip file; then follow the instructions to deploy):
- DevStack: git clone https://github.com/openstack-dev/devstack.git, commit id 3be5e02cf873289b814da87a0ea35c3dad21765b, then apply to that commit the patch in /home/stack/patches/devstack.patch.
- Nova: https://github.com/openstack/nova.git, commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c, then apply to that commit the patch in /home/stack/patches/nova.patch.

OpenDaylight: https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

Intel® ONP Server Release 1.3 Script: helper scripts to set up SRT 1.3 using DevStack: https://download.01.org/packet-processing/ONPS1.3/onps_server_1_3.tar.gz

Suricata: Suricata version 2.0.2 (yum install suricata).


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes.

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 21, although it is considered relatively easy to use this solutions guide with other Linux distributions.

5.1.1 BIOS Settings

Table 5-1. BIOS Settings

Configuration | Setting for Controller Node | Setting for Compute Node
Intel® Virtualization Technology | Enabled | Enabled
Intel® Hyper-Threading Technology (HTT) | Enabled | Enabled


5.1.2 Operating System Installation and Configuration

The following are generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, USB key installation, etc., are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 and Fedora 21 DVDs

1. Download the 64-bit Fedora 20 DVD from the following site:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2. Download the 64-bit Fedora 21 DVD from the following site:

https://getfedora.org/en/server/

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3. Burn the ISO file to DVD and create an installation disk.

5.1.2.2 Installing Fedora 21

Use the DVD to install Fedora 21. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

3. Virtualization

4. Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation. (A command-line equivalent is sketched after this list.)
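If the user was not created during installation, a rough command-line equivalent on Fedora is sketched below (wheel group membership is what grants administrator rights; the password prompt is interactive):

useradd -m stack        # create the stack user with a home directory
passwd stack            # set its password interactively
usermod -aG wheel stack # wheel membership provides sudo (administrator) rights on Fedora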

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. The scripts in it automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. It gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time.


5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

3. Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. It gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install the Fortville driver on a system with the Fedora 20 OS.

1. Base OS preparation:

a. Install Fedora 20 with the software selection of C Development Tools and Development Tools.

b. Reboot the system after the installation is complete.

Note: After the reboot, even though the Fortville hardware device is detected by the OS, no driver is available; as a result, no Fortville interface is shown in the output of the ifconfig command.
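A quick way to confirm this state (a sketch; the exact lspci description string varies by adapter) is to compare the PCI view with the network interface list:

lspci | grep -i ethernet   # the XL710 (Fortville) devices appear on the PCI bus
ip link show               # no corresponding network interfaces are listed yet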

2. Install the Fortville driver:

a. Log in as the root user.

b. Download the driver. The Fortville Linux driver source code can be downloaded from the following Intel.com support site:

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c. Compile and install the driver by running the following commands:

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d. Run the ifconfig command to confirm the availability of all Fortville ports.

e. From the output of the previous step, determine the network interface names and their MAC addresses.

f. Create a configuration file for each of the interfaces (the example below is for the interface p1p1):

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g. Repeat the preceding step for each of the Fortville interfaces (a scripted sketch follows this procedure).

h. Reboot.

After the reboot, the interfaces are ready to be used.
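A small loop can generate the remaining configuration files; this is only a sketch, and the interface names p1p1 through p1p4 are assumptions that must be replaced with the names found in step e:

cd /etc/sysconfig/network-scripts
for IF in p1p1 p1p2 p1p3 p1p4; do
    MAC=$(cat /sys/class/net/$IF/address)   # MAC address reported by the kernel
    {
        echo "TYPE=Ethernet"
        echo "BOOTPROTO=none"
        echo "NAME=$IF"
        echo "ONBOOT=yes"
        echo "HWADDR=$MAC"
    } > ifcfg-$IF
done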

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B.

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports the Fedora kernel 3.17.8-300.fc21, which is a newer version than the kernel installed by the stock Fedora 21 image. To upgrade to 3.17.8, follow these steps:

Note: If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.7.

1. Download the kernel packages:

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2. Install the kernel packages:

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3. Reboot the system to allow booting into the 3.17.8 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4. A plain yum update would upgrade the system to the latest kernel that Fedora supports. In order to maintain kernel version 3.17.8, modify the yum configuration file with the following command prior to running the update:

echo "exclude=kernel" >> /etc/yum.conf

5. After installing the required kernel packages, the operating system should be updated with the following command:

yum update -y

6. After the update completes, reboot the system.

5.1.2.7 Installing the Fedora 20 Kernel

Note: Fedora 20 and its kernel installation are only required for the OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1. Download the kernel packages:

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2. Install the kernel packages:

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3. Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4. To keep the 3.15.6 kernel in place, modify the yum configuration file prior to running yum update with this command:

echo "exclude=kernel" >> /etc/yum.conf


5. After installing the required kernel packages, update the operating system with the following command:

yum update -y

6. After the update completes, reboot the system.

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1. Install the real-time kernel.

a. Get the real-time kernel sources:

cd /usr/src/kernel
git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note: It may take a while to complete the download.

b. Find the latest rt version from git tag and then check out this version:

Note: v3.14.31-rt28 is the latest current version.

cd linux-stable-rt
git tag
git checkout v3.14.31-rt28

2. Compile the RT kernel.

Note: Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a. Install the required package:

yum install ncurses-devel

b. Copy the kernel configuration file to the kernel source directory:

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt/

cd /usr/src/kernel/linux-stable-rt

make menuconfig

This command opens the text-based kernel configuration interface.


c. Select the following:

1. Enable the high resolution timer:

General Setup > Timer Subsystem > High Resolution Timer Support

2. Enable Preempt RT:

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3. Set the high timer frequency:

Processor type and features > Timer frequency > 1000 HZ

4. Enable the maximum number of SMP processors and NUMA nodes:

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5. Exit and save.

6. Compile the kernel:

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3. Make changes to the boot sequence.

a. To show all menu entries:

grep ^menuentry /boot/grub2/grub.cfg

b. To set the default menu entry (a concrete example follows step d):

grub2-set-default "<the desired default menu entry>"

c. To verify:

grub2-editenv list

d. Reboot and log in to the new kernel.
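As a concrete illustration (the entry index is an assumption; use the title or index reported by the grep command above for the real-time kernel on your system):

grub2-set-default 0    # select the first menu entry listed in grub.cfg
grub2-editenv list     # verify: should report saved_entry=0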

Note: Use the same procedures described in Section 5.3 for the compute node setup.

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: selinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 10.166.45.16 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user has successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. It gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configuration to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B.

General

At least two networks are required to build the OpenStack infrastructure in a lab environment. One network is used to connect all nodes for OpenStack management (management network); the other is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the target OpenStack infrastructure contains multiple nodes: one is a controller node and one or more are compute nodes.

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces as follows:

• ens2f1: Internet network - used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0: Management network - used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1: Tenant network - used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2: Optional external network - used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note: Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or an XL710 port (Fortville), because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora, the network configuration files are located at:

/etc/sysconfig/network-scripts/

To configure a network on the host system, edit the following network configuration files:

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes:

1. Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2. 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find the spare disks, i.e., disks that are not partitioned or formatted on the system, and then use them to create physical volumes and the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section. The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for the controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as the root user and perform the following:

1. Add the stack user to the sudoers list if not already present:

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for the package installation and make sure these proxies are functional.

Note: On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3. Download the Intel® DPDK OVS patches for OpenStack.

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4. Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches
cd /home/stack/patches
wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

6. Check out DevStack at the desired commit id and patch it:

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7. Clone and patch Nova:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8. Create the local.conf file in /home/stack/devstack.

9. Pay attention to the following in the local.conf file:

a. Use Rabbit for messaging services (Rabbit is on by default).

Note: In the past, Fedora only supported QPID for OpenStack. Presently it only supports Rabbit.

b. Explicitly disable the Nova compute service on the controller, because by default the Nova compute service is enabled:

disable_service n-cpu

c. To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d. Explicitly disable tenant tunneling and enable tenant VLANs, because by default tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10. Install DevStack:

cd /home/stack/devstack
./stack.sh

11. For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking.

12. For the controller node only: add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13. Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, therefore matching VLANs 1000 to 1010 should be configured in the switch.


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. It gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section, saving you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes:

1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For OVS with DPDK-netdev huge pages, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node: OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated (regular) OVS agent is as follows:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the line to SELINUX=disabled.
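The same change can be made non-interactively with the sed command already used for the host in Section 5.1.2.9:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config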

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages and the other kernel parameters to the existing command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
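After editing /etc/default/grub, the GRUB configuration normally has to be regenerated and the VM rebooted for the new kernel parameters to take effect; a sketch, assuming a BIOS-booted Fedora guest:

grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the boot configuration
reboot                                   # boot with hugepages and isolcpus applied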


6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the two network interfaces (Virtio devices in the VM):

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
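The result can be checked with the status option of the same script (assuming the DPDK 1.7.1 tools layout set up above):

$RTE_SDK/tools/dpdk_nic_bind.py --status   # both Virtio devices should be listed under the DPDK-compatible driver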

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown in the screenshot below.

[Screenshot: BNG application running under OpenStack]


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenants (Projects): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavors: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, where the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory and disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant, using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the demo tenant by performing the following steps:

a. Get the demo tenant:

keystone tenant-list | grep -Fw demo

The following example creates a network named net-demo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the demo tenant.

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using information obtained from the previous step (a filled-in example follows these steps):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.
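For illustration only, a filled-in command that reuses the names created earlier in this section (the fedora-basic image, the onps-flavor flavor, the zone-g06 availability zone, and the demo tenant's net-demo network); the network ID placeholder and the instance name demo-vm1 are assumptions:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1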

5. Log in to the OpenStack dashboard using the demo user credentials; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM console.


6.1.1.4 Local VNF

Configuration:

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles):

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1. Local VNF


3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

6.1.1.5 Remote VNF

Configuration:

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles):

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM2.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through the second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2. Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA support was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node.

1. The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for a VF):

lspci -nn | grep XL710

On the Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run ./stack.sh.


On the Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently SR-IOV pass-through is only supported with a standard OVS.

5. Run ./stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted

Note The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project
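For illustration only, with those names in place the boot command could be composed as follows; the inline neutron net-show lookup for the network ID is an assumption of this sketch, not part of the original procedure:

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=$(neutron net-show private -f value -c id) numa-vm1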

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other just as over a normal network
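As a minimal sketch of that check (the interface name and the peer address are placeholders), from inside one of the VMs:

ip addr show ens5
ping -c 4 <IP address of the VM on the other compute host>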

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre)

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
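A quick sanity check, not part of the original steps, is to confirm that the variable and the JVM are visible in the shell:

echo $JAVA_HOME
$JAVA_HOME/bin/java -version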

3 If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell
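For example, from the extracted distribution directory (the prompt shown is illustrative of an ODL Helium Karaf shell):

./bin/karaf

opendaylight-user@root>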

6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
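To confirm that the OVSDB-related features were installed, the installed features can be filtered from the same shell (a hedged example; the listing format differs slightly between Karaf versions):

- feature:list -i | grep ovsdb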

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf start or a feature installation might take a long time. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
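Before re-running stack.sh, an optional sanity check, assuming the example controller address 10.11.13.8 and the default admin/admin credentials used above, is to query the OpenDaylight northbound neutron API that the ml2_odl section points at; if the odl-ovsdb-openstack feature is active, it should return a (possibly empty) JSON list of networks:

curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks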

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section also describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI

3 Create a new VXLAN network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

4 Enter the subnet information then click Next

5 Add additional information then click Next

6 Click Create

7 Click Launch Instances to create a VM instance

8 Click Details to enter the VM details

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id    State    Bundle
106   ACTIVE   org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE   org.opendaylight.ovsdb_0.5.0
262   ACTIVE   org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, ie the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to reach the Internet through your proxy server
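A quick, purely illustrative way to confirm that the proxy variables are picked up is to open a new shell and run the following; if the proxy is reachable, curl returns HTTP response headers instead of timing out:

echo $http_proxy
curl -I http://www.intel.com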

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>

For OpenDaylight deployments, the proxy needs to be defined as part of the Maven XML settings file

If the ~/.m2 directory that holds settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

Page 2: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

2

Revision History

Revision Date Comments

13 February 23 2015 Updated document for the release 13 of Intelreg Open Network Platform Server 13

12 December 15 2014 Document prepared for release 12 of Intelreg Open Network Platform Server 12

111 October 29 2014Changed two links to the following bull https01orgsitesdefaultfilespagevbng-scriptstgzbull https01orgsitesdefaultfilespageqat_patches_netkeyshimzip

11 September 18 2014 Minor edits throughout document

10 August 21 2014 Initial document for release of Intelreg Open Network Platform Server 11

3

Intelreg ONP Server Reference ArchitectureSolutions Guide

Contents

10 Audience and Purpose 520 Summary 7

21 Network Services Examples 9211 Suricata (Next Generation IDSIPS engine) 9212 vBNG (Broadband Network Gateway) 9

30 Hardware Components 1140 Software Versions 13

41 Obtaining Software Ingredients 14

50 Installation and Configuration Guide 1551 Instructions Common to Compute and Controller Nodes 15

511 BIOS Settings 15512 Operating System Installation and Configuration16

52 Controller Node Setup 23521 OpenStack (Juno)23

53 Compute Node Setup 29531 Host Configuration29

54 Virtual Network Functions 33541 Installing and Configuring vIPS33542 Installing and Configuring the vBNG33543 Configuring the Network for Sink and Source VMs35

60 Testing the Setup 3761 Preparing with OpenStack 37

611 Deploying Virtual Machines 37612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack 42

62 Using OpenDaylight 46621 Preparing the OpenDaylight Controller46

Appendix A Additional OpenDaylight Information49A1 Create VMs Using the DevStack Horizon GUI 51

Appendix B Configuring the Proxy 59

Appendix C Glossary61

Appendix D References 63

Intelreg ONP Server Reference ArchitectureSolutions Guide

4

5

Intelreg ONP Server Reference ArchitectureSolutions Guide

10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include the following

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Regular OpenvSwitch

bull Open vSwitch with DPDK‐netdev

bull Fedora

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods Providing a baseline configuration of well tested procedures however can help to achieve optimal system performance when developing an NFVSDN solution

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg dual Xeonreg Processor Series E5-2600 V3

bull Intelreg XL710 4x10 GbE Adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators mdash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Section 542 or Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 32 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include the following:

• Required services are nova compute, neutron agent, and Rabbit.

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from the DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: Localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (the default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For both regular OVS and OVS with DPDK-netdev, the physical port is bound to the bridge through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev (OVDK) agent is as follows:

# Compute node - OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the standard OVS agent is as follows:

# Compute node - OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
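The four steps above can also be collected into a small helper script run inside the vIPS VM. This is a minimal sketch that assumes the two vPorts appear as eth1 and eth2, as in the example above:

#!/bin/bash
# Minimal inline-IPS setup sketch for the Suricata VM (assumes vPorts eth1 and eth2)
sysctl -w net.ipv4.ip_forward=1                  # forward traffic between the two vPorts
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE   # queue eth1 -> eth2 traffic to NFQUEUE 0
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE   # queue eth2 -> eth1 traffic to NFQUEUE 0
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp  # answer ARP on behalf of the opposite segment
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
suricata -c /etc/suricata/suricata.yaml -q 0     # run Suricata in inline (NFQUEUE) mode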

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:   2
HugePages_Free:    2
Hugepagesize:      1048576 kB

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the VM's network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
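To confirm the binding took effect, the same script can report the current driver assignments; both devices should now be listed under the DPDK-compatible (igb_uio) driver:

$RTE_SDK/tools/dpdk_nic_bind.py --status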

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
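With the routes in place, traffic can be generated between the two VMs with iperf. A minimal example, where <sink-ip> is a placeholder for the sink VM's address on the 11.0.0.0/24 network:

# On the sink VM: start an iperf server
iperf -s

# On the source VM: send TCP traffic to the sink for 60 seconds
iperf -c <sink-ip> -t 60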


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently, it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, "password" is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a glance image named fedora-basic with qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
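To confirm that the image was registered, the glance images can be listed; the new fedora-basic entry should appear with an active status:

glance image-list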

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
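To confirm the flavor was created, the available flavors can be listed; onps-flavor should appear with ID 1001:

nova flavor-list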


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the ID of the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network named "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with the CIDR address 192.168.2.0/24 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.
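The instance status can also be checked from the command line; the VM should transition from BUILD to ACTIVE within a few minutes:

nova list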

5. Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show up in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6.1.1.4 Local VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed into the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1 Local VNF


3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

6.1.1.5 Remote VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through the second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
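After the next reboot, it can be useful to confirm that the running kernel actually picked up the new parameter; for example:

# The kernel command line should now include intel_iommu=on
cat /proc/cmdline | grep intel_iommu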

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with standard OVS.

5. Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show one or more PCI device entries similar to the following:

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information about the flavor:

nova flavor-show 1001

6. Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install java:

yum install java -y

b. Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
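A quick check that the Java environment is set up as expected:

echo $JAVA_HOME
java -version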

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell.


6. Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
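To verify that the features were installed, the list of installed features can be filtered from the same Karaf shell, for example:

feature:list -i | grep ovsdb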

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines:

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node - OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run a stack on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the following GUI.


3. Create a new VXLAN network:

a. Click Network.

b. Click Create Network.

c. Enter the network name and then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click Create.


7. Click Launch Instances to create a VM instance.


8. Click Details to enter the VM details.


9. Click Networking, then enter the network information.

The VM is now created.

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users instead of just your own, make the above additions to /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

and add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
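To confirm the git proxy configuration, the effective values can be read back:

git config --get http.proxy
git config --get https.proxy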


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name: Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 3: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

3

Intelreg ONP Server Reference ArchitectureSolutions Guide

Contents

10 Audience and Purpose 520 Summary 7

21 Network Services Examples 9211 Suricata (Next Generation IDSIPS engine) 9212 vBNG (Broadband Network Gateway) 9

30 Hardware Components 1140 Software Versions 13

41 Obtaining Software Ingredients 14

50 Installation and Configuration Guide 1551 Instructions Common to Compute and Controller Nodes 15

511 BIOS Settings 15512 Operating System Installation and Configuration16

52 Controller Node Setup 23521 OpenStack (Juno)23

53 Compute Node Setup 29531 Host Configuration29

54 Virtual Network Functions 33541 Installing and Configuring vIPS33542 Installing and Configuring the vBNG33543 Configuring the Network for Sink and Source VMs35

60 Testing the Setup 3761 Preparing with OpenStack 37

611 Deploying Virtual Machines 37612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack 42

62 Using OpenDaylight 46621 Preparing the OpenDaylight Controller46

Appendix A Additional OpenDaylight Information49A1 Create VMs Using the DevStack Horizon GUI 51

Appendix B Configuring the Proxy 59

Appendix C Glossary61

Appendix D References 63

Intelreg ONP Server Reference ArchitectureSolutions Guide

4

5

Intelreg ONP Server Reference ArchitectureSolutions Guide

10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include the following

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Regular OpenvSwitch

bull Open vSwitch with DPDK‐netdev

bull Fedora

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods Providing a baseline configuration of well tested procedures however can help to achieve optimal system performance when developing an NFVSDN solution

Intelreg ONP Server Reference ArchitectureSolutions Guide

6

NOTE This page intentionally left blank

7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg dual Xeonreg Processor Series E5-2600 V3

bull Intelreg XL710 4x10 GbE Adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators mdash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Section 542 or Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 32 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server/

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note: After reboot, even though the Fortville hardware device is detected by the OS, no driver is available, and therefore no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used
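The per-interface steps f and g can also be scripted. The following is a minimal sketch, assuming the Fortville ports are named p1p1 through p1p4 (adjust the list to match the names reported by ifconfig); it reads each port's MAC address from sysfs and writes the corresponding ifcfg file:

cd /etc/sysconfig/network-scripts
for IF in p1p1 p1p2 p1p3 p1p4; do          # replace with your Fortville interface names
    MAC=$(cat /sys/class/net/$IF/address)  # MAC address reported by the kernel
    {
        echo "TYPE=Ethernet"
        echo "BOOTPROTO=none"
        echo "NAME=$IF"
        echo "ONBOOT=yes"
        echo "HWADDR=$MAC"
    } > ifcfg-$IF
done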

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the kernel installed by default with Fedora 21. To upgrade to 3.17.8, follow these steps.

Note: If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.7.

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4 Running yum update upgrades to the latest kernel that Fedora supports. To maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the update:

echo "exclude=kernel*" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.7 Installing the Fedora 20 Kernel

Note: Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4 To maintain the 3.15.6 kernel, modify the yum configuration file with this command prior to running yum update:

echo "exclude=kernel*" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note: v3.14.31-rt28 is the latest current version.

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note: Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt/

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence:

a To show all menu entries:

grep ^menuentry /boot/grub2/grub.cfg

b To set the default menu entry:

grub2-set-default "<desired default menu entry>"

c To verify:

grub2-editenv list

d Reboot and log in to the new kernel.
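As an illustration only (the exact menu entry string will differ on your system, and the entry shown here is hypothetical), selecting a real-time kernel entry could look like this:

grep ^menuentry /boot/grub2/grub.cfg
# suppose the output contains: menuentry 'Fedora (3.14.31-rt28) 21 (Twenty One)' ...
grub2-set-default "Fedora (3.14.31-rt28) 21 (Twenty One)"
grub2-editenv list
reboot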

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server (10.166.45.16 in the commands below) and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
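After editing /etc/ntp.conf, restart ntpd and confirm that the configured server is being used (the remote column of the output should list your NTP server after a short while):

systemctl restart ntpd.service
ntpq -p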


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configuration and installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 (Internet network): used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 (Management network): used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1 (Tenant network): used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2 (Optional external network): used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note: Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes: 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. You need to find spare disks, i.e., disks not partitioned or formatted on the system, and then use them to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
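To confirm that the volume group was created with the expected size before running DevStack, the standard LVM query commands can be used, for example:

pvs                      # lists /dev/sdb and /dev/sdc as physical volumes
vgs stack-volumes        # shows the total size of the stack-volumes volume group
vgdisplay stack-volumes  # detailed view, including free extents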


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note: On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download the Intel® DPDK OVS patches for OpenStack.

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack:

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
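To verify the result of step 12, list the bridges and their ports; p1p1 should appear under br-p1p1 and p1p2 under br-ex (bridge names follow the example above):

sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-p1p1   # should print p1p1
sudo ovs-vsctl list-ports br-ex     # should print p1p2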


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

# Compute node, OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

# Compute node, OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
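To check that traffic is actually being diverted through Suricata, the netfilter rule counters and the Suricata log can be inspected; a quick check (assuming the default Fedora log location /var/log/suricata) is:

iptables -L FORWARD -v -n              # packet/byte counters on the NFQUEUE rules should increase
tail -f /var/log/suricata/fast.log     # alerts appear here when a signature matches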

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
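Note: A minimal sketch of the remaining grub steps, assuming the VM uses the grub2 layout that Fedora installs by default; after editing /etc/default/grub, regenerate the grub configuration and reboot so the hugepage and isolcpus parameters take effect:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot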


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of ~bashrc file

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
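The binding can be confirmed with the status option of the same script; both devices should be listed under the DPDK-compatible driver:

$RTE_SDK/tools/dpdk_nic_bind.py --status
# 0000:00:04.0 and 0000:00:05.0 should show drv=igb_uio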

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
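With the routes in place, traffic can be generated end to end with iperf. A minimal sketch, assuming the sink VM is reachable at 11.0.0.2 from the source VM (substitute the addresses used in your setup):

# on the sink VM: run the iperf server
iperf -s

# on the source VM: send TCP traffic to the sink for 60 seconds through the vBNG path
iperf -c 11.0.0.2 -t 60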


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible in a single compute node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

  - Private network (virtual network): 10.0.0.0/24
  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with the name sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
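For illustration only, a boot command that reuses the names created earlier in this section (the fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and the net-demo network); the network ID shown is a placeholder and must be taken from the neutron net-list output, and the instance name demo-vm1 is arbitrary:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-network-id> demo-vm1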

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stacksh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that the private network is the default network for the demo project.

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
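A minimal connectivity check, assuming the VF interface shows up as ens5 in both VMs and no DHCP server is present on the physical network (the addresses below are arbitrary examples):

# on the first VM
ip addr add 192.168.50.11/24 dev ens5
ip link set ens5 up

# on the second VM
ip addr add 192.168.50.12/24 dev ens5
ip link set ens5 up
ping -c 4 192.168.50.11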


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
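To confirm the features were installed, the installed-feature listing can be filtered from the same Karaf shell (a quick check; output details vary by Karaf version):

feature:list -i | grep ovsdb
feature:list -i | grep dlux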

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight


Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt


DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance by


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it.

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others



10 Audience and Purpose

The primary audiences for this document are architects and engineers implementing the Intelreg Open Network Platform Server Reference Architecture using Open Source software Software ingredients include the following

bull DevStack

bull OpenStack

bull OpenDaylight

bull Data Plane Development Kit (DPDK)

bull Regular OpenvSwitch

bull Open vSwitch with DPDK‐netdev

bull Fedora

This document provides a guide for integration and performance characterization using the Intelreg Open Network Platform Server (Intel ONP Server) Content includes high-level architecture setup and configuration procedures integration learnings and a set of baseline performance data This information is intended to help architects and engineers evaluate Network Function Virtualization (NFV) and Software Defined Network (SDN) solutions

Ingredient versions integration procedures configuration parameters and test methodologies all influence performance The performance data provided here does not represent best possible performance but rather provides a baseline of what is possible using ldquoout-of-boxrdquo open source software ingredients

The purpose of documenting configurations is not to imply any preferred methods Providing a baseline configuration of well tested procedures however can help to achieve optimal system performance when developing an NFVSDN solution


20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg dual Xeonreg Processor Series E5-2600 V3

bull Intelreg XL710 4x10 GbE Adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDNNFV setup In this configuration the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


21 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDS/IPS engine)

Suricata is a high-performance Network IDS, IPS and Network Security Monitoring engine developed by the OISF, its supporting vendors and the community

http://suricata-ids.org

212 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 542 or Appendix B for more information on running the BNG as an appliance


30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical cores/CPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical cores/CPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202


41 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS Settings

Table 5-1 BIOS Settings

Configuration Setting for Controller Node Setting for Compute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled


512 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system Other ways of installing the operating system such as network installation PXE boot installation or USB key installation are not described in this solutions guide

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server/

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time


5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B
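For example, a minimal set of proxy environment variables might look like the following (the proxy host and port are placeholders for site-specific values; Appendix B has the complete instructions):

export http_proxy=http://<your-proxy-host>:<your-proxy-port>
export https_proxy=http://<your-proxy-host>:<your-proxy-port>
export no_proxy=localhost,127.0.0.1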

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 A yum update upgrades the system to the latest kernel that Fedora supports In order to keep kernel version 3.17.8 modify the yum configuration file with the following command prior to running the update

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3.15.6 which is newer than the native Fedora 20 kernel 3.11.10

To upgrade to 3.15.6 perform the following steps

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify


grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup
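After rebooting, a quick way to confirm that the node came up on the real-time kernel (a sanity check, not part of the original procedure) is to check the running kernel release:

uname -r

The output should contain the rt version that was checked out, for example 3.14.31-rt28.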

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
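Once ntpd has been restarted, synchronization against the intended server can be checked (a quick sanity check, not part of the original procedure):

ntpq -p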


52 Controller Node Setup

This section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)

This section documents the configuration and installation of OpenStack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS


with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network It is necessary to have a static IP address on this subnet The IP address 10.11.12.11 is used here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
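The new volume group can then be verified with the standard LVM tools (a quick check, not part of the original procedure):

pvs
vgs stack-volumes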


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in the no_proxy setup (eg export no_proxy=localhost,10.11.12.1) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the homestack directory and unzip

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch it

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node:
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
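After the physical ports have been added, the resulting bridge and port layout can be checked from the controller (a quick sanity check, not part of the original procedure):

sudo ovs-vsctl show

The output should list br-p1p1 with port p1p1 and br-ex with port p1p2.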


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node (OVS_TYPE=ovs-dpdk):
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

Compute node (OVS_TYPE=ovs):
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPS

The vIPS used is Suricata which should be installed in a VM as an rpm package as previously described In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
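Once the rules are in place and Suricata is attached to queue 0, the NFQUEUE hook can be verified from inside the IPS VM (a quick sanity check, not part of the original procedure):

iptables -vnL FORWARD

Both FORWARD rules should show an NFQUEUE target, and their packet counters should increase while traffic flows between the two vPorts.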

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network interfaces

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
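With the routes in place, iperf can be used to push traffic from the source to the sink through the VNF; a minimal example follows (the sink address 11.0.0.2 is an assumption based on the 11.0.0.0/24 subnet above, substitute the actual address):

On the sink VM:
iperf -s

On the source VM:
iperf -c 11.0.0.2 -t 60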


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

- Private network (virtual network) 10.0.0.0/24

- Public network (external network) 172.24.4.0/24

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1
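The flavor definition can be confirmed afterwards (a quick check, not part of the original procedure):

nova flavor-show onps-flavor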


6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below
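The same information is available from the command line, for example (a quick check, not part of the original procedure):

nova list
nova show <instance-name>

The instance should be reported in the ACTIVE state once it has finished booting.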


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF
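When the flow spans two compute nodes, it can be useful to confirm that traffic really leaves the first node on the expected physical port. One simple check, assuming the bridge names used elsewhere in this guide (br-int for the integration bridge and br-p1p1 for the physical bridge), is to watch the OVS port counters while VM1 sends traffic:

# run on each compute node while traffic is flowing
ovs-ofctl dump-ports br-int     # VM-facing (vHost) port counters
ovs-ofctl dump-ports br-p1p1    # uplink port counters should increase on both nodes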


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running and reports version 1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
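As an additional sanity check (the PCI addresses and interface name below are examples from this guide and will differ on other systems), the VFs can also be inspected with lspci and ip link:

lspci -nn | grep -i "Virtual Function"
# two VF entries, e.g. at 08:10.0 and 08:10.2, should appear once the VFs are created

ip link show p1p1
# the "vf 0" and "vf 1" lines report the MAC/VLAN state of each virtual function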

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. The same local.conf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. The same local.conf file of Section 5311 is used here.

3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently, SR-IOV pass-through is only supported with the standard OVS

5 Run stack.sh on both the controller and compute nodes to complete the DevStack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor:

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
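Inside a guest, a quick way to confirm that the SR-IOV interface really was passed through, and to test the path between the two hosts, is sketched below; the interface name ens5 and the peer address 192.168.2.10 are examples only and must be replaced with the values seen in your own VMs.

# run inside the VM
lspci -nn | grep -i "Virtual Function"    # the VF should be visible as a PCI device
ip addr show ens5                         # example name of the VF interface
ping -c 4 192.168.2.10                    # example address of the VM on the other host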


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set JAVA_HOME in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note Karaf might take a long time to start or to install features. The installation might fail if the host does not have network access; you will need to set up the appropriate proxy settings
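Before re-running stack.sh, it is worth confirming that the controller is actually ready. The checks below are a suggested sanity test, not part of the original procedure; the IP address 10.11.13.8 and the admin/admin credentials correspond to the ml2_odl settings above and should be adjusted to your environment.

# In the Karaf shell, list only the installed features
feature:list -i | grep -E "ovsdb|dlux"

# From a host shell, the neutron northbound API should answer with JSON
curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks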


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24
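After stack.sh completes on both hosts, a quick sanity check (suggested here, not part of the original procedure) is to confirm that each node's OVS instance is managed by the OpenDaylight controller and that the expected bridges exist; the manager address below corresponds to ODL_MGR_IP in the sample files and may differ in your setup.

# on each node
ovs-vsctl show | head -n 5
#   expect a line similar to: Manager "tcp:10.11.12.11:6640"
ovs-vsctl list-br
#   br-int and the physical bridge should be listed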

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 621, run stack.sh on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance by completing the following steps


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created
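The same network and instance can also be created from the command line instead of the Horizon GUI. The lines below are a minimal sketch; the credentials helper, image name, and flavor (openrc, cirros-0.3.2-x86_64-uec, m1.tiny) are the usual DevStack defaults and may differ on your installation.

source openrc demo demo
neutron net-create demo-net
neutron subnet-create --name demo-subnet demo-net 10.0.10.0/24
NET_ID=$(neutron net-list | awk '/ demo-net / {print $2}')
nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.tiny --nic net-id=$NET_ID demo-vm1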

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users, instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
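You can confirm the values git will actually use with, for example:

git config --global --get http.proxy
git config --global --get https.proxy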


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

and edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
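After saving the file, you can verify that Maven picks up the proxy by printing its effective settings (this assumes Maven is already installed):

mvn help:effective-settings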


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name and Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller
http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.


7

Intelreg ONP Server Reference ArchitectureSolutions Guide

20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg dual Xeonreg Processor Series E5-2600 V3

bull Intelreg XL710 4x10 GbE Adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients

Intelreg ONP Server Reference ArchitectureSolutions Guide

8

Figure 2-2 shows a generic SDNNFV setup In this configuration the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes

9

Intelreg ONP Server Reference ArchitectureSolutions Guide

21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators mdash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Section 542 or Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 32 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4. Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5. Download the DevStack source:

git clone https://github.com/openstack-dev/devstack.git

6. Check out DevStack at the desired commit ID and apply the patch:

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7. Clone and patch Nova:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8. Create the local.conf file in /home/stack/devstack.

9. Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11. For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking.

12. For the controller node only - add the physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13. Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in the switch (see the check below).
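The bridge and port wiring on the controller host can be sanity-checked with ovs-vsctl (a minimal check, reusing the bridge names from the example above):

sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-p1p1
sudo ovs-vsctl list-ports br-ex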


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node: OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the regular OVS agent is as follows:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM, as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
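To verify that traffic is actually being diverted into Suricata, the netfilter counters can be inspected while traffic is flowing (a minimal check, reusing the interface names from the steps above; the two NFQUEUE rules should show increasing packet counts):

iptables -L FORWARD -n -v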

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the two Virtio network devices:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel (a binding check example appears at the end of this section):

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12. Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.
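Before starting the application, it can be confirmed that both Virtio devices are still bound to the igb_uio driver (a minimal check using the binding script shipped with DPDK; both devices should appear under the DPDK-compatible driver section):

$RTE_SDK/tools/dpdk_nic_bind.py --status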


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
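With the routes in place, a simple traffic run through the vBNG can be started with iperf (a minimal sketch; the sink address 11.0.0.2 and the 60-second duration are assumed example values, not taken from this guide):

# on the sink VM
iperf -s

# on the source VM
iperf -c 11.0.0.2 -t 60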


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (i.e., any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step (a complete example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log in to the OpenStack dashboard using the demo user credential; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
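As a concrete illustration (a sketch only, reusing the example names created earlier in this section: image fedora-basic, flavor onps-flavor, availability zone zone-g06, and network net-demo; the instance name vm-demo is a hypothetical choice), the launch might look like:

neutron net-list | grep net-demo
nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-network-id> vm-demo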


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic is received by the IPS VM.

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes to optimize guest performance. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node (verification examples appear at the end of this section):

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.
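Two quick follow-up checks can be made at this point (minimal checks, reusing the names from the steps above): confirm that the kernel was actually booted with intel_iommu=on after the reboot, and confirm that the VFs are visible to the kernel:

cat /proc/cmdline | grep intel_iommu
ip link show p1p1

In the ip link output, each enabled VF should appear as a "vf 0", "vf 1", ... entry with its own MAC address.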

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On the Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with a standard OVS.

5. Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entries of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like a normal network.
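For example, from the console of the VM on the first compute host, a simple reachability test can be run (a sketch; ens5 and the peer address 192.168.2.101 are assumed example values, not taken from this guide):

ip addr show ens5
ping -c 4 192.168.2.101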


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install java:

yum install java -y

b. Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3. If your infrastructure requires a proxy server to access the Internet, follow the maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the ./bin/karaf executable to start the Karaf shell.


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

(A verification example appears at the end of this section.)

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and that port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
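Successful installation of the ODL features can be confirmed from the Karaf shell itself (a minimal check; feature:list -i lists only installed features):

feature:list -i | grep ovsdb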


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24
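After stacking completes on both hosts, it can be verified that Open vSwitch on the compute node is connected to the OpenDaylight controller and that the integration bridge was created (a minimal check; the manager address matches the sample above, and the exact bridge and tunnel port names are created by ODL):

sudo ovs-vsctl show

# Expected: a Manager entry pointing at 10.11.12.11 with is_connected: true,
# a br-int bridge, and (once networks exist) a vxlan tunnel port toward the other host.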

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run a stack on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
"Framework is launched."

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
"Framework is launched."

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets for which you don't want to use the proxy server:

export no_proxy=192.168.1.221,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server.
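A quick way to confirm that the proxy variables are picked up by the shell is to fetch any external URL (a minimal check; the URL is just an example):

source ~/.bashrc
curl -I https://01.org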

In addition, you need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands instead as root:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory with the settings.xml file does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems: IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others



2.0 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform.

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFV/SDN solutions using the Intel® Open Network Platform ingredients.

Platform hardware is based on an Intel® Xeon® DP server with the following:

• Dual Intel® Xeon® Processor Series E5-2600 v3

• Intel® XL710 4x10 GbE adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes.

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


2.1 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS Engine)

Suricata is a high-performance network IDS, IPS, and network security monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM). This network function is included as an example of a workload that can be virtualized on the Intel ONP Server.

Additional information on the performance characterization of this vBNG implementation can be found at:

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 5.4.2 or Appendix B for more information on running the BNG as an appliance.


3.0 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


4.0 Software Versions

Table 4-1 Software Versions

Software Component / Function / Version-Configuration

Fedora 21 x86_64: Host OS, 3.17.8-300.fc21.x86_64

Fedora 20 x86_64: Host OS only for the controller and OpenDaylight/OpenStack integration

This is because of software incompatibilities of the integration in Fedora 20.

Real-Time Kernel: Targeted toward Telco environments, which are sensitive to low latency

Real-Time Kernel v3.14.31-rt28

Qemu-kvm: Virtualization technology, QEMU-KVM 2.1.2-7.fc21.x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing; includes user space poll mode drivers

1.7.1

Open vSwitch: vSwitch - Controller: OpenvSwitch 2.3.1-git3282e51 (OVS) - Compute: OpenvSwitch 2.3.90 (OVS) - For OVS with DPDK-netdev, compute node commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack: SDN orchestrator, Juno Release + Intel patches (https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip)

DevStack: Tool for OpenStack deployment

https://github.com/openstack-dev/devstack.git, commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight: SDN Controller, Helium-SR1

Suricata: IPS application, Suricata v2.0.2


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components / Patches / Location / Comments

Fedora 21: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

Standard Fedora 21 iso image

Fedora 20: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Standard Fedora 20 iso image

Real-Time Kernel: https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Data Plane Development Kit (DPDK)

DPDK poll mode driver, sample apps (bundled)

http://dpdk.org/git/dpdk - All sub-components in one zip file

OpenvSwitch: - Controller: OpenvSwitch 2.3.1-git3282e51 (OVS) - Compute: OpenvSwitch 2.3.90 (OVS) - For OVS with DPDK-netdev compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack: Juno release, to be deployed using DevStack (see following row)

DevStack: Patches for DevStack and Nova

DevStack: git clone https://github.com/openstack-dev/devstack.git

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b. Then apply to that commit the patch in /home/stack/patches/devstack.patch

Nova: https://github.com/openstack/nova.git. Commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c. Then apply to that commit the patch in

/home/stack/patches/nova.patch

Two patches downloaded as one zip file. Then follow the instructions to deploy.

OpenDaylight: https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

Intel® ONP Server Release 1.3 Script

Helper scripts to set up SRT 1.3 using DevStack

https://download.01.org/packet-processing/ONPS1.3/onps_server_1_3.tar.gz

Suricata: Suricata version 2.0.2: yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 21, although it should be relatively easy to adapt this solutions guide to other Linux distributions.

511 BIOS Settings

Table 5-1 BIOS Settings

Configuration | Setting for Controller Node | Setting for Compute Node

Intel® Virtualization Technology | Enabled | Enabled

Intel® Hyper-Threading Technology (HTT) | Enabled | Enabled


512 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server/

or from direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Its scripts automate the process described below; if you use them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

Follow the steps below to install the Fortville driver on a system running the Fedora 20 OS.

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot, even though the Fortville hardware device is detected by the OS, no driver is available for it; no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver. The Fortville Linux driver source code can be downloaded from the following Intel.com support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Fortville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used
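As an optional check (a generic Linux command, not specific to this guide), you can confirm that each Fortville interface is bound to the i40e driver that was just installed; the interface name p1p1 below is from the earlier example, and the driver field of the output should report i40e:

ethtool -i p1p1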

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the native Fedora 21 kernel 3.17.4. To upgrade to 3.17.8, follow these steps:

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Running yum update upgrades to the latest kernel that Fedora supports. In order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update:

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To maintain the 3.15.6 kernel, modify the yum configuration file prior to running yum update with this command:

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg a Telco environment that is sensitive to low latency and jitter, applications like media, etc), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5127.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt/

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entries

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify


grub2-editenv list

d Reboot and log in to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: selinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 100012 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 101664516/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
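As an optional sanity check (ntpq is part of the standard NTP tools, not specific to this guide), confirm that ntpd is reachable and synchronizing against the configured server:

ntpq -p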


52 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

521 OpenStack (Juno)

This section documents the configuration and installation of OpenStack on the controller node.

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 (Internet network): Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address

• ens2f0 (Management network): Used to connect all nodes for OpenStack management; configured to use network 10110016

• p1p1 (Tenant network): Used for OpenStack internal connections for virtual machines; configured with no IP address

• p1p2 (Optional external network): Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10111211
NETMASK=25525500

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10111211 and 25525500 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10111211 is used here only as an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a single volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. You need to find spare disks, ie disks not partitioned or formatted on the system, and then use the spare disks to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
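To confirm that the volume group was created from the spare disks (an optional check using standard LVM tools), run:

vgdisplay stack-volumes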


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: Obtained from DHCP server

• OpenStack Management IP address: 1011121

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (eg export no_proxy=localhost,1011121). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download Intelreg DPDK OVS patches for OpenStack

The zip file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the local.conf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

Controller node:
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.
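To confirm that the physical ports were added to the bridges in step 12 (an optional check), list the Open vSwitch configuration; both br-p1p1 and br-ex should show their physical ports:

sudo ovs-vsctl show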


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: Obtained from DHCP server

• OpenStack Management IP address: 1011122

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example

export no_proxy=localhost,1011122,sdnlab-k01,1011121

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=1011121

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

Compute node (OVS_TYPE=ovs-dpdk):
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111314

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

Compute node (OVS_TYPE=ovs):
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111313

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. In order to configure it to run in inline mode (IPS), perform the following steps.

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
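To verify that traffic is actually being diverted to Suricata (an optional check), inspect the packet counters of the NFQUEUE rules created in step 2; the counters should increase while traffic flows between eth1 and eth2:

iptables -L FORWARD -v -n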

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

Then change the setting to SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total: 2
HugePages_Free: 2
Hugepagesize: 1048576 kB

7 Add the following to the end of the ~/.bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network devices

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
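With the routes in place, iperf (installed in step 1) can be used to generate traffic between the two VMs; the commands below are a minimal example, and the sink IP address placeholder is illustrative:

On the Sink VM: iperf -s
On the Source VM: iperf -c <sink VM IP address>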


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example

http://1011121

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source admin-cred to the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image).

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating the aggregate/availability zone.

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU.

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source demo-cred to the shell environment for the actions of creating the tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM, as shown below.
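Putting the previous steps together, a concrete boot command might look like the following; it assumes the fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and net-demo network created earlier in this section, and the instance name demo-vm1 and the network ID placeholder are only illustrative:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1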


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries.

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
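Another optional way to confirm the VFs (a standard iproute2 command, not specific to this guide) is to list the physical function; the output should include one line per virtual function (vf 0, vf 1):

ip link show p1p1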

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with 1011124. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF).

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run ./stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run ./stack.sh for both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre)

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
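As an optional sanity check, confirm that Java is installed and that JAVA_HOME resolves to a valid JRE:

java -version
echo $JAVA_HOME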

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network.)

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138.)

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf start or feature install might take a long time. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host.

Controller node:
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

Compute node (OVS_TYPE=ovs):
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 62, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string (or strings) filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, ie the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=1921681221,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you are required to also edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
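To confirm that the git proxy configuration took effect (an optional check, shown here for the global setting), query the value back:

$ git config --global --get http.proxy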


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
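To verify that Maven picks up the proxy from this file (an optional check using the standard Maven help plugin), print the effective settings and look for the proxies section:

$ mvn help:effective-settings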


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name | Source

Internet Protocol version 4 | http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 | http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet | http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller | http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO | https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness | http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux | http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 | http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems | IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? | http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing | http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


20 Summary

The Intel ONP Server uses Open Source software to help accelerate SDN and NFV commercialization with the latest Intel Architecture Communications Platform

This document describes how to set up and configure the controller and compute nodes for evaluating and developing NFVSDN solutions using the Intelreg Open Network Platform ingredients

Platform hardware is based on a Intelreg Xeonreg DP Server with the following

bull Intelreg dual Xeonreg Processor Series E5-2600 V3

bull Intelreg XL710 4x10 GbE Adapter

The host operating system is Fedora 21 with Qemu‐kvm virtualization technology Software ingredients include Data Plane Development Kit (DPDK) OpenvSwitch OpenvSwitch with DPDK‐netdev OpenStack and OpenDaylight

Figure 2-1 Intel ONP Server - Hardware and Software Ingredients


Figure 2-2 shows a generic SDNNFV setup In this configuration the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


2.1 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS Engine)

Suricata is a high-performance network IDS, IPS, and network security monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAMs). This network function is included as an example of a workload that can be virtualized on the Intel ONP Server.

Additional information on the performance characterization of this vBNG implementation can be found at:

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 5.4.2 or Appendix B for more information on running the BNG as an appliance.



30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStack: git clone https://github.com/openstack-dev/devstack.git

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b. Then apply to that commit the patch in /home/stack/patches/devstack.patch

Nova: https://github.com/openstack/nova.git Commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c. Then apply to that commit the patch in

/home/stack/patches/nova.patch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration                                Setting for Controller Node    Setting for Compute Node

Intel® Virtualization Technology             Enabled                        Enabled

Intel® Hyper-Threading Technology (HTT)      Enabled                        Enabled
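As a quick sanity check after changing these BIOS settings (illustrative only, not a required step), the following commands run on the booted host should show that VT-x and hyper-threading are visible to the operating system:

grep -c vmx /proc/cpuinfo               # should be non-zero when Intel VT-x is exposed to the OS
lscpu | grep -i "thread(s) per core"    # should show 2 threads per core when HTT is enabled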


5.1.2 Operating System Installation and Configuration

The following are generic instructions for installing and configuring the operating system. Other installation methods, such as network installation, PXE boot, or USB key installation, are not described in this solutions guide.

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation. (If the user was not created during installation, see the example below.)
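A minimal sketch of creating the stack administrator user after the fact, assuming the wheel group is what provides administrator (sudo) rights on Fedora:

useradd stack
passwd stack
usermod -aG wheel stack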

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5123 Installing Fedora 20

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user named stack and check the box Make this user administrator during the installation. The stack user is used in the OpenStack installation.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install the Fortville driver on a system running the Fedora 20 OS.

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot, even though the Fortville hardware device is detected by the OS, no driver is loaded; therefore no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targz
cd i40e-1123src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Fortville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot, the interfaces are ready to be used.
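As a quick check that the configuration took effect (illustrative, not required), the generated file and the live interface can be inspected; p1p1 is the example interface name used above:

cat /etc/sysconfig/network-scripts/ifcfg-p1p1
ifconfig p1p1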

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the default Fedora 21 kernel. To upgrade to 3.17.8, follow these steps:

Note If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.8.

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 A yum update would normally move to the latest kernel that Fedora supports. In order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update:

echo exclude=kernel >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To maintain kernel version 3.15.6 through the update, modify the yum configuration file prior to running yum update with this command:

echo exclude=kernel >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., Telco environments sensitive to low latency and jitter, applications like media, etc.) it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.6.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy the kernel configuration file to the kernel source:

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt/

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log in to the new kernel
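After logging in, one way to confirm that the real-time kernel is actually running is to check the kernel release string; it should contain the rt suffix of the tag that was built (v3.14.31-rt28 in the example above):

uname -r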

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewalld, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove default servers.

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 101664516/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
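Once the services are enabled and the configuration is in place, time synchronization can be verified with the standard NTP tools (illustrative check, not a required step):

systemctl status ntpd
ntpq -p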


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

5.2.1 OpenStack (Juno)

This section documents the configurations to be made and the installation of OpenStack on the controller node.

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS


with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10.11.12.11 and 255.255.0.0 are a static IP address and net mask on the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
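To confirm that the volume group was created as expected, the standard LVM reporting tools can be used (illustrative check):

pvs
vgs stack-volumes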


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently these are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the localconf file in /home/stack/devstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
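Before moving on, a quick way to confirm that the physical ports were attached to the bridges created by DevStack is to dump the vSwitch configuration (illustrative check; bridge and port names follow the example above):

sudo ovs-vsctl show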


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

• no_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the localconf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the localconf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node OVS_TYPE=ovs-dpdk[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
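To see that traffic is actually being diverted through the IPS, the packet counters on the NFQUEUE rules and the Suricata statistics can be watched while traffic flows. This is an illustrative check; the log path assumes the stock suricata.yaml defaults:

iptables -vnL FORWARD
tail -f /var/log/suricata/stats.log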

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
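With the routes in place, iperf can be used to generate and verify traffic between the two VMs. This is an illustrative example; 11.0.0.2 is a placeholder for the sink VM's actual address on the 11.0.0.0/24 subnet:

# On the sink VM
iperf -s

# On the source VM
iperf -c 11.0.0.2 -t 60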



60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1
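To confirm the flavor was registered as intended, it can be displayed by name (illustrative check, not a required step):

nova flavor-show onps-flavor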


6.1.1.3 Example - VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials, click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
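The same state is also visible from the command line with the demo credentials still sourced; this is an illustrative check, not a required step (the instance should reach the ACTIVE state after a few minutes):

nova list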


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
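After rebooting, a quick check that the option took effect (standard commands; nothing ONP-specific is assumed):

cat /proc/cmdline                     # should contain intel_iommu=on
dmesg | grep -i -e DMAR -e IOMMU      # should show the IOMMU being initialized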

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to the cgroup_device_acl list. An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
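The sriov_numvfs setting does not persist across reboots. A minimal sketch to re-create the VFs at boot, assuming the interface name p1p1 from the example above and that /etc/rc.d/rc.local is present and executable on the system (both are assumptions, not requirements stated elsewhere in this guide):

# /etc/rc.d/rc.local  (enable with: chmod +x /etc/rc.d/rc.local)
# re-create two VFs on the Fortville port after every boot
echo 2 > /sys/class/net/p1p1/device/sriov_numvfs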

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for a VF) can be obtained from the output of:

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor. For example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like over a normal network.
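A minimal check, assuming the two VMs received the hypothetical addresses 192.168.2.10 and 192.168.2.11 on their VF interfaces (the addresses are placeholders, not values from this guide):

# from the first VM
ping -c 4 192.168.2.11
# from the second VM
ping -c 4 192.168.2.10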


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre)

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the ./bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
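To confirm the OVSDB-related features actually installed, a quick check from the same Karaf shell (feature names as listed above):

feature:list -i | grep ovsdb    # lists installed features only; odl-ovsdb-openstack and odl-ovsdb-northbound should appear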

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin
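Once stacking has completed on the controller, a hedged way to confirm that the ODL Neutron northbound is reachable with these credentials (URL and user/password as configured above):

curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks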

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf install might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
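A quick check of what git will actually use (the option names match those set above):

git config --system --list | grep proxy    # use --global instead to inspect per-user settings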


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file or the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
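If Maven is already installed, a hedged way to confirm the proxy settings are picked up (standard Maven help plugin goal; nothing ONP-specific is assumed):

mvn help:effective-settings | grep -A 3 "<proxy>"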


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Packet Reordering http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


Figure 2-2 shows a generic SDN/NFV setup. In this configuration, the orchestrator and controller (management and control plane) and compute node (data plane) run on different server nodes.

Note Many variations of this setup can be deployed

The test cases described in this document are designed to illustrate functionality using the specified ingredients configurations and specific test methodology A simple network topology was used as shown in Figure 2-2

Test cases are designed to

bull Verify communication between controller and compute nodes

bull Validate basic controller functionality

Figure 2-2 Generic Setup with Controller and Two Compute Nodes


2.1 Network Services Examples

The following examples of network services are included as use cases that have been tested with the Intel® Open Network Platform Server Reference Architecture.

2.1.1 Suricata (Next Generation IDS/IPS Engine)

Suricata is a high-performance Network IDS, IPS, and Network Security Monitoring engine developed by the OISF, its supporting vendors, and the community.

http://suricata-ids.org

2.1.2 vBNG (Broadband Network Gateway)

Intel Data Plane Performance Demonstrators - Border Network Gateway (BNG) using DPDK:

https://01.org/intel-data-plane-performance-demonstrators/downloads/bng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

http://networkbuilders.intel.com/docs/Network_Builders_RA_vBRAS_Final.pdf

Refer to Section 5.4.2 or Appendix B for more information on running the BNG as an appliance.


3.0 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


4.0 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202


4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server

or from the direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5.1.2.2 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot, even though the Fortville hardware device is detected by the OS, no driver is available; hence, no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1123.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123.tar.gz
cd i40e-1123/src
make
make install
modprobe i40e
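A quick check that the freshly built module is the one the kernel actually loaded (standard commands; no ONP-specific assumptions):

modinfo i40e | grep ^version    # should report the version just compiled
lsmod | grep i40e               # confirms the module is loaded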

d Run the ifconfig command to confirm the availability of all Fortville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces
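A minimal sketch that automates steps f and g for several interfaces at once, assuming the hypothetical interface names p1p1 through p1p4 (substitute the names found in step e):

cd /etc/sysconfig/network-scripts
for IF in p1p1 p1p2 p1p3 p1p4; do            # hypothetical interface names
    MAC=$(cat /sys/class/net/$IF/address)    # MAC address reported by the kernel
    {
        echo "TYPE=Ethernet"
        echo "BOOTPROTO=none"
        echo "NAME=$IF"
        echo "ONBOOT=yes"
        echo "HWADDR=$MAC"
    } > ifcfg-$IF
done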

h Reboot

After the reboot the interfaces are ready to be used

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.15.6, which is a newer version than the native Fedora 20 kernel 3.11.10. To upgrade to 3.15.6, follow these steps:

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (in order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update):

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.7 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3.15.6 kernel by modifying the yum configuration file prior to running yum update with this command:

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entries:

grep ^menuentry /boot/grub2/grub.cfg

b To set the default menu entry:

grub2-set-default <the desired default menu entry>

c To verify


grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: selinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
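After editing /etc/ntp.conf, a quick check that the node is actually synchronizing against the intended server:

systemctl restart ntpd.service
ntpq -p    # the configured local NTP server should appear in the peer list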


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configurations that are to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network - Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 Management network - Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1 Tenant network - Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2 Optional external network - Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS


with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not more than one volume.

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find spare disks, i.e., disks not partitioned or formatted on the system, and then use the spare disks to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
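A quick check that the volume group came up with the expected capacity (standard LVM commands; disk names as in the example above):

pvs                  # both /dev/sdb and /dev/sdc should be listed as physical volumes
vgs stack-volumes    # the VSize column should roughly equal the sum of the two disks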


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from the DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo 'stack ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download the Intel® DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack. Currently they are not native to OpenStack. The file can be downloaded from

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the local.conf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - add the physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2
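A quick check that the ports ended up on the intended bridges (bridge and port names as in the example above):

sudo ovs-vsctl list-ports br-p1p1    # should list p1p1
sudo ovs-vsctl list-ports br-ex      # should list p1p2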

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are Nova compute, Neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the Neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from the DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: Localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, Nova compute, and Neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

- The user has the option to use openvswitch for the Neutron agent:

Q_AGENT=openvswitch

Notes:

1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, the physical port is bound to the bridge through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev (OVDK) agent is as follows:

Compute node (OVS_TYPE=ovs-dpdk):

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

Compute node (OVS_TYPE=ovs):

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model

5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring the vIPS

The vIPS used is Suricata, which should be installed as an RPM package in a VM, as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
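For convenience, the four steps above can be collected into one small script and run inside the vIPS VM after boot. This is a minimal sketch that assumes the two vPorts appear as eth1 and eth2, as in the commands above:

#!/bin/bash
# Enable routing between the two vPorts
sysctl -w net.ipv4.ip_forward=1
# Divert forwarded traffic in both directions into netfilter queue 0
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
# Answer ARP requests on behalf of hosts behind the opposite port
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
# Start Suricata in inline (IPS) mode, reading from netfilter queue 0
suricata -c /etc/suricata/suricata.yaml -q 0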

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:  2
HugePages_Free:   2
Hugepagesize:     1048576 kB

7. Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the two network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12. Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.

5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
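Traffic can then be generated between the two VMs with iperf. The following is a sketch only; the address 11.0.0.2 is an assumed example for the sink VM and should be replaced with its actual IP on the 11.0.0.0/24 subnet:

# On the sink VM: start an iperf server
iperf -s

# On the source VM: send traffic toward the sink for 60 seconds
iperf -c 11.0.0.2 -t 60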


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently, it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.

6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create an aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
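A quick, optional check that the new image and flavor are registered, using the standard OpenStack CLI and the names created above:

glance image-list | grep fedora-basic
nova flavor-list | grep onps-flavor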

6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the ID of the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of net-demo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step (a concrete example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log in to the OpenStack dashboard using the demo user credentials; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
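As a concrete illustration of step 4b, using the image, flavor, aggregate, and network created in the earlier examples (the network ID is a placeholder and must be taken from the neutron net-list output):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1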

6.1.1.4 Local VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3. Flows get programmed into the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1. Local VNF

3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

6.1.1.5 Remote VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2. Remote VNF

6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the cgroup_device_acl list.

An example follows:

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun",
"/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.
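Another optional check (p1p1 as in the example above) is to list the interface with the ip tool; the output should include one line per configured VF with its MAC address:

ip link show p1p1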

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for a VF) can be obtained from the output of:

lspci -nn | grep XL710

On Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.

On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with a standard OVS.

5. Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like a normal network, as the quick check below shows.
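A minimal connectivity check from inside one of the VMs (the peer address 192.168.2.11 is only an assumed example; use the address obtained by the other VM's VF interface):

ping -c 4 192.168.2.11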

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install Java:

yum install java -y

b. Find the Java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the Java home in the shell environment (assuming the Java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell, as shown below.
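A minimal invocation (assuming the current directory is the extracted distribution root from step 4):

./bin/karaf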

6. Install the required ODL features from the Karaf shell:

feature:list
feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines:

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf start or feature installation might take a long time. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
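Once the features are installed, a quick way to confirm that the controller's OpenStack northbound API is reachable is to query the same URL configured in the ml2_odl section above (a sketch only; substitute the ODL management IP used in local.conf):

curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks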

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

Controller node:

[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

Compute node (OVS_TYPE=ovs):

[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the following GUI:

3. Create a new VXLAN network:

a. Click Network.

b. Click Create Network.

c. Enter the network name and then click Next.

4. Enter the subnet information, then click Next.

5. Add additional information, then click Next.

6. Click Create.

7. Click Launch Instances to create a VM instance.

8. Click Details to enter the VM details.

9. Click Networking, then enter the network information.

The VM is now created.

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string (or strings) filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id   State     Bundle
106  ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112  ACTIVE    org.opendaylight.ovsdb_0.5.0
262  ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id   State     Bundle
106  ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112  ACTIVE    org.opendaylight.ovsdb_0.5.0
262  RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

and add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands instead as root:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
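An optional check that the settings took effect:

git config --global --get http.proxy
git config --system --get http.proxy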

For OpenDaylight deployments, the proxy also needs to be defined in Maven's XML settings file.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

and edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
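If Maven is installed, the effective settings (including the proxy just added) can be checked with the standard Maven help plugin:

mvn help:effective-settings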


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupts

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.


21 Network Services ExamplesThe following examples of network services are included as use-cases that have been tested with the Intelreg Open Network Platform Server Reference Architecture

211 Suricata (Next Generation IDSIPS engine)Suricata is a high performance Network IDS IPS and Network Security Monitoring engine developed by the OISF its supporting vendors and the community

httpsuricata-idsorg

212 vBNG (Broadband Network Gateway)Intel Data Plane Performance Demonstrators mdash Border Network Gateway (BNG) using DPDK

https01orgintel-data-plane-performance-demonstratorsdownloadsbng-application-v013

A Broadband (or Border) Network Gateway may also be known as a Broadband Remote Access Server (BRAS) and routes traffic to and from broadband remote access devices such as digital subscriber line access multiplexers (DSLAM) This network function is included as an example of a workload that can be virtualized on the Intel ONP Server

Additional information on the performance characterization of this vBNG implementation can be found at

httpnetworkbuildersintelcomdocsNetwork_Builders_RA_vBRAS_Finalpdf

Refer to Section 542 or Appendix B for more information on running the BNG as an appliance

Intelreg ONP Server Reference ArchitectureSolutions Guide

10

NOTE This page intentionally left blank

11

Intelreg ONP Server Reference ArchitectureSolutions Guide

30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

‒ NICs (82599)‒ NICs (XL710

‒ 2x Intelreg 82599 10 GbE Controller (code named Niantic)‒ Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C60086B02010002082220131453Release Date 08222013BIOS Revision 46

‒ Intelreg Virtualization Technology for Directed IO (Intelreg VT-d)‒ Hyper-threading enabled

Table 32 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4Supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville)NICs are on socket zero

BIOS GRNDSDP186B0038R011409040644Release Date 09042014

IntelregVirtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass-through tests hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the native Fedora 21 kernel 3.17.4. To upgrade to 3.17.8, follow these steps:

Note: If the Linux real-time kernel is preferred, you can skip this section and go to Section 5128.

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel.

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 A subsequent yum update upgrades to the latest kernel that Fedora supports. In order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update:

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system
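
After the reboot, a quick way to confirm which kernel is actually running (a minimal check; the version string should report 3.17.8):

uname -r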

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To stay on the 3.15.6 kernel, modify the yum configuration file prior to running yum update with this command:

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment with applications sensitive to low latency and jitter, such as media processing), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5127.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernels

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note: v3.14.31-rt28 is the latest current version.

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note: Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernels/3.17.4-301.fc21.x86_64/.config /usr/src/kernels/linux-stable-rt/.config

cd /usr/src/kernels/linux-stable-rt

make menuconfig

The resulting text-based configuration interface opens.


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the maximum number of SMP processors and NUMA nodes

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entries:

grep ^menuentry /boot/grub2/grub.cfg

b To set the default menu entry:

grub2-set-default "<desired default menu entry>"

c To verify:

grub2-editenv list

d Reboot and log in to the new kernel.

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewalld, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntpd, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.0.0.12/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
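
To confirm that the node is synchronizing against the configured server, the NTP peers can be queried (a quick check; output varies by environment):

ntpq -p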


52 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it explains how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If you use the scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

521 OpenStack (Juno)

This section documents the configurations to be made and the installation of OpenStack on the controller node.

5211 Network Requirements

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B.

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network - Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 Management network - Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1 Tenant network - Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2 Optional External network - Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note: Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts/

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes: 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find spare disks (i.e., disks not partitioned or formatted) on the system, then use the spare disks to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
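
To confirm that the volume group was created with the expected capacity (a quick check):

vgdisplay stack-volumes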


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note: On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download Intel® DPDK OVS patches for OpenStack

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently these patches are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch:

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova:

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create a local.conf file in /home/stack/devstack

9 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack:

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
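
To confirm that the physical ports were added to the bridges, the current Open vSwitch configuration can be inspected (a quick check; bridge and port names will match your own setup):

sudo ovs-vsctl show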


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it explains how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If you use the scripts, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit.

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the latter one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node, OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the standard OVS agent is as follows:

# Compute node, OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM, as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
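
To confirm that traffic between the two vPorts is actually being diverted into the netfilter queue, the FORWARD rule counters can be watched while traffic is flowing (a quick check):

iptables -L FORWARD -v -n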

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:  1048576 kB

7 Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network devices:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
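
To confirm that the binding took effect, the same script can report which devices are using the igb_uio driver and which remain on kernel drivers (a quick check):

$RTE_SDK/tools/dpdk_nic_bind.py --status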

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate and receive test traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
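
With the routes in place, a simple traffic test can be run between the two VMs using iperf; a minimal sketch, assuming the sink VM's address is 11.0.0.2 (substitute the address assigned in your setup):

On the sink VM:

iperf -s

On the source VM (a 30-second test toward the sink):

iperf -c 11.0.0.2 -t 30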


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for the actions of creating a glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
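
To confirm that the image was registered and is in the active state (a quick check):

glance image-list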

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
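
The instance status can also be checked from the command line (a quick check; the VM should reach the ACTIVE state):

nova list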

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM, as shown below.


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the lspci output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
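
To confirm that Java is available and that JAVA_HOME points to a valid JRE (a quick check):

echo $JAVA_HOME
java -version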

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
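
To confirm that the features were installed, the installed feature list can be filtered from the same Karaf shell (a quick check):

feature:list -i | grep ovsdb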

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines:

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install features, and the installation might fail if the host does not have network access. You'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node, OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 62, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>
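
To confirm that the proxy settings were stored, git can print them back (a quick check):

$ git config --global --get http.proxy
$ git config --global --get https.proxy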

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Packet Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

3.0 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform: Intel® Server Board 2U 8x3.5 SATA 2x750W 2xHS Rails, Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets), 240 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB240G401, DC S3500 Series

Processors: Intel® Xeon® Processor Series E5-2680 v2, LGA2011, 2.8 GHz, 25 MB, 115 W, 10 cores

- Ivy Bridge Socket-R (EP), 10 cores, 2.8 GHz, 115 W, 2.5 MB per core LLC, 8.0 GT/s QPI, DDR3-1867, HT, turbo
- Long product availability

Cores: 10 physical cores/CPU, 20 hyper-threaded cores per CPU for 40 total cores

Memory: 8 GB 1600 Reg ECC 1.5 V DDR3, Kingston KVR16R11S4/8I, Romley

64 GB RAM (8x 8 GB)

- NICs (82599)
- NICs (XL710)

- 2x Intel® 82599 10 GbE Controller (code-named Niantic)
- Intel® Ethernet Controller XL710 4x10 GbE (code-named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS: SE5C600.86B.02.01.0002.082220131453, Release Date 08/22/2013, BIOS Revision 4.6

- Intel® Virtualization Technology for Directed I/O (Intel® VT-d)
- Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform: Intel® Server Board S2600WTT, 1100 W power supply; Wildcat Pass Xeon DP Server (2 CPU sockets), 120 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4; supports SR-IOV

Processors: Intel® Dual Xeon® Processor Series E5-2697 v3, 2.6 GHz, 35 MB, 145 W, 14 cores

(Formerly code-named Haswell) 14 cores, 2.6 GHz, 145 W, 2.5 MB per core LLC, 9.6 GT/s QPI, DDR4-1600/1866/2133

Cores: 14 physical cores/CPU, 28 hyper-threaded cores per CPU for 56 total cores

Memory: 8 GB DDR4 RDIMM Crucial CT8G4RFS423, 64 GB RAM (8x 8 GB)

NICs (XL710): Intel® Ethernet Controller XL710 4x10 GbE, tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850 nm SFP+ modules

(code-named Fortville) NICs are on socket zero

BIOS: GRNDSDP1.86B.0038.R01.1409040644, Release Date 09/04/2014

Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology: Intel® Communications Chipset 8950 (Coleto Creek)

Walnut Hill PCIe card, 1x Coleto Creek; supports SR-IOV

Item Description Notes

Platform: Intel® Server Board S2600WTT, 1100 W power supply; Wildcat Pass Xeon DP Server (2 CPU sockets), 120 GB SSD 2.5in SATA 6 Gb/s, Intel Wolfsville SSDSC2BB120G4

Processors: Intel® Dual Xeon® Processor Series E5-2699 v3, 2.3 GHz, 45 MB, 145 W, 18 cores

(Formerly code-named Haswell) 18 cores, 2.3 GHz, 145 W, 45 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133

Cores: 18 physical cores/CPU, 36 hyper-threaded cores per CPU for 72 total cores

Memory: 8 GB DDR4 RDIMM Crucial CT8G4RFS423, 64 GB RAM (8x 8 GB)

NICs (XL710): Intel® Ethernet Controller XL710 4x10 GbE (code-named Fortville); NICs are on socket zero

BIOS: SE5C610.86B.01.01.0005

- Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests
- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology: Intel® Communications Chipset 8950 (Coleto Creek)

Walnut Hill PCIe card, 1x Coleto Creek; supports SR-IOV

4.0 Software Versions

Table 4-1 Software Versions

Software Component Function Version/Configuration

Fedora 21 x86_64: Host OS; 3.17.8-300.fc21.x86_64

Fedora 20 x86_64: Host OS, used only for the controller in the OpenDaylight/OpenStack integration

This is because of software incompatibilities of that integration with Fedora 21

Real-Time Kernel: Targeted toward Telco environments, which are sensitive to low latency

Real-Time Kernel v3.14.31-rt28

QEMU-KVM: Virtualization technology; qemu-kvm 2.1.2-7.fc21.x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing; includes user space poll mode drivers

1.7.1

Open vSwitch: vSwitch
- Controller: Open vSwitch 2.3.1-git3282e51 (OVS)
- Compute: Open vSwitch 2.3.90 (OVS)
- For OVS with DPDK-netdev, compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack: SDN orchestrator; Juno release + Intel patches (https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip)

DevStack: Tool for OpenStack deployment

https://github.com/openstack-dev/devstack.git, commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight: SDN controller; Helium-SR1

Suricata: IPS application; Suricata v2.0.2

4.1 Obtaining Software Ingredients

Table 4-2 Software Ingredients

Software Component

Software Sub-components / Patches, Location, Comments

Fedora 21: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

Standard Fedora 21 iso image

Fedora 20: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Standard Fedora 20 iso image

Real-Time Kernel

https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Data Plane Development Kit (DPDK)

DPDK poll mode driver, sample apps (bundled)

http://dpdk.org/git/dpdk - All sub-components in one zip file

OpenvSwitch:
- Controller: Open vSwitch 2.3.1-git3282e51 (OVS)
- Compute: Open vSwitch 2.3.90 (OVS)
- For OVS with DPDK-netdev compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack: Juno release, to be deployed using DevStack (see following row)

DevStack: Patches for DevStack and Nova

DevStack: git clone https://github.com/openstack-dev/devstack.git

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b. Then apply to that commit the patch in /home/stack/patches/devstack.patch

Nova: https://github.com/openstack/nova.git, commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c. Then apply to that commit the patch in

/home/stack/patches/nova.patch

Two patches downloaded as one zip file. Then follow the instructions to deploy.

OpenDaylight: https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

Intel® ONP Server Release 1.3 Script

Helper scripts to set up SRT 1.3 using DevStack

https://download.01.org/packet-processing/ONPS1.3/onps_server_1_3.tar.gz

Suricata: Suricata version 2.0.2; yum install suricata

5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

5.1.1 BIOS Settings

Table 5-1 BIOS Settings

Configuration: Setting for Controller Node / Setting for Compute Node

Intel® Virtualization Technology: Enabled / Enabled

Intel® Hyper-Threading Technology (HTT): Enabled / Enabled

5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site:

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site:

https://getfedora.org/en/server/

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5.1.2.2 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.

5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install the Fortville driver on a system running the Fedora 20 OS.

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot, even though the Fortville hardware device is detected by the OS, no driver is available; this is evident because no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Fortville ports.

e From the output of the previous step, determine the network interface names and their MAC addresses.

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1
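The resulting ifcfg-p1p1 should look similar to the following; the MAC address shown is only a placeholder, so use the address reported by ifconfig for that port:

TYPE=Ethernet
BOOTPROTO=none
NAME=p1p1
ONBOOT=yes
HWADDR=68:05:ca:xx:xx:xx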

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the default Fedora 21 kernel. To upgrade to 3.17.8, follow these steps:

Note If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.8.

1 Download the kernel packages:

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel.

Note ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4 To maintain kernel version 3.17.8 when the system is updated, exclude the kernel from yum updates by modifying the yum configuration file with this command prior to running yum update:

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system
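A quick sanity check after the reboot is to confirm the running kernel version; if the packages above were installed and the system booted into them, the output should report the 3.17.8 kernel:

uname -r
3.17.8-300.fc21.x86_64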

5.1.2.7 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To keep kernel version 3.15.6 when updating, modify the yum configuration file with this command prior to running yum update:

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment where applications such as media are sensitive to low latency and jitter), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.6.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version:

Note v3.14.31-rt28 is the latest version at this writing.

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy the kernel configuration file to the kernel source directory:

cp /usr/src/kernels/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The make menuconfig command opens a text-based configuration interface.

c Select the following

1 Enable the high resolution timer:

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable Preempt RT:

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency:

Processor type and features > Timer frequency > 1000 HZ

4 Enable the maximum number of SMP processors and NUMA nodes:

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save.

6 Compile the kernel:

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entries:

grep ^menuentry /boot/grub2/grub.cfg

b To set the default menu entry:

grub2-set-default <the desired default menu entry>

c To verify:

grub2-editenv list

d Reboot and log in to the new kernel.
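Putting steps a through d together, the sequence might look like the following; the exact menu entry string varies per system, so copy it from the grep output rather than from this example:

grep ^menuentry /boot/grub2/grub.cfg
grub2-set-default 'Fedora (3.14.31-rt28) 21 (Twenty One)'
grub2-editenv list
reboot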

Note Use the same procedures described in Section 5.3 for the compute node setup.

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: selinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
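After editing /etc/ntp.conf, restarting ntpd and querying its peers is a simple way to confirm the node is synchronizing against the intended server; the server configured above should appear in the output:

systemctl restart ntpd.service
ntpq -p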

5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configuration and installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the target OpenStack infrastructure contains multiple nodes: one is a controller node, and one or more are compute nodes.

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network - Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 Management network - Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1 Tenant network - Used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2 Optional external network - Used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find spare disks, i.e., disks not partitioned or formatted, on the system, and then use the spare disks to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
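Before running DevStack, the standard LVM reporting commands can be used to confirm that the physical volumes and the stack-volumes group were created with the expected capacity:

pvs
vgs stack-volumes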

5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download Intelreg DPDK OVS patches for OpenStack

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently these are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip

4 Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create a local.conf file in /home/stack/devstack.

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLANs. This is because, by default, tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh

11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.
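To confirm the ports added in step 12 are attached to the correct bridges, list the Open vSwitch configuration; br-p1p1 should show port p1p1 and br-ex should show port p1p2:

sudo ovs-vsctl show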

5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt

- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

- For OVS with DPDK-netdev, specify the huge pages setting: the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node, OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the regular OVS agent is as follows:

# Compute node, OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password

SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model

5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package (as previously described) in a VM. To configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
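To verify that traffic is actually passing through the inline IPS, the packet counters of the NFQUEUE rules should increase while traffic flows between the two vPorts, and any alerts appear in Suricata's fast.log (the log path assumes the default Fedora packaging):

iptables -L FORWARD -v -n
tail /var/log/suricata/fast.log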

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change it so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network devices:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below

5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4 At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
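With the routes in place, iperf can be used to generate traffic from the source to the sink through the VNF. The sink address below is only an assumption based on the 11.0.0.0/24 subnet used above; substitute the actual address of the sink VM:

iperf -s                      # on the sink VM
iperf -c 11.0.0.10 -t 60      # on the source VM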

6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup. It is, however, possible to have more than one virtual network in a single compute node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network), for example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.

6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1

6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM.
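Putting steps 3 and 4 together with the objects created in Section 6.1.1.2, a concrete launch might look like the following; the network ID must be taken from the neutron net-list output, and the instance name demo-vm1 is only an example:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1
nova list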

6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin particular NUMA nodes for guest system optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
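As a point of reference, on a system using the 82599 (the NIC assumed by the Devstack whitelist in the next section), the output resembles the following; the PCI addresses vary from system to system:

lspci -nn | grep 82599
08:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb]
08:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed]
08:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed]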

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here; add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh

On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the Devstack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like a normal network.
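For example, assuming the VF interface shows up as ens5 in both VMs and no DHCP server is present, addresses can be assigned manually and connectivity checked with ping; the addresses below are illustrative only:

ip addr add 192.168.100.11/24 dev ens5     # on the first VM
ip addr add 192.168.100.12/24 dev ens5     # on the second VM
ping -c 3 192.168.100.12                   # from the first VM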

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet, follow the maven-specific instructions in Appendix B.

4 Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.
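Assuming the shell is still in the directory extracted in step 4, starting Karaf looks like this:

./bin/karaf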

6 Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with Devstack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note Karaf might take a long time to start or to install a feature The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings
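One way to confirm that DevStack connected Open vSwitch to OpenDaylight after stack.sh completes is to check the OVS manager entry on each node (10.11.13.8 is the example controller management address used above; the exact port may differ in your setup):

sudo ovs-vsctl show
# the output should include a manager entry similar to:
# Manager "tcp:10.11.13.8:6640"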


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node This section also describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another

Following is a sample local.conf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts


Once the controller is up and running, connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding one or more strings filters the list of bundles

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State    Bundle
106   ACTIVE   org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE   org.opendaylight.ovsdb_0.5.0
262   ACTIVE   org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
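If the bundle needs to be enabled again later, it can be restarted from the same OSGi console using its id (262 in this example):

osgi> start 262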


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, ie the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server
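A quick way to confirm that the variables are set in the current shell (the exact list depends on what was exported):

env | grep -i proxy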

In addition, you need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

and add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol 23, no 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation All rights reserved Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries Other names and brands may be claimed as the property of others


30 Hardware Components

Table 3-1 Hardware Ingredients (Grizzly Pass)

Item Description Notes

Platform Intelreg Server Board 2U 8x35 SATA 2x750W 2xHS Rails Intel R2308GZ4GC

Grizzly Pass Xeon DP Server (2 CPU sockets) 240 GB SSD 25in SATA 6 Gbs Intel Wolfsville SSDSC2BB240G401 DC S3500 Series

Processors Intelreg Xeonreg Processor Series E5-2680 v2 LGA2011 28GHz 25MB 115W 10 cores

‒ Ivy Bridge Socket-R (EP) 10 Core 28 GHz 115W 25 M per core LLC 80 GTs QPI DDR3-1867 HT turbo‒ Long product availability

Cores 10 physical coresCPU 20 hyper-threaded cores per CPU for 40 total cores

Memory 8 GB 1600 Reg ECC 15 V DDR3 Kingston KVR16R11S48I Romley

64 GB RAM (8x 8 GB)

- NICs (82599)
- NICs (XL710)

- 2x Intel® 82599 10 GbE Controller (code named Niantic)
- Intel® Ethernet Controller XL710 4x10 GbE (code named Fortville)

NICs are on socket zero (3 PCIe slots available on socket 0)

BIOS SE5C600.86B.02.01.0002.082220131453, Release Date 08/22/2013, BIOS Revision 4.6

- Intel® Virtualization Technology for Directed IO (Intel® VT-d)
- Hyper-threading enabled

Table 3-2 Hardware Ingredients (Wildcat Pass)

Item Description Notes

Platform Intel® Server Board S2600WTT, 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets), 120 GB SSD 2.5in SATA 6 Gb/s Intel Wolfsville SSDSC2BB120G4, supports SR-IOV

Processors Intelreg Dual Xeonreg Processor Series E5-2697 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 14 Core 260GHz 145W 35 M per core LLC 96 GTs QPI DDR4-160018662133

Cores 14 physical coresCPU 28 hyper-threaded cores per CPU for 56 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x 8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE that has been tested with Intel FTLX8571D3BCV-IT and Intel AFBR-703sDZ-IN2 850nm SFPs

(code-named Fortville) NICs are on socket zero

BIOS GRNDSDP1.86B.0038.R01.1409040644, Release Date 09/04/2014

Intel® Virtualization Technology for Directed IO (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests; hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intel® Dual Xeon® Processor E5-2699 v3, 2.3 GHz, 45 MB, 145 W, 18 cores

(Formerly code-named Haswell) 18 cores, 2.3 GHz, 145 W, 45 MB total cache per processor, 9.6 GT/s QPI, DDR4-1600/1866/2133

Cores 18 physical cores/CPU, 36 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

BIOS

SE5C61086B0101005

- Intel® Virtualization Technology for Directed IO (Intel® VT-d) enabled only for SR-IOV PCI pass-through tests
- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV


40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3.17.8-300.fc21.x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylight/OpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environments which are sensitive to low latency

Real-Time Kernel v3.14.31-rt28

Qemu-kvm Virtualization technology QEMU-KVM 2.1.2-7.fc21.x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

1.7.1

Open vSwitch vSwitch - Controller: Open vSwitch 2.3.1-git3282e51 (OVS) - Compute: Open vSwitch 2.3.90 (OVS) - For OVS with DPDK-netdev Compute node: Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches (https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip)

DevStack Tool for OpenStack deployment

https://github.com/openstack-dev/devstack.git Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v2.0.2


41 Obtaining Software Ingredients
Table 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

Standard Fedora 21 iso image

Fedora 20 http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

Standard Fedora 20 iso image

Real-Time Kernel

https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

http://dpdk.org/git/dpdk All sub-components in one zip file

OpenvSwitch - Controller: Open vSwitch 2.3.1-git3282e51 (OVS) - Compute: Open vSwitch 2.3.90 (OVS) - For OVS with DPDK-netdev compute node: Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack Juno release to be deployed using DevStack (see following row)

DevStack Patches for DevStack and Nova

DevStack: git clone https://github.com/openstack-dev/devstack.git

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b. Then apply to that commit the patch in /home/stack/patches/devstack.patch

Nova: https://github.com/openstack/nova.git, Commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c. Then apply to that commit the patch in

/home/stack/patches/nova.patch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

Intel® ONP Server Release 1.3 Script

Helper scripts to set up SRT 1.3 using DevStack

https://download.01.org/packet-processing/ONPS1.3/onps_server_1_3.tar.gz

Suricata Suricata version 2.0.2 yum install suricata


50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21, although it should be relatively easy to adapt this solutions guide to other Linux distributions

511 BIOS Settings
Table 5-1 BIOS Settings

Configuration                               Setting for Controller Node    Setting for Compute Node

Intel® Virtualization Technology            Enabled                        Enabled

Intel® Hyper-Threading Technology (HTT)     Enabled                        Enabled


512 Operating System Installation and Configuration
Following are some generic instructions for installing and configuring the operating system Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server/

or from the direct URL

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball These scripts automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2

When using the scripts, start with the README file You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time


5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3.tar.gz tarball Start with the README file You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2

Follow the steps below to install the Fortville driver on a system running the Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot, even though the Fortville hardware device is detected by the OS, no driver is loaded; this is why no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intel.com support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Fortville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used
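As an additional check, the driver bound to a port can be queried, for example for the p1p1 interface used above (a Fortville port should report the i40e driver):

ethtool -i p1p1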

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is newer than the kernel installed by the standard Fedora 21 distribution To upgrade to 3.17.8, follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 In order to keep kernel version 3.17.8 when updating, exclude the kernel from yum updates by modifying the yum configuration file with this command prior to running yum update

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10

To upgrade to 3.15.6, perform the following steps

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To stay on the 3.15.6 kernel when updating, exclude the kernel from yum updates by modifying the yum configuration file prior to running yum update with this command

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system
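After the reboot, a simple way to confirm that the system booted into the expected kernel (the version string matches the packages installed above):

uname -r
# expected output: 3.15.6-200.fc20.x86_64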

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environments sensitive to low latency and jitter, applications like media, etc), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required, you can omit Section 5.1.2.7

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features > Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default <the desired default menu entry>

c To verify


grub2-editenv list

d Reboot and log in to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network Run the following commands

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them Users can edit /etc/ntp.conf to add a new server and remove the default servers

The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out the other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf


52 Controller Node Setup
This section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3.tar.gz tarball Start with the README file You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2

521 OpenStack (Juno)
This section documents the configurations that need to be made and the installation of OpenStack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network - Used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address

• ens2f0 Management network - Used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16

• p1p1 Tenant network - Used for OpenStack internal connections for virtual machines; configured with no IP address

• p1p2 Optional External network - Used for virtual machine Internet/external connectivity; configured with no IP address This interface is only in the controller node if the external network is configured This interface is not required for the compute node

Note Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network It is necessary to have a static IP address on this subnet The IP address 10.11.12.11 is used here only as an example

5212 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes If not specified, stack-volumes is created with 10 GB of space from a local file system Note that stack-volumes is the name of the volume group, not of a single volume

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node First find the spare disks, ie disks that are not partitioned or formatted on the system, then use them to form physical volumes and then the volume group Run the following commands

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
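To confirm that the volume group was created from the spare disks, the physical volumes and the volume group can be listed (the sizes reported depend on the disks used):

pvs
vgs stack-volumes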


5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname sdnlab-k01

• Internet network IP address Obtained from DHCP server

• OpenStack Management IP address 10.11.12.1

• User/password stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net" ]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (eg export no_proxy=localhost,10.11.12.1) For detailed instructions on how to set up your proxy, refer to Appendix B

3 Download the Intel® DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack Currently they are not native to OpenStack The file can be downloaded from

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure the proper VLANs are created in the switch connecting physical port p1p1 For example, the previous local.conf specifies a VLAN range of 1000-1010, therefore matching VLANs 1000 to 1010 should be configured in the switch
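As a quick sanity check after adding the ports, the bridge and port layout can be listed; br-p1p1 and br-ex should show the physical ports added above:

sudo ovs-vsctl show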


53 Compute Node Setup
This section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3.tar.gz tarball Start with the README file You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname sdnlab-k02

• Lab network IP address Obtained from DHCP server

• OpenStack Management IP address 10.11.12.2

• User/password stack/stack

Note the following

• No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition, the hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file

- The service host is the controller, which also runs the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image Therefore they should be spelled out Using the controller node example in the previous section, the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge)

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories Specify the following in the local.conf file if OVS with DPDK-netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf For example, to bind port p1p1 to bridge br-p1p1, use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev (ovs-dpdk) agent is as follows

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model
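After stack.sh completes on a compute node that uses OVS with DPDK-netdev, a quick way to confirm that hugepages were allocated and mounted (the totals depend on the OVS_NUM_HUGEPAGES value used):

grep Huge /proc/meminfo
mount | grep huge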


54 Virtual Network Functions
This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes

541 Installing and Configuring vIPS
The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described In order to configure it to run in inline mode (IPS), perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
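To confirm that the forwarding rules are in place and, once traffic flows, that packets are hitting the NFQUEUE target, the FORWARD chain counters can be inspected from inside the vIPS VM:

iptables -L FORWARD -v -n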

542 Installing and Configuring the vBNG
1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the VM's two network interfaces

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs
Sink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
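With the routes in place, traffic can be generated between the two VMs with iperf; a minimal sketch, assuming the sink VM is reachable at 11.0.0.10 (the actual address depends on your setup):

On the sink VM:
iperf -s

On the source VM:
iperf -c 11.0.0.10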


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, "password" is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred
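To confirm that the credentials were loaded correctly, any read-only admin command can be used; the tenant listing below (also used later in this section) should return the default admin and demo tenants without prompting for a password:

keystone tenant-list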

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
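The upload can be verified with the image listing command (also used later when deploying the VM); the new fedora-basic image should appear in the output with an active status:

glance image-list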

4. Create a host aggregate and availability zone.

First find the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the ID of the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network named net-demo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with the CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM, as shown below.
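Putting the pieces of this example together, a concrete boot command could look like the following sketch. The image, flavor, and zone names are the ones created earlier in this section, <net-demo-id> is a placeholder for the network ID reported by neutron net-list, and demo-vm1 is an arbitrary instance name:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1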


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

the cgroup_device_acl list.

An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7. Enable SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
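As an additional sanity check, the current VF count can be read back from the same sysfs attribute that was written above; it should print 2 for this example:

cat /sys/class/net/p1p1/device/sriov_numvfs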

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently SR-IOV pass-through is only supported with a standard OVS.

5. Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2. The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as to a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
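A minimal connectivity check, assuming no DHCP server is present and using hypothetical addresses on the VF interface (ens5 in this example) of each VM:

# In the first VM (interface name and addresses are examples only)
ip addr add 192.168.10.11/24 dev ens5
ip link set ens5 up

# In the second VM
ip addr add 192.168.10.12/24 dev ens5
ip link set ens5 up

# From the first VM, ping the second
ping -c 4 192.168.10.12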


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install java:

yum install java -y

b. Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell.
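For example, from inside the extracted distribution directory (this starts the controller in the foreground and opens the Karaf console):

./bin/karaf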


6. Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
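The result can be checked from the same console; feature:list with the -i flag lists only installed features, and the console supports grep to narrow the output:

feature:list -i | grep ovsdb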

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller node, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf installation might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you will need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3. Create a new VXLAN network:

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string (or strings) filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you are required to also edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

and add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
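The effective git proxy values can be read back with the --get option, which is a quick way to confirm the settings before cloning large repositories:

git config --global --get http.proxy
git config --system --get https.proxy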


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 12: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

12

Item Description Notes

Platform Intelreg Server Board S2600WTT 1100 W power supply Wildcat Pass Xeon DP Server (2 CPU sockets) 120 GB SSD 25in SATA 6 GBs Intel Wolfsville SSDSC2BB120G4

Processors Intelreg Dual Xeonreg Processor Series E5-2699 v3 23 GHz 45 MB 145 W 18 cores

(Formerly code-named Haswell) 18 Cores 23 GHz 145 W 45 MB total cache per processor 96 GTs QPI DDR4-160018662133

Cores 18 physical coresCPU 28 hyper-threaded cores per CPU for 72 total cores

Memory 8 GB DDR4 RDIMM Crucial CT8G4RFS423 64 GB RAM (8x8 GB)

NICs (XL710) Intelreg Ethernet Controller XL710 4x10 GbE (code named Fortville) NICs are on socket zero

Bios

SE5C61086B0101005

- Intelreg Virtualization Technology for Directed IO (Intelreg VT-d) enabled only for SR-IOV PCI pass- through tests- Hyper-threading enabled but disabled for benchmark testing

Quick Assist Technology

Intelreg Communications Chipset 8950 (Coleto Creek) Walnut Hill PCIe card 1xColeto Creek supports SR-IOV

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit ID and apply the patch:

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack.

9 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

Controller node:
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation, the following is shown at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch
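After adding the physical ports in step 12, the bridge-to-port mapping can be confirmed from the controller host (a minimal verification; the bridge and port names follow the example above):

# Each bridge should list its attached physical port (p1p1 under br-p1p1, p1p2 under br-ex)
sudo ovs-vsctl show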


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are Nova compute, the Neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the Neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setting. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

- The controller is the service host for the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image, so these should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, Nova compute, and the Neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the Neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify either regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later entry overwrites the earlier one.

- For the OVS with DPDK-netdev hugepage setting, specify the number of hugepages to be allocated; the mounting point defaults to /mnt/huge:

OVS_NUM_HUGEPAGES=8192

- For this release, Intel uses a specific version of OVS with DPDK-netdev from its repository. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For both regular OVS and OVS with DPDK-netdev, the physical port is bound to the bridge through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

Compute node: OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model
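After stack.sh completes on the compute node, a quick way to confirm that the node registered with the controller is to list the hypervisors from the controller (a minimal check; it assumes the admin credentials described in Section 6112 have been sourced):

# Run on the controller; the compute node's hostname (e.g., sdnlab-k02) should appear in the list
nova hypervisor-list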


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
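The four steps above can be collected into a small helper script to run inside the vIPS VM after boot (a sketch under the same assumptions as above, i.e., the two vPorts appear as eth1 and eth2):

#!/bin/bash
# Configure Suricata as an inline IPS between eth1 and eth2
set -e
sysctl -w net.ipv4.ip_forward=1                    # forward traffic between the two vPorts
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE     # divert forwarded traffic to netfilter queue 0
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp    # answer ARP on behalf of the far side
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
suricata -c /etc/suricata/suricata.yaml -q 0       # inspect netfilter queue 0 in inline mode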

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4

Then regenerate the grub configuration (grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot the VM so the new parameters take effect.


6 Verify that hugepages are available in the VM

cat /proc/meminfo
...
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel (a status check is shown after this procedure):

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack, the application starts and displays its text-based user interface.
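As a sanity check, the binding performed in step 11 can be confirmed with the same script's status option before starting the application (run inside the VM):

# Lists which NICs are bound to the DPDK-compatible igb_uio driver and which remain on kernel drivers
$RTE_SDK/tools/dpdk_nic_bind.py --status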


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
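With the routes in place, traffic can be generated between the two VMs using the iperf package installed in step 1 (a usage sketch; replace the placeholder with the sink VM's actual IP address):

# On the sink VM: start the iperf server
iperf -s

# On the source VM: send TCP traffic to the sink for 30 seconds
iperf -c <sink-vm-ip> -t 30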


60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible in a single-compute-node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenants (Projects): admin and demo

• Networks:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavors: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (i.e., any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
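Once created, the new image, aggregate, and flavor can be listed back as a quick verification (run with the admin credentials sourced above):

glance image-list        # fedora-basic should appear
nova aggregate-list      # aggr-g06 with zone-g06 should appear
nova flavor-list         # onps-flavor with ID 1001 should appear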


6113 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with the name "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with the name sub-demo and CIDR 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the names and/or IDs of the image, flavor, availability zone, and network to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM console.
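For example, a boot command that combines the image, flavor, zone, and network created earlier in this section might look like the following (a sketch; the network ID is a placeholder to be taken from the neutron net-list output, and vm-demo-1 is an arbitrary instance name):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo-1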


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
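The new virtual functions can also be inspected from the network side (a supplementary check; p1p1 follows the example above):

# Shows the VFs attached to the physical port, including their MAC addresses
ip link show p1p1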

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the command output below (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh.


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
ID = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
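A simple connectivity check between the two VMs could look like the following (a sketch; the address is a placeholder for the peer VM's VF IP address):

# From one VM, ping the VF interface of the VM on the other compute host
ping -c 4 <peer-vm-vf-ip>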


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the Java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the Java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
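Once the features are installed and DevStack has been stacked with the settings above, a quick way to confirm that the ODL northbound API used by the ml2_odl section is reachable is a simple REST query (a sketch; the IP address and admin/admin credentials match the example ml2_odl settings above):

# Should return a JSON list of the Neutron networks known to OpenDaylight
curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks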


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

Controller node:
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id      State        Bundle
106     ACTIVE       org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE       org.opendaylight.ovsdb_0.5.0
262     ACTIVE       org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State        Bundle
106     ACTIVE       org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE       org.opendaylight.ovsdb_0.5.0
262     RESOLVED     org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server.

In addition, you must also edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
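To confirm what git will actually use, the stored values can be read back (a quick check):

# Prints the proxy values git has recorded at global or system scope
git config --global --get http.proxy
git config --system --get http.proxy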


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 13: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

13

Intelreg ONP Server Reference ArchitectureSolutions Guide

40 Software Versions

Table 4-1 Software Versions

Software Component Function VersionConfiguration

Fedora 21 x86_64 Host OS 3178-300fc21x86_64

Fedora 20 x86_64 Host OS only for the controller and OpenDaylightOpenStack integration

This is because of SW incompatibilities of the integration in Fedora 20

Real-Time Kernel Targeted towards Telco environment which is sensitive to low latency

Real-Time Kernel v31431-rt28

Qemu‐kvm Virtualization technology QEMU-KVM 212-7fc21x86_64

Data Plane Development Kit (DPDK)

Network stack bypass and libraries for packet processing includes user space poll mode drivers

171

Open vSwitch vSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS) ‒ Compute OpenvSwitch 2390 (OVS) ‒ For OVS with DPDK-netdev Compute node Commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack SDN orchestrator Juno Release + Intel patches(https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip)

DevStack Tool for Open Stack deployment

httpsgithubcomopenstack-devdevstackgit Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b

OpenDaylight SDN Controller Helium-SR1

Suricata IPS application Suricata v202

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

41 Obtaining Software IngredientsTable 4-2 Software Ingredients

Software Component

Software Sub-components Patches Location Comments

Fedora 21 httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

Standard Fedora 21 iso image

Fedora 20 httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedorax86_64isoFedora-20-x86_64-DVDiso

Standard Fedora 20 iso image

Real- Time Kernel

httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Data Plane Development Kit (DPDK)

DPDK poll mode driver sample apps (bundled)

httpdpdkorggitdpdk All sub-components in one zip file

OpenvSwitch ‒ Controller OpenvSwitch 231-git3282e51 (OVS)‒ Compute OpenvSwitch 2390 (OVS)‒ For OVS with DPDK-netdev compute node Commit id b35839f3855e3b812709c6ad1c927 8f4 98aa9935

OpenStack Juno release to be deployed using DevStack(see following row)

DevStack Patches for DevStack and Nova

DevStackgit clone httpsgithubcomopenstack-devdevstackgit

Commit id 3be5e02cf873289b814da87a0ea35c3dad21765bThen apply to that commit the patch inhomestackpatchesdevstackpatch

NovahttpsgithubcomopenstacknovagitCommit id78dbed87b53ad3e60dc00f6c077a23506d228b6cThen apply to that commit the patch in

homestackpatchesnovapatch

Two patches downloaded as one zip file Then follow the instructions to deploy

OpenDaylight httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

Intelreg ONPServer Release13 Script

Helper scripts to setup SRT 13 using DevStack

httpsdownload01orgpacket- processingONPS13 onps_server_1_3targz

Suricata Suricata version 202 yum install suricata

15

Intelreg ONP Server Reference ArchitectureSolutions Guide

50 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

51 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation The preferred operating system is Fedora 21 although it is considered relatively easy to use this solutions guide for other Linux distributions

511 BIOS SettingsTable 5-1 BIOS Settings

Configuration Setting forController Node

Setting forCompute Node

Intelreg Virtualization Technology Enabled Enabled

Intelreg Hyper-Threading Technology (HTT) Enabled Enabled

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command keeps kernel version 3.17.8 in place when upgrading to the latest packages that Fedora supports (the yum configuration file needs to be modified with this command prior to running the yum update)

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system
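After the reboot, a quick check confirms the node is actually running the new kernel; the expected string below assumes the 3.17.8-300.fc21 packages installed above.

uname -r
# Expected output: 3.17.8-300.fc21.x86_64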

5.1.2.7 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To maintain kernel version 3.15.6, modify the yum configuration file prior to running yum update with this command

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.8 Enabling the Real-Time Kernel (Compute Node)

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.fc21.x86_64/.config /usr/src/kernel/linux-stable-rt

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entries

grep ^menuentry /boot/grub2/grub.cfg

b To set the default menu entry

grub2-set-default <desired default menu entry>

c To verify


grub2-editenv list

d Reboot and log in to the new kernel

Note Use the same procedures described in Section 5.3 for the compute node setup

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove default servers.

The following example replaces a default NTP server with a local NTP server 10.0.0.12 and comments out other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
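After editing /etc/ntp.conf, restarting the service and querying the peers verifies that the node is actually synchronizing; this is a quick check, and the server listed depends on your local NTP configuration.

systemctl restart ntpd.service
ntpq -p    # the server currently selected for synchronization is marked with '*'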


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configurations to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network — Used to pull all necessary packages/patches from repositories on the Internet, configured to obtain a DHCP address

• ens2f0 Management network — Used to connect all nodes for OpenStack management, configured to use network 10.11.0.0/16

• p1p1 Tenant network — Used for OpenStack internal connections for virtual machines, configured with no IP address

• p1p2 Optional External network — Used for virtual machine Internet/external connectivity, configured with no IP address. This interface is only in the controller node if external network is configured. This interface is not required for the compute node

Note Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure an IP address for p1p1 (the 10 Gb/s interface); otherwise, DPDK does not work when binding the driver during the OpenStack Neutron installation

2 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example

5.2.1.2 Storage Requirements

By default DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find the spare disks, i.e., disks not partitioned or formatted on the system, and then use them to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
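Before running DevStack, the standard LVM reporting commands confirm that the volume group was assembled from both spare disks (a quick check, assuming the same /dev/sdb and /dev/sdc devices):

pvs                  # both /dev/sdb and /dev/sdc should appear as physical volumes
vgs stack-volumes    # the volume group size should be roughly the sum of the two disks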


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf, adding or modifying the following lines

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net" ]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download Intel® DPDK OVS patches for OpenStack

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack. Currently these are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the local.conf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only — Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.
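Once the ports are added, the bridge layout can be verified from the controller; this is a quick check using the bridge and port names from the example above.

sudo ovs-vsctl show    # br-p1p1 should list port p1p1, and br-ex should list port p1p2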


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• no_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

— The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

— The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


— The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

— For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

— For this version, Intel uses a specific version of OVS with DPDK-netdev from its repository. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

— For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

— A sample local.conf file for a compute node with the ovdk (OVS with DPDK-netdev) agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

— A sample local.conf file for a compute node with the accelerated ovs agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
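To confirm that traffic between the two vPorts is actually being diverted through Suricata rather than forwarded directly, the NFQUEUE rule counters and Suricata's default alert log can be watched while traffic flows. This is a sketch, assuming the eth1/eth2 naming above and Suricata's default log directory.

iptables -vnL FORWARD                 # packet/byte counters on the NFQUEUE rules should increase
tail -f /var/log/suricata/fast.log    # alerts appear here when a rule matches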

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the VM's two network interfaces

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 dev eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 dev eth0
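With the routes in place, end-to-end traffic through the vBNG path can be generated with the iperf package installed in step 1. This is a minimal sketch; the sink VM's IP address is a placeholder.

# On the sink VM, start an iperf server
iperf -s

# On the source VM, send TCP traffic toward the sink for 60 seconds
iperf -c <sink VM IP address> -t 60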


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

— Private network (virtual network): 10.0.0.0/24

— Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
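To confirm the flavor was registered with the intended values, it can be displayed by name (a quick check using the flavor created above):

nova flavor-show onps-flavor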


6.1.1.3 Example — VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
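Putting the earlier custom settings together, a concrete form of the boot command in step 4 might look like the following sketch. It reuses the fedora-basic image, onps-flavor flavor, and zone-g06 availability zone from Section 6.1.1.2; the network ID placeholder must be taken from the neutron net-list output, and demo-vm1 is just an example instance name.

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1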


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [ "/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio" ]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
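The VF capability of the port is also visible in sysfs; a quick check (assuming the same p1p1 interface) shows how many VFs the port supports and how many are currently enabled:

cat /sys/class/net/p1p1/device/sriov_totalvfs    # maximum number of VFs the port supports
cat /sys/class/net/p1p1/device/sriov_numvfs      # number of VFs currently enabled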

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product ID of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with DevStack. Make the following changes:

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
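To confirm that the features from step 6 were actually installed, the Karaf console can list what is active; a quick check filtered to the OVSDB features used here:

feature:list -i | grep ovsdb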


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file or the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012) A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

Page 14: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

14

4.1 Obtaining Software Ingredients

Table 4-2. Software Ingredients

Fedora 21
  Location: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso
  Comments: Standard Fedora 21 ISO image

Fedora 20
  Location: http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso
  Comments: Standard Fedora 20 ISO image

Real-Time Kernel
  Location: https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Data Plane Development Kit (DPDK)
  Sub-components: DPDK poll mode driver, sample apps (bundled)
  Location: http://dpdk.org/git/dpdk
  Comments: All sub-components in one zip file

OpenvSwitch
  - Controller: OpenvSwitch 2.3.1-git3282e51 (OVS)
  - Compute: OpenvSwitch 2.3.90 (OVS)
  - For OVS with DPDK-netdev compute node: commit id b35839f3855e3b812709c6ad1c9278f498aa9935

OpenStack
  Juno release, to be deployed using DevStack (see following row)

DevStack
  Patches for DevStack and Nova (two patches downloaded as one zip file; then follow the instructions to deploy):
  - DevStack: git clone https://github.com/openstack-dev/devstack.git
    Commit id 3be5e02cf873289b814da87a0ea35c3dad21765b, then apply to that commit the patch in /home/stack/patches/devstack.patch
  - Nova: https://github.com/openstack/nova.git
    Commit id 78dbed87b53ad3e60dc00f6c077a23506d228b6c, then apply to that commit the patch in /home/stack/patches/nova.patch

OpenDaylight
  Location: https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

Intel® ONP Server Release 1.3 Script
  Helper scripts to set up SRT 1.3 using DevStack
  Location: https://download.01.org/packet-processing/ONPS1.3/onps_server_1_3.tar.gz

Suricata
  Suricata version 2.0.2
  Comments: yum install suricata


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes.

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 21, although it is considered relatively easy to use this solutions guide for other Linux distributions.

5.1.1 BIOS Settings

Table 5-1. BIOS Settings

Configuration                                 Setting for Controller Node    Setting for Compute Node
Intel® Virtualization Technology              Enabled                        Enabled
Intel® Hyper-Threading Technology (HTT)       Enabled                        Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, USB key installation, etc., are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1. Download the 64-bit Fedora 20 DVD from the following site:

   http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2. Download the 64-bit Fedora 21 DVD from the following site:

   https://getfedora.org/en/server

   or from the direct URL:

   http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3. Burn the ISO file to DVD and create an installation disk.

5.1.2.2 Installing Fedora 21

Use the DVD to install Fedora 21. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

3. Virtualization

4. Also create a user named stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1. C Development Tool and Libraries

2. Development Tools

3. Also create a user named stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install the Fortville driver on a system running the Fedora 20 OS.

1. Base OS preparation:

   a. Install Fedora 20 with the software selection of C Development Tools and Development Tools.

   b. Reboot the system after the installation is complete.

   Note: After reboot, even though the Fortville hardware device is detected by the OS, no driver is available; no Fortville interface is shown in the output of the ifconfig command.

2. Install the Fortville driver:

   a. Log in as the root user.

   b. Download the driver. The Fortville Linux driver source code can be downloaded from the following Intel.com support site:

      wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

   c. Compile and install the driver by running the following commands:

      tar zxvf i40e-1.1.23.tar.gz
      cd i40e-1.1.23/src
      make
      make install
      modprobe i40e

   d. Run the ifconfig command to confirm the availability of all Fortville ports.

   e. From the output of the previous step, determine the network interface names and their MAC addresses.

   f. Create a configuration file for each of the interfaces (the example below is for the interface p1p1):

      cd /etc/sysconfig/network-scripts
      echo "TYPE=Ethernet" > ifcfg-p1p1
      echo "BOOTPROTO=none" >> ifcfg-p1p1
      echo "NAME=p1p1" >> ifcfg-p1p1
      echo "ONBOOT=yes" >> ifcfg-p1p1
      echo "HWADDR=<mac address>" >> ifcfg-p1p1


   g. Repeat the preceding step for each of the Fortville interfaces.

   h. Reboot.

After the reboot, the interfaces are ready to be used.
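If you want a quick sanity check that the interfaces came up and are bound to the i40e driver, standard Linux tools can be used; the interface name p1p1 below simply follows the example above:

   ip link show p1p1
   ethtool -i p1p1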

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B.

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the kernel installed by default with Fedora 21. To upgrade to 3.17.8, follow these steps:

Note: If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.8.

1. Download the kernel packages:

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2. Install the kernel packages:

   rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

   rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


   rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

   rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3. Reboot the system to allow booting into the 3.17.8 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4. The following command excludes the kernel from future updates. (In order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update.)

   echo "exclude=kernel*" >> /etc/yum.conf

5. After installing the required kernel packages, the operating system should be updated with the following command:

   yum update -y

6. After the update completes, reboot the system.

5.1.2.7 Installing the Fedora 20 Kernel

Note: Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1. Download the kernel packages:

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

   wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2. Install the kernel packages:

   rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
   rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
   rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3. Reboot the system to allow booting into the 3.15.6 kernel.

Note: ONPS depends on libraries provided by your Linux distribution. It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4. To maintain the 3.15.6 kernel, modify the yum configuration file prior to running yum update with this command:

   echo "exclude=kernel*" >> /etc/yum.conf


5. After installing the required kernel packages, update the operating system with the following command:

   yum update -y

6. After the update completes, reboot the system.

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, with applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.6.

1. Install the real-time kernel:

   a. Get the real-time kernel sources:

      cd /usr/src/kernel

      git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

      Note: It may take a while to complete the download.

   b. Find the latest rt version from git tag and then check out this version.

      Note: v3.14.31-rt28 is the latest current version.

      cd linux-stable-rt

      git tag

      git checkout v3.14.31-rt28

2. Compile the RT kernel:

   Note: Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

   a. Install the package:

      yum install ncurses-devel

   b. Copy the kernel configuration file to the kernel source:

      cp /usr/src/kernel/3.17.4-301.fc21.x86_64/config /usr/src/kernel/linux-stable-rt

      cd /usr/src/kernel/linux-stable-rt

      make menuconfig

      The resulting configuration interface is shown below.


   c. Select the following:

      1. Enable the high resolution timer:

         General Setup > Timer Subsystem > High Resolution Timer Support

      2. Enable the Preempt RT:

         Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

      3. Set the high-timer frequency:

         Processor type and features > Timer frequency > 1000 HZ

      4. Enable the max number SMP:

         Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

      5. Exit and save.

      6. Compile the kernel:

         make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3. Make changes to the boot sequence:

   a. To show all menu entries:

      grep ^menuentry /boot/grub2/grub.cfg

   b. To set the default menu entry:

      grub2-set-default "<the desired default menu entry>"

   c. To verify:


      grub2-editenv list

   d. Reboot and log in to the new kernel.
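As an illustration of steps b through d, assuming the newly built real-time kernel appears in grub.cfg with a menu entry similar to the one below (the exact entry string depends on your distribution and build, so treat it as a placeholder), the default entry can be set and the running kernel verified after the reboot:

      grub2-set-default "Fedora (3.14.31-rt28) 21 (Twenty One)"
      grub2-editenv list
      uname -r

The output of uname -r after the reboot should contain the rt version that was checked out (e.g., 3.14.31-rt28).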

Note: Use the same procedures described in Section 5.3 for the compute node setup.

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: selinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove default servers.

The following example replaces a default NTP server with a local NTP server (10.166.45.16 in this example) and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
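After editing /etc/ntp.conf, a generic way to confirm that time synchronization is working against the configured server (not specific to this release) is:

systemctl restart ntpd.service
ntpq -p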


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configurations that are to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure a proxy server, follow the instructions in Appendix B.

General

At least two networks are required to build the OpenStack infrastructure in a lab environment: one network is used to connect all nodes for OpenStack management (management network), and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity, because installing OpenStack requires pulling packages from various sources/repositories on the Internet.

Some users might want to have Internet and/or external connectivity for OpenStack instances (virtual machines). In this case, an optional network can be used.

The assumption is that the targeted OpenStack infrastructure contains multiple nodes: one is a controller node and one or more are compute nodes.

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces as follows:

• ens2f1, Internet network: used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0, Management network: used to connect all nodes for OpenStack management; configured to use network 10.11.0.0/16.

• p1p1, Tenant network: used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2, Optional external network: used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note: Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville), because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora, the network configuration files are located at:

/etc/sysconfig/network-scripts

To configure a network on the host system, edit the following network configuration files:

ifcfg-ens2f1
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes: 1. Do not configure the IP address for p1p1 (10 Gb/s interface); otherwise, DPDK does not work when binding the driver during the OpenStack Neutron installation.

2. 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group named stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not of a single volume.

The following example shows how to use the spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. You need to find spare disks, i.e., disks not partitioned or formatted on the system, and then use the spare disks to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
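To confirm that the physical volumes and the volume group were created as expected, the standard LVM query commands can be used (this is a generic check, not specific to DevStack):

pvs
vgs stack-volumes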


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section. The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for the controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as the root user and perform the following:

1. Add the stack user to the sudoer list if not already present:

   echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Edit /etc/libvirt/qemu.conf, adding or modifying the following lines:

   cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

   cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/mnt/huge", "/dev/vhost-net"]

   hugetlbfs_mount = "/mnt/huge"

3. Restart the libvirt service and make sure libvirtd is active:

   systemctl restart libvirtd.service
   systemctl status libvirtd.service

Stack User Actions

1. Log in as the stack user.

2. Configure the appropriate proxies (yum, http, https, and git) for the package installation and make sure these proxies are functional.

   Note: On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3. Download the Intel® DPDK OVS patches for OpenStack.

   The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack. Currently it is not native to OpenStack. The file can be downloaded from:

   https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4. Place the file in the /home/stack directory and unzip it:

   mkdir /home/stack/patches

   cd /home/stack/patches

   wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
   unzip openstack-ovs-dpdk-911.zip

   Two patch files, devstack.patch and nova.patch, are present after unzipping.

5. Download the DevStack source:

   git clone https://github.com/openstack-dev/devstack.git

6. Check out DevStack at the desired commit id and patch it:

   cd /home/stack/devstack
   git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
   patch -p1 < /home/stack/patches/devstack.patch

7. Clone and patch Nova:

   sudo mkdir /opt/stack
   sudo chown stack:stack /opt/stack
   cd /opt/stack
   git clone https://github.com/openstack/nova.git
   cd /opt/stack/nova
   git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
   patch -p1 < /home/stack/patches/nova.patch

8. Create a local.conf file in /home/stack/devstack.

9. Pay attention to the following in the local.conf file:

   a. Use Rabbit for messaging services (Rabbit is on by default).

      Note: In the past, Fedora only supported QPID for OpenStack. Presently it only supports Rabbit.

   b. Explicitly disable the Nova compute service on the controller. This is because, by default, the Nova compute service is enabled:

      disable_service n-cpu

   c. To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

      Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

   d. Explicitly disable tenant tunneling and enable tenant VLAN. This is because, by default, tunneling is used:

      ENABLE_TENANT_TUNNELS=False
      ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10. Install DevStack:

   cd /home/stack/devstack
   ./stack.sh


11. For a successful installation, the following shows at the end of the screen output:

   stack.sh completed in XXX seconds

   where XXX is the number of seconds it took to complete stacking.

12. For the controller node only: add the physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

   sudo ovs-vsctl add-port br-p1p1 p1p1
   sudo ovs-vsctl add-port br-ex p1p2

13. Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.
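To double-check that the ports were attached to the intended bridges, the OVS configuration can be listed; the bridge and port names below follow the example in step 12:

   sudo ovs-vsctl show
   sudo ovs-vsctl list-ports br-p1p1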


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

   export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

   Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

   - The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

      SERVICE_HOST_NAME=sdnlab-k01
      SERVICE_HOST=10.11.12.1

   - The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

      disable_all_services
      enable_service rabbit
      enable_service n-cpu
      enable_service q-agt


   - The user has the option to use openvswitch for the neutron agent:

      Q_AGENT=openvswitch

      Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

         OVS_DATAPATH_TYPE=netdev

      2. If both are specified in the same local.conf file, the later one overwrites the previous one.

   - For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

      OVS_NUM_HUGEPAGES=8192

   - For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

      OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

   - For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

      OVS_PHYSICAL_BRIDGE=br-p1p1

   - A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node: OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

   - A sample local.conf file for a compute node with the regular OVS agent is as follows:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

   sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

   iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
   iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

   suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

   echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
   echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
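Once traffic is flowing between the two subnets, a generic way to confirm that packets are really being diverted into the netfilter queue (and therefore seen by Suricata) is to watch the packet counters on the FORWARD rules added above:

   iptables -L FORWARD -v -n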

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

   yum -y update

2. Disable SELinux:

   setenforce 0
   vi /etc/selinux/config

   and change it so that SELINUX=disabled.

3. Disable the firewall:

   systemctl disable firewalld.service
   reboot

4. Edit the grub default configuration:

   vi /etc/default/grub

5. Add hugepages:

   ... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6. Verify that hugepages are available in the VM:

   cat /proc/meminfo
   HugePages_Total:    2
   HugePages_Free:     2
   Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

   export RTE_SDK=/root/dpdk
   export RTE_TARGET=x86_64-native-linuxapp-gcc
   export OVS_DIR=/root/ovs

   export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
   export DPDK_DIR=$RTE_SDK
   export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET

8. Log in again or source the file:

   source ~/.bashrc

9. Install DPDK:

   git clone http://dpdk.org/git/dpdk
   cd dpdk
   git checkout v1.7.1
   make install T=$RTE_TARGET
   modprobe uio
   insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the VM's network interfaces:

   lspci | grep Ethernet
   00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
   00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel:

   $RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
   $RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12. Download the BNG packages:

   wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

   unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

   yum -y install ncurses-devel
   cd dppd-BNG-v013
   make

The application starts like this:

   build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

   yum install -y iperf

2. Turn on IP forwarding:

   sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

   route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

   route add -net 10.0.0.0/24 eth0
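With the routes in place, iperf can be used to push traffic from the source to the sink. The sink address below (11.0.0.10) is purely illustrative and assumes the sink VM owns an address in the 11.0.0.0/24 subnet:

   On the sink VM:    iperf -s
   On the source VM:  iperf -c 11.0.0.10 -t 60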


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single compute node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

   export OS_USERNAME=admin
   export OS_TENANT_NAME=admin
   export OS_PASSWORD=password
   export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating a glance image, aggregate/availability zone, and flavor:

   source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

   glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

   The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

   glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

   First find out the available hypervisors, and then use the information for creating the aggregate/availability zone:

   nova hypervisor-list
   nova aggregate-create <aggregate-name> <zone-name>
   nova aggregate-add-host <aggregate-name> <hypervisor-name>

   The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

   nova aggregate-create aggr-g06 zone-g06
   nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory and disk space, etc.

   The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

   nova flavor-create onps-flavor 1001 1024 4 1


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

   export OS_USERNAME=demo
   export OS_TENANT_NAME=demo
   export OS_PASSWORD=password
   export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating a tenant network and instance (VM):

   source demo-cred

3. Create a network for the tenant demo by performing the following steps:

   a. Get the tenant demo:

      keystone tenant-list | grep -Fw demo

      The following example creates a network with a name of net-demo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

      neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

   b. Create the subnet:

      neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

      The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

      neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

   a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

      glance image-list
      nova flavor-list
      nova aggregate-list
      neutron net-list

   b. Launch an instance (VM) using the information obtained from the previous step (a concrete example follows this list):

      nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

   c. The new VM should be up and running in a few minutes.
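Putting the pieces from Section 6.1.1.2 together, a filled-in boot command might look like the following. The image (fedora-basic), flavor (onps-flavor), and zone (zone-g06) are the sample objects created earlier, the network ID would be pasted in from neutron net-list, and the instance name vm-demo-1 is purely illustrative:

   nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> vm-demo-1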

5. Log in to the OpenStack dashboard using the demo user credentials, and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6.1.1.4 Local VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1. Local VNF


3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

6.1.1.5 Remote VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2. Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

   dmesg | grep -e IOMMU

   Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

   sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
   grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

   yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

   systemctl stop libvirtd

   yum remove libvirt
   yum remove libvirtd

   wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
   tar zxvf libvirt-1.2.9.tar.gz

   cd libvirt-1.2.9
   ./autogen.sh --system --with-dbus
   make
   make install

   systemctl start libvirtd

   Make sure libvirtd is running v1.2.9:

   libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

   yum remove libvirt-python

   wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
   tar zxvf libvirt-python-1.2.9.tar.gz


   cd libvirt-python-1.2.9
   python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

   /dev/vfio/vfio

   to the cgroup_device_acl list. An example follows:

   cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

   echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

   To check that the virtual functions are enabled:

   lspci -nn | grep XL710

   The screen output should display the physical function and two virtual functions.

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the output of the following command (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

   Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

   [[post-config|$NOVA_CONF]]
   [DEFAULT]
   scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
   pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

   [[post-config|$Q_PLUGIN_CONF_FILE]]
   [ml2_sriov]
   supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

   echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

   [[post-config|$NOVA_CONF]]
   [DEFAULT]
   pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

   pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

   pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

   OVS_NUM_HUGEPAGES=8192
   OVS_DATAPATH_TYPE=netdev
   OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

   Note: Currently, SR-IOV pass-through is only supported with a standard OVS.

5. Run stack.sh for both the controller and compute nodes to complete the Devstack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

   mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entry(ies) of PCI device(s) similar to the following:

   | 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3. Next, create a flavor, for example:

   nova flavor-create numa-flavor 1001 1024 4 1

   where:

   flavor name = numa-flavor
   id = 1001
   virtual memory = 1024 MB
   virtual disk size = 4 GB
   number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

   nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

   nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

   nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

   where numa-vm1 is the name of the VM instance to be booted.

   Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
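For example, assuming the two VMs obtained (or were assigned) the hypothetical VF addresses 10.11.12.101 and 10.11.12.102 on the physnet1 network, connectivity can be checked from the first VM with a simple ping:

   ping -c 4 10.11.12.102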


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

   wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf:

   a. Install java:

      yum install java -y

   b. Find the java binary location from the logical link /etc/alternatives/java:

      ls -l /etc/alternatives/java

   c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

      echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

      source /root/.bashrc
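A quick way to confirm that the variable points at a working JRE (the OpenJDK path above is only an example; the exact path depends on the installed build) is:

      echo $JAVA_HOME
      $JAVA_HOME/bin/java -version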

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

   tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

   cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell.


6. Install the required ODL features from the Karaf shell:

   feature:list

   feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines:

   On the controller:

   Comment out these lines:

      enable_service q-agt
      Q_AGENT=openvswitch
      Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

   Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

      enable_service odl-compute
      Q_HOST=$HOST_IP
      ODL_MGR_IP=10.11.13.8
      ODL_PROVIDER_MAPPINGS=physnet1:p786p1
      Q_PLUGIN=ml2
      Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

   Add these lines at the bottom of the file:

      [[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
      [ml2_odl]
      url=http://10.11.13.8:8080/controller/nb/v2/neutron
      username=admin
      password=admin

   On the compute node:

   Comment out these lines:

      enable_service q-agt
      Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

   Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

      enable_service neutron
      enable_service odl-compute
      Q_HOST=$HOST_IP
      ODL_MGR_IP=10.11.13.8
      Q_PLUGIN=ml2
      Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run a stack on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the following GUI.


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instance to create a VM instance.


8 Click Details to enter the VM details


9. Click Networking, then enter the network information.

The VM is now created.

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id   State     Bundle
106  ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112  ACTIVE    org.opendaylight.ovsdb_0.5.0
262  RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
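To return the controller to its original state, the bundle can be re-activated from the same console. This is a minimal sketch; it assumes the bundle ID is still 262, which can change after a controller restart.

osgi> start 262
osgi> ss ovs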


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want the proxy server to be used for:

export no_proxy=192.168.1.221,<intranet subnets>

If you want to make the change across all users, instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server.

In addition, you need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
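To confirm that the settings took effect, a quick check such as the following can be used (a minimal sketch; the expected values are whatever was configured above):

echo $http_proxy $https_proxy $no_proxy      # shell environment
git config --global --get http.proxy         # git proxy setting
grep ^proxy /etc/yum.conf                    # yum proxy setting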


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file or the ~/.m2 directory does not exist, create the directory:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
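To verify that Maven picks up the proxy configuration, the effective settings can be printed. This is a minimal check and assumes mvn is installed and on the PATH:

mvn help:effective-settings | grep -A 3 "<proxy>"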


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name / Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller
http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


5.0 Installation and Configuration Guide

This section describes the installation and configuration instructions to prepare the controller and compute nodes

5.1 Instructions Common to Compute and Controller Nodes

This section describes how to prepare both the controller and compute nodes with the right BIOS settings and operating system installation. The preferred operating system is Fedora 21, although it is considered relatively easy to use this solutions guide for other Linux distributions.

5.1.1 BIOS Settings

Table 5-1. BIOS Settings

Configuration                              Setting for Controller Node   Setting for Compute Node

Intel® Virtualization Technology           Enabled                       Enabled
Intel® Hyper-Threading Technology (HTT)    Enabled                       Enabled


5.1.2 Operating System Installation and Configuration

Following are some generic instructions for installing and configuring the operating system. Other ways of installing the operating system, such as network installation, PXE boot installation, or USB key installation, are not described in this solutions guide.

5.1.2.1 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

http://download.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso

2 Download the 64-bit Fedora 21 DVD from the following site

https://getfedora.org/en/server/

or from the direct URL:

http://download.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso

3 Burn the ISO file to DVD and create an installation disk

5.1.2.2 Installing Fedora 21

Use the DVD to install Fedora 21. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. These scripts automate the process described below; if using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

When using the scripts, start with the README file. You'll get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time.


5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget http://downloadmirror.intel.com/24411/eng/i40e-1.1.23.tar.gz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1.1.23.tar.gz
cd i40e-1.1.23/src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used
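As an alternative to creating each file by hand, a small loop can generate the same configuration files. This is only a sketch and assumes the Fortville interface names (p1p1 and p1p2 here) have already been identified from the ifconfig output:

cd /etc/sysconfig/network-scripts
for IF in p1p1 p1p2; do
    MAC=$(cat /sys/class/net/$IF/address)   # MAC address reported by the kernel
    {
        echo "TYPE=Ethernet"
        echo "BOOTPROTO=none"
        echo "NAME=$IF"
        echo "ONBOOT=yes"
        echo "HWADDR=$MAC"
    } > ifcfg-$IF
done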

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform for Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3.17.8, which is a newer version than the native Fedora 21 kernel. To upgrade to 3.17.8, follow these steps:

Note: If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.8.

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports. (In order to maintain kernel version 3.17.8, the yum configuration file needs to be modified with this command prior to running the yum update.)

echo exclude=kernel >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system
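After the reboot, the running kernel can be confirmed. This is a minimal check assuming the 3.17.8 packages above were installed:

uname -r
# expected output similar to: 3.17.8-300.fc21.x86_64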

5.1.2.7 Installing the Fedora 20 Kernel

Note: Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3.15.6 kernel by modifying the yum configuration file prior to running yum update with this command:

echo exclude=kernel >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc.), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note: Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp /usr/src/kernel/3.17.4-301.f21.x86_64/.config /usr/src/kernel/linux-stable-rt/

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify


grub2-editenv list

d Reboot and log to the new kernel
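After rebooting, a quick check confirms that the real-time kernel is the one actually running. This is a minimal sketch assuming the v3.14.31-rt28 tag was built as described above:

uname -r     # should include the -rt patch level, e.g. 3.14.31-rt28
uname -v     # should contain PREEMPT RT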

Note: Use the same procedures described in Section 5.3 for the compute node setup.

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewall, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the time synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 10.166.45.16/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
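After editing /etc/ntp.conf, restart the service and check that the node is synchronizing. A minimal check:

systemctl restart ntpd.service
ntpq -p     # lists configured servers with reachability and offset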


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user successfully followed the operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configurations that are to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

• ens2f1 Internet network - Used to pull all necessary packages/patches from repositories on the Internet, configured to obtain a DHCP address.

• ens2f0 Management network - Used to connect all nodes for OpenStack management, configured to use network 10.11.0.0/16.

• p1p1 Tenant network - Used for OpenStack internal connections for virtual machines, configured with no IP address.

• p1p2 Optional External network - Used for virtual machine Internet/external connectivity, configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note: Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts/

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.11.12.11
NETMASK=255.255.0.0

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes: 1. Do not configure the IP address for p1p1 (the 10 Gb/s interface), otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2. 10.11.12.11 and 255.255.0.0 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10.11.12.11 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not more than one volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find the spare disks, i.e., disks not partitioned or formatted on the system, and then use them to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
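The resulting volume group can be verified before stacking; a minimal check:

vgs stack-volumes   # volume group summary
pvs                 # physical volumes backing it (/dev/sdb, /dev/sdc)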


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

• Hostname: sdnlab-k01

• Internet network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.1

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf and add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note: On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,10.11.12.1). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download Intelreg DPDK OVS patches for OpenStack

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack. Currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit id and patch

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack.

9 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only - Add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, therefore matching VLANs 1000 to 1010 should be configured in the switch.
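To confirm that the physical ports were added to the bridges in step 12, the OVS configuration can be inspected; a minimal check:

sudo ovs-vsctl show                  # all bridges and their ports
sudo ovs-vsctl list-ports br-p1p1    # should include p1p1
sudo ovs-vsctl list-ports br-ex      # should include p1p2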


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname: sdnlab-k02

• Lab network IP address: Obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
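To confirm traffic is actually being diverted through the IPS, the NFQUEUE rule counters and the Suricata statistics can be checked inside the VM. A minimal sketch, assuming the default Suricata log location:

iptables -L FORWARD -v -n            # packet counters on the NFQUEUE rules should increase
tail -f /var/log/suricata/stats.log  # per-interval Suricata statistics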

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:  1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
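With the routes in place, traffic can be generated between the two VMs through the vBNG using iperf. A minimal sketch; 11.0.0.2 is an assumed example address for the sink:

# on the sink VM
iperf -s

# on the source VM (11.0.0.2 is assumed to be the sink address)
iperf -c 11.0.0.2 -t 60 -i 10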


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
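The newly created image, aggregate, and flavor can be listed to confirm they are registered; a minimal check with the same admin credentials sourced:

glance image-list | grep fedora-basic
nova aggregate-list
nova flavor-show onps-flavor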


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
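The instance state can also be followed from the command line; a minimal check with the demo credentials sourced:

nova list                   # STATUS should change to ACTIVE
nova show <instance-name>   # detailed information, including the assigned IP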

5 Log in to the OpenStack dashboard using the demo user credential, click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
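The VFs can also be seen from the network side; a minimal check (note that the sriov_numvfs setting above is not persistent across reboots, so it may need to be reapplied):

lspci -nn | grep "Virtual Function"   # the two VFs created above
ip link show p1p1                     # lists vf 0 and vf 1 with their MAC addresses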

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product ID of the 82599 can be obtained from the output (10fb for the physical function and 10ed for a VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic <network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.
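Putting the pieces together with the names mentioned in the note (image fedora-basic, availability zone zone-04, network private), one possible concrete invocation is sketched below. The neutron net-list lookup is just a convenient way to fetch the network UUID and assumes the demo credentials are already sourced:

NET_ID=$(neutron net-list | awk '/ private / {print $2}')
nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=$NET_ID numa-vm1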

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface backed by an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
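As a minimal sketch of such a check (the interface name ens5 and the 192.168.20.0/24 addresses are illustrative only, not taken from this guide), the two VMs could be configured and tested as follows:

# On the first VM
ip addr add 192.168.20.11/24 dev ens5
ip link set ens5 up

# On the second VM
ip addr add 192.168.20.12/24 dev ens5
ip link set ens5 up
ping -c 3 192.168.20.11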


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set Java home. JAVA_HOME must be set to run Karaf.

a. Install Java:

yum install java -y

b. Find the Java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4. Extract the archive and cd into the extracted directory:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the ./bin/karaf executable to start the Karaf shell.


6. Install the required ODL features from the Karaf shell:

feature:list
feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
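To confirm that the OVSDB-related features were actually installed, the installed-feature list can be filtered from the same Karaf shell, for example:

feature:list -i | grep ovsdb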

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
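Once stack.sh has been re-run on both nodes, a quick way to confirm that Neutron can reach the OpenDaylight northbound API is to query it directly with the credentials configured above. This is a sketch: /networks is the standard resource collection under the neutron northbound URL, and the IP address should be adjusted to your controller:

curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks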

Note: The Karaf start or feature installation might take a long time. The installation might fail if the host does not have network access; you will need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
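If the bundle needs to be re-enabled later, it can be started again from the same console using the bundle id shown in the listing above:

osgi> start 262
osgi> ss ovs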


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you do not want to reach through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just for your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>
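If the proxy requires authentication, yum also honors the following optional settings in /etc/yum.conf (placeholder values shown, only needed in that case):

proxy_username=<your proxy user>
proxy_password=<your proxy password>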

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
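To verify that the settings were stored, the values can be read back, for example (use --system instead of --global to check the system-wide settings):

git config --global --get http.proxy
git config --global --get https.proxy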


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
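A simple well-formedness check of the edited file can save a failed Karaf start later. As a sketch, xmllint (shipped with libxml2, which may need to be installed first) prints nothing when the XML is valid:

$ xmllint --noout ~/.m2/settings.xml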


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name and Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 16: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

16

512 Operating System Installation and ConfigurationFollowing are some generic instructions for installing and configuring the operating system Other ways of installing the operating system are not described in this solutions guide such as network installation PXE boot installation USB key installation etc

5121 Getting the Fedora 20 DVD

1 Download the 64-bit Fedora 20 DVD from the following site

httpdownloadfedoraprojectorgpubfedoralinuxreleases20Fedora x86_64isoFedora-20-x86_64-DVDiso

2 Download the 64-bit Fedora 21 DVD from the following site

httpsgetfedoraorgenserver

or from direct URL

httpdownloadfedoraprojectorgpubfedoralinuxreleases21Serverx86_64isoFedora-Server-DVD-x86_64-21iso

3 Burn the ISO file to DVD and create an installation disk

5122 Installing Fedora 21

Use the DVD to install Fedora 21 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Virtualization

4 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball These scripts are automating the process described below and if using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

When using the scripts start with the README file Yoursquoll get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time

17

Intelreg ONP Server Reference ArchitectureSolutions Guide

5123 Installing Fedora 20

Use the DVD to install Fedora 20 During the installation click Software selection then choose the following

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation The user stack is used in the OpenStack installation

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

Follow the steps below to install Fortville driver on the system with Fedora 20 OS

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After reboot even though the Fortville hardware device is detected by the OS no driver is available because no Fortville interface is shown in the output of the ifconfig command

2 Install the Fortville driver

a Log in as the root user

b Download the driver The Fortville Linux driver source code can be downloaded from the following Intelcom support site

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targzcd i40e-1123srcmakemake installmodprobe i40e

d Run the ifconfig command to confirm the availability of all Forville ports

e From the output of the previous step the determine network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd etcsysconfignetwork-scriptsecho ldquoTYPE=Ethernetrdquo gt ifcfg-p1p1echo ldquoBOOTPROTO=nonerdquo gtgt ifcfg-p1p1echo ldquoNAME=p1p1rdquo gtgt ifcfg-p1p1echo ldquoONBOOT=yesrdquo gtgt ifcfg-p1p1echo ldquoHWADDR=ltmac addressgtrdquo gtgt ifcfg-p1p1

Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node OVS_TYPE=ovs-dpdk[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstack

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192 OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111314

[libvirt]cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=password

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111313

[libvirt]cpu_mode=host-model

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

54 Virtual Network FunctionsThis section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 1921682024 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step (a combined example follows these steps):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below
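
Putting the preceding steps together, a hypothetical boot command reusing the fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and net-demo network created earlier might look like the following (demo-vm1 is an arbitrary instance name and <net-demo-id> is the network ID returned by neutron net-list):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1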


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3
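
As an optional check, the flows programmed by the controller can be dumped on the compute node. This is only a sketch and assumes the integration bridge is named br-int; the bridge name may differ in your setup:

sudo ovs-ofctl dump-flows br-int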

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the following commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
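
The VFs can also be confirmed at the netdev level; for example, assuming the interface is p1p1, the following lists the physical port together with its VFs:

ip link show p1p1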

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with IP address 1011124. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the command output below (10fb for the physical function and 10ed for the VF):

lspci -nn | grep 82599

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic <network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg, ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
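
For example, a minimal connectivity check from inside one of the VMs, assuming the VF interface appears as ens5 and the peer VM obtained the hypothetical address 192.168.2.10:

ip addr show ens5
ping -c 3 192.168.2.10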


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.
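
For example, from the extracted distribution directory:

./bin/karaf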


6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
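
As a quick check that the features were installed, the installed feature list can be filtered from the same Karaf shell, for example:

- feature:list -i | grep ovsdb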

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
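
After stack.sh completes on a node, one way to confirm that Open vSwitch is connected to the OpenDaylight controller is to look for a manager entry in the OVS configuration (a sketch; ODL registers itself as the OVSDB manager, by default on port 6640):

sudo ovs-vsctl show | grep -A 1 Manager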


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a Vxlan tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http101112238080controllernbv2neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run a stack on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string(s) filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id   State    Bundle
106  ACTIVE   org.opendaylight.ovsdb.northbound_0.5.0
112  ACTIVE   org.opendaylight.ovsdb_0.5.0
262  ACTIVE   org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id   State     Bundle
106  ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112  ACTIVE    org.opendaylight.ovsdb_0.5.0
262  RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no-proxy settings, ie, the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=1921681221,<intranet subnets>

If you want to make the change for all users instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file or the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W, DeMar, P, & Crawford, M (2012) A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE Transactions on Parallel and Distributed Systems, vol 23, no 4, April 2012 http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

                                                                        • LEGAL

5.1.2.3 Installing Fedora 20

Use the DVD to install Fedora 20. During the installation, click Software selection, then choose the following:

1 C Development Tool and Libraries

2 Development Tools

3 Also create a user stack and check the box Make this user administrator during the installation. The user stack is used in the OpenStack installation.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

Follow the steps below to install the Fortville driver on a system with the Fedora 20 OS:

1 Base OS preparation

a Install Fedora 20 with the software selection of C Development Tools and Development Tools

b Reboot the system after the installation is complete

Note After the reboot, even though the Fortville hardware device is detected by the OS, no driver is available yet, so no Fortville interface is shown in the output of the ifconfig command.

2 Install the Fortville driver

a Log in as the root user

b Download the driver. The Fortville Linux driver source code can be downloaded from the following Intel.com support site:

wget httpdownloadmirrorintelcom24411engi40e-1123targz

c Compile and install the driver and then run the following commands

tar zxvf i40e-1123targz
cd i40e-1123src
make
make install
modprobe i40e

d Run the ifconfig command to confirm the availability of all Fortville ports

e From the output of the previous step, determine the network interface names and their MAC addresses

f Create a configuration file for each of the interfaces (The example below is for the interface p1p1)

cd /etc/sysconfig/network-scripts
echo "TYPE=Ethernet" > ifcfg-p1p1
echo "BOOTPROTO=none" >> ifcfg-p1p1
echo "NAME=p1p1" >> ifcfg-p1p1
echo "ONBOOT=yes" >> ifcfg-p1p1
echo "HWADDR=<mac address>" >> ifcfg-p1p1


g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used
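
As an optional check that the i40e driver is bound to the new interfaces, query one of the ports (assuming it is named p1p1):

ethtool -i p1p1
ip link show p1p1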

5.1.2.4 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5.1.2.5 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intel® Open Network Platform Server (ONPS) components. The following packages should be installed by the user:

yum -y install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5.1.2.6 Installing the Fedora 21 Kernel

ONPS supports the Fedora 21 kernel 3.17.8, which is a newer version than the native Fedora 20 kernel 3.11.10. To upgrade to 3.17.8, follow these steps:

Note If the Linux real-time kernel is preferred, you can skip this section and go to Section 5.1.2.7.

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-core-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-devel-3.17.8-300.fc21.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.17.8/300.fc21/x86_64/kernel-modules-extra-3.17.8-300.fc21.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-core-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-modules-3.17.8-300.fc21.x86_64.rpm


rpm -i kernel-3.17.8-300.fc21.x86_64.rpm

rpm -i kernel-devel-3.17.8-300.fc21.x86_64.rpm

3 Reboot the system to allow booting into the 3.17.8 kernel.

Note ONPS depends on libraries provided by your Linux distribution. As such, it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system.

4 To maintain kernel version 3.17.8 while updating to the latest packages that Fedora supports, the yum configuration file needs to be modified with this command prior to running yum update:

echo "exclude=kernel" >> /etc/yum.conf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system
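
After the reboot, the running kernel version can be confirmed; the output should report the 3.17.8 kernel installed above:

uname -r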

5.1.2.7 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylight/OpenStack integration.

ONPS supports kernel 3.15.6, which is newer than the native Fedora 20 kernel 3.11.10.

To upgrade to 3.15.6, perform the following steps:

1 Download the kernel packages

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-devel-3.15.6-200.fc20.x86_64.rpm

wget https://kojipkgs.fedoraproject.org/packages/kernel/3.15.6/200.fc20/x86_64/kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

2 Install the kernel packages

rpm -i kernel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-devel-3.15.6-200.fc20.x86_64.rpm
rpm -i kernel-modules-extra-3.15.6-200.fc20.x86_64.rpm

3 Reboot the system to allow booting into the 3.15.6 kernel.

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 To keep the 3.15.6 kernel, modify the yum configuration file with this command prior to running yum update:

echo "exclude=kernel" >> /etc/yum.conf


5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5.1.2.8 Enabling the Real-Time Kernel Compute Node

In some cases (e.g., a Telco environment sensitive to low latency and jitter, applications like media, etc), it makes sense to install the Linux real-time stable kernel on a compute node instead of the standard Fedora kernel. This section describes how to do this. If a real-time kernel is required, you can omit Section 5.1.2.7.

1 Install the real-time kernel

a Get real-time kernel sources

cd /usr/src/kernel

git clone https://www.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v3.14.31-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v3.14.31-rt28

2 Compile the RT kernel

Note Refer to https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd /usr/src/kernel/linux-stable-rt

make menuconfig

The resulting configuration interface is shown below


c Select the following

1 Enable the high resolution timer

General Setup > Timer Subsystem > High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features > Preemption Model > Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features > Timer frequency > 1000 HZ

4 Enable the max number SMP

Processor type and features > Enable Maximum Number of SMP Processors and NUMA Nodes

5 Exit and save

6 Compile the kernel

make -j `grep -c processor /proc/cpuinfo` && make modules_install && make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry /boot/grub2/grub.cfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify


grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 5.3 for the compute node setup.
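
After rebooting into the real-time kernel, the build can be confirmed; the kernel version string should contain PREEMPT RT:

uname -v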

5.1.2.9 Disabling and Enabling Services

For OpenStack, the following services need to be disabled: SELinux, firewalld, and NetworkManager. To do so, run the following commands:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl disable firewalld.service
systemctl disable NetworkManager.service

The following services should be enabled: ntp, sshd, and network. Run the following commands:

systemctl enable ntpd.service
systemctl enable ntpdate.service
systemctl enable sshd.service
chkconfig network on

It is important to keep the timing synchronized between all nodes, and it is necessary to use a known NTP server for all of them. Users can edit /etc/ntp.conf to add a new server and remove the default servers.

The following example replaces a default NTP server with a local NTP server 100012 and comments out the other default servers:

sed -i 's/server 0.fedora.pool.ntp.org iburst/server 101664516/g' /etc/ntp.conf
sed -i 's/server 1.fedora.pool.ntp.org iburst/#server 1.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 2.fedora.pool.ntp.org iburst/#server 2.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
sed -i 's/server 3.fedora.pool.ntp.org iburst/#server 3.fedora.pool.ntp.org iburst/g' /etc/ntp.conf
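
Once ntpd is running with the new configuration, the peer list can be checked to confirm the intended NTP server is being used, for example:

ntpq -p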


5.2 Controller Node Setup

This section describes the controller node setup. It is assumed that the user has successfully followed the operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.2.1 OpenStack (Juno)

This section documents the configuration to be made and the installation of OpenStack on the controller node.

5.2.1.1 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment: one network is used to connect all nodes for OpenStack management (management network), and the other is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines).

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the target OpenStack infrastructure contains multiple nodes: one is a controller node, and one or more are compute nodes.

Network Configuration Example

The following is an example of how to configure networks for the OpenStack infrastructure. The example uses four network interfaces, as follows:

• ens2f1 Internet network - used to pull all necessary packages/patches from repositories on the Internet; configured to obtain a DHCP address.

• ens2f0 Management network - used to connect all nodes for OpenStack management; configured to use network 10110016.

• p1p1 Tenant network - used for OpenStack internal connections for virtual machines; configured with no IP address.

• p1p2 Optional External network - used for virtual machine Internet/external connectivity; configured with no IP address. This interface is only in the controller node if an external network is configured. This interface is not required for the compute node.

Note Among these interfaces, the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS with DPDK-netdev. Also note that a static IP address should be used for the interface of the management network.

In Fedora the network configuration files are located at

/etc/sysconfig/network-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1:
DEVICE=ens2f1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

ifcfg-ens2f0:
DEVICE=ens2f0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10111211
NETMASK=25525500

ifcfg-p1p1:
DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

ifcfg-p1p2:
DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gb/s interface); otherwise DPDK does not work when binding the driver during the OpenStack Neutron installation.

2 10111211 and 25525500 are the static IP address and netmask for the management network. It is necessary to have a static IP address on this subnet. The IP address 10111211 is used here only as an example.

5.2.1.2 Storage Requirements

By default, DevStack uses block storage (Cinder) with a volume group stack-volumes. If not specified, stack-volumes is created with 10 GB of space from a local file system. Note that stack-volumes is the name of the volume group, not a single volume.

The following example shows how to use spare local disks /dev/sdb and /dev/sdc to form stack-volumes on a controller node. First find spare disks, ie, disks not partitioned or formatted, on the system, and then use them to form physical volumes and then the volume group. Run the following commands:

lsblk
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate stack-volumes /dev/sdb /dev/sdc
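
The resulting volume group can be verified with standard LVM tools, for example:

vgs stack-volumes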


5.2.1.3 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section. The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute).

Controller Node Installation Procedures

The following example uses a host for the controller node installation with the following:

• Hostname: sdnlab-k01

• Internet network IP address: obtained from DHCP server

• OpenStack Management IP address: 1011121

• User/password: stack/stack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2 Edit /etc/libvirt/qemu.conf; add or modify the following lines:

cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/mnt/huge", "/dev/vhost-net"
]

hugetlbfs_mount = "/mnt/huge"

3 Restart the libvirt service and make sure libvirtd is active:

systemctl restart libvirtd.service
systemctl status libvirtd.service

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node, localhost and its IP address should be included in the no_proxy setup (e.g., export no_proxy=localhost,1011121). For detailed instructions on how to set up your proxy, refer to Appendix B.

3 Download the Intel® DPDK OVS patches for OpenStack

The file openstack-ovs-dpdk-911.zip contains the necessary patches for OpenStack; currently they are not native to OpenStack. The file can be downloaded from:

https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip


4 Place the file in the /home/stack directory and unzip it:

mkdir /home/stack/patches

cd /home/stack/patches

wget https://01.org/sites/default/files/page/openstack-ovs-dpdk-911.zip
unzip openstack-ovs-dpdk-911.zip

Two patch files, devstack.patch and nova.patch, are present after unzipping.

5 Download the DevStack source

git clone https://github.com/openstack-dev/devstack.git

6 Check out DevStack at the desired commit ID and patch it:

cd /home/stack/devstack
git checkout 3be5e02cf873289b814da87a0ea35c3dad21765b
patch -p1 < /home/stack/patches/devstack.patch

7 Clone and patch Nova

sudo mkdir /opt/stack
sudo chown stack:stack /opt/stack
cd /opt/stack
git clone https://github.com/openstack/nova.git
cd /opt/stack/nova
git checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c
patch -p1 < /home/stack/patches/nova.patch

8 Create the local.conf file in /home/stack/devstack

9 Pay attention to the following in the local.conf file:

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past, Fedora only supported QPID for OpenStack. Presently it only supports Rabbit.

b Explicitly disable the Nova compute service on the controller. This is because, by default, the Nova compute service is enabled:

disable_service n-cpu

c To use Open vSwitch, specify it in the configuration for the ML2 plug-in:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN. This is because, by default, tunneling is used:

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

A sample local.conf file for the controller node is as follows:

# Controller node
[[local|localrc]]


FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10 Install DevStack

cd /home/stack/devstack
./stack.sh


11 For a successful installation, the following shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges: br-p1p1 (for the virtual network) and br-ex (for the external network).

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch.
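
To confirm that the ports added in step 12 are attached to the expected bridges, the OVS configuration can be listed, for example:

sudo ovs-vsctl show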


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 1011122

• User/password: stack/stack

Note the following

• no_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,1011122,sdnlab-k01,1011121

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

  - The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image; therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=1011121

  - The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


  - The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

  - For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

  - For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

  - For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

  - A sample local.conf file for a compute node with the ovdk (OVS with DPDK-netdev) agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111314

[libvirt]
cpu_mode=host-model

  - A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111313

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
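
As an optional check that traffic is actually being diverted through the netfilter queue, the packet counters on the FORWARD rules can be watched while traffic flows:

iptables -L FORWARD -v -n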

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages and isolate CPUs by appending the following to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.
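
If the application does not start or sees no ports, one quick check (a sketch using the RTE_SDK path set earlier) is the binding status script; both Virtio devices should be listed under the DPDK-compatible driver section:

$RTE_SDK/tools/dpdk_nic_bind.py --status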


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
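
With the routes in place, a simple way to generate and verify traffic between the two VMs is iperf (the sink address below is only an example; substitute the sink VM's actual address on the 11.0.0.0/24 network):

On the sink VM:
iperf -s

On the source VM:
iperf -c 11.0.0.2 -t 30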


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1/

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, and so on.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
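
As a quick sanity check (a sketch; the names match the examples above and will differ in your environment), the new image, aggregate, and flavor can be listed back before they are used:

glance image-list | grep fedora-basic
nova aggregate-list
nova flavor-show onps-flavor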


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
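
Putting the preceding steps together, a minimal end-to-end launch using the example names from this section (fedora-basic, onps-flavor, zone-g06, net-demo; the IDs returned will differ in your environment) might look like this:

source demo-cred
IMAGE_ID=$(glance image-list | awk '/ fedora-basic /{print $2}')
NET_ID=$(neutron net-list | awk '/ net-demo /{print $2}')
nova boot --image $IMAGE_ID --flavor onps-flavor --availability-zone zone-g06 --nic net-id=$NET_ID demo-vm1
nova list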


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first vHost port, where the traffic gets consumed by the IPS VM

4 The IPS receives the flow, inspects it, and (unless malicious) sends it out through the second port of its vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
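
Note that a value written to sriov_numvfs does not persist across reboots. One simple way to reapply it at boot (a sketch that assumes /etc/rc.d/rc.local is not already in use on this host; a udev rule is an alternative, and the interface name and VF count should match your setup) is:

printf '#!/bin/sh\necho 2 > /sys/class/net/p1p1/device/sriov_numvfs\n' > /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local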

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the lspci output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently SR-IOV pass-through is only supported with standard OVS.

5 Run stack.sh on both the controller and compute nodes to complete the Devstack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with the SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
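
From inside the guest, a quick way to confirm that the VF was actually passed through (a sketch; the ens5 name and the 10ed product ID are just the examples used above) is to list the PCI devices and interfaces:

lspci -nn | grep Ethernet
ip addr show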


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with Devstack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and that port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
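
Once both nodes have been stacked with the odl-compute service enabled, one way to confirm that ODL has taken control of the vSwitch is to check the OVSDB manager and bridges on each node; the manager entry should point at the ODL controller's IP (the port shown depends on the ODL OVSDB configuration, commonly 6640):

ovs-vsctl show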


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
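
To re-enable the bundle later, it can be started again by its ID from the same console (262 in this example):

osgi> start 262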


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to use your proxy server.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
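
A quick way to confirm that the settings are active in a new shell is to print the relevant environment variables and the git configuration:

env | grep -i _proxy
git config --global --get http.proxy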


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the .m2 directory and the settings.xml file do not exist, create them.

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name / Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller
http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems.
IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012. http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


Intelreg ONP Server Reference ArchitectureSolutions Guide

18

g Repeat the preceding step for each of the Fortville interfaces

h Reboot

After the reboot the interfaces are ready to be used

5124 Proxy Configuration

If your infrastructure requires you to configure the proxy server follow the instructions in Appendix B

5125 Installing Additional Packages and Upgrading the System

Some packages are not installed with the standard Fedora 21 (or 20) installation but are required by Intelreg Open Network Platform for Server (ONPS) components The following packages should be installed by the user

yum ndashy install git ntp patch socat python-passlib libxslt-devel libffi-devel fuse-devel gluster python-cliff git

5126 Installing the Fedora 21 Kernel

ONPS supports Fedora kernel 3156 which is a newer version than the native Fedora 20 kernel 31110 To upgrade to 3156 follow these steps

Note If the Linux real‐time kernel is preferred you can skip this section and go to Section 5127

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-core-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-devel-3178-300fc21x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3178300fc21x86_64kernel-modules-extra-3178-300fc21x86_64rpm

2 Install the kernel packages

rpm -i kernel-core-3178-300fc21x86_64rpm

rpm -i kernel-modules-3178-300fc21x86_64rpm

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK‐netdev. If OVS with DPDK‐netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same localconf file, the later one overwrites the previous one.

- For the OVS with DPDK‐netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (the default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK‐netdev from their respective repositories. Specify the following in the localconf file if OVS with DPDK‐netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, the physical port is bound to the bridge through the following line in localconf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample localconf file for a compute node with the ovdk (OVS with DPDK‐netdev) agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111314

[libvirt]
cpu_mode=host-model

- A sample localconf file for a compute node with the accelerated OVS agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111313

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an rpm package, as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
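To confirm that traffic is actually being inspected, one simple check is to watch Suricata's logs while traffic crosses the two vPorts. The paths below assume the default log directory of an rpm install; adjust them if suricata.yaml was changed:

# alerts (if any) and engine statistics, assuming the default log directory
tail -f /var/log/suricata/fast.log /var/log/suricata/stats.log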

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

Then change the setting so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages and CPU isolation parameters to the kernel command line:

… noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
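The change only takes effect after the grub configuration is regenerated and the VM is rebooted; a typical sequence for a Fedora guest using BIOS boot (paths are the Fedora defaults) is:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot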


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs
export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
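The binding can be checked with the status option of the same script, which lists which devices are using a DPDK-compatible driver and which are still attached to the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py --status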

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
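With the routes in place, a quick end-to-end check is to run iperf between the two VMs. The sink address used below is only an example; substitute the address actually assigned to the sink on its 11.0.0.0/24 interface:

# on the sink
iperf -s

# on the source (11.0.0.2 is an example sink address)
iperf -c 11.0.0.2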


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup. It is, however, possible to have more than one virtual network in a single-compute-node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network named "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
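Putting step 4 together with the image, flavor, zone, and network created earlier in this section, a concrete launch command might look like the following; the net-id value is a placeholder for the ID reported by neutron net-list, and demo-vm1 is just an example instance name:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1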


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with IP address 1011124. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output of the command below (10fb for the physical function and 10ed for a VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.
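Under those assumptions, a filled-in version of the command might look like the following; the net-id value is a placeholder for the ID that neutron net-list reports for the private network:

nova boot --image fedora-basic --flavor numa-flavor --availability-zone zone-04 --nic net-id=<private-net-id> numa-vm1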

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
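A basic reachability test from one VM to the other is enough for this check; the address below is a placeholder for the peer VM's address:

ping -c 4 <peer-vm-ip-address>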


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.
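For example, from the extracted distribution directory:

./bin/karaf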


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
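To confirm the features were installed, Karaf can list only installed features; for example, to check the OVSDB pieces:

feature:list -i | grep ovsdb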

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138 and that port p786p1 is used for the data plane network.)

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138.)

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample localconf for compute node

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=1921681221,<intranet subnets>

If you want to make the change for all users, instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Packet Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 19: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

19

Intelreg ONP Server Reference ArchitectureSolutions Guide

rpm -i kernel-3178-300fc21x86_64rpm

rpm -i kernel-devel-3178-300fc21x86_64rpm

3 Reboot system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution As such it is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 The following command upgrades to the latest kernel that Fedora supports (In order to maintain kernel version 3178 the yum configuration file needs modified with this command prior to running the yum update)

echo exclude=kernel gtgt etcyumconf

5 After installing the required kernel packages the operating system should be updated with the following command

yum update -y

6 After the update completes reboot the system

5127 Installing the Fedora 20 Kernel

Note Fedora 20 and its kernel installation are only required for OpenDaylightOpenStack integration

ONPS supports kernel 3156 which is newer than the native Fedora 20 kernel 31110

To upgrade to 3156 perform the following steps

1 Download the kernel packages

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-devel-3156-200fc20x86_64rpm

wget httpskojipkgsfedoraprojectorgpackageskernel3156200fc20x86_64 kernel-modules-extra-3156-200fc20x86_64rpm

2 Install the kernel packages

rpm -i kernel-3156-200fc20x86_64rpmrpm -i kernel-devel-3156-200fc20x86_64rpmrpm -i kernel-modules-extra-3156-200fc20x86_64rpm

3 Reboot the system to allow booting into the 3156 kernel

Note ONPS depends on libraries provided by your Linux distribution It is recommended that you regularly update your Linux distribution with the latest bug fixes and security patches to reduce the risk of security vulnerabilities in your system

4 Upgrade to the 3156 kernel by modifying the yum configuration file prior to running yum update with this command

echo exclude=kernel gtgt etcyumconf

Intelreg ONP Server Reference ArchitectureSolutions Guide

20

5 After installing the required kernel packages update the operating system with the following command

yum update -y

6 After the update completes reboot the system

5128 Enabling the Real-Time Kernel Compute Node

In some cases (eg Telco environment sensitive to low latency and jitter applications like media etc) it makes sense to install the Linux real-time stable kernel to a compute node instead of the standard Fedora kernel This section describes how to do this If a real-time kernel is required you can omit Section 5127

1 Install the real-time kernel

a Get real-time kernel sources

cd usrsrckernel

git clone httpswwwkernelorgpubscmlinuxkernelgitrtlinux-stable-rtgit

Note It may take a while to complete the download

b Find the latest rt version from git tag and then check out this version

Note v31431-rt28 is the latest current version

cd linux-stable-rt

git tag

git checkout v31431-rt28

2 Compile the RT kernel

Note Refer to httpsrtwikikernelorgindexphpRT_PREEMPT_HOWTO

a Install the package

yum install ncurses-devel

b Copy kernel configuration file to kernel source

cp usrsrckernel3174-301f21x86_64config usrsrckernellinux-stable-rt

cd usrsrckernellinux-stable-rt

make menuconfig

The resulting configuration interface is shown below

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack Management IP address: 10.11.12.2

• User/password: stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev agent is as follows:

# Compute node: OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model
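After stack.sh completes on a compute node, a quick optional check of which OVS datapath is in use is to query the bridge directly; a minimal sketch, assuming the bridge name br-p1p1 used elsewhere in this guide (substitute the OVS_PHYSICAL_BRIDGE name from your local.conf):

sudo ovs-vsctl show
sudo ovs-vsctl list bridge br-p1p1 | grep datapath_type
# datapath_type is "netdev" for OVS with DPDK-netdev; for regular OVS it is "system" or empty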


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume virtual machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
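As an optional check (not part of the original procedure), the iptables packet counters show whether traffic is actually being diverted into the netfilter queue that Suricata is reading from:

iptables -L FORWARD -v -n
# The two NFQUEUE rules should show increasing packet and byte counters while traffic flows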

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file:

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below
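If the application does not start, an optional way to confirm that the DPDK environment inside the VM matches the earlier steps is to check the driver bindings and hugepages; a minimal check, using the environment variables assumed above:

$RTE_SDK/tools/dpdk_nic_bind.py --status
# Both Virtio devices should be listed under the DPDK-compatible driver
grep Huge /proc/meminfo
# HugePages_Free should be nonzero before dppd is started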


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf:

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
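With the routes in place, iperf can then be used to generate traffic from the source to the sink through the vBNG; a minimal example, assuming the sink VM owns an address in 11.0.0.0/24 (11.0.0.2 is used here purely as an illustration):

# On the sink VM
iperf -s

# On the source VM
iperf -c 11.0.0.2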


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenants (Projects): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavors: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use the information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
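As an optional check before deploying VMs (not part of the original procedure), the newly created image, aggregate, and flavor can be listed:

glance image-list
nova aggregate-list
nova flavor-show onps-flavor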


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step (a worked example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
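For illustration, a boot command using the resources created in the earlier examples (the fedora-basic image, onps-flavor flavor, zone-g06 availability zone, and net-demo network) might look like the following; the network ID must be substituted from the neutron net-list output, and demo-vm1 is just an example instance name:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1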


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the IPS VM

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node.

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
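In addition to lspci, an optional way to confirm the VFs is ip link, which lists them under the parent interface:

ip link show p1p1
# The output should include "vf 0" and "vf 1" entries with their MAC addresses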

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep 82599

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh.


On Compute Node

1 Edit /opt/stack/nova/requirements.txt to add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with standard OVS.

5 Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes that an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
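An optional way to confirm on the compute node that the VF was actually passed through to the guest (not part of the original procedure) is to inspect the libvirt domain definition; <instance-name> below is a placeholder for the name shown by virsh list:

virsh list --all
virsh dumpxml <instance-name> | grep -A3 hostdev
# A hostdev entry referencing the VF PCI address (bus 0x08 in this example) indicates successful pass-through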


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4 Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start, or the feature installation might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.
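After stacking with these settings, an optional way to confirm that a node has registered with the OpenDaylight controller is to inspect OVS on that node (this assumes the default OVSDB manager port 6640 used by the ODL OVSDB plug-in):

sudo ovs-vsctl show
# The output should include a Manager entry such as "tcp:10.11.13.8:6640" with is_connected: true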


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node: OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc.

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets for which you don't want to use the proxy server:

export no_proxy=192.168.1.221,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the settings.xml file in the ~/.m2 directory does not exist, create the directory:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupts

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh


11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For the controller node only - add physical port(s) to the bridge(s) created by the DevStack installation. The following example can be used to configure the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network)

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010; therefore, matching VLANs 1000 to 1010 should be configured in the switch. A quick check of the resulting vSwitch configuration is shown below.
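To confirm that the bridges and physical ports are wired as expected, the standard Open vSwitch CLI can be used (a minimal check, assuming the bridge names from the example above):

sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-p1p1
sudo ovs-vsctl list-ports br-ex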


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file. You will get instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

• Required services are nova compute, neutron agent, and Rabbit

• OpenvSwitch with DPDK-netdev is used in place of OpenvSwitch for the neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

• Hostname sdnlab-k02

• Lab network IP address Obtained from DHCP server

• OpenStack Management IP address 10.11.12.2

• User/password stack/stack

Note the following

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node using OVS with DPDK-netdev is as follows

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node using the regular OVS agent is as follows

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM, as previously described. In order to configure it to run in inline mode (IPS), perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
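To verify that traffic is actually being diverted through Suricata, the netfilter queue counters and the Suricata log can be checked (a minimal sketch; the log path assumes the default suricata.yaml settings):

# packet/byte counters of the NFQUEUE rules should increase while traffic flows
iptables -L FORWARD -v -n
# alerts, if any, are written to fast.log by default
tail -f /var/log/suricata/fast.log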

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
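The result of the binding can be confirmed with the status option of the same script (a quick check; both Virtio devices should now be listed under the DPDK-compatible driver):

$RTE_SDK/tools/dpdk_nic_bind.py --status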

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
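With the routes in place, traffic between the two VMs can be generated with iperf (a minimal example; <sink-ip> is a placeholder for the address assigned to the sink VM):

# on the sink VM: start an iperf server
iperf -s
# on the source VM: send traffic to the sink for 60 seconds
iperf -c <sink-ip> -t 60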



60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project) admin and demo

• Network

- Private network (virtual network) 10.0.0.0/24

- Public network (external network) 172.24.4.0/24

• Image cirros-0.3.1-x86_64

• Flavor nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example

http://10.11.12.1

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor. A quick credentials check is shown below.

source admin-cred
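A simple way to confirm that the admin credentials are working before creating resources (a quick check):

keystone tenant-list
nova flavor-list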

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating the aggregate/availability zone

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor. Flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU.

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo Tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes. A concrete example is shown below.
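For example, using the fedora-basic image, the onps-flavor flavor, and the zone-g06 availability zone created earlier (the network ID is a placeholder for the ID of net-demo returned by neutron net-list, and demo-vm1 is an arbitrary instance name):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1
nova list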

5 Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
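After rebooting, the kernel command line can be checked to confirm that the parameter took effect (a simple check):

cat /proc/cmdline | grep intel_iommu=on
dmesg | grep -e DMAR -e IOMMU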

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/ptmx", "/dev/kvm", "/dev/kqemu", "/dev/rtc", "/dev/hpet", "/dev/net/tun", "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions. The sysfs entries for the interface can also be checked, as shown below.
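The number of VFs supported by the physical function and the number currently enabled can be read directly from sysfs (a minimal check, using the p1p1 interface from the example):

cat /sys/class/net/p1p1/device/sriov_totalvfs
cat /sys/class/net/p1p1/device/sriov_numvfs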

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the physical function and VF (10fb and 10ed, respectively, for the 82599) can be obtained from the output of

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network, as in the example below.
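For example, once both VMs have addresses on the physical network, a simple ping from one VM confirms connectivity through the VFs (<peer-vm-ip> is a placeholder for the other VM's address):

ping -c 4 <peer-vm-ip>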


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre)

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell, as shown below
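For example (assuming the directory name from the previous step):

cd distribution-karaf-0.2.1-Helium-SR1.1
./bin/karaf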


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
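The installed features can be verified from the same Karaf shell (a quick check):

feature:list -i | grep ovsdb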

7 Update the local.conf file for ODL to be functional with Devstack. Add the following lines

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network)

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8)

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf install might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel and VMs, and how to ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance by


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no proxy settings, ie the hosts and/or subnets that you don't want the proxy server to be used to access

export no_proxy=1921681221,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
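The resulting git proxy configuration can be confirmed with a simple check:

git config --global --get http.proxy
git config --global --get https.proxy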


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4 http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6 http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux

http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599 http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu W, DeMar P & Crawford M (2012) A Transport-Friendly NIC for Multicore/Multiprocessor Systems

IEEE transactions on parallel and distributed systems, vol 23, no 4, April 2012 http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture

http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

Page 21: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

21

Intelreg ONP Server Reference ArchitectureSolutions Guide

c Select the following

1 Enable the high resolution timer

General Setup gt Timer Subsystem gt High Resolution Timer Support

2 Enable the Preempt RT

Processor type and features gt Preemption Model gt Fully Preemptible Kernel (RT)

3 Set the high-timer frequency

Processor type and features gt Timer frequency gt 1000 HZ

4 Enable the max number SMP

Processor type and features gt Enable Maximum Number of SMP Processor and NUMA Nodes

5 Exit and save

6 Compile the kernel

make ndashj `grep ndashn processor proccpuinfo` ampamp make modules_install ampamp make install

3 Make changes to the boot sequence

a To show all menu entry

grep ^menuentry bootgrub2grubcfg

b To set default menu entry

grub2-set-default the desired default menu entry

c To verify

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch


53 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file; it gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If using them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for a compute node with the OVS with DPDK-netdev (ovs-dpdk) agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model
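After stacking a compute node, a few host-side checks can confirm which datapath is actually in use. This is a minimal sketch assuming the bridge name br-enp8s0f0 from the samples above; with the OVS with DPDK-netdev sample the datapath type should report netdev, while with the regular OVS sample it is left at the default.

# Hugepages reserved for the vSwitch (OVS with DPDK-netdev case)
grep -i hugepages /proc/meminfo

# Datapath type configured on the physical bridge
sudo ovs-vsctl get Bridge br-enp8s0f0 datapath_type

# Nova compute and neutron agent registered (run on the controller with admin credentials sourced)
nova service-list
neutron agent-list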


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
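The four steps above can be collected into a small helper script that is run inside the vIPS VM after each boot. This is a minimal sketch under the assumption that eth1 and eth2 are the two vPorts named above and that Suricata was installed as described earlier.

#!/bin/bash
# Configure Suricata as an inline IPS between eth1 and eth2

# 1. Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1

# 2. Send all forwarded traffic between the two vPorts to netfilter queue 0
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

# 3. Enable ARP proxying on both vPorts
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

# 4. Run Suricata in inline (NFQUEUE) mode in the foreground
suricata -c /etc/suricata/suricata.yaml -q 0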

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:       2
HugePages_Free:        2
Hugepagesize:    1048576 kB

7 Add the following to the end of ~bashrc file

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack, the BNG application should start and display its run-time statistics screen (screenshot not reproduced here).
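If the application does not start, a common cause is that the two Virtio interfaces are not actually bound to igb_uio. A quick check, assuming the environment variables from the ~/.bashrc snippet above, is:

# Both 00:04.0 and 00:05.0 should be listed under
# "Network devices using DPDK-compatible driver"
$RTE_SDK/tools/dpdk_nic_bind.py --status

# Hugepages must still be available inside the VM
grep -i hugepages /proc/meminfo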


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
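With the routes in place, traffic can be generated from the source VM to the sink VM through the vBNG. A minimal sketch, assuming the sink owns an address in 11.0.0.0/24 (11.0.0.2 is used here purely as an example):

# On the sink VM: start an iperf server
iperf -s

# On the source VM: run a 30-second TCP test toward the sink
iperf -c 11.0.0.2 -t 30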



60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible to have more than one virtual network in a single compute node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 10.0.0.0/24

mdash Public network (external network) 172.24.4.0/24

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials; click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
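Putting the preceding steps together, a complete launch for the demo tenant might look like the following. This is only an illustration: the image, flavor, zone, and network names are the ones created in the earlier examples, and the network ID is looked up on the fly.

source demo-cred

# Look up the ID of the tenant network created earlier
NET_ID=$(neutron net-list | awk '/ net-demo / {print $2}')

# Boot a VM named demo-vm1 using the custom image, flavor, and availability zone
nova boot --image fedora-basic --flavor onps-flavor \
    --availability-zone zone-g06 --nic net-id=$NET_ID demo-vm1

# Watch the instance come up
nova list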


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
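After the grub change and a reboot, it is worth confirming that the kernel actually picked up the new parameter. A minimal check (the exact dmesg wording varies between kernel versions):

# The boot command line should now contain intel_iommu=on
cat /proc/cmdline | grep intel_iommu

# Kernel messages should indicate that DMAR/IOMMU was initialized
dmesg | grep -i -e DMAR -e IOMMU | head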

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
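The sysfs interface can also be used to see how many VFs the device supports and to change the count later. A short sketch, assuming the same p1p1 interface as above (note that the VF count usually has to be reset to 0 before a new value is written, and the setting does not persist across reboots):

# Maximum number of VFs supported by the NIC
cat /sys/class/net/p1p1/device/sriov_totalvfs

# Number of VFs currently enabled
cat /sys/class/net/p1p1/device/sriov_numvfs

# To change the VF count, clear it first and then write the new value
echo 0 > /sys/class/net/p1p1/device/sriov_numvfs
echo 2 > /sys/class/net/p1p1/device/sriov_numvfs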

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product IDs of the 82599 can be obtained from the lspci output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf. Note that the same localconf file of Section 5.2.1.3 is used here; add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute localconf for OVS with DPDK-netdev. Note that the same localconf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2), and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
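On the compute node, libvirt can also be used to confirm that the PCI VF and the NUMA constraints really ended up in the guest definition. This is a minimal sketch; the instance name (instance-00000001) is hypothetical and should be taken from the virsh list output.

# Find the libvirt name of the new instance
sudo virsh list --all

# The XML should contain a <hostdev> entry for the VF and NUMA placement elements
sudo virsh dumpxml instance-00000001 | grep -A 5 -E "hostdev|numa"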


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc
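A quick check that the variable is picked up before starting Karaf (assuming the path used above):

echo $JAVA_HOME
$JAVA_HOME/bin/java -version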

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
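Feature installation can take several minutes. To confirm that the OVSDB features ended up installed, the installed-feature list can be filtered from the same Karaf shell; the REST check afterwards is a sketch using the same northbound URL and admin/admin credentials configured later in this section.

# Only installed features are listed with -i; the odl-ovsdb-* entries should appear
feature:list -i | grep ovsdb

# From the host shell, the neutron northbound should answer once the controller is up
curl -u admin:admin http://localhost:8080/controller/nb/v2/neutron/networks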

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf shell might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample localconf for compute node

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to http://<control node ip address>:8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts


Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users instead of just your own, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
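To verify that the settings are actually in effect for a given user, the following quick checks can be used (a sketch; replace the test URL with any reachable external site):

# Environment proxies visible to the shell
env | grep -i proxy

# Proxy recorded in the git configuration
git config --get http.proxy

# A simple connectivity test through the proxy
curl -I https://www.kernel.org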


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload



Appendix D References

Document Name Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller
http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 22: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

22

grub2-editenv list

d Reboot and log to the new kernel

Note Use the same procedures described in Section 53 for the compute node setup

5129 Disabling and Enabling Services

For OpenStack the following services need to be disabled selinux firewall and NetworkManager To do so run the following commands

sed -i sSELINUX=enforcingSELINUX=disabledg etcselinuxconfig

systemctl disable firewalldservicesystemctl disable NetworkManagerservice

The following services should be enabled ntp sshd and network Run the following commands

systemctl enable ntpdservicesystemctl enable ntpdateservicesystemctl enable sshdservicechkconfig network on

It is important to keep the timing synchronized between all nodes and necessary to use a known NTP server for all of them Users can edit etcntpconf to add a new server and remove default servers

The following example replaces a default NTP server with a local NTP server 100012 and comments out other default servers

sed -i sserver 0fedorapoolntporg iburstserver 101664516g etcntpconfsed -i sserver 1fedorapoolntporg iburst server 1fedorapoolntporg iburst g etcntpconfsed -i sserver 2fedorapoolntporg iburst server 2fedorapoolntporg iburst g etcntpconfsed -i sserver 3fedorapoolntporg iburst server 3fedorapoolntporg iburst g etcntpconf

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy

• Differences in the local.conf file:

- The service host is the controller, as are other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required in compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1 For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions for OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. In order to configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
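As a quick sanity check (not part of the original steps), the packet counters on the two FORWARD rules can be inspected to confirm that traffic is actually being diverted to the netfilter queue; the interface names eth1/eth2 are the same ones assumed above:

iptables -L FORWARD -v -n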

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

And change the setting so that SELINUX=disabled.

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
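The guide does not show this explicitly, but on Fedora the new kernel parameters typically take effect only after the grub configuration is regenerated and the VM is rebooted; a minimal sketch:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot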


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
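A quick check (not part of the original steps) that the DPDK UIO driver is actually loaded:

lsmod | grep igb_uio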

10 Check the PCI addresses of the VM's two Virtio network devices:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack, the application displays its ncurses-based statistics screen.


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
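The guide stops at routing; a minimal way to actually exercise the path with the iperf installed in step 1 is sketched below (11.0.0.2 is an assumed address for the sink, substitute the real one):

On the sink VM:
iperf -s

On the source VM:
iperf -c 11.0.0.2 -t 60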


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located in an NFS share mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use the information to create an aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB virtual memory, 4 GB virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
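The new flavor can be confirmed with (assuming the admin credentials are still sourced):

nova flavor-list | grep onps-flavor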


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for a demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
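The instance status can also be watched from the CLI with the same demo credentials, for example:

nova list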

5 Log in to the OpenStack dashboard using the demo user credential, click Instances under Project in the left pane, and the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one. VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the IPS VM.

4 The IPS receives the flow, inspects it, and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU or Intel VT-d. To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
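The IOMMU setting takes effect only after a reboot; once the system is back up, a quick check (not part of the original steps) is to confirm the parameter is on the kernel command line:

cat /proc/cmdline | grep intel_iommu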

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9.

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version.

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list.

An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh.


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently SR-IOV pass-through is only supported with a standard OVS.

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices;'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function: 0000:08:00.0 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from the OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like over a normal network.
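For example, a basic reachability check from one of the VMs (substitute the peer VM's actual address):

ping -c 4 <IP address of the VM on the other compute host>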


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home. JAVA_HOME must be set to run Karaf.

a Install java:

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre >> /root/.bashrc

source /root/.bashrc
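A quick sanity check (not part of the original steps) that the Java environment is usable:

echo $JAVA_HOME
java -version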

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.
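For example, from the extracted distribution directory:

./bin/karaf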


6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
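To confirm that the OVSDB-related features were installed, the installed-feature list can be filtered from the same Karaf shell (output formatting may vary between Karaf versions):

feature:list -i | grep ovsdb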

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run a stack on the controller and compute nodes.

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance:


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string(s) filters the list of bundles.

1 List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands instead as root:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory and its settings.xml do not exist, create them:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 23: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

23

Intelreg ONP Server Reference ArchitectureSolutions Guide

52 Controller Node SetupThis section describes the controller node setup It is assumed that the user successfully followed the operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

521 OpenStack (Juno)This section documents the configurations that are to be made and the installation of Openstack on the controller node

5211 Network Requirements

If your infrastructure requires you to configure proxy server follow the instructions in Appendix B

General

At least two networks are required to build the OpenStack infrastructure in a lab environment One network is used to connect all nodes for OpenStack management (management network) and the other one is a private network exclusively for an OpenStack internal connection (tenant network) between instances (or virtual machines)

One additional network is required for Internet connectivity because installing OpenStack requires pulling packages from various sourcesrepositories on the Internet

Some users might want to have Internet andor external connectivity for OpenStack instances (virtual machines) In this case an optional network can be used

The assumption is that the targeting OpenStack infrastructure contains multiple nodes one is a controller node and one or more are compute nodes

Network Configuration Example

The following is an example of how to configure networks for OpenStack infrastructure The example uses four network interfaces as follows

bull ens2f1 Internet network mdash Used to pull all necessary packagespatches from repositories on the Internet configured to obtain a DHCP address

bull ens2f0 Management network mdash Used to connect all nodes for OpenStack management configured to use network 10110016

bull p1p1 Tenant network mdash Used for OpenStack internal connections for virtual machines configured with no IP address

bull p1p2 Optional External networkmdash Used for virtual machine Internetexternal connectivity configured with no IP address This interface is only in the controller node if external network is configured This interface is not required for the compute node

Note Among these interfaces the interface for the virtual network (in this example p1p1) may be an 82599 port (Niantic) or XL710 port (Fortville) because it is used for DPDK and OVS

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

# Compute node
OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

mdash A sample localconf file for a compute node with the regular (non-DPDK) OVS agent is as follows:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


54 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

541 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. To configure it to run in inline mode (IPS), perform the following steps (a consolidated sketch follows the list):

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
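The four steps above can also be applied in one go. The following is a minimal sketch, assuming Suricata is already installed and that eth1/eth2 are the two vPorts of the VM; adjust the interface names to match your guest.

#!/bin/bash
# Minimal sketch: run Suricata as an inline IPS between eth1 and eth2
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE
suricata -c /etc/suricata/suricata.yaml -q 0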

542 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2 Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled

3 Disable the firewall:

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration:

vi /etc/default/grub

5 Add hugepages and the other kernel parameters to the command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
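For the new parameters to take effect, the GRUB configuration has to be regenerated and the VM rebooted. A minimal sketch, assuming a BIOS-booted Fedora guest with the standard grub2 layout:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot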


6 Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7 Add the following to the end of the ~/.bashrc file:

# ---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
# ---------------------------------------------

8 Log in again or source the file:

source ~/.bashrc

9 Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two Virtio interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
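The binding can be double-checked before moving on; this optional sketch relies only on the status option of the same binding script:

$RTE_SDK/tools/dpdk_nic_bind.py --status
# Both Virtio devices should now be listed under the DPDK-compatible driver section.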

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


543 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3 In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4 At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
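With the routes in place, traffic can be pushed from Source to Sink with iperf. The following is a quick sketch; the sink address 11.0.0.2 is only an assumed example, so substitute the address the sink VM actually received:

# On the Sink VM
iperf -s

# On the Source VM
iperf -c 11.0.0.2 -t 60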


60 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup; it is, however, possible to have more than one virtual network in a single compute node setup.

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 10.0.0.0/24

mdash Public network (external network) 172.24.4.0/24

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users


6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic with qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use the information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
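To confirm that the image, aggregate/availability zone, and flavor exist before they are used, the standard listing commands can be run with the same admin credentials; this is only a convenience sketch, with the names matching the examples above:

glance image-list
nova aggregate-list
nova flavor-list | grep onps-flavor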


6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
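The same check can be done from the command line; a quick sketch using the demo credentials sourced earlier:

nova list
# The instance should reach the ACTIVE state and show an address on net-demo.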


6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one, while VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the IPS VM.

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands (a verification sketch follows this list):

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list

An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
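Two optional checks, shown here only as a sketch, can confirm that the node is ready: after the grub change and a reboot, the kernel command line should contain intel_iommu=on, and the physical function should list its VFs:

cat /proc/cmdline | grep intel_iommu
ip link show p1p1
# Each enabled VF appears as a "vf N" line with its own MAC address.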

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the command output (10fb for the physical function and 10ed for a VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
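For example, assuming the two VMs received the hypothetical addresses 10.0.0.10 and 10.0.0.11 on their VF interfaces, connectivity can be verified with a simple ping from the first VM:

ping -c 4 10.0.0.11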


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the localconf file for ODL to be functional with DevStack. Add the following lines:

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start, or the feature installation might fail if the host does not have network access. You'll need to set up the appropriate proxy settings.
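After stack.sh completes on the nodes with these settings, one quick way to confirm that Open vSwitch is talking to OpenDaylight is to inspect the manager entry on a node; this is only a sketch and assumes the controller management IP used above:

ovs-vsctl show
# Expect a Manager line such as "tcp:10.11.13.8:6640" with is_connected: true.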


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight and the OpenStack controller plus compute and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample localconf for compute node

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
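The active git proxy values can be read back at any time; a quick sketch:

git config --global --get http.proxy
git config --system --get http.proxy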


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why Does Flow Director Cause Packet Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 24: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

24

with DPDK-netdev Also note that a static IP address should be used for the interface of the management network

In Fedora the network configuration files are located at

etcsysconfignetwork-scripts

To configure a network on the host system edit the following network configuration files

ifcfg-ens2f1 DEVICE=ens2f1TYPE=Ethernet ONBOOT=yes BOOTPROTO=dhcp

ifcfg-ens2f0DEVICE=ens2f0TYPE=EthernetONBOOT=yesBOOTPROTO=staticIPADDR=10111211NETMASK=25525500

ifcfg-p1p1DEVICE=p1p1TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

ifcfg-p1p2DEVICE=p1p2TYPE=Ethernet ONBOOT=yes BOOTPROTO=none

Notes 1 Do not configure the IP address for p1p1 (10 Gbs interface) otherwise DPDK does not work when binding the driver during OpenStack Neutron installation

2 10111211 and 25525500 are static IP address and net mask to the management network It is necessary to have static IP address on this subnet The IP address 10111211 is use here only as an example

5212 Storage Requirements

By default DevStack uses blocked storage (Cinder) with a volume group stack-volumes If not specified stack-volumes is created with 10 Gbs space from a local file system Note that stack-volumes is the name for the volume group not more than 1 volume

The following example shows how to use spare local disks devsdb and devsdc to form stack- volumes on a controller node Need to find spare disks ie disks not partitioned or formatted on the system and then use the spare disks to form physical volumes and then volume group Run the following commands

lsblkpvcreate devsdb pvcreate devsdc vgcreate stack-volumes devsdb devsdc

25

Intelreg ONP Server Reference ArchitectureSolutions Guide

5213 OpenStack Installation Procedures

General

DevStack is used to deploy OpenStack in the example found in this section The following procedure uses an actual example of an installation performed in an Intel test lab that consists of one controller node (controller) and one compute node (compute)

Controller Node Installation Procedures

The following example uses a host for controller node installation with the following

bull Hostname sdnlab-k01

bull Internet network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011121

bull Userpassword stackstack

Root User Actions

Log in as root user and perform the following

1 Add stack user to sudoer list if not already

echo stack ALL=(ALL) NOPASSWD ALL gtgt etcsudoers

2 Edit etclibvirtqemuconf add or modify with the following lines

cgroup_controllers = [ cpu devices memory blkio cpusetcpuacct ]

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun mnthuge devvhost-net]

hugetlbs_mount = mnthuge

3 Restart libvirt service and make sure libvird is active

systemctl restart libvirtdservicesystemctl status libvirtdservice

Stack User Actions

1 Log in as a stack user

2 Configure the appropriate proxies (yum http https and git) for the package installation and make sure these proxies are functional

Note On the controller node localhost and its IP address should be included in no_proxy setup (eg export no_proxy=localhost1011121) For detailed instructions on how to set up your proxy refer to Appendix B

3 Download Intelreg DPDK OVS patches for OpenStack

The tar file openstack-ovs-dpdk-911zip contains necessary patches for OpenStack Currently it is not native to the OpenStack The file can be downloaded from

https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip

Intelreg ONP Server Reference ArchitectureSolutions Guide

26

4 Place the file in the homestack directory and unzip

mkdir homestackpatches

cd homestackpatches

wget https01orgsitesdefaultfilespageopenstack-ovs-dpdk-911zip unzip openstack-ovs-dpdk-911zip

Two patch files devstackpatch and novapatch are present after unzipping

5 Download the DevStack source

git_clone httpsgithubcomopenstack-devdevstackgit

6 Check out DevStack at the desired commit id and patch

cd homestackdevstackgit checkout 3be5e02cf873289b814da87a0ea35c3dad21765b patch -p1 lt homestackpatchesdevstackpatch

7 Clone and patch Nova

sudo mkdir optstacksudo chown stackstack optstack cd optstackgit clone httpsgithubcomopenstacknovagit cd optstacknovagit checkout 78dbed87b53ad3e60dc00f6c077a23506d228b6c patch -p1 lt homestackpatchesnovapatch

8 Create localconf file in homestackdevstack

9 Pay attention to the following in the localconf file

a Use Rabbit for messaging services (Rabbit is on by default)

Note In the past Fedora only supported QPID for OpenStack Presently it only supports Rabbit

b Explicitly disable Nova compute service on the controller This is because by default Nova compute service is enabled

disable_service n-cpu

c To use Open vSwitch specify in configuration for ML2 plug-in

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

d Explicitly disable tenant tunneling and enable tenant VLAN This is because by default tunneling is used

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

A sample localconf files for controller node is as follows

Controller node[[local|localrc]]

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

10 Install DevStack

cd homestackdevstackstacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11 For a successful installation the following shows at the end of screen output

stacksh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking

12 For controller node only mdash Add physical port(s) to the bridge(s) created by the DevStack installation The following example can be used to configure the two bridges br-p1p1 (for virtual network) and br-ex (for external network)

sudo ovs-vsctl add-port br-p1p1 p1p1sudo ovs-vsctl add-port br-ex p1p2

13 Make sure proper VLANs are created in the switch connecting physical port p1p1 For example the previous localconf specifies VLAN range of 1000-1010 therefore matching VLANs 1000 to 1010 should be configured in the switch

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node OVS_TYPE=ovs-dpdk[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstack

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192 OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111314

[libvirt]cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=password

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111313

[libvirt]cpu_mode=host-model

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

54 Virtual Network FunctionsThis section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

  - Private network (virtual network): 10.0.0.0/24

  - Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.

6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located in an NFS share and mounted at /mnt/nfs/openstack/images on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (such that any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1

6113 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2 Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using the information obtained from the previous step (a filled-in example follows this list):

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
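
Putting the pieces together, a filled-in boot command might look like the following. It assumes the fedora-basic image, onps-flavor flavor, and zone-g06 availability zone created in Section 6112; the network ID is only a placeholder for the ID returned by neutron net-list:

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-network-id> vm-demo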

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
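
Once the node has been rebooted with the regenerated GRUB configuration, the kernel command line can be checked to confirm that intel_iommu=on took effect (a simple sanity check):

cat /proc/cmdline | grep intel_iommu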

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = ["/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node at 10.11.12.4. The PCI device vendor ID (8086) and product ID of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh

On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the Devstack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
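
As a quick check, a ping between the two VMs over the VF interfaces is usually enough; the address below is only a placeholder for whatever IP the other VM obtained:

ping -c 4 <IP-of-the-other-VM>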

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.

6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
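
Once the install completes, the Karaf shell can be used to confirm that the OpenDaylight features are active; feature:list -i lists only the installed features (grep is a built-in Karaf shell command):

- feature:list -i | grep odl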

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf install might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

Controller node:

[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

Compute node:

OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

4 Enter the subnet information then click Next

5 Add additional information then click Next

6 Click Create

7 Click Launch Instances to create a VM instance.

8 Click Details to enter the VM details

9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string (or strings) filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support an IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quiet intel_iommu=ong etcdefaultgrub
grub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targz
tar zxvf libvirt-129targz

cd libvirt-129
autogensh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz
tar zxvf libvirt-python-129targz


cd libvirt-python-129
python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzero devrandom devurandom devptmx devkvm devkqemu devrtc devhpet devnettun devvfiovfio]
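
After changing etclibvirtqemuconf, libvirtd typically has to be restarted for the new cgroup_device_acl list to take effect (an extra step not spelled out in the original text):

systemctl restart libvirtd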

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
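
Note that the sriov_numvfs setting does not survive a reboot. One possible way to reapply it at boot on Fedora (an illustrative sketch only, not part of the original procedure) is to append the command to rc.local and make that file executable:

echo 'echo 2 > /sys/class/net/p1p1/device/sriov_numvfs' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local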

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with IP address 1011124. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of the following command

lspci -nn | grep 82599

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter
pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh


On Compute Node

1 Edit optstacknovarequirementstxt and add "libvirt-python>=128"

echo "libvirt-python>=128" >> optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 Mb
virtual disk size = 4Gb
number of virtual CPU = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project

7 Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
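
Inside each VM, a quick way to confirm that the VF interface came up and to test connectivity is shown below; ens5 and the peer address 192.168.2.10 are illustrative values and should be replaced with the interface name and IP address used in your deployment:

ip addr show ens5
ping -c 4 192.168.2.10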


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre >> rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11

5 Use the binkaraf executable to start the Karaf shell
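
For example, from the extracted distribution directory the Karaf shell can be started in the foreground with:

./bin/karaf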


6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
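
After the installation completes, one way to double-check the result is to list the installed features from the Karaf shell (first command) and, from a second terminal on the host, confirm that the controller is listening (second command). The ports shown (6633 for OpenFlow, 6640 for OVSDB, 8080 for the REST API) are common Helium defaults and may differ in your environment:

feature:list -i | grep ovsdb
ss -lnt | grep -E '6633|6640|8080'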

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network)

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight


Add these lines at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]]
[ml2_odl]
url=http10111388080controllernbv2neutron
username=admin
password=admin

On Compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138)

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The Karaf install might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings (see Appendix B).
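
Once stacking has completed on a node, a quick sanity check (an illustrative step, not from the original procedure) is to verify that the local Open vSwitch has registered with the OpenDaylight manager and that the br-int integration bridge was created:

ovs-vsctl get-manager
ovs-vsctl show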


Appendix A Additional OpenDaylight Information

This appendix describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. It also describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DESTstackshlog
SCREEN_LOGDIR=$DESTscreen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocal
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet110001010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# disable nova security groups
[DEFAULT]
firewall_driver=novavirtfirewallNoopFirewallDriver
novncproxy_host=0000
novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]]
[ml2_odl]
url=http101112238080controllernbv2neutron
username=admin
password=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=optstack
LOGFILE=$DESTstackshlog
SCREEN_LOGDIR=$DESTscreen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet110001010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=novavirtfirewallNoopFirewallDriver
vnc_enabled=True
vncserver_listen=0000
vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stacksh on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instances to create a VM instance by performing the steps below


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string (or strings) filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    orgopendaylightovsdbnorthbound_050
112   ACTIVE    orgopendaylightovsdb_050
262   ACTIVE    orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    orgopendaylightovsdbnorthbound_050
112   ACTIVE    orgopendaylightovsdb_050
262   RESOLVED  orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active
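
If the bundle needs to be re-enabled later, it can be started again from the same console using the bundle ID from the listing above:

osgi> start 262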


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's bashrc

$ vi ~bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server

export no_proxy=1921681221,<intranet subnets>

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands (like wget or curl) to access your proxy server first

In addition, you will also need to edit etcyumconf as root, since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
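
To double-check that the proxy settings are in effect, the environment variables and the git configuration can be inspected (a quick illustrative check):

env | grep -i proxy
git config --global --get http.proxy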


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory does not exist, create it

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor-based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W, DeMar P & Crawford M (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why Does Flow Director Cause Packet Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others


Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 27: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

27

Intelreg ONP Server Reference ArchitectureSolutions Guide

FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

10. Install DevStack:

cd /home/stack/devstack
./stack.sh


11. For a successful installation, the following message shows at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking.

12. For the controller node only, add the physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13. Make sure the proper VLANs are created in the switch connecting physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured in that switch.
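After adding the ports, it is worth confirming that the bridges contain what you expect before moving on. The following quick check is a suggestion only (it assumes the bridge names used in step 12); ovs-vsctl is already available once Open vSwitch is installed by DevStack:

# show all bridges, their ports, and any configured controller
sudo ovs-vsctl show
# list only the ports attached to the virtual network bridge
sudo ovs-vsctl list-ports br-p1p1

The physical ports p1p1 and p1p2 should appear under br-p1p1 and br-ex, respectively.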


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file, which gives instructions on how to use Intel's scripts to automate most of the installation steps described in this section to save you time. If you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from the DHCP server

• OpenStack management IP address: 10.11.12.2

• User/password: stack/stack

Note the following:

• No_proxy setup: Localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,10.11.12.2,sdnlab-k01,10.11.12.1

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, as are the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=10.11.12.1

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setup should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mounting point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this version, Intel uses specific versions of OVS with DPDK-netdev from their respective repositories. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the ovdk (OVS with DPDK-netdev) agent follows:

# Compute node  OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.14

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent follows:

# Compute node  OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.2
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.1
SERVICE_HOST=10.11.12.1

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.13.13

[libvirt]
cpu_mode=host-model


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed as an rpm package in a VM as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
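Once Suricata is running, a quick way to confirm that traffic is really being diverted through the IPS is to watch the netfilter counters. This is only a suggested sanity check using standard iptables options; it assumes the two FORWARD rules added in step 2:

# packet/byte counters on the NFQUEUE rules increase while traffic flows between the vPorts
iptables -vnL FORWARD

If the counters stay at zero while traffic is sent between the two subnets, the flows are bypassing the IPS.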

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change the setting to SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages to the kernel command line:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the two Virtio network devices:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12. Download the BNG packages:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. In the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. At the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
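With the routes in place, iperf can be used to push traffic from the Source VM to the Sink VM through the vBNG. The commands below are only an illustration; the 11.0.0.2 address is a placeholder for whatever address the Sink actually holds on its data interface:

# on the Sink VM: start an iperf server
iperf -s
# on the Source VM: send TCP traffic to the Sink for 60 seconds
iperf -c 11.0.0.2 -t 60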


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://10.11.12.1

Login information is defined in the local.conf file. In the following examples, password is the password for both admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 10.11.12.1.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example shows the image file fedora20-x86_64-basic.qcow2 located on an NFS share and mounted at /mnt/nfs/openstack/images/ on the controller host. The following command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, and so on.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
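The image, aggregate, and flavor just created can be listed back with the standard OpenStack clients. This is only a quick confirmation step, not part of the original procedure:

glance image-list
nova aggregate-list
nova flavor-list | grep onps-flavor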


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 10.11.12.1.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://10.11.12.1:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet with a name of sub-demo and CIDR address 192.168.2.0/24 for network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
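The instance state can also be checked from the command line while the demo credentials are still sourced. This is an optional check, not part of the original procedure:

# the Status column should read ACTIVE once the boot has finished
nova list
# more detail for a single instance, including the fixed IP assigned on net-demo
nova show <instance-name>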


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF


3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. NUMA placement enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the cgroup_device_acl list.

An example follows:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.
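The sysfs entries for the physical function can also be used to confirm the VF setup. This is a suggested check only; it assumes the same p1p1 interface used above:

# maximum number of VFs the device supports
cat /sys/class/net/p1p1/device/sriov_totalvfs
# one virtfn symlink appears per VF that was created
ls -l /sys/class/net/p1p1/device/virtfn*

Note that the value written to sriov_numvfs does not survive a reboot, so it must be re-applied (or scripted) after each restart.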

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with 10.11.12.4. The PCI device vendor ID (8086) and product ID of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep XL710

On the Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On the Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with the standard OVS.

5. Run stack.sh for both the controller and compute nodes to complete the DevStack installation.
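After stack.sh completes on the compute node, the whitelist and SR-IOV driver settings can be spot-checked in the generated configuration files. The file locations below are the usual DevStack defaults and are shown only as a suggested verification:

# the three whitelist entries added in the compute local.conf
grep pci_passthrough_whitelist /etc/nova/nova.conf
# the supported vendor/device IDs for the sriovnicswitch mechanism driver
grep -A2 "\[ml2_sriov\]" /etc/neutron/plugins/ml2/ml2_conf.ini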

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entry(ies) of PCI device(s) similar to the following:

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from the OpenStack Horizon dashboard. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like a normal network.
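A simple end-to-end check, once both VMs have addresses on their VF interfaces, is an ICMP ping between them. The address below is a placeholder only; substitute whatever the DHCP server actually assigned to the second VM:

# from the VM on the first compute host
ping -c 4 10.11.12.102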


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install java:

yum install java -y

b. Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell.


6. Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf install might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
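Before moving on, it can be useful to confirm from the Karaf shell that the OVSDB and DLUX features actually reached the installed state. This is only a suggested check using standard Karaf commands:

feature:list -i | grep ovsdb
feature:list -i | grep dlux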


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node  OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run a stack on the controller and compute nodes.

1. Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the following GUI.


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string (or strings) filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you don't want to reach through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments, the proxy needs to be defined as part of the Maven XML settings file.

If the settings.xml file in the ~/.m2 directory does not exist, create the directory:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
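A quick way to confirm that Maven picked up the proxy definition is to dump the effective settings. This is an optional check and assumes Maven is already installed:

mvn help:effective-settings | grep -A6 "<proxies>"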


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 28: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

28

11. For a successful installation, the following is shown at the end of the screen output:

stack.sh completed in XXX seconds

where XXX is the number of seconds it took to complete stacking.

12. For the controller node only, add the physical port(s) to the bridge(s) created by the DevStack installation. The following example configures the two bridges br-p1p1 (for the virtual network) and br-ex (for the external network):

sudo ovs-vsctl add-port br-p1p1 p1p1
sudo ovs-vsctl add-port br-ex p1p2

13. Make sure the proper VLANs are created on the switch connected to physical port p1p1. For example, the previous local.conf specifies a VLAN range of 1000-1010, so matching VLANs 1000 to 1010 should be configured on that switch.
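Before moving on, it is worth confirming that OVS sees the ports on the expected bridges. A minimal check (assuming the bridge and port names used above) is:

# List the bridges and their ports as seen by Open vSwitch
sudo ovs-vsctl show

# Or check each bridge individually
sudo ovs-vsctl list-ports br-p1p1
sudo ovs-vsctl list-ports br-ex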


5.3 Compute Node Setup

This section describes how to complete the setup of the compute nodes. It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections.

Note: Make sure to download and use the onps_server_1_3.tar.gz tarball. Start with the README file, which explains how to use Intel's scripts to automate most of the installation steps described in this section and save time. If you use them, you can jump to Section 5.4 after installing OpenDaylight based on the instructions described in Section 6.2.

5.3.1 Host Configuration

5.3.1.1 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and Open vSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node. Differences include:

• Required services are nova compute, neutron agent, and Rabbit.

• Open vSwitch with DPDK-netdev is used in place of Open vSwitch for the neutron agent.

Compute Node Installation Example

The following example uses a host for the compute node installation with the following:

• Hostname: sdnlab-k02

• Lab network IP address: obtained from DHCP server

• OpenStack management IP address: 1011122

• User/password: stack/stack

Note the following:

• No_proxy setup: localhost and its IP address should be included in the no_proxy setup. In addition, the hostname and IP address of the controller node should also be included. For example:

export no_proxy=localhost,1011122,sdnlab-k01,1011121

Refer to Appendix B if you need more details about setting up the proxy.

• Differences in the local.conf file:

- The service host is the controller, which also runs the other OpenStack servers such as MySQL, Rabbit, Keystone, and Image. Therefore, they should be spelled out. Using the controller node example in the previous section, the service host and its IP address should be:

SERVICE_HOST_NAME=sdnlab-k01
SERVICE_HOST=1011121

- The only OpenStack services required on compute nodes are messaging, nova compute, and neutron agent, so the local.conf might look like:

disable_all_services
enable_service rabbit
enable_service n-cpu
enable_service q-agt


- The user has the option to use openvswitch for the neutron agent:

Q_AGENT=openvswitch

Notes: 1. For openvswitch, the user can specify regular OVS or OVS with DPDK-netdev. If OVS with DPDK-netdev is used, the following setting should be added:

OVS_DATAPATH_TYPE=netdev

2. If both are specified in the same local.conf file, the later one overwrites the previous one.

- For the OVS with DPDK-netdev huge pages setting, specify the number of hugepages to be allocated and the mount point (default is /mnt/huge):

OVS_NUM_HUGEPAGES=8192

- For this release, Intel uses a specific version of OVS with DPDK-netdev from its repository. Specify the following in the local.conf file if OVS with DPDK-netdev is used:

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

- For both regular OVS and OVS with DPDK-netdev, binding the physical port to the bridge is done through the following line in local.conf. For example, to bind port p1p1 to bridge br-p1p1, use:

OVS_PHYSICAL_BRIDGE=br-p1p1

- A sample local.conf file for a compute node with the OVS with DPDK-netdev (OVDK) agent is as follows:

# Compute node OVS_TYPE=ovs-dpdk
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack


LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111314

[libvirt]
cpu_mode=host-model

- A sample local.conf file for a compute node with the accelerated OVS agent is as follows:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111313

[libvirt]
cpu_mode=host-model
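After stack.sh completes on a compute node, a quick sanity check of the vSwitch and the nova compute service can save debugging time later. This is only a sketch; it assumes the bridge name br-enp8s0f0 from the samples above, and nova service-list is run on the controller:

# On the compute node: the physical bridge created by DevStack should exist
sudo ovs-vsctl show
sudo ovs-vsctl list-ports br-enp8s0f0

# With OVS with DPDK-netdev, the switch daemon runs in user space
ps -ef | grep ovs-vswitchd

# On the controller: the nova-compute service for this host should report as "up"
nova service-list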


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to the compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

2. Mangle all traffic from one vPort to the other using a netfilter queue:

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3. Have Suricata run in inline mode using the netfilter queue:

suricata -c /etc/suricata/suricata.yaml -q 0

4. Enable ARP proxying:

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
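The settings above do not persist across a reboot of the VM. A minimal way to verify the rules and make the settings persistent (a sketch, assuming the same eth1/eth2 interface names) is:

# Verify that the NFQUEUE rules are installed
iptables -L FORWARD -v -n

# Persist IP forwarding and proxy ARP across reboots
cat >> /etc/sysctl.conf << 'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.eth1.proxy_arp = 1
net.ipv4.conf.eth2.proxy_arp = 1
EOF
sysctl -p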

5.4.2 Installing and Configuring the vBNG

1. Execute the following command in a Fedora VM with two Virtio interfaces:

yum -y update

2. Disable SELinux:

setenforce 0
vi /etc/selinux/config

and change it so that SELINUX=disabled.

3. Disable the firewall:

systemctl disable firewalld.service
reboot

4. Edit the grub default configuration:

vi /etc/default/grub

5. Add hugepages:

... noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4


6. Verify that hugepages are available in the VM:

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:       1048576 kB
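If HugePages_Total shows 0, the most likely cause is that the new kernel command line has not been applied yet. Regenerating the grub configuration and rebooting the VM usually resolves it (a sketch, assuming a standard BIOS-booted Fedora guest):

# Regenerate the grub configuration with the new kernel parameters, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

# After the reboot, re-check the hugepage counters
grep -i huge /proc/meminfo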

7. Add the following to the end of the ~/.bashrc file:

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8. Log in again or source the file:

source ~/.bashrc

9. Install DPDK:

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10. Check the PCI addresses of the two Virtio network interfaces:

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11. Use the DPDK binding script to bind the interfaces to DPDK instead of the kernel:

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0

12. Download the BNG package:

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13. Extract the DPPD BNG sources:

unzip dppd-bng-v013.zip

14. Build the BNG DPPD application:

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this:

./build/dppd -f config/handle_none.cfg

When run under OpenStack, it should look as shown below.


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1. Install iperf:

yum install -y iperf

2. Turn on IP forwarding:

sysctl -w net.ipv4.ip_forward=1

3. On the source, add the route to the sink:

route add -net 11.0.0.0/24 eth0

4. On the sink, add the route to the source:

route add -net 10.0.0.0/24 eth0
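With the routes in place, traffic can be generated through the vBNG with iperf. A minimal run might look like the following sketch; the sink address is an assumption and should be replaced with the IP actually assigned to the sink VM:

# On the sink VM: start an iperf server
iperf -s

# On the source VM: send TCP traffic toward the sink for 60 seconds,
# reporting every 10 seconds
iperf -c <sink-vm-ip> -t 60 -i 10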


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute-node setup, although it is possible to have more than one virtual network in a single-compute-node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings:

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with a different setup (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http://1011121

Login information is defined in the local.conf file. In the following examples, password is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121.

1. Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2. Source admin-cred into the shell environment for the actions of creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3. Create an OpenStack glance image. A VM image file should be ready in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

The following example assumes the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstackimages on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstackimages/fedora20-x86_64-basic.qcow2

4. Create a host aggregate and availability zone.

First find out the available hypervisors, and then use that information to create an aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06, and the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5. Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of the virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
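A quick way to confirm that the image, aggregate/availability zone, and flavor were created as expected (a sketch using the example names above, run with the admin credentials sourced) is:

# The custom image, aggregate, and flavor should all be listed
glance image-list | grep fedora-basic
nova aggregate-details aggr-g06
nova flavor-list | grep onps-flavor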


6.1.1.3 Example - VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1. Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2. Source demo-cred into the shell environment for the actions of creating the tenant network and instance (VM):

source demo-cred

3. Create a network for the tenant demo by performing the following steps:

a. Get the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network with the name net-demo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b. Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR 192.168.2.0/24 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4. Create the instance (VM) for the tenant demo:

a. Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b. Launch an instance (VM) using the information obtained from the previous step:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c. The new VM should be up and running in a few minutes.

5. Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show up in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.
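Putting the pieces of this section together, a concrete launch could look like the sketch below. The image, flavor, zone, and network names come from the earlier examples; the instance name vm-demo1 is only an illustration, and the IDs must be looked up in your own deployment:

# Look up the ID of the tenant network created earlier
NET_ID=$(neutron net-list | awk '/net-demo/ {print $2}')

# Boot a VM using the custom image, flavor, and availability zone
nova boot --image fedora-basic --flavor onps-flavor \
  --availability-zone zone-g06 --nic net-id=$NET_ID vm-demo1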


6.1.1.4 Local VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1 Local VNF


3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.

6.1.1.5 Remote VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by VM1.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was implemented as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through gives a guest direct access to a VF.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the previous features, follow these steps to configure the compute node.

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to the

cgroup_device_acl list.

An example follows:

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.
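The VF PCI addresses discovered here are exactly what the pci_passthrough_whitelist entries in the next section refer to. A short way to double-check them (a sketch, assuming the PF is p1p1) is:

# Vendor:device IDs of the PF and VFs (e.g., 8086:10fb / 8086:10ed for an 82599)
lspci -nn | grep -i ethernet

# The VFs also show up as virtfn* links under the PF's sysfs node
ls -l /sys/class/net/p1p1/device/virtfn*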

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with 1011124. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep XL710

On the Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On the Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with standard OVS.

5. Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2. The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
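Inside each guest, a quick connectivity check over the VF interface could look like this sketch (ens5 is just the example name from the previous step; replace the peer address with the other VM's IP):

# Confirm the SR-IOV VF interface came up and has an address
ip addr show ens5

# Ping the VM on the other compute node over the VF path
ping -c 4 <peer-vm-ip>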


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install java:

yum install java -y

b. Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell.


6. Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network.)

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]]. (This assumes that the controller management IP address is 1011138.)

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The installation of Karaf or of a feature might take a long time, and it might fail if the host does not have network access. You'll need to set up the appropriate proxy settings.
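Once Karaf is up, it is worth verifying that the OVSDB features are actually installed and that the controller is listening before re-running stack.sh. A minimal check (the port numbers are the usual OpenFlow and OVSDB defaults and may differ in your setup) is:

# From the Karaf console: installed features should include the ovsdb ones
feature:list -i | grep ovsdb

# From the host shell: the controller should be listening for OpenFlow (6633)
# and OVSDB (6640) connections
netstat -lntp | grep -E '6633|6640'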


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224
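After stacking both hosts with these files, each node's OVS instance should be connected to the OpenDaylight controller. A quick way to check this from either host (a sketch) is:

# The Manager entry should point at the ODL controller and show is_connected: true
sudo ovs-vsctl show | grep -A 2 Manager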

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7. Click Launch Instances to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

and add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your http proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=1921681221,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

and add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments, the proxy needs to be defined as part of the Maven XML settings file.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why Does Flow Director Cause Packet Reordering? http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 29: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

29

Intelreg ONP Server Reference ArchitectureSolutions Guide

53 Compute Node SetupThis section describes how to complete the setup of the compute nodes It is assumed that the user has successfully completed the BIOS settings and operating system installation and configuration sections

Note Make sure to download and use the onps_server_1_3targz tarball Start with the README file You will get instructions on how to use Intelrsquos scripts to automate most of the installation steps described in this section to save you time If using them you can jump to Section 54 after installing OpenDaylight based on the instructions described in Section 62

531 Host Configuration

5311 Using DevStack to Deploy vSwitch and OpenStack Components

General

Deploying OpenStack and OpenvSwitch with DPDK-netdev using DevStack on a compute node follows the same procedures as on the controller node Differences include

bull Required services are nova compute neutron agent and Rabbit

bull OpenvSwitch with DPDK‐netdev is used in place of OpenvSwitch for neutron agent

Compute Node Installation Example

The following example uses a host for the compute node installation with the following

bull Hostname sdnlab-k02

bull Lab network IP address Obtained from DHCP server

bull OpenStack Management IP address 1011122

bull Userpassword stackstack

Note the following

bull No_proxy setup Localhost and its IP address should be included in the no_proxy setup In addition hostname and IP address of the controller node should also be included For example

export no_proxy=localhost1011122sdnlab-k011011121

Refer to Appendix B if you need more details about setting up the proxy

bull Differences in the localconf file

mdash The service host is the controller as well as other OpenStack servers such as MySQL Rabbit Keystone and Image Therefore they should be spelled out Using the controller node example in the previous section the service host and its IP address should be

SERVICE_HOST_NAME=sdnlab-k01SERVICE_HOST=1011121

mdash The only OpenStack services required in compute nodes are messaging nova compute and neutron agent so the localconf might look like

disable_all_services enable_service rabbitenable_service n-cpu enable_service q-agt

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node OVS_TYPE=ovs-dpdk[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstack

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan
OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111314

[libvirt]
cpu_mode=host-model

- A sample localconf file for a compute node with the accelerated OVS agent follows:

Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=1011122
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=1011121
SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password


SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111313

[libvirt]
cpu_mode=host-model
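Once stack.sh completes on the compute node, the bridge and port binding can be verified with the standard Open vSwitch CLI (a convenience check added here; substitute the bridge name used in your localconf, e.g. br-p1p1 or br-enp8s0f0):

ovs-vsctl show
ovs-vsctl list-ports br-p1p1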


5.4 Virtual Network Functions

This section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers. They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes.

5.4.1 Installing and Configuring vIPS

The vIPS used is Suricata, which should be installed in a VM as an RPM package as previously described. To configure it to run in inline mode (IPS), perform the following steps:

1 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE
iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c /etc/suricata/suricata.yaml -q 0

4 Enable ARP proxying

echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
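To confirm traffic is actually traversing Suricata, two optional checks (not part of the original steps) are to watch the NFQUEUE rule counters and to tail the default alert log (this assumes the default /var/log/suricata log directory):

iptables -L FORWARD -v -n
tail -f /var/log/suricata/fast.log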

5.4.2 Installing and Configuring the vBNG

1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0
vi /etc/selinux/config

and change the setting so that SELINUX=disabled

3 Disable the firewall

systemctl disable firewalld.service
reboot

4 Edit the grub default configuration

vi /etc/default/grub

5 Add hugepages and CPU isolation parameters to the kernel command line (GRUB_CMDLINE_LINUX)

… noirqbalance intel_idle.max_cstate=0 processor.max_cstate=0 ipv6.disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1,2,3,4
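The edited defaults take effect only after the GRUB configuration is regenerated and the VM is rebooted; a minimal sketch for a BIOS-booted Fedora guest follows (grub2-mkconfig is the same tool used later in Section 6.1.2.1):

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot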


6 Verify that hugepages are available in the VM

cat /proc/meminfo
HugePages_Total:    2
HugePages_Free:     2
Hugepagesize:  1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------
export RTE_SDK=/root/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export OVS_DIR=/root/ovs

export RTE_UNBIND=$RTE_SDK/tools/dpdk_nic_bind.py
export DPDK_DIR=$RTE_SDK
export DPDK_BUILD=$DPDK_DIR/$RTE_TARGET
---------------------------------------------

8 Log in again or source the file

source ~/.bashrc

9 Install DPDK

git clone http://dpdk.org/git/dpdk
cd dpdk
git checkout v1.7.1
make install T=$RTE_TARGET
modprobe uio
insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko

10 Check the PCI addresses of the two network interfaces (in this VM they appear as Virtio network devices)

lspci | grep Ethernet
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:04.0
$RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0
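To confirm both ports are now bound to the igb_uio driver, the binding script's status report can be consulted (an optional check):

$RTE_SDK/tools/dpdk_nic_bind.py --status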

12 Download BNG packages

wget https://01.org/sites/default/files/downloads/intel-data-plane-performance-demonstrators/dppd-bng-v013.zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013.zip

14 Build a BNG DPPD application

yum -y install ncurses-devel
cd dppd-BNG-v013
make

The application starts like this

./build/dppd -f config/handle_none.cfg

When run under OpenStack it should look as shown below


5.4.3 Configuring the Network for Sink and Source VMs

Sink and Source are two Fedora VMs that are used to generate traffic.

1 Install iperf

yum install -y iperf

2 Turn on IP forwarding

sysctl -w net.ipv4.ip_forward=1

3 In the source add the route to the sink

route add -net 11.0.0.0/24 eth0

4 At the sink add the route to the source

route add -net 10.0.0.0/24 eth0
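With the routes in place, traffic can be generated end to end using iperf; for example (substitute the sink VM's actual address):

On the sink: iperf -s
On the source: iperf -c <sink-ip> -t 60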


6.0 Testing the Setup

This section describes how to bring up the VMs in a compute node, connect them to the virtual network(s), and verify functionality.

Note: Currently it is not possible to have more than one virtual network in a multi-compute node setup, although it is possible in a single compute node setup.

6.1 Preparing with OpenStack

6.1.1 Deploying Virtual Machines

6.1.1.1 Default Settings

OpenStack comes with the following default settings

• Tenant (Project): admin and demo

• Network:

- Private network (virtual network): 10.0.0.0/24

- Public network (external network): 172.24.4.0/24

• Image: cirros-0.3.1-x86_64

• Flavor: nano, micro, tiny, small, medium, large, and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image, flavor, or network), users must create their own. See below for details on how to create them.

To access the OpenStack dashboard, use a web browser (Firefox, Internet Explorer, or others) and the controller's IP address (management network). For example:

http1011121

Login information is defined in the localconf file. In the following examples, "password" is the password for both the admin and demo users.


6.1.1.2 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121.

1 Create a credential file admin-cred for the admin user The file contains the following lines

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images/ on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors, and then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, disk space, and so on.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1
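To confirm the flavor was registered as expected (an optional check), display it by name:

nova flavor-show onps-flavor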


6.1.1.3 Example: VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for the demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network named "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM, as shown below
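Putting the preceding steps together, a boot command using the example names created earlier in this section might look like the following (demo-vm1 and the net-id placeholder are illustrative; use the ID returned by neutron net-list for net-demo):

nova boot --image fedora-basic --flavor onps-flavor --availability-zone zone-g06 --nic net-id=<net-demo-id> demo-vm1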


6.1.1.4 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6.1.1.5 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic gets consumed by the IPS VM

4 The IPS receives the flow, inspects it, and (unless malicious) sends it out through its second vHost port into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer The following example uses v1.2.9

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9

libvirtd --version

5 Install libvirt-python The example below uses v1.2.9 to match the libvirt version

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding /dev/vfio/vfio to the cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
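Note that the sriov_numvfs setting does not persist across reboots. One simple way to reapply it at boot on Fedora (an addition to the original procedure, assuming p1p1 is still the interface name) is to append it to rc.local:

echo 'echo 2 > /sys/class/net/p1p1/device/sriov_numvfs' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local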

6.1.2.2 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with 1011124. The PCI vendor ID (8086) and product IDs (10fb for the physical function and 10ed for the VF of an 82599) can be obtained from the lspci output:

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5.2.1.3 is used here; add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8"

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5.3.1.1 is used here

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the Devstack installation

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify the flavor for NUMA placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
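For example, if the two VMs obtained the hypothetical addresses 192.168.2.11 and 192.168.2.12 on their VFs, connectivity can be checked from the first VM with:

ping -c 4 192.168.2.12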


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre)

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre' >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
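To confirm the features were activated, the Karaf shell can list only the installed features (a convenience check):

feature:list -i | grep ovsdb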

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf feature installation might take a long time to start or complete. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.
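Once both nodes have been restacked with these settings, one optional way to confirm that the ODL neutron northbound API is reachable with the configured credentials is to query it directly (substitute the ODL controller's management IP):

curl -u admin:admin http://<ODL-controller-ip>:8080/controller/nb/v2/neutron/networks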


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host

Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State      Bundle
106   ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE     org.opendaylight.ovsdb_0.5.0
262   RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 30: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

30

mdash The user has option to use openvswitch for the neutron agent

Q_AGENT=openvswitch

Notes 1 For openvswitch the user can specify regular OVS or OVS with DPDK‐netdev If OVS with DPDK‐netdev is used the following setup should be added

OVS_DATAPATH_TYPE=netdev

2 If both are specified in the same localconf file the later one overwrites the previous one

mdash For the OVS with DPDK‐netdev huge pages setting specify The number of hugepages to be allocated and mounting point (default is mnthuge)

OVS_NUM_HUGEPAGES=8192

mdash For this version Intel uses specific versions for OVS with DPDK‐netdev from their respective repositories Specify the following in the localconf file if OVS with DPDK‐netdev is used

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

mdash For regular OVS and OVS with DPDK-netdev binding the physical port to the bridge is through the following line in localconf For example to bind port p1p1 to bridge br-p1p1 use

OVS_PHYSICAL_BRIDGE=br-p1p1

mdash A sample localconf file for compute node with ovdk agent is as follows

Compute node OVS_TYPE=ovs-dpdk[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstack

31

Intelreg ONP Server Reference ArchitectureSolutions Guide

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanOVS_NUM_HUGEPAGES=8192 OVS_DATAPATH_TYPE=netdev

OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111314

[libvirt]cpu_mode=host-model

mdash A sample localconf file for compute node with accelerated ovs agent is as follows

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=1011122HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=1011121SERVICE_HOST=1011121

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=1011121

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=password

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111313

[libvirt]cpu_mode=host-model

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

54 Virtual Network FunctionsThis section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that the virtual functions are enabled:

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions.
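As an additional, optional check, the VFs also show up under the parent interface. Note that the sriov_numvfs setting does not persist across reboots, so it must be re-applied (or scripted) after each boot:

ip link show p1p1                        # lists vf 0 and vf 1 with their MAC addresses
ls /sys/class/net/p1p1/device/virtfn*    # one symlink per virtual function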

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product ID of the 82599 can be obtained from the command output (10fb for the physical function and 10ed for a VF):

lspci -nn | grep XL710

On Controller Node

1 Edit the controller local.conf. Note that the same local.conf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh


On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5311 is used here.

3 Add the following

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}
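The PCI addresses used in the whitelist must match the PF and VF addresses on your compute node (the 0000:08:xx.x values above are examples from this setup). They can be confirmed, for example, with:

lspci -D -nn | grep -i ethernet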

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor. For example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112), and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as to a standard network interface.

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network
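For example, a minimal connectivity check between two such VMs could look like the following (the interface name and addresses are illustrative and depend on your DHCP/network setup):

# on the first VM
ip addr add 192.168.100.11/24 dev ens5
ip link set ens5 up

# on the second VM
ip addr add 192.168.100.12/24 dev ens5
ip link set ens5 up
ping -c 3 192.168.100.11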


62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
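A quick sanity check that the Java environment is usable (not spelled out in the original steps):

echo $JAVA_HOME
java -version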

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz
cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell:
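For example, from the extracted distribution directory (the exact command line is implied by, but not written out in, the step above):

./bin/karaf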


6 Install the required ODL features from the Karaf shell

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
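To confirm that the features were installed, the installed-feature list can be filtered from the same Karaf shell, for example:

feature:list -i | grep ovsdb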

7 Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight
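After re-running stack.sh with these changes, one way to verify that the vSwitch on each node is now managed by OpenDaylight is to check that the OVS manager entry points at the ODL IP address (port 6640 is typically used for OVSDB; adjust expectations to your setup):

ovs-vsctl show | grep -A1 Manager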

Note: Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a Vxlan tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next


4 Enter the subnet information then click Next


5 Add additional information then click Next

6 Click Create


7 Click Launch Instance to create a VM instance.


8 Click Details to enter the VM details


9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just for your own account, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers execute the following command

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
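To check that Maven actually picks up these proxy settings, the effective settings can be printed (this assumes Maven is installed; the help plugin may itself need network access the first time):

mvn help:effective-settings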


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.



6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name / Source

Internet Protocol version 4
http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6
http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet
http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller
http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO
https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness
http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux
http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599
http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.
http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering
http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing
http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture
http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK
http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch
https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the US and/or other countries. Other names and brands may be claimed as the property of others.

Page 32: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

32

SERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

OVS_GIT_TAG=

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111313

[libvirt]cpu_mode=host-model

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

54 Virtual Network FunctionsThis section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 33: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

33

Intelreg ONP Server Reference ArchitectureSolutions Guide

54 Virtual Network FunctionsThis section describes the Virtual Network Functions (VNFs) that have been used in the Open Network Platform for servers They assume Virtual Machines (VMs) that have been prepared in a similar way to compute nodes

541 Installing and Configuring vIPSThe vIPS used is Suricata which should be installed as an rpm package as previously described in a VM In order to configure it to run in inline mode (IPS) perform the following steps

1 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

2 Mangle all traffic from one vPort to the other using a netfilter queue

iptables -I FORWARD -i eth1 -o eth2 -j NFQUEUE iptables -I FORWARD -i eth2 -o eth1 -j NFQUEUE

3 Have Suricata run in inline mode using the netfilter queue

suricata -c etcsuricatasuricatayaml -q 0

4 Enable ARP proxying

echo 1 gt procsysnetipv4confeth1proxy_arp echo 1 gt procsysnetipv4confeth2proxy_arp

542 Installing and Configuring the vBNG1 Execute the following command in a Fedora VM with two Virtio interfaces

yum -y update

2 Disable SELinux

setenforce 0vi etcselinuxconfig

And change so SELINUX=disabled

3 Disable the firewall

systemctl disable firewalldservicereboot

4 Edit the grub default configuration

vi etcdefaultgrub

5 Add hugepages

hellip noirqbalance intel_idlemax_cstate=0 processormax_cstate=0 ipv6disable=1 default_hugepagesz=1G hugepagesz=1G hugepages=2 isolcpus=1234

Intelreg ONP Server Reference ArchitectureSolutions Guide

34

6 Verify that hugepages are available in the VM

cat procmeminfoHugePages_Total2HugePages_Free2 Hugepagesize1048576 kB

7 Add the following to the end of ~bashrc file

---------------------------------------------export RTE_SDK=rootdpdkexport RTE_TARGET=x86_64-native-linuxapp-gcc export OVS_DIR=rootovs

export RTE_UNBIND=$RTE_SDKtoolsdpdk_nic_bindpy export DPDK_DIR=$RTE_SDKexport DPDK_BUILD=$DPDK_DIR$RTE_TARGET ---------------------------------------------

8 Log in again or source the file

bashrc

9 Install DPDK

git clone httpdpdkorggitdpdk cd dpdkgit checkout v171make install T=$RTE_TARGET modprobe uioinsmod $RTE_SDK$RTE_TARGETkmodigb_uioko

10 Check the PCI addresses of the 82599 cards

lspci | grep Ethernet00040 Ethernet controller Red Hat Inc Virtio network device 00050 Ethernet controller Red Hat Inc Virtio network device

11 Use the DPDK binding scripts to bind the interfaces to DPDK instead of the kernel

$RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00040 $RTE_SDKtoolsdpdk_nic_bindpy ndashb igb_uio 00050

12 Download BNG packages

wget https01orgsitesdefaultfilesdownloadsintel-data-plane-performance- demonstratorsdppd-bng-v013zip

13 Extract DPPD BNG sources

unzip dppd-bng-v013zip

14 Build a BNG DPPD application

yum -y install ncurses-devel cd dppd-BNG-v013make

The application starts like this

builddppd -f confighandle_nonecfg

When run under OpenStack it should look as shown below

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5. Log in to the OpenStack dashboard using the demo user credentials and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.


6.1.1.4 Local VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3. Flows get programmed to the vSwitch by the OpenDaylight controller (Section 6.2).

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch.

2. The vSwitch forwards the flow to the first vPort of VM2 (active IPS).

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3. The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4. The vSwitch forwards it to VM3.
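To verify that these flows were actually programmed by the OpenDaylight controller, the vSwitch flow tables can be dumped on the compute node. A minimal sketch, assuming the integration bridge is named br-int (the usual OVS integration bridge); depending on the OpenFlow version configured on the bridge, the -O option may be needed:

ovs-ofctl dump-flows br-int
# If the bridge is configured for OpenFlow 1.3:
ovs-ofctl -O OpenFlow13 dump-flows br-int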

6.1.1.5 Remote VNF

Configuration

1. OpenStack brings up the VMs and connects them to the vSwitch.

2. The IP addresses of the VMs get configured using the DHCP server.

Data Path (Numbers Matching Red Circles)

1. VM1 sends a flow to VM3 through the vSwitch inside compute node 1.

2. The vSwitch forwards the flow out of the first port to the first port of compute node 2.

3. The vSwitch of compute node 2 forwards the flow to the first port of the vHost, where the traffic is consumed by the IPS VM.

4. The IPS receives the flow, inspects it, and (unless malicious) sends it out through the second port of its vHost into the vSwitch of compute node 2.

5. The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1.

6. The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3, where the flow is terminated.

Figure 6-2 Remote VNF


6.1.2 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guest systems to particular NUMA nodes for optimization. With an SR-IOV-enabled network interface card, each SR-IOV port is associated with a virtual function (VF). OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6.1.2.1 Preparing the Compute Node for SR-IOV Pass-through

To enable the preceding features, follow these steps to configure the compute node:

1. The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2. Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
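A reboot is required for the new kernel command line to take effect. Afterwards, one way to confirm that the option is active (a quick optional check):

cat /proc/cmdline
dmesg | grep -e DMAR -e IOMMU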

3. Install the necessary packages:

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4. Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5. Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz


cd libvirt-python-1.2.9
python setup.py install

6. Modify /etc/libvirt/qemu.conf by adding:

/dev/vfio/vfio

to

the cgroup_device_acl list.

An example follows

cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
   "/dev/vfio/vfio"
]

7. Enable the SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
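The output should look similar to the following sketch; the exact bus/device/function numbers and device ID strings depend on the platform and adapter, so treat these values as illustrative only:

08:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583]
08:02.0 Ethernet controller: Intel Corporation XL710/X710 Virtual Function [8086:154c]
08:02.1 Ethernet controller: Intel Corporation XL710/X710 Virtual Function [8086:154c]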

6.1.2.2 DevStack Configurations

In the following text, the example uses a controller with IP address 10.11.12.1 and a compute node with IP address 10.11.12.4. The PCI device vendor ID (8086) and the product IDs of the 82599 can be obtained from the output (10fb for the physical function and 10ed for the VF):

lspci -nn | grep XL710

On Controller Node

1. Edit the controller local.conf. Note that the same local.conf file of Section 5.2.1.3 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb 8086:10ed

2. Run stack.sh.


On Compute Node

1. Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2. Edit the compute local.conf for OVS with DPDK-netdev. Note that the same local.conf file of Section 5.3.1.1 is used here.

3. Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}
pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4. Remove (or comment out) the following:

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note: Currently, SR-IOV pass-through is only supported with a standard OVS.

5. Run stack.sh on both the controller and compute nodes to complete the DevStack installation.

6.1.2.3 Creating the VM with NUMA Placement and SR-IOV

1. After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2. The output should show entries for the PCI device(s) similar to the following:

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 0000:08:00.0 | NULL | NULL | 0 |

3. Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where:

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1


4. Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5. Show detailed information of the flavor:

nova flavor-show 1001

6. Create a VM named numa-vm1 with the flavor numa-flavor under the default project demo:

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7. Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should communicate with each other just like on a normal network.
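A minimal connectivity check, assuming the VF interface came up as ens5 in both VMs and no DHCP server is present (the interface name and addresses shown are illustrative):

# On the VM on the first compute host:
ip addr add 192.168.100.11/24 dev ens5
ip link set ens5 up

# On the VM on the second compute host:
ip addr add 192.168.100.12/24 dev ens5
ip link set ens5 up
ping -c 3 192.168.100.11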


6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1. Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2. Set the Java home. JAVA_HOME must be set to run Karaf.

a. Install Java:

yum install java -y

b. Find the Java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c. Set the Java home in the shell environment (assuming the Java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
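To confirm that the variable is set for the current shell (a quick optional check):

echo $JAVA_HOME
$JAVA_HOME/bin/java -version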

3. If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4. Extract the archive and cd into it:

tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

cd distribution-karaf-0.2.1-Helium-SR1.1

5. Use the bin/karaf executable to start the Karaf shell:
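For example, assuming the shell is still in the extracted distribution directory:

./bin/karaf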


6. Install the required ODL features from the Karaf shell:

feature:list

feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7. Update the local.conf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: The Karaf installation might take a long time to start or to install a feature. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight


Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt


DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24
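Once stack.sh has completed on both hosts with these files (see Section A.1 below), a quick way to confirm that each OVS instance is being managed by the OpenDaylight controller is to check the OVSDB manager entry. This is a sketch; the manager IP corresponds to ODL_MGR_IP above, and the port shown assumes ODL's default OVSDB listener:

ovs-vsctl show
# The output should include a manager entry similar to:
#   Manager "tcp:10.11.12.11:6640"
#       is_connected: true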

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI


3. Create a new VXLAN network:

a. Click Network.

b. Click Create Network.

c. Enter the network name and then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click Create.


7. Click Launch Instances to create a VM instance.


8. Click Details to enter the VM details.


9. Click Networking, then enter the network information.

The VM is now created.
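At this point, basic connectivity between instances can be checked from the instance consoles; a minimal sketch, assuming two VMs were created on the network above and have obtained IP addresses:

# From the console of the first VM:
ping -c 3 <ip address of the second VM>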

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string (or strings) filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State       Bundle
106   ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE      org.opendaylight.ovsdb_0.5.0
262   RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
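If the bundle needs to be enabled again later, it can be started from the same console using the bundle ID reported by ss (262 in this example):

osgi> start 262
osgi> ss ovs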


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=192.168.1.221,<intranet subnets>

If you want to make the change for all users instead of just your own, make the above additions to /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
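To check that Maven actually picks up the proxy configuration, the effective settings can be printed (a quick optional check; it requires Maven to be installed):

mvn help:effective-settings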


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name: Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.



export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name / Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 35: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

35

Intelreg ONP Server Reference ArchitectureSolutions Guide

543 Configuring the Network for Sink and Source VMsSink and Source are two Fedora VMs that are used to generate traffic

1 Install iperf

yum install ndashy iperf

2 Turn on IP forwarding

sysctl -w netipv4ip_forward=1

3 In the source add the route to the sink

route add -net 1100024 eth0

4 At the sink add the route to the source

route add -net 1000024 eth0

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 36: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

36

NOTE This page intentionally left blank

37

Intelreg ONP Server Reference ArchitectureSolutions Guide

60 Testing the Setup

This section describes how to bring up the VMs in a compute node connect them to the virtual network(s) and verify functionality

Note Currently it is not possible to have more than one virtual network in a multi-compute node setup Although it is possible to have more than one virtual network in a single compute node setup

61 Preparing with OpenStack

611 Deploying Virtual Machines

6111 Default Settings

OpenStack comes with the following default settings

bull Tenant (Project) admin and demo

bull Network

mdash Private network (virtual network) 1000024

mdash Public network (external network) 172244024

bull Image cirros-031-x86_64

bull Flavor nano micro tiny small medium large and xlarge

To deploy new instances (VMs) with different setups (such as a different VM image flavor or network) users must create their own See below for details on how to create them

To access the OpenStack dashboard use a web browser (Firefox Internet Explorer or others) and the controllers IP address (management network) For example

http1011121

Login information is defined in the localconf file In the following examples password is the password for both admin and demo users

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image flavor and aggregateavailability zone using OpenStack commands The examples assume the IP address of the controller is 1011121

1 Create a credential file admin-cred for admin user The file contains the following lines

export OS_USERNAME=adminexport OS_TENANT_NAME=adminexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source admin-cred to the shell environment for actions of creating glance image aggregateavailability zone and flavor

source admin-cred

3 Create an OpenStack glance image A VM image file should be ready in a location accessible by OpenStack

glance image-create --name ltimage-name-to-creategt --is-public=true --container-format=bare --disk-format=ltformatgt --file=ltimage-file-path-namegt

The following example shows the image file fedora20-x86_64-basicqcow2 is located in a NFS share and mounted at mntnfsopenstackimages to the controller host The following command creates a glance image named fedora-basic with qcow2 format for public use (such as any tenant can use this glance image)

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=mntnfsopenstackimagesfedora20-x86_64-basicqcow2

4 Create a host aggregate and availability zone

First find out the available hypervisors and then use the information for creating aggregateavailability zone

nova hypervisor-listnova aggregate-create ltaggregate-namegt ltzone-namegtnova aggregate-add-host ltaggregate-namegt lthypervisor-namegt

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06 and the aggregate contains one hypervisor named sdnlab-g06

nova aggregate-create aggr-g06 zone-g06nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create flavor Flavor is a virtual hardware configuration for the VMs it defines the number of virtual CPUs size of virtual memory and disk space etc

The following command creates a flavor named onps-flavor with an ID of 1001 1024 Mb virtual memory 4 Gb virtual disk space and 1 virtual CPU

nova flavor-create onps-flavor 1001 1024 4 1

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 10.11.12.1 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 19:41:14 | NULL | NULL | 0 | 1 | 3 | 0000:08:10.0 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | {"phys_function": "0000:08:00.0"} | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set pci_passthrough:alias=niantic:1 hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6.1.1.2) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (e.g., ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other just like on a normal network.
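
The placement can also be checked from the compute host through libvirt. The sketch below assumes the libvirt domain name reported for the new instance is instance-00000001 (a hypothetical name; use virsh list to find the real one) and that virsh is run as root. The domain XML should contain a <numatune> element for the NUMA placement and a <hostdev> element for the passed-through VF.

# Find the libvirt name of the new instance
virsh list --all

# Check the NUMA placement and the PCI hostdev entry in the domain XML
virsh dumpxml instance-00000001 | grep -A4 "<numatune>"
virsh dumpxml instance-00000001 | grep -B2 -A8 "<hostdev"

# Cross-check against the VF PCI addresses on the host
lspci -nn | grep -i "Virtual Function"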

6.2 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

6.2.1 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell.

6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all

7 Update the local.conf file for ODL to be functional with Devstack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.13.8:8080/controller/nb/v2/neutron
username=admin
password=admin

On the Compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 10.11.13.8):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.13.8
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note The Karaf shell might take a long time to start or to install features. The installation might fail if the host does not have network access; you'll need to set up the appropriate proxy settings (see Appendix B).
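
Before and after running stack.sh, it can be useful to confirm that the OpenDaylight northbound API is reachable and that Open vSwitch on each node has connected to the controller. This is a sketch, assuming the default admin/admin credentials and the controller management IP address 10.11.13.8 used in the example above.

# From any node: the ODL OVSDB neutron northbound API should answer on port 8080
curl -u admin:admin http://10.11.13.8:8080/controller/nb/v2/neutron/networks

# On the controller and compute nodes: OVS should list a manager (OVSDB, port 6640)
# and a controller (OpenFlow, port 6633) entry marked as connected
ovs-vsctl show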

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one running OpenDaylight, the OpenStack controller plus compute, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

4 Enter the subnet information then click Next

5 Add additional information then click Next

6 Click Create

7 Click Launch Instances to create a VM instance.

8 Click Details to enter the VM details

9 Click Networking then enter the network information

The VM is now created
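
Once a VM has been created on each host, connectivity over the tunnel can also be checked from the command line instead of the GUI. A minimal sketch, run as root on either host (bridge names follow the sample local.conf above):

# The integration bridge and its ports are created by OpenDaylight
ovs-vsctl show
ovs-vsctl list-ports br-int

# ODL programs br-int with OpenFlow 1.3, so the protocol version must be given
ovs-ofctl -O OpenFlow13 dump-flows br-int

# Finally, from the console of one VM (via Horizon), ping the IP address of the other VM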

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched.

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched.

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you don't want to access through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.

In addition, you will also need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
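
A quick way to confirm that the proxy settings are picked up by the shell, git, and yum is sketched below; the test URL is only an example, replace it with any site reachable through your proxy.

# Shell environment
env | grep -i proxy

# curl/wget should go through the proxy defined in http_proxy/https_proxy
curl -sI https://www.kernel.org | head -1

# git should report the proxy values set above
git config --get http.proxy
git config --get https.proxy

# yum should be able to refresh its metadata through the proxy defined in /etc/yum.conf
yum makecache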

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settings.xml file in the ~/.m2 directory does not exist, create it.

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
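
To confirm that Maven picks up this proxy definition, the effective settings can be dumped (a quick check, assuming Maven is already installed for the OpenDaylight build):

mvn help:effective-settings | grep -A8 "<proxies>"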

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4  http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6  http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet  http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller  http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO  https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness  http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux  http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599  http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012.  http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?  http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing  http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture  http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture  http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK  http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch  https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

Page 38: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

38

6112 Custom Settings

The following examples describe how to create a custom VM image, flavor, and aggregate/availability zone using OpenStack commands. The examples assume the IP address of the controller is 1011121.

1 Create a credential file admin-cred for the admin user. The file contains the following lines:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source admin-cred into the shell environment before creating the glance image, aggregate/availability zone, and flavor:

source admin-cred

3 Create an OpenStack glance image. A VM image file should already be available in a location accessible by OpenStack:

glance image-create --name <image-name-to-create> --is-public=true --container-format=bare --disk-format=<format> --file=<image-file-path-name>

In the following example, the image file fedora20-x86_64-basic.qcow2 is located on an NFS share mounted at /mnt/nfs/openstack/images/ on the controller host. The command creates a glance image named fedora-basic in qcow2 format for public use (that is, any tenant can use this glance image):

glance image-create --name fedora-basic --is-public=true --container-format=bare --disk-format=qcow2 --file=/mnt/nfs/openstack/images/fedora20-x86_64-basic.qcow2
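As a quick check, the new image should now appear in the image list; the grep pattern assumes the example name fedora-basic used above:

glance image-list | grep fedora-basic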

4 Create a host aggregate and availability zone

First find the available hypervisors, then use that information to create the aggregate/availability zone:

nova hypervisor-list
nova aggregate-create <aggregate-name> <zone-name>
nova aggregate-add-host <aggregate-name> <hypervisor-name>

The following example creates an aggregate named aggr-g06 with one availability zone named zone-g06; the aggregate contains one hypervisor named sdnlab-g06:

nova aggregate-create aggr-g06 zone-g06
nova aggregate-add-host aggr-g06 sdnlab-g06

5 Create a flavor. A flavor is a virtual hardware configuration for the VMs; it defines the number of virtual CPUs, the size of virtual memory, the disk space, etc.

The following command creates a flavor named onps-flavor with an ID of 1001, 1024 MB of virtual memory, 4 GB of virtual disk space, and 1 virtual CPU:

nova flavor-create onps-flavor 1001 1024 4 1
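As a quick check, the new flavor should now be listed; the name onps-flavor matches the example above:

nova flavor-list | grep onps-flavor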

6113 Example mdash VM Deployment

The following example describes how to use a custom VM image, flavor, and aggregate to launch a VM for a demo tenant using OpenStack commands. Again, the example assumes the IP address of the controller is 1011121.

1 Create a credential file demo-cred for the demo user. The file contains the following lines:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://1011121:35357/v2.0

2 Source demo-cred into the shell environment before creating the tenant network and instance (VM):

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the ID of the tenant demo:

keystone tenant-list | grep -Fw demo

The following example creates a network named "net-demo" for the tenant with the ID 10618268adb64f17b266fd8fb83c960d:

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet:

neutron subnet-create --tenant-id <demo-tenant-id> --name <subnet_name> <network-name> <net-ip-range>

The following creates a subnet named sub-demo with CIDR address 192.168.2.0/24 for the network net-demo:

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 192.168.2.0/24
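As a quick check, the new network and subnet should now be visible to the demo user:

neutron net-list
neutron subnet-list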

4 Create the instance (VM) for the tenant demo

a Get the name and/or ID of the image, flavor, and availability zone to be used for creating the instance:

glance image-list
nova flavor-list
nova aggregate-list
neutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> <instance-name>

c The new VM should be up and running in a few minutes
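One way to follow the boot progress is to list the tenant's instances; the status should change to ACTIVE once the VM is up:

nova list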

5 Log in to the OpenStack dashboard using the demo user credential and click Instances under Project in the left pane; the new VM should show in the right pane. Click the instance name to open the Instance Details view, then click Console on the top menu to access the VM as shown below.

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server. VM1 belongs to one subnet and VM3 to a different one; VM2 has ports on both subnets.

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

3 The IPS receives the flow, inspects it, and (if not malicious) sends it out through its second vPort.

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow, inspects it, and (unless malicious) sends it out through the second port of its vHost into the vSwitch of compute node 2.

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA placement was introduced as a new feature in the OpenStack Juno release. It enables an OpenStack administrator to pin guests to particular NUMA nodes for optimization. With an SR-IOV enabled network interface card, each SR-IOV port is associated with a virtual function (VF); OpenStack SR-IOV pass-through enables a guest to access a VF directly.

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the preceding features, follow these steps to configure the compute node:

1 The server hardware must support IOMMU (Intel VT-d). To check whether IOMMU is supported, run the following command; the output should show IOMMU entries:

dmesg | grep -e IOMMU

Note: IOMMU can be enabled/disabled through a BIOS setting under Advanced and then Processor.

2 Enable the kernel IOMMU in grub. For Fedora 20, run the commands:

sed -i 's/rhgb quiet/rhgb quiet intel_iommu=on/g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
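Note that a reboot is needed before the new kernel command line takes effect. After rebooting, one way to confirm the option is active is:

cat /proc/cmdline | grep intel_iommu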

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install libvirt v1.2.8 or newer. The following example uses v1.2.9:

systemctl stop libvirtd

yum remove libvirt
yum remove libvirtd

wget http://libvirt.org/sources/libvirt-1.2.9.tar.gz
tar zxvf libvirt-1.2.9.tar.gz

cd libvirt-1.2.9
./autogen.sh --system --with-dbus
make
make install

systemctl start libvirtd

Make sure libvirtd is running v1.2.9:

libvirtd --version

5 Install libvirt-python. The example below uses v1.2.9 to match the libvirt version:

yum remove libvirt-python

wget https://pypi.python.org/packages/source/l/libvirt-python/libvirt-python-1.2.9.tar.gz
tar zxvf libvirt-python-1.2.9.tar.gz

cd libvirt-python-1.2.9
python setup.py install

6 Modify /etc/libvirt/qemu.conf by adding

/dev/vfio/vfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    "/dev/vfio/vfio"
]

7 Enable SR-IOV virtual functions for an XL710 interface. The following example enables 2 VFs for the interface p1p1:

echo 2 > /sys/class/net/p1p1/device/sriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions
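Another way to confirm the VFs, assuming the example interface name p1p1 from above, is to list them with ip link; each VF appears as a separate vf entry under the physical port:

ip link show p1p1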

6122 Devstack Configurations

In the following text, the example uses a controller with IP address 1011121 and a compute node with IP address 1011124. The PCI device vendor ID (8086) and the product IDs of the 82599 (10fb for the physical function and 10ed for the VF) can be obtained from the output of:

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf. Note that the same localconf file of Section 5213 is used here, but add the following:

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch

[[post-config|$NOVA_CONF]]
[DEFAULT]
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
pci_alias={"name":"niantic","product_id":"10ed","vendor_id":"8086"}

[[post-config|$Q_PLUGIN_CONF_FILE]]
[ml2_sriov]
supported_pci_vendor_devs = 8086:10fb, 8086:10ed

2 Run stack.sh

On Compute Node

1 Edit /opt/stack/nova/requirements.txt and add "libvirt-python>=1.2.8":

echo "libvirt-python>=1.2.8" >> /opt/stack/nova/requirements.txt

2 Edit the compute localconf for OVS with DPDK-netdev. Note that the same localconf file of Section 5311 is used here.

3 Add the following:

[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist={"address":"0000:08:00.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.0","vendor_id":"8086","physical_network":"physnet1"}

pci_passthrough_whitelist={"address":"0000:08:10.2","vendor_id":"8086","physical_network":"physnet1"}

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192
OVS_DATAPATH_TYPE=netdev
OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stack.sh for both the controller and compute nodes to complete the DevStack installation.

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes, verify that the PCI pass-through device(s) are in the OpenStack database:

mysql -uroot -ppassword -h 1011121 nova -e 'select * from pci_devices'

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next, create a flavor, for example:

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavor
id = 1001
virtual memory = 1024 MB
virtual disk size = 4 GB
number of virtual CPUs = 1

4 Modify the flavor for NUMA placement with PCI pass-through:

nova flavor-key 1001 set "pci_passthrough:alias"="niantic:1" hw:numa_nodes=1 hw:numa_cpus.0=0 hw:numa_mem.0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image <image-id> --flavor <flavor-id> --availability-zone <zone-name> --nic net-id=<network-id> numa-vm1

where numa-vm1 is the name of the VM instance to be booted.

Note: The preceding example assumes an image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and that private is the default network for the demo project.

7 Access the VM from OpenStack Horizon. The new VM shows two virtual network interfaces. The interface with an SR-IOV VF should show a name of ensX, where X is a number (eg ens5). If a DHCP server is available for the physical interface (p1p1 in this example), the VF gets an IP address automatically; otherwise, users can assign an IP address to the interface the same way as for a standard network interface.

To verify network connectivity through a VF, users can set up two compute hosts and create a VM on each node. After obtaining IP addresses, the VMs should be able to communicate with each other just like on a normal network, for example with a simple ping as shown below.
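From the console of one VM (the address below is only a placeholder for the IP address the other VM obtained):

ping -c 4 <ip-of-the-other-VM>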

62 Using OpenDaylight

This section describes how to download, install, and set up an OpenDaylight controller.

621 Preparing the OpenDaylight Controller

1 Download the pre-built OpenDaylight Helium-SR1.1 distribution:

wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.2.1-Helium-SR1.1/distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

2 Set the Java home. JAVA_HOME must be set to run Karaf.

a Install java

yum install java -y

b Find the java binary location from the logical link /etc/alternatives/java:

ls -l /etc/alternatives/java

c Set the Java home in the shell environment (assuming the java binary is at /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre):

echo "export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71-2.5.3.3.fc20.x86_64/jre" >> /root/.bashrc

source /root/.bashrc
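As a quick check, the variable should now resolve and the JVM should report its version:

echo $JAVA_HOME
$JAVA_HOME/bin/java -version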

3 If your infrastructure requires a proxy server to access the Internet, follow the Maven-specific instructions in Appendix B.

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-0.2.1-Helium-SR1.1.tar.gz

- cd distribution-karaf-0.2.1-Helium-SR1.1

5 Use the bin/karaf executable to start the Karaf shell, as shown below.
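A minimal example, assuming the working directory is the distribution directory extracted in step 4:

./bin/karaf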

6 Install the required ODL features from the Karaf shell

- feature:list

- feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core odl-dlux-all
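To confirm which features were installed, the installed-feature list can be filtered (the -i flag limits output to installed features):

- feature:list -i | grep ovsdb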

7 Update the localconf file for ODL to be functional with DevStack. Add the following lines.

On the controller:

Comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138 and port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node:

Comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start, or the feature install might fail, if the host does not have network access. You'll need to set up the appropriate proxy settings.

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is the compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample localconf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample localconf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61, run stack.sh on the controller and compute nodes.

1 Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2 Verify that the node shows up in the following GUI

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

4 Enter the subnet information then click Next

5 Add additional information then click Next

6 Click Create

7 Click Launch Instances to create a VM instance.

8 Click Details to enter the VM details

9 Click Networking then enter the network information

The VM is now created

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status; adding a string filters the list of bundles.

1 List the OVSDB bundles

osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   ACTIVE    org.opendaylight.ovsdb.neutron_0.5.0

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgi> stop 262
osgi> ss ovs
Framework is launched

id    State     Bundle
106   ACTIVE    org.opendaylight.ovsdb.northbound_0.5.0
112   ACTIVE    org.opendaylight.ovsdb_0.5.0
262   RESOLVED  org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, that is, the hosts and/or subnets that you do not want to access through the proxy server:

export no_proxy=1921681221,<intranet subnets>

If you want to make the change for all users instead of just your own account, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.

In addition, you need to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
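As a quick check, the configured values can be read back:

git config --system --get http.proxy
git config --system --get https.proxy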

For OpenDaylight deployments, the proxy needs to be defined as part of the Maven XML settings file.

If the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing
COTS Commercial Off-The-Shelf
DPI Deep Packet Inspection
FCS Frame Check Sequence
GRE Generic Routing Encapsulation
GRO Generic Receive Offload
IOMMU Input/Output Memory Management Unit
Kpps Kilo packets per second
KVM Kernel-based Virtual Machine
LRO Large Receive Offload
MSI Message Signaled Interrupt
MPLS Multi-protocol Label Switching
Mpps Million packets per second
NIC Network Interface Card
pps Packets per second
QAT Quick Assist Technology
QinQ VLAN stacking (802.1ad)
RA Reference Architecture
RSC Receive Side Coalescing
RSS Receive Side Scaling
SP Service Provider
SR-IOV Single Root I/O Virtualization
TCO Total Cost of Ownership
TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, vol 23, no 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 39: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

39

Intelreg ONP Server Reference ArchitectureSolutions Guide

6113 Example mdash VM Deployment

The following example describes how to use a customer VM image flavor and aggregate to launch a VM for a demo Tenant using OpenStack commands Again the example assumes the IP address of the controller is 1011121

1 Create a credential file demo-cred for a demo user The file contains the following lines

export OS_USERNAME=demoexport OS_TENANT_NAME=demoexport OS_PASSWORD=passwordexport OS_AUTH_URL=http101112135357v20

2 Source demo-cred to the shell environment for actions of creating tenant network and instance (VM)

source demo-cred

3 Create a network for the tenant demo by performing the following steps

a Get the tenant demo

keystone tenant-list | grep -Fw demo

The following example creates a network with a name of ldquonet-demordquo for the tenant with the ID 10618268adb64f17b266fd8fb83c960d

neutron net-create --tenant-id 10618268adb64f17b266fd8fb83c960d net-demo

b Create the subnet

neutron subnet-create --tenant-id ltdemo-tenant-idgt --name ltsubnet_namegt ltnetwork-namegt ltnet-ip-rangegt

The following creates a subnet with a name of sub-demo and CIDR address 1921682024for network net-demo

neutron subnet-create --tenant-id 10618268adb64f17b266fd8fb83c960d --name sub-demo net-demo 1921682024

4 Create the instance (VM) for the tenant demo

a Get the name andor ID of the image flavor and availability zone to be used for creating instance

glance image-listnova flavor-listnova aggregate-listneutron net-list

b Launch an instance (VM) using information obtained from the previous step

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt --nic net-id=ltnetwork-idgt ltinstance-namegt

c The new VM should be up and running in a few minutes

5 Log in to the OpenStack dashboard using the demo user credential click Instances under Project in the left pane the new VM should show in the right pane Click the instance name to open the Instance Details view then click Console on the top menu to access the VM as show below

Intelreg ONP Server Reference ArchitectureSolutions Guide

40

6114 Local VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 IP addresses of the VMs get configured using the DHCP server VM1 belongs to one subnet and VM3 to a different one VM2 has ports on both subnets

3 Flows get programmed to the vSwitch by the OpenDaylight controller (Section 62)

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch

2 The vSwitch forwards the flow to the first vPort of VM2 (active IPS)

Figure 6-1 Local VNF

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name: Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 41: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

41

Intelreg ONP Server Reference ArchitectureSolutions Guide

3 The IPS receives the flow inspects it and (if not malicious) sends it out through its second vPort

4 The vSwitch forwards it to VM3

6115 Remote VNF

Configuration

1 OpenStack brings up the VMs and connects them to the vSwitch

2 The IP addresses of the VMs get configured using the DHCP server

Data Path (Numbers Matching Red Circles)

1 VM1 sends a flow to VM3 through the vSwitch inside compute node 1

2 The vSwitch forwards the flow out of the first port to the first port of compute node 2

3 The vSwitch of compute node 2 forwards the flow to the first port of the vHost where the traffic gets consumed by VM1

4 The IPS receives the flow inspects it and (unless malicious) sends it out through its second port of the vHost into the vSwitch of compute node 2

5 The vSwitch forwards the flow out of the second XL710 port of compute node 2 into the second port of the 82599 in compute node 1

6 The vSwitch of compute node 1 forwards the flow into the port of the vHost of VM3 where the flow is terminated

Figure 6-2 Remote VNF

Intelreg ONP Server Reference ArchitectureSolutions Guide

42

612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack

NUMA was implemented as a new feature in the OpenStack Juno release NUMA placement enables an OpenStack administrator to ping particular NUMA nodes for guest systems optimization With a SR-IOV enabled network interface card each SR-IOV port is associated with a virtual function (VF) OpenStack SR-IOV pass-through enables a guest access to a VF directly

6121 Preparing Compute Node for SR-IOV Pass-through

To enable the previous features follow these steps to configure the compute node

1 The server hardware support IOMMU or Intel VT-d To check whether IOMMU is supported run the following command and the output should show IOMMU entries

dmesg | grep -e IOMMU

Note IOMMU cab be enableddisabled through a BIOS setting under Advanced and then Processor

2 Enable the kernel IOMMU in grub For Fedora 20 run the commands

sed -i srhgb quietrhgb quite intel_iommu=ong etcdefaultgrubgrub2-mkconfig -o bootgrub2grubcfg

3 Install the necessary packages

yum install -y yajl-devel device-mapper-devel libpciaccess-devel libnl-devel dbus-devel numactl-devel python-devel

4 Install Libvirt to v128 or newer The following example uses v129

systemctl stop libvirtd

yum remove libvirtyum remove libvirtd

wget httplibvirtorgsourceslibvirt-129targztar zxvf libvirt-129targz

cd libvirt-129autogensh --system --with-dbusmakemake install

systemctl start libvirtd

Make sure libvirtd is running v129

libvirtd --version

5 Install libvirt-python Example below uses v129 to match libvirt version

yum remove libvirt-python

wget httpspypipythonorgpackagessourcellibvirt-pythonlibvirt-python-129targz tar zxvf libvirt-python-129targz

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
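To check that Maven picks up the proxy definition, the effective settings can be printed with the standard maven-help-plugin (the plugin itself is downloaded through the proxy on first use):

$ mvn help:effective-settings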

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4  http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6  http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet  http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller  http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO  https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness  http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux  http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599  http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012  http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?  http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing  http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture  http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture  http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK  http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch  https://01.org/packet-processing

LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT, EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary based on current expectations and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

Page 43: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

43

Intelreg ONP Server Reference ArchitectureSolutions Guide

cd libvirt-python-129 python setuppy install

6 Modify etclibvirtqemuconf by adding

devvfiovfio

to

cgroup_device_acl list

An example follows

cgroup_device_acl = [devnull devfull devzerodevrandom devurandomdevptmx devkvm devkqemudevrtc devhpet devnettundevvfiovfio]

7 Enable the SR-IOV virtual function for an XL710 interface The following example enables 2 VFs for the interface p1p1

echo 2 gt sysclassnetp1p1devicesriov_numvfs

To check that virtual functions are enabled

lspci -nn | grep XL710

The screen output should display the physical function and two virtual functions

6122 Devstack Configurations

In the following text the example uses a controller with IP address 1011121 and compute 1011124 PCI device vendor ID (8086) and product ID of the 82599 can be obtained from output (10fb for physical function and 10ed for VF)

lspci -nn | grep XL710

On Controller Node

1 Edit the controller localconf Note that the same localconf file of Section 5213 is used here but add the following

Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchsriovnicswitch

[[post-config|$NOVA_CONF]][DEFAULT]scheduler_default_filters=RamFilterComputeFilterAvailabilityZoneFilterComputeCapabilitiesFilterImagePropertiesFilterPciPassthroughFilterNUMATopologyFilter pci_alias=namenianticproduct_id10edvendor_id8086

[[post-config|$Q_PLUGIN_CONF_FILE]][ml2_sriov]supported_pci_vendor_devs = 808610fb 808610ed

2 Run stacksh

Intelreg ONP Server Reference ArchitectureSolutions Guide

44

On Compute Node

1 Edit optstacknovarequirementstxt add ldquolibvirt-pythongt=128rdquo

echo libvirt-pythongt=128 gtgt optstacknovarequirementstxt

2 Edit compute localconf for OVS with DPDK-netdev Note that the same localconf file of Section 5311 is used here

3 Add the following

[[post-config|$NOVA_CONF]][DEFAULT]pci_passthrough_whitelist=address000008000vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008100vendor_id8086physical_networkphysnet1

pci_passthrough_whitelist=address000008102vendor_id8086physical_networkphysnet1

4 Remove (or comment out) the following

OVS_NUM_HUGEPAGES=8192OVS_DATAPATH_TYPE=netdev OVS_GIT_TAG=b35839f3855e3b812709c6ad1c9278f498aa9935

Note Currently SR-IOV pass-through is only supported with a standard OVS

5 Run stacksh for both the controller and compute nodes to complete the Devstack installation

6123 Creating the VM with Numa Placement and SR-IOV

1 After stacking is successful on both the controller and compute nodes verify the PCI pass-through device(s) are in the OpenStack database

mysql -uroot -ppassword -h 1011121 nova -e select from pci_devices

2 The output should show entry(ies) of PCI device(s) similar to the following

| 2014-11-18 194114 | NULL | NULL | 0 | 1 | 3 | 000008100 | 10ed | 8086 | type-VF | pci_0000_08_10_0 | label_8086_10ed | available | phys_function 000008000 | NULL | NULL | 0 |

3 Next to create a flavor for example

nova flavor-create numa-flavor 1001 1024 4 1

where

flavor name = numa-flavorid = 1001virtual memory = 1024 Mbvirtual disk size = 4Gbnumber of virtual CPU = 1

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source
Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt
Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt
Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html
4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1
Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html
Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html
Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf
OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf
Wu, W., DeMar, P. & Crawford, M. (2012), A Transport-Friendly NIC for Multicore/Multiprocessor Systems, IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf
Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf
IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing
High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf
Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf
DPDK: http://www.intel.com/go/dpdk
Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates and figures specified are preliminary based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.


  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 45: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

45

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Modify flavor for numa placement with PCI pass-through

nova flavor-key 1001 set pci_passthroughalias=niantic1 hwnuma_nodes=1 hwnuma_cpus0=0 hwnuma_mem0=1024

5 Show detailed information of the flavor

nova flavor-show 1001

6 Create a VM numa-vm1 with the flavor numa-flavor under the default project demo

nova boot --image ltimage-idgt --flavor ltflavor-idgt --availability-zone ltzone-namegt--nic ltnetwork-idgt numa-vm1

where numa-vm1 is the name of instance of the VM to be booted

Note The preceding example assumes a image fedora-basic and an availability zone zone-04 are already in place (see Section 6112) and the private is the default network for demo project

7 Access the VM from the OpenStack Horizon The new VM shows two virtual network interfaces The interface with a SR-IOV VF should show a name of ensX where X is a numerical number (eg ens5) If a DHCP server is available for the physical interface (p1p1 in this example) the VF gets an IP address automatically otherwise users can assign an IP address to the interface the same way as a standard network interface

To verify network connectivity through a VF users can set up two compute hosts and create a VM on each node After obtaining IP addresses the VMs should communicate with each other just like a normal network

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 46: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

46

62 Using OpenDaylightThis section describes how to download install and set up an OpenDaylight controller

621 Preparing the OpenDaylight Controller1 Download the pre-built OpenDaylight Helium-SR1 distribution

wget

httpsnexusopendaylightorgcontentrepositoriespublicorgopendaylightintegrationdistribution-karaf021-Helium-SR11distribution-karaf-021-Helium-SR11targz

2 Set Java home JAVA_HOME must be set to run Karaf

a Install java

yum install java -y

b Find the java binary location from the logical link etcalternativesjava

ls -l etcalternativesjava

c Set the java home in shell environment (assuming java binary is at usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre)

echo export JAVA_HOME=usrlibjvmjava-170-openjdk-17071-2533fc20x86_64jre gtgt rootbashrc

source rootbashrc

3 If your infrastructure requires a proxy server to access the Internet follow the maven‐specific instructions in Appendix B

4 Extract the archive and cd into it

- tar zxvf distribution-karaf-021-Helium-SR11targz

- cd distribution-karaf-021-Helium-SR11targz

5 Use the binkaraf executable to start the Karaf shell

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7. Update the local.conf file so that ODL works with DevStack by adding the following lines.

On the controller, comment out these lines:

enable_service q-agt
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138 and that port p786p1 is used for the data plane network):

enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
ODL_PROVIDER_MAPPINGS=physnet1:p786p1
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Add these lines at the bottom of the file:

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://1011138:8080/controller/nb/v2/neutron
username=admin
password=admin

On the compute node, comment out these lines:

enable_service q-agt
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of the file, anywhere before [[post-config|$NOVA_CONF]] (this assumes that the controller management IP address is 1011138):

enable_service neutron
enable_service odl-compute
Q_HOST=$HOST_IP
ODL_MGR_IP=1011138
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Note: Karaf might take a long time to start or to install a feature. The installation might fail if the host does not have network access; in that case, set up the appropriate proxy settings.


Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller plus compute services, and OVS; the second host is a compute node. This section describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

# Controller node
[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10111211
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10111211

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

# disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10111223:8080/controller/nb/v2/neutron
username=admin
password=admin

Here is a sample local.conf for the compute node:

# Compute node
OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10111212
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10111211
SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10111211
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10111224

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.2.1, run a stack on the controller and compute nodes.
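A typical invocation on each node looks like the following (a minimal sketch; it assumes DevStack has already been cloned under the stack user's home directory and that local.conf has been prepared as described in the previous sections):

cd ~/devstack
./stack.sh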

1. Log in to http://<control node IP address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the GUI.


3. Create a new VXLAN network:

a. Click Network.

b. Click Create Network.

c. Enter the network name and then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click Create.


7. Click Launch Instances to create a VM instance.


8. Click Details to enter the VM details.


9. Click Networking, then enter the network information.

The VM is now created.
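The same result can be checked from the command line on the controller (an optional verification; it assumes the standard OpenStack CLI clients from the Juno release are installed and that admin credentials have been sourced, for example via the openrc file in the DevStack directory):

nova list
neutron net-list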

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding one or more strings filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
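To re-enable the bundle later, the corresponding start command can be issued from the same OSGi console (shown here as an illustration; the bundle ID can differ between installations, so check the ss ovs output first):

osgi> start 262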


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

Add the following:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, that is, the hosts and/or subnets that you do not want to reach through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change for all users, instead of just your own account, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to use your proxy server.

In addition, you need to edit /etc/yum.conf as root, because yum does not read the proxy settings from your shell:

vi /etc/yum.conf

Add the following line:

proxy=http://<your http proxy server>:<your http proxy port>

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
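To verify that git picked up the settings, the configured values can be read back (an optional check; use --system instead of --global to inspect the system-wide values):

git config --global --get http.proxy
git config --global --get https.proxy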


For OpenDaylight deployments, the proxy needs to be defined as part of the Maven XML settings file.

If the ~/.m2 directory holding settings.xml does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
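To confirm that Maven reads the proxy definition, you can dump the effective settings and look for the proxies section in the output (an optional check; it assumes Maven is installed and on the PATH):

mvn help:effective-settings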


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (IEEE 802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products, computer systems, dates and figures specified are preliminary, based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 47: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

47

Intelreg ONP Server Reference ArchitectureSolutions Guide

6 Install the required ODL features from the Karaf shell

- featurelist

- featureinstall odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal- northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux- core odl-dlux-all

7 Update localconf file for ODL to be functional with Devstack Add the following lines

On the controllerComment out these lines

enable_service q-agtQ_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138 and port p786p1 are used for the data plane network)

enable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 ODL_PROVIDER_MAPPINGS=physnet1p786p1Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 48: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

48

Add these line at the bottom of the file

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http10111388080controllernbv2neutronusername=adminpassword=admin

On Compute nodeComment out these lines

enable_service q-agtQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

Add these lines in the middle of file anywhere before [[post-config|$NOVA_CONF]] (This assumes that the controller management IP address is 1011138) enable_service neutronenable_service odl-computeQ_HOST=$HOST_IPODL_MGR_IP=1011138 Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Note The install for Karaf might take a long time to start or feature The installation might fail if the host does not have network access Yoursquoll need to set up the appropriate proxy settings

49

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix A Additional OpenDaylight Information

This section describes how OpenDaylight can be used in a multi-node setup Two hosts are used one running OpenDaylight OpenStack controller plus compute and OVS The second host is the compute node This section describes how to create a Vxlan tunnel VMs and ping from one VM to another

Following is a sample localconf for the OpenDaylight host

Controller node[[local|localrc]]FORCE=yes

HOST_NAME=$(hostname)HOST_IP=10111211HOST_IP_IFACE= ens2f0

PUBLIC_INTERFACE=p1p2VLAN_INTERFACE=p1p1FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_service n-netdisable_service n-cpu

enable_service q-svcenable_service q-agtenable_service q-dhcpenable_service q-l3enable_service q-metaenable_service neutronenable_service horizon

LOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IPODL_MGR_IP=10111211

Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

Appendix A Additional OpenDaylight Information

This appendix describes how OpenDaylight can be used in a multi-node setup. Two hosts are used: one runs OpenDaylight, the OpenStack controller, a compute service, and OVS; the second host is a dedicated compute node. This appendix also describes how to create a VXLAN tunnel, create VMs, and ping from one VM to another.

Following is a sample local.conf for the OpenDaylight host:

Controller node:

[[local|localrc]]
FORCE=yes

HOST_NAME=$(hostname)
HOST_IP=10.11.12.11
HOST_IP_IFACE=ens2f0

PUBLIC_INTERFACE=p1p2
VLAN_INTERFACE=p1p1
FLAT_INTERFACE=p1p1

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_service n-net
disable_service n-cpu

enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
enable_service horizon

LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

enable_service odl-compute

Q_HOST=$HOST_IP
ODL_MGR_IP=10.11.12.11

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local
Q_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1
ML2_VLAN_RANGES=physnet1:1000:1010
OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]
# Disable nova security groups
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[ml2_odl]
url=http://10.11.12.23:8080/controller/nb/v2/neutron
username=admin
password=admin
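As a point of reference, once a local.conf like the one above is in place, the stack is typically brought up by running stack.sh from the DevStack checkout. The following is a minimal sketch only; the directory and git URL shown here are illustrative assumptions, and the exact DevStack checkout and procedure used for this release are covered in Section 5.3.1.1:

$ git clone https://github.com/openstack-dev/devstack.git ~/devstack
$ cd ~/devstack
$ vi local.conf          # paste the controller settings shown above
$ ./stack.sh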

Here is a sample local.conf for the compute node:

Compute node:

OVS_TYPE=ovs
[[local|localrc]]

FORCE=yes
MULTI_HOST=True

HOST_NAME=$(hostname)
HOST_IP=10.11.12.12
HOST_IP_IFACE=ens2f0
SERVICE_HOST_NAME=10.11.12.11
SERVICE_HOST=10.11.12.11

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=10.11.12.11

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
DATABASE_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=no-token-password
HORIZON_PASSWORD=password
RABBIT_PASSWORD=password

disable_all_services

enable_service n-cpu
enable_service q-agt

DEST=/opt/stack
LOGFILE=$DEST/stack.sh.log
SCREEN_LOGDIR=$DEST/screen
SYSLOG=True
LOGDAYS=1

Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=False
ENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlan
ML2_VLAN_RANGES=physnet1:1000:1010
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-compute
ODL_MGR_IP=10.11.12.11
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,opendaylight

[[post-config|$NOVA_CONF]]
[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.11.12.24
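After stacking completes on the compute node, it can be useful to confirm that Open vSwitch on that host is connected to the OpenDaylight manager and that the Neutron agent has registered with the controller. This is a minimal sketch, assuming the controller/OpenDaylight address 10.11.12.11 used in the samples above; the manager port reported depends on the OpenDaylight version:

$ ovs-vsctl show            # on the compute node; look for a Manager entry with is_connected: true
$ neutron agent-list        # on the controller; the compute node's Open vSwitch agent should be listed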

A.1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 6.1, run stack.sh on the controller and compute nodes.

1. Log in to http://<control node ip address>:8080 to start the Horizon GUI.

2. Verify that the node shows up in the following GUI.


3. Create a new VXLAN network:

a. Click Network.

b. Click Create Network.

c. Enter the Network name, and then click Next.


4. Enter the subnet information, then click Next.


5. Add additional information, then click Next.

6. Click Create.


7. Click Launch Instances to create a VM instance, as shown in the following steps.


8. Click Details to enter the VM details.


9. Click Networking, then enter the network information.

The VM is now created.
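The same network and VM can also be created from the command line instead of the Horizon GUI. The following is an illustrative sketch only; the credentials file, network name, subnet range, and image name are assumptions and should be replaced with values from your own deployment:

$ source ~/devstack/openrc admin admin
$ neutron net-create vxlan-net
$ neutron subnet-create vxlan-net 10.100.0.0/24 --name vxlan-subnet
$ neutron net-list                              # note the ID of vxlan-net
$ nova boot --flavor m1.small --image <image name> --nic net-id=<vxlan-net ID> test-vm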

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.


Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding one or more strings filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     ACTIVE     org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle, and then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State      Bundle
106     ACTIVE     org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE     org.opendaylight.ovsdb_0.5.0
262     RESOLVED   org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
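If the bundle needs to be re-enabled later, it can be started again from the same OSGi console. A short sketch, assuming the same bundle id shown above (the id can differ between installations):

osgi> start 262
osgi> ss ovs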


Appendix B Configuring the Proxy

This appendix describes how to configure the proxy in case the infrastructure requires it.

Generally speaking, the proxy settings are set as environment variables in the user's .bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you don't want to use the proxy server to access:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This will allow most shell commands (like wget or curl) to access your proxy server first.
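A quick way to confirm that the variables are active in the current shell is to reload the profile and test an outbound request; the URL here is just an example:

$ source ~/.bashrc
$ echo $http_proxy
$ curl -I http://www.intel.com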

In addition, you will also be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>
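A simple way to confirm that yum is going through the proxy is to refresh the repository metadata; if the proxy line is wrong, this command fails with connection errors:

yum repolist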

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, as root run the following commands instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
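To confirm the settings were written, git can print the values back; a quick check such as the following works for either the --global or --system scope:

$ git config --global --get http.proxy
$ git config --system --get https.proxy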


For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
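One way to confirm that Maven picks up the proxy definition is to print the effective settings; this is only a quick check and assumes mvn is already installed:

$ mvn help:effective-settings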


Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multi-protocol Label Switching

Mpps Million packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload


Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 X Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-core Processor-based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing


LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary, based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 50: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

50

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlanflatlocalQ_ML2_TENANT_NETWORK_TYPE=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

PHYSICAL_NETWORK=physnet1ML2_VLAN_RANGES=physnet110001010OVS_PHYSICAL_BRIDGE=br-p1p1

MULTI_HOST=True

[[post-config|$NOVA_CONF]]

disable nova security groups[DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivernovncproxy_host=0000novncproxy_port=6080

[[post-config|etcneutronpluginsml2ml2_confini]][ml2_odl]url=http101112238080controllernbv2neutronusername=adminpassword=admin

Here is a sample localconf for compute node

Compute node OVS_TYPE=ovs[[local|localrc]]

FORCE=yesMULTI_HOST=True

HOST_NAME=$(hostname)HOST_IP=10111212HOST_IP_IFACE=ens2f0SERVICE_HOST_NAME=10111211SERVICE_HOST=10111211

MYSQL_HOST=$SERVICE_HOSTRABBIT_HOST=$SERVICE_HOST

GLANCE_HOST=$SERVICE_HOSTGLANCE_HOSTPORT=$SERVICE_HOST9292KEYSTONE_AUTH_HOST=$SERVICE_HOSTKEYSTONE_SERVICE_HOST=10111211

ADMIN_PASSWORD=passwordMYSQL_PASSWORD=passwordDATABASE_PASSWORD=passwordSERVICE_PASSWORD=passwordSERVICE_TOKEN=no-token-passwordHORIZON_PASSWORD=passwordRABBIT_PASSWORD=password

disable_all_services

enable_service n-cpuenable_service q-agt

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 51: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

51

Intelreg ONP Server Reference ArchitectureSolutions Guide

DEST=optstackLOGFILE=$DESTstackshlogSCREEN_LOGDIR=$DESTscreenSYSLOG=TrueLOGDAYS=1

Q_AGENT=openvswitchQ_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchQ_ML2_PLUGIN_TYPE_DRIVERS=vlan

ENABLE_TENANT_TUNNELS=FalseENABLE_TENANT_VLANS=True

Q_ML2_TENANT_NETWORK_TYPE=vlanML2_VLAN_RANGES=physnet110001010PHYSICAL_NETWORK=physnet1OVS_PHYSICAL_BRIDGE=br-enp8s0f0

enable_service odl-computeODL_MGR_IP=10111211Q_PLUGIN=ml2Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitchopendaylight

[[post-config|$NOVA_CONF]][DEFAULT]firewall_driver=novavirtfirewallNoopFirewallDrivervnc_enabled=Truevncserver_listen=0000vncserver_proxyclient_address=10111224

A1 Create VMs Using the DevStack Horizon GUI

After starting the OpenDaylight controller as described in Section 61 run a stack on the controller and compute nodes

1 Log in to httpltcontrol node ip addressgt8080 to start the horizon GUI

2 Verify that the node shows up in the following GUI

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 52: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

52

3 Create a new Vxlan network

a Click Network

b Click Create Network

c Enter the Network name and then click Next

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no_proxy settings, i.e. the hosts and/or subnets that you do not want to reach through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.
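
After sourcing the updated ~/.bashrc, a quick sanity check confirms that the variables are exported and that a download actually goes through the proxy (the URL below is only an example):

$ source ~/.bashrc
$ echo $http_proxy $https_proxy $no_proxy
$ curl -sI https://01.org | head -n 1   # an HTTP status line indicates the proxy is reachable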

In addition, you will be required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>
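
A simple way to confirm that yum is using the proxy is to refresh the repository listing as root; if the proxy line is wrong, the command fails with a connection error:

yum -q repolist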

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands instead as root:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
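
To verify that git recorded the settings, read them back from the configuration:

$ git config --global --get http.proxy
$ git config --global --get https.proxy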

For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory for settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost,127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
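
To confirm that Maven picks up the proxy, print the effective settings with the Maven help plugin (this assumes mvn is already installed and on the PATH):

$ mvn help:effective-settings | grep -A 3 '<proxy>'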

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off-The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU Input/Output Memory Management Unit

Kpps Kilo packets per second

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaled Interrupt

MPLS Multiprotocol Label Switching

Mpps Millions of packets per second

NIC Network Interface Card

pps Packets per second

QAT Quick Assist Technology

QinQ VLAN stacking (802.1ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single Root I/O Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Appendix D References

Document Name Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 53: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

53

Intelreg ONP Server Reference ArchitectureSolutions Guide

4 Enter the subnet information then click Next

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 54: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

54

5 Add additional information then click Next

6 Click Create

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 55: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

55

Intelreg ONP Server Reference ArchitectureSolutions Guide

7 Click Launch Instances to create a VM instance by

Intelreg ONP Server Reference ArchitectureSolutions Guide

56

8 Click Details to enter the VM details

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

8. Click Details to enter the VM details.

9. Click Networking, then enter the network information.

The VM is now created.

Note: Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file, it is possible to disable the bundle from the OSGi console. However, there does not appear to be a way to make this persistent, so it must be done each time the controller restarts.

Once the controller is up and running, connect to the OSGi console. The ss command displays all of the bundles that are installed and their current status. Adding one or more strings filters the list of bundles.

1. List the OVSDB bundles:

osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     ACTIVE      org.opendaylight.ovsdb.neutron_0.5.0

Note: There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case).

2. Disable the OVSDB neutron bundle, then list the OVSDB bundles again:

osgi> stop 262
osgi> ss ovs
Framework is launched

id      State       Bundle
106     ACTIVE      org.opendaylight.ovsdb.northbound_0.5.0
112     ACTIVE      org.opendaylight.ovsdb_0.5.0
262     RESOLVED    org.opendaylight.ovsdb.neutron_0.5.0

Now the OVSDB neutron bundle is in the RESOLVED state, which means that it is not active.
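
If you later need to re-enable the bundle, the standard Equinox console start command can be used (an optional step that is not part of the original procedure):

osgi> start 262

Running ss ovs again should then show the OVSDB neutron bundle back in the ACTIVE state.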

Appendix B Configuring the Proxy

This appendix describes how to configure the proxy settings in case the infrastructure requires them.

Generally speaking, the proxy settings are set as environment variables in the user's ~/.bashrc:

$ vi ~/.bashrc

And add:

export http_proxy=<your http proxy server>:<your http proxy port>
export https_proxy=<your https proxy server>:<your https proxy port>

Also add the no-proxy settings, i.e., the hosts and/or subnets that you do not want to reach through the proxy server:

export no_proxy=192.168.122.1,<intranet subnets>

If you want to make the change across all users, instead of just your individual one, make the above additions in /etc/profile as root:

vi /etc/profile

This allows most shell commands (like wget or curl) to access your proxy server first.
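
For illustration only, a filled-in proxy block might look like the following; the host proxy.example.com and port 911 are hypothetical placeholders, not values taken from this guide:

export http_proxy=proxy.example.com:911
export https_proxy=proxy.example.com:911
export no_proxy=localhost,127.0.0.1,192.168.122.1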

In addition, you are required to edit /etc/yum.conf as root, since yum does not read the proxy settings from your shell:

vi /etc/yum.conf

And add the following line:

proxy=http://<your http proxy server>:<your http proxy port>
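
As a sketch, the proxy line belongs in the [main] section of /etc/yum.conf. The optional proxy_username and proxy_password keys shown below are only needed if your proxy requires authentication; the host, port, and credentials are hypothetical:

[main]
proxy=http://proxy.example.com:911
# Only for authenticating proxies (hypothetical credentials):
proxy_username=yum-user
proxy_password=yum-password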

In order for git to also use your proxy servers, execute the following commands:

$ git config --global http.proxy <your http proxy server>:<your http proxy port>
$ git config --global https.proxy <your https proxy server>:<your https proxy port>

If you want to make the git proxy settings available to all users, run the following commands as root instead:

git config --system http.proxy <your http proxy server>:<your http proxy port>
git config --system https.proxy <your https proxy server>:<your https proxy port>
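
As an optional sanity check that is not part of the original procedure, the configured values can be read back (or removed) with git config:

$ git config --global --get http.proxy
$ git config --global --unset http.proxy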

For OpenDaylight deployments, the proxy needs to be defined as part of the XML settings file of Maven.

If the ~/.m2 directory that holds settings.xml does not exist, create it:

$ mkdir ~/.m2

And edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
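
If you want to confirm that Maven picked up the proxy (an optional check, not part of the original steps), the Maven help plugin can print the merged configuration, including the proxies section:

$ mvn help:effective-settings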

Appendix C Glossary

Acronym    Description
ATR        Application Targeted Routing
COTS       Commercial Off-The-Shelf
DPI        Deep Packet Inspection
FCS        Frame Check Sequence
GRE        Generic Routing Encapsulation
GRO        Generic Receive Offload
IOMMU      Input/Output Memory Management Unit
Kpps       Kilo packets per second
KVM        Kernel-based Virtual Machine
LRO        Large Receive Offload
MSI        Message Signaled Interrupt
MPLS       Multiprotocol Label Switching
Mpps       Million packets per second
NIC        Network Interface Card
pps        Packets per second
QAT        Quick Assist Technology
QinQ       VLAN stacking (802.1ad)
RA         Reference Architecture
RSC        Receive Side Coalescing
RSS        Receive Side Scaling
SP         Service Provider
SR-IOV     Single Root I/O Virtualization
TCO        Total Cost of Ownership
TSO        TCP Segmentation Offload

Appendix D References

Document Name and Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel® DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for Efficient Network Applications with Intel® Multi-Core Processor-Based Systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P., & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA Packet Processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates, and figures specified are preliminary, based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 57: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

57

Intelreg ONP Server Reference ArchitectureSolutions Guide

9 Click Networking then enter the network information

The VM is now created

Note Instead of disabling the OVSDB neutron service by removing the OVSDB neutron bundle file it is possible to disable the bundle from the OSGi console However there does not appear to be a way to make this persistent so it must be done each time the controller restarts

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 58: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

Intelreg ONP Server Reference ArchitectureSolutions Guide

58

Once the controller is up and running connect to the OSGi console The ss command displays all of the bundles that are installed and their current status Adding a string(s) filters the list of bundles

1 List the OVSDB bundles

osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 ACTIVE orgopendaylightovsdbneutron_050

Note There are three OVSDB bundles running (the OVSDB neutron bundle has not been removed in this case)

2 Disable the OVSDB neutron bundle and then list the OVSDB bundles again

osgigt stop 262osgigt ss ovsFramework is launched

id State Bundle106 ACTIVE orgopendaylightovsdbnorthbound_050112 ACTIVE orgopendaylightovsdb_050262 RESOLVED orgopendaylightovsdbneutron_050

Now the OVSDB neutron bundle is in the RESOLVED state which means that it is not active

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
                    • 511 BIOS Settings
                    • 512 Operating System Installation and Configuration
                      • 5121 Getting the Fedora 20 DVD
                      • 5122 Installing Fedora 21
                      • 5123 Installing Fedora 20
                      • 5124 Proxy Configuration
                      • 5125 Installing Additional Packages and Upgrading the System
                      • 5126 Installing the Fedora 21 Kernel
                      • 5127 Installing the Fedora 20 Kernel
                      • 5128 Enabling the Real-Time Kernel Compute Node
                      • 5129 Disabling and Enabling Services
                          • 52 Controller Node Setup
                            • 521 OpenStack (Juno)
                              • 5211 Network Requirements
                              • 5212 Storage Requirements
                              • 5213 OpenStack Installation Procedures
                                  • 53 Compute Node Setup
                                    • 531 Host Configuration
                                      • 5311 Using DevStack to Deploy vSwitch and OpenStack Components
                                          • 54 Virtual Network Functions
                                            • 541 Installing and Configuring vIPS
                                            • 542 Installing and Configuring the vBNG
                                            • 543 Configuring the Network for Sink and Source VMs
                                                • 60 Testing the Setup
                                                  • 61 Preparing with OpenStack
                                                    • 611 Deploying Virtual Machines
                                                      • 6111 Default Settings
                                                      • 6112 Custom Settings
                                                      • 6113 Example mdash VM Deployment
                                                      • 6114 Local VNF
                                                      • 6115 Remote VNF
                                                        • 612 Non-Uniform Memory Access (NUMA) Placement and SR-IOV Pass-through for OpenStack
                                                          • 6121 Preparing Compute Node for SR-IOV Pass-through
                                                          • 6122 Devstack Configurations
                                                          • 6123 Creating the VM with Numa Placement and SR-IOV
                                                              • 62 Using OpenDaylight
                                                                • 621 Preparing the OpenDaylight Controller
                                                                    • Appendix A Additional OpenDaylight Information
                                                                      • A1 Create VMs Using the DevStack Horizon GUI
                                                                        • Appendix B Configuring the Proxy
                                                                        • Appendix C Glossary
                                                                        • Appendix D References
                                                                        • LEGAL
Page 59: Intel Open Source Technology Center - Intel Open Network … · 2019. 6. 27. · Intel® ONP Server Reference Architecture Solutions Guide 2 Revision History Revision Date Comments

59

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix B Configuring the Proxy

This paragraph describes how to configure the proxy in case the infrastructure requires it

Generally speaking the proxy settings are set as environment variables in the userrsquos bashrc

$ vi ~bashrc

And add

export http_proxy=ltyour http proxy servergtltyour http proxy portgtexport https_proxy=ltyour https proxy servergtltyour http proxy portgt

Also add the no proxy settings ie the hosts andor subnets that you donrsquot want to use proxy server to access them

export no_proxy=1921681221ltintranet subnetsgt

If you want to make the change across all users instead of just your individual one make the above additions in etcprofile as root

vi etcprofile

This will allow most shell commands to (like wget or curl) to access your proxy server first

In addition you will be required to edit also your etcyumconf as root since yum does not read the proxy settings from your shell

vi etcyumconf

And add the following line

proxy=httpltyour http proxy servergtltyour http proxy portgt

In order for git to also use your proxy servers execute the following command

$ git config --global httpproxy ltyour http proxy servergtltyour http proxy portgt$ git config --global httpsproxy ltyour https proxy servergtltyour https proxy portgt

If you want to make the git proxy settings available to all users as root run the following commands instead

git config --system httpproxy ltyour http proxy servergtltyour http proxy portgt git config --system httpsproxy ltyour https proxy servergtltyour https proxy portgt

Intelreg ONP Server Reference ArchitectureSolutions Guide

60

For OpenDaylight deployments the proxy needs to be defined as part of the XML settings file of Maven

If the settingsxml to m2 directory does not exist create it

$ mkdir ~m2

And edit the ~m2settingsxml file

$ vi ~m2settingsxml

Add the following

ltsettings xmlns=httpmavenapacheorgSETTINGS100 xmlnsxsi=httpwwww3org2001XMLSchema-instance xsischemaLocation=httpmavenapacheorgSETTINGS100 httpmavenapacheorgxsdsettings-100xsdgtltlocalRepositorygtltinteractiveModegtltusePluginRegistrygtltofflinegtltpluginGroupsgtltserversgtltmirrorsgtltproxiesgt ltproxygt ltidgtintelltidgt ltactivegttrueltactivegt ltprotocolgthttpltprotocolgt lthostgtyour http proxy hostlthostgt ltportgtyour http proxy port noltportgt ltnonProxyHostsgtlocalhost127001ltnonProxyHostsgt ltproxygtltproxiesgtltprofilesgtltactiveProfilesgtltsettingsgt

61

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix C Glossary

Acronym Description

ATR Application Targeted Routing

COTS Commercial Off‐The-Shelf

DPI Deep Packet Inspection

FCS Frame Check Sequence

GRE Generic Routing Encapsulation

GRO Generic Receive Offload

IOMMU InputOutput Memory Management Unit

Kpps Kilo packets per seconds

KVM Kernel-based Virtual Machine

LRO Large Receive Offload

MSI Message Signaling Interrupt

MPLS Multi-protocol Label Switching

Mpps Millions packets per seconds

NIC Network Interface Card

pps Packets per seconds

QAT Quick Assist Technology

QinQ VLAN stacking (8021ad)

RA Reference Architecture

RSC Receive Side Coalescing

RSS Receive Side Scaling

SP Service Provider

SR-IOV Single root IO Virtualization

TCO Total Cost of Ownership

TSO TCP Segmentation Offload

Intelreg ONP Server Reference ArchitectureSolutions Guide

62

NOTE This page intentionally left blank

63

Intelreg ONP Server Reference ArchitectureSolutions Guide

Appendix D References

Document Name Source

Internet Protocol version 4 httpwwwietforgrfcrfc791txt

Internet Protocol version 6 httpwwwfaqsorgrfcrfc2460txt

Intelreg 82599 10 Gigabit Ethernet Controller Datasheet httpwwwintelcomcontentwwwusenethernet-controllers82599-10-gbe-controller-datasheethtml

4 X Intelreg 10 Gigabit Fortville (FVL) XL710 Ethernet Controller

httparkintelcomproducts82945Intel-Ethernet-Controller-XL710-AM1

Intel DDIO httpswww-sslintelcomcontentwwwuseniodirect-data-i-ohtml

Bandwidth Sharing Fairness httpwwwintelcomcontentwwwusennetwork-adapters10-gigabit- network-adapters10-gbe-ethernet-flexible-port-partitioning-briefhtml

Design Considerations for efficient network applications with Intelreg multi-core processor- based systems on Linux

httpdownloadintelcomdesignintarchpapers324176pdf

OpenFlow with Intel 82599 httpftpsunetsepubLinuxdistributionsbifrostseminarsworkshop-2011-03-31Openflow_1103031pdf

Wu W DeMarP amp CrawfordM (2012) A Transport-Friendly NIC for Multicore Multiprocessor Systems

IEEE transactions on parallel and distributed systems vol 23 no 4 April 2012 httplssfnalgovarchive2010pubfermilab-pub-10-327-cdpdf

Why does Flow Director Cause Placket Reordering httparxivorgftparxivpapers110611060443pdf

IA packet processing httpwwwintelcompen_USembeddedhwswtechnologypacket- processing

High Performance Packet Processing on Cloud Platforms using Linux with Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_packet_processingpdf

Packet Processing Performance of Virtualized Platforms with Linux and Intelreg Architecture

httpnetworkbuildersintelcomdocsnetwork_builders_RA_NFVpdf

DPDK httpwwwintelcomgodpdk

Intelreg DPDK Accelerated vSwitch https01orgpacket-processing

Intelreg ONP Server Reference ArchitectureSolutions Guide

64

LEGAL

By using this document in addition to any agreements you have with Intel you accept the terms set forth below

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein You agree to grant Intel a non-exclusive royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS NO LICENSE EXPRESS OR IMPLIED BY ESTOPPEL OR OTHERWISE TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT EXCEPT AS PROVIDED IN INTELS TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE ANDOR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE MERCHANTABILITY OR INFRINGEMENT OF ANY PATENT COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors

Performance tests such as SYSmark and MobileMark are measured using specific computer systems components software operations and functions Any change to any of those factors may cause the results to vary You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases including the performance of that product when combined with other products

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications Current characterized errata are available on request Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order

Intel technologies may require enabled hardware specific software or services activation Check with your system manufacturer or retailer Tests document performance of components on a particular test in specific systems Differences in hardware software or configuration will affect actual performance Consult other sources of information to evaluate performance as you consider your purchase For more complete information about performance and benchmark results visit httpwwwintelcomperformance

All products computer systems dates and figures specified are preliminary based on current expectations and are subject to change without notice Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling and provided to you for informational purposes Any differences in your system hardware software or configuration may affect your actual performance

No computer system can be absolutely secure Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses

Intel does not control or audit third-party web sites referenced in this document You should visit the referenced web site and confirm whether referenced data are accurate

Intel Corporation may have patents or pending patent applications trademarks copyrights or other intellectual property rights that relate to the presented subject matter The furnishing of documents and other materials and information does not provide any license express or implied by estoppel or otherwise to any such patents trademarks copyrights or other intellectual property rights

copy 2015 Intel Corporation All rights reserved Intel the Intel logo Core Xeon and others are trademarks of Intel Corporation in the US andor other countries Other names and brands may be claimed as the property of others

  • Intelreg Open Network Platform Server Reference Architecture (Release 13)
    • Revision History
    • Contents
    • 10 Audience and Purpose
    • 20 Summary
      • 21 Network Services Examples
        • 211 Suricata (Next Generation IDSIPS engine)
        • 212 vBNG (Broadband Network Gateway)
            • 30 Hardware Components
            • 40 Software Versions
              • 41 Obtaining Software Ingredients
                • 50 Installation and Configuration Guide
                  • 51 Instructions Common to Compute and Controller Nodes
For OpenDaylight deployments, the proxy must also be defined in the Maven settings file (settings.xml).

If the ~/.m2 directory does not exist, create it:

$ mkdir ~/.m2

Then edit the ~/.m2/settings.xml file:

$ vi ~/.m2/settings.xml

Add the following, substituting your proxy host and port:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository/>
  <interactiveMode/>
  <usePluginRegistry/>
  <offline/>
  <pluginGroups/>
  <servers/>
  <mirrors/>
  <proxies>
    <proxy>
      <id>intel</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>your http proxy host</host>
      <port>your http proxy port no</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
  <profiles/>
  <activeProfiles/>
</settings>
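
To confirm that Maven picks up the proxy definition, the effective settings can be printed. This is an optional check, not part of the original procedure, and it assumes Maven is already installed on the node used for the OpenDaylight build:

# Optional verification step (assumes Maven 3 is installed); the output
# should show the proxy entry defined above under the <proxies> element.
$ mvn help:effective-settings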

Appendix C Glossary

Acronym   Description

ATR       Application Targeted Routing
COTS      Commercial Off-The-Shelf
DPI       Deep Packet Inspection
FCS       Frame Check Sequence
GRE       Generic Routing Encapsulation
GRO       Generic Receive Offload
IOMMU     Input/Output Memory Management Unit
Kpps      Kilo (thousand) packets per second
KVM       Kernel-based Virtual Machine
LRO       Large Receive Offload
MSI       Message Signaled Interrupt
MPLS      Multiprotocol Label Switching
Mpps      Million packets per second
NIC       Network Interface Card
pps       Packets per second
QAT       Quick Assist Technology
QinQ      VLAN stacking (802.1ad)
RA        Reference Architecture
RSC       Receive Side Coalescing
RSS       Receive Side Scaling
SP        Service Provider
SR-IOV    Single Root I/O Virtualization
TCO       Total Cost of Ownership
TSO       TCP Segmentation Offload

Appendix D References

Document Name: Source

Internet Protocol version 4: http://www.ietf.org/rfc/rfc791.txt

Internet Protocol version 6: http://www.faqs.org/rfc/rfc2460.txt

Intel® 82599 10 Gigabit Ethernet Controller Datasheet: http://www.intel.com/content/www/us/en/ethernet-controllers/82599-10-gbe-controller-datasheet.html

4 x Intel® 10 Gigabit Fortville (FVL) XL710 Ethernet Controller: http://ark.intel.com/products/82945/Intel-Ethernet-Controller-XL710-AM1

Intel® DDIO: https://www-ssl.intel.com/content/www/us/en/io/direct-data-i-o.html

Bandwidth Sharing Fairness: http://www.intel.com/content/www/us/en/network-adapters/10-gigabit-network-adapters/10-gbe-ethernet-flexible-port-partitioning-brief.html

Design Considerations for efficient network applications with Intel® multi-core processor-based systems on Linux: http://download.intel.com/design/intarch/papers/324176.pdf

OpenFlow with Intel 82599: http://ftp.sunet.se/pub/Linux/distributions/bifrost/seminars/workshop-2011-03-31/Openflow_1103031.pdf

Wu, W., DeMar, P. & Crawford, M. (2012). A Transport-Friendly NIC for Multicore/Multiprocessor Systems. IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, April 2012: http://lss.fnal.gov/archive/2010/pub/fermilab-pub-10-327-cd.pdf

Why Does Flow Director Cause Packet Reordering?: http://arxiv.org/ftp/arxiv/papers/1106/1106.0443.pdf

IA packet processing: http://www.intel.com/p/en_US/embedded/hwsw/technology/packet-processing

High Performance Packet Processing on Cloud Platforms using Linux with Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_packet_processing.pdf

Packet Processing Performance of Virtualized Platforms with Linux and Intel® Architecture: http://networkbuilders.intel.com/docs/network_builders_RA_NFV.pdf

DPDK: http://www.intel.com/go/dpdk

Intel® DPDK Accelerated vSwitch: https://01.org/packet-processing

LEGAL

By using this document, in addition to any agreements you have with Intel, you accept the terms set forth below.

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

The products described in this document may contain design defects or errors known as errata, which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.

All products, computer systems, dates and figures specified are preliminary, based on current expectations, and are subject to change without notice. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

No computer system can be absolutely secure. Intel does not assume any liability for lost or stolen data or systems or any damages resulting from such losses.

Intel does not control or audit third-party web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Intel Corporation may have patents or pending patent applications, trademarks, copyrights, or other intellectual property rights that relate to the presented subject matter. The furnishing of documents and other materials and information does not provide any license, express or implied, by estoppel or otherwise, to any such patents, trademarks, copyrights, or other intellectual property rights.

© 2015 Intel Corporation. All rights reserved. Intel, the Intel logo, Core, Xeon, and others are trademarks of Intel Corporation in the U.S. and/or other countries. Other names and brands may be claimed as the property of others.
