
Understanding, Verifying, and Troubleshooting ACI Configuration Policies

Daniel Pita, ACI Solutions TAC

BRKACI-2101

Agenda

• Introduction

• Quick Review of the Object Model

• Flow of Configuration

• Verification at the Different Stages of Configuration

• Case Studies and Troubleshooting Methodology

• Live Troubleshooting Activity

• Final Q & A

• Summary / Closing Remarks

Like all new tools, ACI has emerged from a need: a need to simplify data centers, abstract the network, and focus on the applications that run in a data center.

For new tools to be used properly, they must be understood. Once understood, the user is empowered and the possibilities are endless…

ACI

• Application Centric Infrastructure

• Deploy a physical and logical network based on the needs of my application

• Virtualizing the network infrastructure and hardware

• Stateless Switches

• How is this accomplished?

• Policies

• The Object Model

• Overlays

Acronyms

• ALE: Application Leaf Engine

• Use vsh_lc to interrogate ALE ASIC

• BD: Bridge Domain

• EPG: Endpoint Group

• AP: Application Profile

• VMM: Virtual Machine Management

• PI VLAN: Platform Independent VLAN

• DN: Distinguished Name

Endpoint Group

• Most basic and fundamental entity in ACI

• All endpoints will be classified into an EPG

• Policy is applied BETWEEN EPGs

Objects = Configuration

conf t

interface e1/25

switchport mode trunk

switchport trunk allowed vlan 3,4

no shut

Endpoint Verification

Tenants > ACME-CL > Application Profiles > ACME-AP > Application EPGs > EPG1

Endpoint Verification - CLI

show vlan extended

• First (top) section shows the PI VLAN ID and what Tenant:AP:EPG it maps to

• Also shows what interface the VLAN is configured on

• Second (bottom) section shows the relationship/translation from the PI VLAN to the system VXLAN or access encapsulation (on the wire) VLAN

show system internal eltmc info vlan brief

• This command clearly shows the relationship between the BD_VLAN and the FD_VLAN and their respective attributes such as the PI VLAN, BCM HW VLAN, access encap (on the wire), the VXLAN or VNID
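As a sketch, a first VLAN-verification pass on a leaf could combine the two commands (the EPG name EPG1 is just an example from this deck's tenant):

  show vlan extended | grep EPG1
  show system internal eltmc info vlan brief

The first line finds the PI VLAN and interface for the EPG; the second maps that PI VLAN to its HW VLAN, access encap, and VNID.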

Path of the Packet

• EPG Classification

• VLAN Normalization

Scenario 1: Same EPG, Same Leaf

• Same EPG, Same Leaf is L2 switched in the front panel (BCM) ASIC

• Same HW VLAN, no policy enforcement

• Regular L2 switch behavior

[Figure: EP1 (IP 4.100, MAC 5B89, VLAN 356) and EP2 (IP 4.101, MAC 5B90, VLAN 356) on the same leaf; traffic is switched in the BCM ASIC and never reaches the ALE]

Scenario 2: Same EPG, Different Leaf

• Same EPG, Different leaf needs to go to the ALE for transmission to destination leaf

• Same BD VXLAN

• Different HW VLAN

• Different PI VLAN

• Why ALE?

• ALE is the ASIC that

understands Policy!

[Figure: EP1 (IP 4.100, MAC 5B89, VLAN 356) and EP2 (IP 4.101, MAC 5B90, VLAN 356) on different leafs, tunneled through each leaf's ALE. iVXLAN packet layout: L2o | L3o | iVXLAN | L2i | L3i | Payload. VLAN normalization shown: access encap 3449 ↔ HW VLAN 33 ↔ PI VLAN 31 ↔ BD VXLAN 1612790 on one leaf, and BD VXLAN 1612790 ↔ PI VLAN 29 ↔ HW VLAN 31 ↔ access encap 356 on the other]

What’s Next?

• Communication between EPGs requires policy

• Policy is enforced at the Private Network/VRF layer

• Policy is specified through contracts!

Contracts

• One EPG is providing; the other is consuming

• Think of a client/server relationship: one EPG is a server providing a service, and the client EPG is consuming that service

• Bi-directional communication is allowed by default

• Once again, do not confuse bi-directional communication with the provider/consumer roles

• Pro-Tip: Only the client/consumer is allowed to initiate communications!

Client/Server in a TCP flow

• Source/Client/Consumer establishes a connection to destination port 80 from source port 60444

• SYN is sent

• Destination/Server/Provider receives the SYN and sends a SYNACK from source port 80 to destination port 60444

• Source/Client/Consumer sends an ACK from source port 60444 to destination port 80

• Source sends HTTP GET

In ACI however…

[Figure: TCP exchange between client and server: SYN → SYNACK → ACK → DATA → ACK]

ACI Provider/Consumer

• Gotcha!

• Exactly the same, assuming a contract is in place between the web-client EPG and web-server EPG!

• The difference is that ACI is granular enough to enforce the directionality of the flow!

[Figure: Web-Client EPG consumes and Web-Server EPG provides an HTTP Contract, which contains an HTTP Subject with an HTTP Filter (source port X, destination port 80); the permitted flows are sport=X/dport=80 toward the provider and sport=80/dport=X back]

Verify in GUI: Tenants > ACME-CL > Application Profiles > ACME-AP > Application EPGs

On APIC:

Tenants > ACME-CL > Networking > Private Networks > Operational tab > Associated EPGs

Private Network/Context Segment ID:

Tenants > ACME-CL > Networking > Private Networks > ACME-PN > Policy tab

Show zoning-rules

• Confirms contracts are created on a switch

• Shows source and destination EPG based on their PCTAG value

• From the CLI: show zoning-rule [scope-id]

• The scope ID is the context number

• Found under Tenants > Networking > Private Networks > Policy tab as "Segment ID"

show zoning-rule

• Displays the rule ID, the EPG PCTAGs as source and destination, and the action

show zoning-filter [filter-id]

• In-depth information on the filters associated with a rule

show system internal policy-mgr stats | grep [scope]

• Shows model information for a specific rule and its hit counter statistics
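Putting the three commands together, a contract check on a leaf might look like this (the scope ID 2981888 and filter ID 17 are hypothetical; read yours from the Segment ID):

  show zoning-rule 2981888
  show zoning-filter 17
  show system internal policy-mgr stats | grep 2981888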

Scenario 3: Different EPG, Same Leaf

• Different EPG, Same Leaf is sent to the ALE for policy and for forwarding to a different VXLAN

[Figure: EP1 (IP 4.100, MAC 5B89, VLAN 356) and EP2 (IP 13.100, MAC DCF3, VLAN 390) on the same leaf; traffic is punted from the BCM ASIC up to the ALE]

Scenario 4: Different EPG, Different Leaf

• Different EPG, Different leaf needs to go to the ALE for transmission to destination leaf

• Different VXLAN

• Policy enforcement:

• Generally happens on ingress leaf when Destination EP is known

[Figure: the two endpoints (IP 4.230, MAC 9055, VLAN 3449 and IP 13.100, MAC DCF3, VLAN 390) sit on different leafs; traffic passes through each leaf's ALE and is tunneled between the leafs]

Object Model Review

Types of Objects

• Logical, resolved, and concrete

• Logical = configured in the GUI by the user

• Resolved = created by the APIC as a unit/object to communicate and pass information to the switches

• Concrete = objects used by the switches to program hardware

Logical → Resolved → Concrete → Hardware

Flow

• Process flow

• Sequential

• Use to your advantage

[Figure: Logical → Resolved → Concrete → Hardware, aligned with the components that handle them: NGINX/API → APIC PM → PE/NXOS → hardware]

Flow

[Figure: configuration flow NGINX → Policy Manager → Policy Element (NXOS) → Hardware, carrying Logical MOs, then Resolved, then Concrete objects. Example classes shown: fvTenant, fvAp, fvAEPg, fvCtx, fvBD, fvCtxDef, fvEpP, fvLocale, fvStPathAtt, fvDyPathAtt, l3Ctx, vlanCktEp, actrlArule]

Verification

• Are my interfaces UP?

• Are my VLANs provisioned on the switch? Which Platform Independent (PI) VLAN maps to which Encapsulation VLAN?

• What endpoints are learned on this leaf? What EPG do they belong to?

• How do I start troubleshooting with only an IP address?

Verification

• Start at the Concrete/Hardware level and confirm configuration

• GUI: under Fabric > Inventory

• CLI: using show commands or moquery

• Pro-Tip: CLI show command syntax help is available, albeit different from NXOS

• Use <esc><esc>, which is equivalent to ?

• "show" must always be written out completely to be parsed correctly

• <tab> and <tab><tab> work as expected

• If there is a problem, check the logical model:

• Make sure configuration policies exist

• No faults are present

CLI Verification Commands

• Are my VLANs provisioned on the switch? Which Platform Independent (PI) VLAN maps to which encapsulation VLAN?

• show vlan extended

• show system internal eltmc info vlan brief

• What endpoints are learned on this leaf? What EPG do they belong to?

• show endpoint detail [vrf <name> | ip | mac | int | vlan | detail]

• If you know the IP or MAC for a particular endpoint:

• show system internal epmc endpoint [ip|mac] [x.x.x.x|0.0.0]

• moquery -c fvCEp | grep -A 10 -B 8 "[IP]"

• Run this command on the APIC
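For example, a minimal lookup starting from nothing but an IP address (10.0.4.100 is a hypothetical value):

  show endpoint ip 10.0.4.100                        (on the leaf)
  show system internal epmc endpoint ip 10.0.4.100   (on the leaf)
  moquery -c fvCEp | grep -B 8 -A 10 "10.0.4.100"    (on the APIC)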

GUI Verification

• Fabric > Inventory

• Holds all the "show commands"

• In reality, Fabric > Inventory is reading the objects (mo and summary files) on the switches and populating an HTML5 page

• Pro-Tip: the CLI holds all the same information found in the GUI

• /mit/ on the APIC or the switches holds the actual model and objects

• /aci/ on the APIC or switches follows the same structure as the GUI for easier navigation and naming!

GUI: Fabric > Inventory

vPC View From Under Fabric>Inventory

Visore and moquery

• Visore and moquery serve the same purpose, just a different front-end

• Visore is via HTTP/HTTPS through the browser

• https://<apic-address>/visore.html

• https://<switch-address>/visore.html

• Moquery is a CLI command that searches the model for a specific object

• Used on the APIC or switches

• Takes flags and arguments

Pro-Tip: objects are case sensitive!!
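Two representative invocations, assuming the common -c (class) and -d (DN) flags; the class and DN here are examples from this deck's tenant:

  moquery -c fvAEPg
  moquery -d uni/tn-ACME-CL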

Visore

• Pro-Tip: Only the APIC will have the ? link next to the object name

moquery

moquery -c fvCEp | grep -A 8 -B 10 [ip]

Fabric Access Policies

Fabric > Access Policies

• Govern how a physical switch or switch port will be configured

• Controls layer 1 and layer 2 properties such as:

• LACP

• CDP/LLDP

• Port Channels/vPC

• FEX association

• FEX port configuration

• Relationships and associations here impact deployment later on!

Verify in model

• On the switch CLI:

• /mit/uni/epp/fv-[uni--tn-<name>--ap-<name>--epg-<name>]/node-<#>

• Confirm the dynamic path attachment was created

• List the directory (ls)

• Presence of a "dyatt" directory or "stpath" directory

• dyatt = dynamic path attachment; relates to VMM integration

• stpath = static path defined under an EPG, used for bare-metal endpoint connections

• Object is fvLocale

• /mit/sys/phys-[eth1--<#>]

• Relation to domain with “dom” directory

• dbgX directory summary files include statistics and counters

• Object is l1PhysIf

• /mit/sys/phys-[eth1--<#>]/phys

• Cat summary in this directory shows VLAN info and other L1/L2 information

• Object is ethpmPhysIf
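A concrete walk of those directories on a leaf might look like this (eth1/13 is just an example interface):

  cd /mit/sys/phys-[eth1--13]
  ls             (look for the "dom" directory and dbgX statistics directories)
  cat summary    (l1PhysIf attributes)
  cd phys
  cat summary    (ethpmPhysIf: VLAN and other L1/L2 info)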

On Switch

Verifying in Traditional CLI

• show interface e1/13 [switchport]

• show interface e1/13 trunk

• Standard commands

Port Channel with x Members

• Port Channel aggregation groups are controlled by unique interface policy groups

• All port blocks associated to a PC interface policy group will be bundled together into a port channel

Verify in model and hardware

• On switch /mit/sys/aggr-[poX]/

• On APIC /mit/topology/pod-1/node-101/sys/aggr-[poX]/

• Object is pcAggrIf

Traditional CLI

• show port-channel summary

• show lacp neighbor

cat summary of aggr-[poX]
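A minimal model-side check of one bundle (po1 is a hypothetical port channel):

  cd /mit/sys/aggr-[po1]
  ls             (look for the domain association)
  cat summary    (pcAggrIf attributes)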

vPC with x members

• Two methods:

• Wizard for side-A and wizard for side-B

• Wizard for side-A and manual configuration for side-B, reusing the switch selector

• Create a new interface selector and port block, plus a new vPC interface policy group, and associate them to the switch selector

• Manual configuration is one switch profile and one switch selector with two switches in the block (or any other combination! Just remember the model)

• Two interface profiles tied to two different vPC interface policy groups to create the 2/4 port channels

Verify in model and hardware

• On Switches:

• /mit/sys/aggr-[po-<#>]

• cat summary will show the port channel information

• The directory should have a domain association inside

• Object is pcAggrIf

• /mit/sys/vpc/inst/dom-<#>

• cat summary will show vPC domain information. Equivalent to show vpc

• /mit/sys/vpc/inst/dom-<#>/if-#

• vPC interface object. cat summary will show the VLANs being used

• Objects are vpcIf, fabricLagId, fabricProtPol
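On each vPC peer, the same walk applies (domain 1 and if-1 are hypothetical IDs):

  cd /mit/sys/vpc/inst/dom-1
  cat summary    (vPC domain state, equivalent to "show vpc")
  cd if-1
  cat summary    (per-vPC-interface state, including VLANs in use)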

Verify in model and hardware

• On APIC:

• /mit/topology/pod-1/protpaths-101-103/pathep-[<interface policy group name>]

• Protpath is used to reference the vPC when using a Static Path

• Class is fabricPathEp

• /mit/uni/fabric/protpol/expgep-vpc-[node-pairs]/

• cat summary here to see the virtual anycast IP of the vPC pair

• ls here to see all vPC interface policy groups associated to this vPC domain

• /mit/uni/fabric/protpol/expgep-vpc-[node-pairs]/lagid-default-[IPG-name]/

• cat summary here to see the ID of this vPC interface

Bundle Relationship to IPG

IPG: Creates Bundles

Case Study: EP learning on a vPC

show system internal epmc endpoint ip <x.x.x.x>

Front panel ASIC learning: l2 show

bcm-shell: trunk show

• Pro-tip: when looking at BCM output, xe ports are front panel ports and are always offset by 1. This is because BCM starts counting at 0, whereas the front panel and GUI start at 1. In this case xe19 is referencing port 20

What We Confirmed

sys/phys-[eth1/13]

topology/pod-1/protpaths-101-103/pathep-[ACME-pod3-ucsb-A-int-pol-gro]

topology/pod-1/protpaths-101-103/pathep-[ACME-pod3-ucsb-B-int-pol-gro]

Contracts

Contract Model

[Figure: Contract model. A Contract contains Subjects, and Subjects reference Filters; EPGs, External L2 EPGs, and External L3 EPGs provide and consume the contract]

Verification

• Contracts go directly past 'GO'

• After the logical object is created from the APIC API (the GUI in this case), a concrete object is created and the rules are programmed into hardware on the leafs

• The flow is NGINX -> APIC PM -> Leaf PE -> hardware

• Object is vzBrCP

• Consumer EPG object is vzConsDef

• Provider EPG object is vzProvDef

• Found in /mit/uni/tn-<tenant-name>

• Switch object is actrlRule

Contract Logical Object

EPGs

• Confirm the EPG and context are deployed on a switch

• fvEpP is the concrete object on the switch that relates to the logical fvAEPg (EPG)

• The APIC validates whether an EPG is deployable onto switches

• BD associated, context configured on that BD

• Otherwise, faults on the EPG/BD

• The leaf validates after the APIC and before deployment onto hardware

• Path endpoint validation (port, PC, vPC)

• VLAN encapsulation validation

• Each EPG is assigned a PCTAG or source-class ID

• Can be seen in the GUI under the context > Operational tab, or queried in the CLI:

• moquery -c fvAEPg
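For instance, to list each EPG with its PCTAG from the APIC (pcTag is the attribute name on the fvAEPg class):

  moquery -c fvAEPg | grep -E "name|pcTag"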

On APIC

On Switch

Show zoning-rules

• Confirms contracts are created on a switch

• Shows source and destination EPG based on their PCTAG value

• From the CLI: show zoning-rule [scope-id]

• The scope ID is the context number

• Found under Tenants > Networking > Private Networks > Policy tab as "Segment ID"

• In the APIC CLI: moquery -c fvCtx

• In the model (on the leaf):

• /mit/sys/actrl/

• This directory contains the structure of contracts, with filter and scope directories

• cd into a scope-[#]

• /mit/sys/actrl/scope-[#]/

• This directory will contain all rules associated to this context scope ID
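A quick walk of that structure (the scope ID 2981888 is hypothetical):

  cd /mit/sys/actrl
  ls                   (filter and scope directories)
  cd scope-2981888
  ls                   (one entry per rule in this context)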

Case Study: Any to Any Contracts

vzAny

• Found under the Private Network

• References a collection of EPGs

• Applies a contract at the context level and affects all EPGs with a BD tied to the context

• Referenced in the CLI as pcTag 0

• Use case:

• Used to conserve TCAM space and as a place to easily apply large numbers of contracts

• Example: I want everything in my tenant to be able to ping each other. vzAny will provide and consume the icmp-contract

• Example: I need every EPG to have access to one webserver in web-EPG. Have web-EPG provide an http-contract and vzAny consume the http-contract
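To confirm the object itself from the APIC, vzAny has its own class in the model:

  moquery -c vzAny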

vzAny in the GUI

Case Study: Troubleshooting Tool -

Contracts

Troubleshooting Tool Contracts Tab

What We Confirmed

• Contract defined for HTTP between EPG1 and EPG2 (sport: X, dport: 80)

• ICMP any-to-any applied at vzAny

• Applies to all EPGs associated to the VRF

VMM Integration

[Figure: VMM model. The VMM Domain ties together the VMM Controller, VLAN Pool, and AAEP; EPGs associated to the domain become Port Groups / VM Networks]

VMM Integration

• Allows ACI and the APICs insight into VMMs and allows dynamic configuration of virtual machine networks

• "Easy-button" for provisioning networks to virtual machines

• The VMM Domain policy creates a DVS with the name of the VMM Domain policy

• Objects are as follows:

• VMM Domain = vmmDomP

• Controller = vmmCtrlrP

• EPG = infraRtDomAtt (with a target class of fvAEPg)

• AAEP = infraRtDomP (with a target class of infraAttEntityP)

• This is the AAEP under Fabric > Access Policies that is associated to an interface policy group, which is then associated to the interfaces where the hypervisors are connected

• VLAN Pool = infraRsVlanNs (with a target class of fvnsVlanInstP)

• Port Group = vmmEpPD. Important information is available with this object.

In Reality…

[Figure: vmmDomP with related objects vmmCtrlrP, infraRsVlanNs, infraRtDomAtt, infraRtDomP, and vmmEpPD]

Verify from GUI

• Check the policy for faults

• Check if the inventory was populated

• A good indication that at least communication between the APICs and the controller is established and the inventory can be shared

[Figure: VMM object relationships: compCtrlr (controller), compEpPD (port group), compVm (VM), compHv (hypervisor), compHpNic, compVNic, compEpPConn, compRsHv/compRtHv, hvsRtNicAdj, hvsAdj, fabricPathEp, and the EPG-side fvAEPg/fvEpP]

Object Verification

• hvsAdj is critical. It is a hypervisor adjacency established through a discovery protocol such as CDP or LLDP

• Without this object, leaf interfaces will not be programmed dynamically

• hvsAdj is tied to fabricPathEp, which is connected in turn to fvDyPathAtt

• Dynamic path attachment is how VMM deployment works

• hvsAdj is found on the APIC:

• /mit/comp/prov-Vmware/ctrlr-[<vmm-domain-name>]-<controller-name>/hv-host-#/
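For example, on the APIC (the domain name, controller name, and host number below are hypothetical):

  cd /mit/comp/prov-Vmware/ctrlr-[ACME-VMM-1]-vcenter1/hv-host-25
  ls    (an hvsAdj child should appear once CDP/LLDP discovery succeeds)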

hvsAdj Child Objects

Case Study: Adjacency Issues

Adjacency Discovery Issues

• Establishing adjacencies is very important; failures here can hinder deployment

• Problems arise when the NIC does not support LLDP (on by default on ACI leaf ports). If the discovery information is not exchanged, adjacencies will fail and faults will trigger

• UCS-B, by having the FIs in between, adds more steps to adjacency discovery

• The FIs also do not support LLDP down to the IOMs

• CDP must be used from the blade up to the FI

• Resolved with two options:

1. Disable LLDP and enable CDP on the ports where the FIs connect when using a UCS-B

2. Utilize the AAEP vSwitch Override policy

Override Policy Use Case

• Added a new blade and new uplinks from the FIs to the fabric, again via a vPC. Decided this blade will have its own DVS, so created a new VMM domain policy to the same vCenter/datacenter

• The interface policy group used all defaults except LACP

FAULTS!?

[Figure: the same VMM object tree as above: compCtrlr, compEpPD (port group), compVm (VM), compHv (hypervisor), compHpNic, compVNic, compEpPConn, hvsRtNicAdj, hvsAdj, fabricPathEp, fvAEPg/fvEpP]

DVS discovery protocol

• Why is this happening now?

Problem

• Since the interface policy group was left at defaults for LLDP and CDP, this means:

• LLDP is on by default

• CDP is off by default

• The DVS was created using the active discovery protocol on the interfaces as its discovery protocol type

• Resolution:

• Change the interface discovery protocols on the two interface policy groups

• Use the override policy on the AAEP

• We will proceed with this method

Configuring the Override Policy

Fabric > Access Policies > Global Policies > AAEP > right-click (or Actions menu) > Config vSwitch Policies

Override dialog box

DVS is updated

Most Importantly…

• Faults are gone!

Rack Server Adjacency

Review

• After configuration:

• Check for faults in related objects and recently created objects

• Use show commands to confirm deployment/instantiation

• If show commands are not what is expected, use the sequential flow of the model to help narrow down the issue

• Navigate the model on the APIC and on the leafs

• moquery or Visore for objects of importance

Demo

Summary / Closing Remarks

• ACI gives you the rope

• You need to learn how to use it and understand its potential

• Thank you

• BRKACI-1024 Dev-Ops and the Application Centric Infrastructure - Open Standards and Open APIs

• BRKACI-1502 Simplify Operations with ACI

• BRKACI-1789 How to Perform Common Tasks in ACI

• BRKACI-2501 Operationalize ACI

• CCSACI-2552 The Journey to Nexus 9k and ACI: NetApp Global Engineering Cloud

But wait! There’s more!

Participate in the “My Favorite Speaker” Contest

• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)

• Send a tweet and include

• Two hashtags: #CLUS #MyFavoriteSpeaker

• You can submit an entry for more than one of your “favorite” speakers

• Don’t forget to follow @CiscoLive and @CiscoPress

• View the official rules at http://bit.ly/CLUSwin

• Promote Your Favorite Speaker and You Could Be a Winner

Complete Your Online Session Evaluation

Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online

• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.

• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.

Continue Your Education

• Demos in the Cisco campus

• Walk-in Self-Paced Labs

• Table Topics

• Meet the Engineer 1:1 meetings

• Related sessions

Thank you

For Reference

Step 0:

• Are interfaces UP?

• GUI = UP and EPG deployed

• Fabric > Inventory > Pod-1 > switch-x > Physical Interfaces > ex/x

• CLI = UP

• show int ex/x

• CLI = trunking expected VLANs?

• show int ex/x switchport

Step 1:

• Are both endpoints learned where they are expected?

• show endpoint

• If not, are the VLANs programmed on the switch?

• show vlan extended

• Are the expected objects created?

Step 2:

• If both endpoints are learned, can they ping their own gateway? Can they ping the opposite gateway?

• To confirm the ping, try from the leaf:

• tcpdump -i kpm_inb icmp and host x.x.x.x

Step 3:

• Can source EP connect to destination EP?

• Same EPG, same leaf

• Same EPG, different leaf

• Different EPG, same leaf

• Different EPG, different leaf
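Putting Steps 0-3 together, a minimal first pass from the leaf could look like this (the interface and IP values are hypothetical):

  show interface e1/13
  show interface e1/13 switchport
  show vlan extended
  show endpoint ip 10.0.4.100
  tcpdump -i kpm_inb icmp and host 10.0.4.100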

Logs

[Figure: the same configuration flow as above: NGINX → Policy Manager → Policy Element (NXOS) → Hardware (Logical MO → Resolved → Concrete), with example classes fvTenant, fvAp, fvAEPg, fvCtx, fvBD, fvCtxDef, fvEpP, fvLocale, fvStPathAtt, fvDyPathAtt, l3Ctx, vlanCktEp, actrlArule]

/var/log/dme/log/nginx.bin.log

/var/log/dme/log/svc_ifc_policymgr.log

/var/log/dme/log/svc_ifc_policymgr.bin.log
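To follow one of these logs live while reproducing an issue (standard tail, nothing ACI-specific):

  tail -f /var/log/dme/log/svc_ifc_policymgr.bin.log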

Tenant Policies

[Figure: Tenant ACME-CL contains private network ACME-PN and application profile ACME-AP with EPG1 and EPG2, backed by bridge domains BD1 and BD2]

Access Policies

• Switch Selector

• Block: 101-103

• Interface Selector A

• Port 1/20

• Interface Selector B

• Port 1/26

• Switch Selector

• Block: 104

• Interface Selector 1

• Port 1/13

• VLAN Pool 1

• Block: 356-406

• VLAN Pool 2

• Block: 3440-3450

• AAEP

VMM Domain

• Two VMM Domains

• ACME-VMM-1

• 1 UCS C

• ACME-VMM-2

• 1 blade of a B series

Fabric Access Policies

Single Attached Hypervisor Host Configuration

Use cases

• The vSwitch does not support LACP, so use an LACP policy of mode ON, not Active

• A static path under the EPG references the vSwitch vPC/PC

• A "path" prefix represents a port channel when creating a static path

• A "protpath" prefix represents a vPC when creating a static path

• Static paths manually deploy an EPG/VLAN on an interface, PC, or vPC

• Can be set to expect tagged traffic

• Untagged

• Or L2 CoS

Contracts

Contract Subject

• The subject is the only object that is not re-usable in the contract model

• A subject represents an application

• Why?

• Subjects associate to Service Graphs, which are uniquely built between two EPGs

• Instantiating a service graph requires configuration of parameters specific to a particular flow from one EPG to another EPG

• VIPs, BVIs, security groups, ACLs, etc.

• Subjects also control two important options that allow bi-directional communication while preserving the Provider/Consumer model:

• Reverse Port Filter

• Apply in Both Directions

Configuring Contracts

Rule HW Programming

• To get a peek into hardware programming, enter vsh_lc

• show system internal aclqos zoning-rules

• Here we see the TCAM resource usage, such as HW and SW index entries
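As a sketch, from the leaf (rule counts and indexes will differ per fabric):

  vsh_lc
  show system internal aclqos zoning-rules
  show platform internal ns table-health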

TCAM Dump Rule 4154: HTTP EPG1 to EPG2

• show platform internal ns table mth_lux_slvz_DHS_SecurityGroupKeyTable0_memif_data [hw_idx]

• Consumer-to-Provider HW dump

• Pro-tip: Table 0 = ingress, Table 1 = egress

TCAM Dump 4153: HTTP EPG2 to EPG1

• Reply from Provider to Consumer

• Source class is still 0x8002/32770; destination class is still 0x8003/32771

TCAM Dump 4152: ICMP Any to Any

Table health command

• From vsh_lc: show platform internal ns table-health

• The interesting field is SEC GRP

• Shows current TCAM usage

Contract related logs and traces

• Still under vsh_lc

• show system internal aclqos event-history [trace | errors]

• Trace will show all major events and sessions related to rules and ACLQOS

• Errors will show errors encountered by ACLQOS

• In the CLI: show system internal policy-mgr event-history [trace | errors]

Aclqos event-history trace

GUI deny hits

• Change the facility filter "default" to severity "information"

Fabric > Fabric Policies > Monitoring Policies > Common Policy > Syslog Message Policies > Policy for System Syslog Messages

GUI Deny Hits

• Information-severity events can be seen under a leaf node in Fabric > Inventory > Pod-1

• Click on the leaf; in the work pane click the History tab and then the Events sub-tab

Fabric > Inventory > Pod-1 > Leaf Node-X > History > Events

GUI Contract/Filter Hits

• Found under Fabric > Inventory > Pod-1 > Leaf-node > Rules

• Click on the rule, then click on the Stats tab

Fabric > Inventory > Pod-1 > Leaf Node-X > Rules

VMM Integration

Traffic flow!

• After the VMM Domain has been integrated into ACI and communication is proven to be online (the DVS and port groups have been created), the fun can begin

• Add a hypervisor to the DVS and add the physical NICs that are connected to the fabric as uplinks

• Through the inventory population process, the APIC will be notified that a host has been attached to one of its leafs

• CDP or LLDP neighbor adjacencies

• Once a VM is added to a port group, the APIC is notified and the VM is "VMM learned": the APIC knows a VM exists in that port group (EPG)

• When the VM starts sending traffic, the VM is learned in the traditional (and next-generation) sense (show mac address-table and show endpoint)

Blade Server Adjacency

• The hvsAdj object's ifId attribute is supposed to point to an ACI leaf interface object

• With blade servers, ifId is the vEth on the FI

• hvsAdj has children:

• hvsRsLsNode = target DN towards the unmanaged node, which is the FI

• This is what we want to explore

• hvsRtNicAdj = target DN towards hpnic-vmnic2

hvsRsLsNode and fabricLooseNode

• Target DN relation towards a topology/lsnode-[FI-IP]/

• Starting to look more like the fabricPathEp that we would expect

• This object is a fabricLooseNode, as described by the tCl (target class)

• fabricLooseNode has children of interest:

• fabricRtLsNode = references the hvsAdj

• fabricProtLooseLink = describes the location where the FIs are connected, the interface policy group, and the ProtPath (in this case, since there is a vPC involved)

fabricProtLooseLink

protPathDn

• fabricProtLooseLink has the attribute protPathDn, which is actually referencing a fabricPathEp!

SCVMM

• A Cloud is roughly equivalent to a vCenter Datacenter

• Port group in vCenter = Network Site = VM Network

• The APIC and Windows agents automatically provision Network Sites and VM Networks when the Windows VMM Domain is associated to an EPG

Case Studies

High Level Domain & EPG relation

[Figure: the Domain sits between the access policy side (AAEP, interfaces, VLAN Pool) and the EPG side (EPG, Static Path)]

What are Domains and why do I need them?

• Domains tie the Access Policy model to the Tenant/EPG model

• When a domain is associated, VLANs and interfaces are associated to an EPG

• Static Paths and static VLAN pools work together with Domains to properly program interfaces

• It is imperative to have domains associated to EPGs when mixing dynamic VMM domains and any other domains

UCSB and port group hashing

• Known issue between UCS-B FIs and vCenter

• The problem exists in ACI as well

• Solved with the vSwitch override for LACP

• Use MAC Pinning so that port groups in vCenter are created as "route based on originating virtual port"