Cisco ACI 2.1.1h notes


Some interesting notes on the latest ACI 2.1.1h firmware.

When configuring two Layer 3 external networks on the same node, the loopbacks need to be configured separately for both Layer 3 networks.
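In the APIC object model this means each l3extOut gets its own node profile with its own loopback address. A hedged sketch (class names are from the ACI object model; the names and addresses are illustrative):

&lt;l3extOut name="L3Out-A"&gt;
&lt;l3extLNodeP name="nodeP-A"&gt;
&lt;l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.1.1.1"&gt;
&lt;l3extLoopBackIfP addr="10.0.0.1"/&gt;
&lt;/l3extRsNodeL3OutAtt&gt;
&lt;/l3extLNodeP&gt;
&lt;/l3extOut&gt;
&lt;l3extOut name="L3Out-B"&gt;
&lt;l3extLNodeP name="nodeP-B"&gt;
&lt;l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.1.1.1"&gt;
&lt;l3extLoopBackIfP addr="10.0.0.2"/&gt;
&lt;/l3extRsNodeL3OutAtt&gt;
&lt;/l3extLNodeP&gt;
&lt;/l3extOut&gt;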

All endpoint groups (EPGs), including application EPGs and Layer 3 external EPGs, require a domain. Interface policy groups must also be associated with an Attach Entity Profile (AEP), and the AEP must be associated with domains. Based on the association of EPGs to domains and of the interface policy groups to domains, the ports and VLANs that the EPG uses are validated. This applies to all EPGs including bridged Layer 2 outside and routed Layer 3 outside EPGs. For more information, see the Cisco Fundamentals Guide and the KB: Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port article.
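The chain of associations can be sketched in the APIC object model like this (class names are real; the domain, AEP, and policy group names are illustrative):

&lt;fvAEPg name="Web-EPG"&gt;
&lt;fvRsDomAtt tDn="uni/phys-MyPhysDom"/&gt; &lt;!-- EPG to domain --&gt;
&lt;/fvAEPg&gt;
&lt;infraAttEntityP name="MyAEP"&gt;
&lt;infraRsDomP tDn="uni/phys-MyPhysDom"/&gt; &lt;!-- AEP to domain --&gt;
&lt;/infraAttEntityP&gt;
&lt;infraAccPortGrp name="MyPolGrp"&gt;
&lt;infraRsAttEntP tDn="uni/infra/attentp-MyAEP"/&gt; &lt;!-- policy group to AEP --&gt;
&lt;/infraAccPortGrp&gt;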

When creating static paths for application EPGs or Layer 2/Layer 3 outside EPGs, the physical domain is required. Upgrading without the physical domain raises a fault on the EPG stating “invalid path configuration.”

When contracts are not associated with an endpoint group, DSCP marking is not supported for a VRF with a vzAny contract. DSCP is sent to a leaf along with the actrl rule, but a vzAny contract does not have an actrl rule. Therefore, the DSCP value cannot be sent.

The Cisco Discovery Protocol (CDP) is not supported in policies that are used on FEX interfaces.


NPV traffic map

If you use SAN pin groups in UCS Manager, they will translate to the NPV traffic map feature in the CLI. You can see this with the “show npv traffic-map” command on the Fabric Interconnect (connect nxos):

show npv traffic-map

UCS-SB60-A(nxos)# show npv traffic-map

NPV Traffic Map Information:
Server-If External-If(s)

vfc699 san-port-channel 100
vfc700 vfc697
vfc701 vfc697
vfc702 san-port-channel 100

And in the running configuration:

npv traffic-map server-interface vfc699 external-interface san-port-channel 100
npv traffic-map server-interface vfc700 external-interface vfc697
npv traffic-map server-interface vfc701 external-interface vfc697
npv traffic-map server-interface vfc702 external-interface san-port-channel 100

Nexus 1000v on Windows 2012 R2 – quick notes

nsm logical network NEDWORK
nsm network segment pool NEDWORK_SP
  member-of logical network NEDWORK
nsm ip pool template VLAN1
  ip address
nsm network segment NEDWORK_NW
  member-of network segment pool NEDWORK_SP
  switchport access vlan 1
  ip pool import template VLAN1
  publish network segment NEDWORK_NW
port-profile type vethernet NEDWORK_INT
  no shutdown
  state enabled
  publish port-profile
port-profile type ethernet NEDWORK_PPP
  no shutdown
  state enabled
nsm network uplink UPLINK
  allow network segment pool NEDWORK_SP
  import port-profile NEDWORK_PPP
  publish network uplink UPLINK
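To check the result after publishing, these show commands should work on the VSM (verify availability on your N1kv release):

n1000v# show running-config port-profile NEDWORK_INT
n1000v# show port-profile name NEDWORK_INT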

no nsm network uplink UPLINK
no nsm network segment NEDWORK_NW
no nsm network segment pool NEDWORK_SP
no nsm logical network NEDWORK
no nsm ip pool template VLAN1
no port-profile type ethernet NEDWORK_PPP
no port-profile type vethernet NEDWORK_INT

Study notes

N7K-1# debug ip icmp
N7K-1# debug-filter ip icmp packet vrf management

Also, if you are doing a debug, you should redirect it to a log file:

N7K-1# debug logfile icmp
N7K-1# debug ip icmp
N7K-1# show debug logfile icmp
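When you are done, stop the debug and remove the logfile redirection (standard NX-OS commands, but double-check on your platform):

N7K-1# undebug all
N7K-1# no debug logfile icmp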

You will now apply the access list ProtectVM as an outbound rule to the virtual Ethernet interfaces
(vEth) of the existing VMs running Windows 7. Here the concept of port profiles comes in very handy for
simplifying the work. Because the vEth interfaces of the Windows 7 VMs leverage the port profile VM-Client,
adding the access list to this port profile automatically updates all associated vEth interfaces and
assigns the access list to them.
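For reference, the ACL itself might look like the following. The actual ports blocked in the lab are not shown in these notes, so 80 and 3389 here are placeholders:

Nexus1000V(config)# ip access-list ProtectVM
Nexus1000V(config-acl)# deny tcp any any eq 80
Nexus1000V(config-acl)# deny tcp any any eq 3389
Nexus1000V(config-acl)# permit ip any any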
Nexus1000V(config-acl)# port-profile VM-Client
Nexus1000V(config-port-prof)# ip port access-group ProtectVM out
As a result, access to both open ports within your virtual machine has been blocked.

Note: The directions “in” and “out” of an ACL have to be seen from the perspective of the Virtual Ethernet
Module (VEM), not the Virtual Machine. Thus “in” specifies traffic flowing in to the VEM from the VM,
while “out” specifies traffic flowing out from the VEM to the VM.