Some interesting notes on the latest ACI 2.1.1h firmware.
When configuring two Layer 3 external networks (L3Outs) on the same node, the loopbacks must be configured separately for each L3Out.
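As a hedged illustration only, each L3Out's loopback lives in the APIC object model as an l3extLoopBackIfP child of that L3Out's node attachment, so the same node gets one loopback object per L3Out. All names, node IDs, and addresses below are made-up placeholders:

```xml
<!-- Sketch only: two L3Outs on the same node (node-101), each with its own loopback.
     All names and addresses are placeholders. -->
<l3extOut name="l3out-A">
  <l3extLNodeP name="nodep-A">
    <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.1.1.1">
      <l3extLoopBackIfP addr="10.0.0.1"/>
    </l3extRsNodeL3OutAtt>
  </l3extLNodeP>
</l3extOut>
<l3extOut name="l3out-B">
  <l3extLNodeP name="nodep-B">
    <l3extRsNodeL3OutAtt tDn="topology/pod-1/node-101" rtrId="1.1.1.1">
      <l3extLoopBackIfP addr="10.0.0.2"/>
    </l3extRsNodeL3OutAtt>
  </l3extLNodeP>
</l3extOut>
```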
All endpoint groups (EPGs), including application EPGs and Layer 3 external EPGs, require a domain. Interface policy groups must also be associated with an Attach Entity Profile (AEP), and the AEP must be associated with domains. Based on the association of EPGs to domains and of the interface policy groups to domains, the ports and VLANs that the EPG uses are validated. This applies to all EPGs including bridged Layer 2 outside and routed Layer 3 outside EPGs. For more information, see the Cisco Fundamentals Guide and the KB: Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port article.
When creating static paths for application EPGs or Layer 2/Layer 3 outside EPGs, the physical domain is required. Upgrading without the physical domain will raise a fault on the EPG stating “invalid path configuration.”
When contracts are not associated with an endpoint group, DSCP marking is not supported for a VRF with a vzAny contract. DSCP is sent to a leaf along with the actrl rule, but a vzAny contract does not have an actrl rule. Therefore, the DSCP value cannot be sent.
The Cisco Discovery Protocol (CDP) is not supported in policies that are used on FEX interfaces.
Release 1.1(4e) contained a bug that caused issues with uploading/downloading firmware via the GUI.
A workaround is to use wget from the APIC CLI to download the firmware from an HTTP server to the /tmp directory.
After downloading, just use the “firmware repository add” command to add the firmware to the repository.
After doing this, you can use the GUI to upgrade the firmware for the APIC and switches, as you are used to.
apic1# cd /tmp
apic1# ls
bootflash flashenc logrotate.status snmpd2.pid vrf-init.log vrf-set-spineproxy.log
apic1# wget http://10.249.112.134/aci/aci-n9000-dk188.8.131.52k.bin
--2016-01-03 16:50:06--  http://10.249.112.134/aci/aci-n9000-dk184.108.40.206k.bin
Connecting to 10.249.112.134:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 558322599 (532M) [application/octet-stream]
Saving to: `aci-n9000-dk220.127.116.11k.bin'
100%[====================================================================>] 558,322,599  1.81M/s   in 5m 0s
2016-01-03 16:55:06 (1.77 MB/s) - `aci-n9000-dk18.104.22.168k.bin' saved [558322599/558322599]
apic1# firmware repository
apic1# firmware repository add aci-n9000-dk22.214.171.124k.bin
Syncing... might take a bit if the image is large or many pending filesystem buffers
Firmware image aci-n9000-dk126.96.36.199k.bin is added to the repository
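The repository contents can also be checked over the APIC REST API. A minimal sketch, assuming a class query of firmwareFirmware (the class name and the imdata response shape follow standard APIC REST conventions; verify against your APIC version). The helper below only parses the JSON shape such a query returns:

```python
import json

def image_names(payload: dict) -> list:
    """Extract firmware image names from an APIC class-query response.

    The APIC wraps results as {"imdata": [{"<class>": {"attributes": {...}}}, ...]}.
    """
    names = []
    for item in payload.get("imdata", []):
        for obj in item.values():
            name = obj.get("attributes", {}).get("name")
            if name:
                names.append(name)
    return names

# Example response shape (trimmed); a real query would be an authenticated GET to
# https://<apic>/api/class/firmwareFirmware.json
sample = json.loads("""
{"imdata": [
  {"firmwareFirmware": {"attributes": {"name": "aci-n9000-image.bin", "type": "switch"}}}
]}
""")
print(image_names(sample))
```

This is handy for scripting a quick check that the image you just added actually landed in the repository.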
Implementing NSX. The business wants to be Amazon-like: do more with less. Abstraction, pooling, and automation are key, across compute, networking, and storage.
Both network admins and server admins need access to the same environment. How?
RBAC, with integration of AD groups.
Modify the existing roles; today both network admins and server admins are administrators.
1. Restrict per DVS: NSX groups for network admins; VMkernels, system traffic, etc. for server admins.
> Network folder: modify permissions.
2. RBAC with a single DVS (preferred method)
> Just give network admins read-only at the portgroup level (vMotion, mgmt, NFS, etc.).
At the VM level, apply RBAC on the VMs: network admins get access at the folder level (F5, LB, NSX); server admins get no access or read-only.
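The split described above can be captured as a small permission matrix. A toy sketch in Python (the role names, object names, and exact mapping are assumptions based on these notes, not a vSphere API call):

```python
# Toy model of the single-DVS RBAC split sketched above (not a vSphere API example).
# Network admins: full control on network-service folders, read-only on system portgroups.
# Server admins: full control on system portgroups, no access to network-service folders.
PERMISSIONS = {
    ("network-admin", "folder:NSX"): "full",
    ("network-admin", "folder:F5"): "full",
    ("network-admin", "folder:LB"): "full",
    ("network-admin", "portgroup:vmotion"): "read-only",
    ("network-admin", "portgroup:mgmt"): "read-only",
    ("network-admin", "portgroup:nfs"): "read-only",
    ("server-admin", "portgroup:vmotion"): "full",
    ("server-admin", "portgroup:mgmt"): "full",
    ("server-admin", "portgroup:nfs"): "full",
}

def access(role: str, obj: str) -> str:
    """Return the effective access level; unlisted (role, object) pairs default to none."""
    return PERMISSIONS.get((role, obj), "none")

print(access("network-admin", "portgroup:vmotion"))  # read-only
print(access("server-admin", "folder:NSX"))          # none
```

Writing the matrix out like this makes it easy to review with both teams before touching any vCenter permissions.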
A lot of customers want to be Amazon-like; the SDDC is used for this. NSX is the SDN part of the SDDC model.
NSX has momentum: over 150 customers.
How are these customers using NSX today? Three main use cases:
1. Self-Service IT (Portal) – DevOps Cloud and On-boarding M&A
2. Data Center Automation – micro-segmentation of apps – simplifying compute silos
3. DMZ Deployments
NSX is not a product, it is a platform. How?
Operations, security, physical + virtual (L2/L3 gateways), application delivery (LB, WAN optimization).
Service Insertion through Gateway, VTEP.
The VMware NSX Distributed Firewall can be used for micro-segmentation. There are no choke points and there is scale-out performance up to 20 Gbps.
It acts like a firewall on the vNIC, and each vNIC has its own rule set. Performance is close to line rate. Traffic redirection to third-party services is possible.
Of course there is also a REST API.
The DFW is a stateful engine. During a vMotion the state table is migrated and is in place before the VM arrives on the destination host.
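As a rough illustration of the per-vNIC, stateful behaviour described above, here is a toy model in Python (no relation to the real DFW implementation): each vNIC carries its own rule set plus a connection state table, and that state table travels with the VM, so it is already usable after a move.

```python
class VNic:
    """Toy per-vNIC firewall: its own rule set plus a stateful connection table."""

    def __init__(self, rules):
        self.rules = rules          # list of (src, dst, action) tuples
        self.conn_table = set()     # established flows; migrates along with the VM

    def allow(self, src, dst):
        # Stateful shortcut: an established flow is permitted without a rule lookup.
        if (src, dst) in self.conn_table:
            return True
        for rule_src, rule_dst, action in self.rules:
            if rule_src == src and rule_dst == dst:
                if action == "allow":
                    self.conn_table.add((src, dst))
                return action == "allow"
        return False                # implicit deny

# Each vNIC has its own rules; a flow allowed once stays in the state table.
web_nic = VNic(rules=[("app", "web", "allow")])
print(web_nic.allow("app", "web"))           # True: rule hit, flow now in state table
print(web_nic.allow("db", "web"))            # False: no rule, implicit deny
print(("app", "web") in web_nic.conn_table)  # True: this state moves with the VM
```

The point of the sketch is the data layout: rules and state are attached to the vNIC object, not to a central choke point, which is why the model scales out.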
If you use SAN pin groups in UCS Manager, they translate to the NPV traffic-map feature on the CLI. You can see this with the “show npv traffic-map” command on the Fabric Interconnect (connect nxos).
UCS-SB60-A(nxos)# show npv traffic-map
NPV Traffic Map Information:
vfc699 san-port-channel 100
vfc702 san-port-channel 100
npv traffic-map server-interface vfc699 external-interface san-port-channel 100
npv traffic-map server-interface vfc700 external-interface vfc697
npv traffic-map server-interface vfc701 external-interface vfc697
npv traffic-map server-interface vfc702 external-interface san-port-channel 100