There are options to integrate L4 – L7 devices, such as firewalls or load balancers (Cisco ASA, F5, Citrix NetScaler, etc.), into Cisco ACI. The integration can be done in managed mode, using a device package, or in unmanaged mode. Both modes are available when you use Cisco ACI with the VMware vCenter integration.
When you are using Cisco ACI with Microsoft Hyper-V, you cannot integrate any L4 – L7 device yet (as of Q1 2016): the options to integrate these devices are not available when you select an SCVMM domain.
More to come...
Cisco ACI is a great product, which I’ve already implemented at several customers. Over the last year I’ve seen the product grow from something “not production ready” into a stable product that can be used in production environments. But like all new products, there are still some limitations that can be a struggle during implementations. The VMware integration into ACI is complete, while the Hyper-V integration is still pretty new and some features are missing. I’m sure that the Hyper-V integration will be more complete in the next major ACI release, but at this point in time you need to know about the limitations that remain.
Cisco Live Berlin 2016 was held last week, 15 – 19 February 2016. I was one of the 12,000 attendees of the event, and this blog post is a short review of my Cisco Live trip.
The venue was huge, with a lot of big halls and connecting corridors. It’s easy to get lost, even easier than it was in Milan last year. But like every year, there were a lot of signs with directions placed all around the venue, and a lot of Cisco people (this year in orange sweaters) were positioned on almost every corner to point you in the right direction.
At Cisco Live Europe 2016, I heard a few interesting things about Cisco ACI. Below are a few notes about what I heard (non-NDA):
- Stretched fabric design: a 3-site deployment is coming in Q2 2016, with the sites connected in a triangle
- Multi-pod deployment is coming in Q3 2016
- The multi-pod interconnect configuration is not managed by the APIC and has to be configured manually
- Multi-pod uses 40 or 100 Gb/s links
- Multi-pod requires a higher MTU on the service-provider network to accommodate the 50-byte VXLAN header
- OSPF peering with the service provider is required
- If you’re using DWDM or dark fiber WAN connections, the maximum RTT allowed is 10 msec
- QoS at the service provider is needed to prioritize APIC cluster communication
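As a quick sanity check for that MTU requirement, here is a minimal sketch (the helper function and values are my own illustration, not from Cisco documentation) that computes the minimum MTU the inter-pod transport must carry, given the 50-byte VXLAN overhead mentioned in the notes above:

```python
# Illustrative helper: compute the minimum transport MTU needed so that
# VXLAN-encapsulated frames between pods are not fragmented in transit.
# The 50-byte overhead comes from the VXLAN encapsulation noted above.

VXLAN_OVERHEAD = 50  # bytes added per frame by VXLAN encapsulation

def required_transport_mtu(payload_mtu: int, overhead: int = VXLAN_OVERHEAD) -> int:
    """Return the minimum MTU the service-provider links must support."""
    return payload_mtu + overhead

# A standard 1500-byte tenant MTU needs at least 1550 bytes in transit,
# and a 9000-byte jumbo tenant MTU needs at least 9050 bytes:
print(required_transport_mtu(1500))  # 1550
print(required_transport_mtu(9000))  # 9050
```

So if your tenants run jumbo frames, make sure the service provider carries more than the tenant MTU plus 50 bytes end to end.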
The Cisco Champions for 2016 were announced today and I am proud and very honoured to be selected as a Cisco Champion for the 3rd year in a row!
For more information about the Cisco Champion program, click here.
As another bonus this year, my colleague Rob Heygele has been selected as a Cisco Champion for the 2nd year in a row! Congrats to him and of course to all the other fellow Champions of 2016! See you soon!
There are a lot of blog posts around about Cisco ACI technology and design tips and tricks. If you want to know more about ACI, please read the Cisco ACI Fundamentals.
This post describes the first steps to create and install an ACI fabric. Our example design will look like this:
Our network consists of a single datacenter with two spine switches, two leaf switches and two APIC controllers. The spine and leaf switches are connected with 40 Gb/s links, and the APIC controllers are multihomed with 1 Gb/s links.
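Once the fabric is cabled and the APICs are initialized, you can query the fabric through the APIC REST API. Here is a minimal sketch (the hostname and credentials are placeholders, and the helper functions are my own; the `aaaLogin` endpoint and the `fabricNode` class query are standard APIC API objects) that builds the login request and the URL for listing all registered spines, leaves and controllers:

```python
# Minimal sketch: build the standard APIC REST API login request and the
# class query that lists every registered fabric node (spines, leaves, APICs).
# "apic.example.com", "admin" and "password" are placeholders for your setup.
import json
import urllib.request

APIC = "https://apic.example.com"  # placeholder APIC address

def login_payload(user: str, pwd: str) -> dict:
    """Build the body of the standard aaaLogin request."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def login_url(apic: str = APIC) -> str:
    return f"{apic}/api/aaaLogin.json"

def fabric_nodes_url(apic: str = APIC) -> str:
    """Class query returning every node registered in the fabric."""
    return f"{apic}/api/node/class/fabricNode.json"

if __name__ == "__main__":
    # Posting the payload to login_url() returns an authentication token
    # cookie, which is then sent along with GETs like fabric_nodes_url().
    body = json.dumps(login_payload("admin", "password")).encode()
    req = urllib.request.Request(
        login_url(), data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```

Checking the `fabricNode` output after cabling is a quick way to verify that all six devices from the design above have registered with the fabric.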