Cisco Nexus 7000 – Basic Design Case Study and Lessons Learned

As a senior-level engineer with my company, I have the opportunity to do some basic system design. It’s not the kind of experience I would get with a VAR or a larger enterprise, but I count my blessings every chance I get to install and play with new gear.

We deployed a Nexus 7000 in our main datacenter three years ago for 10Gbps connectivity, and we’re now getting around to doing the same thing at our colocated DR site. Thanks to advances in the hardware, though, it doesn’t make sense for us to use identical equipment for the DR location. Here I’ll compare the old to the new and share some of the lessons learned while getting the new one set up.

Our older Nexus 7010 uses Sup1 supervisor engines and M1-series line cards. We started with a single-VDC (virtual device context) model, then later added an L2-only VDC to introduce in-line firewall functionality at scale. Having all M1 line cards made this really easy. We’re still running NX-OS v5.1.5 because we’ve had no particular reason to upgrade. Installation was made easier with help from our Cisco partner.

Now there are M1, M2, F1, F2, F2e, and F3 line card models, each with a different architecture. I’ve been reading entire slide decks from Cisco Live about which features can be implemented with which combinations of line cards. Distilling that plethora of information against our requirements is a formidable challenge. Add in the fact that we MAY WANT to do certain things in the future (OTV, for instance) and it gets even more interesting.

Our new N7K, also a 7010, has dual Sup2 supervisors along with M1 and F2e line cards. The M1 cards (model M148GT-11L) provide 48 copper 1Gbps RJ45 ports, and the F2e cards (model F248XP-25E) handle 1/10Gbps connections using either fiber-optic transceivers or twinax cables. One key thing I’ve learned in my cram course on N7K modules is that we will need NX-OS v6.2 to support the same VDC model we already use in production. When running in this “proxy routing” mode, the F2e ports defer L3 decisions to the M1 cards in the same VDC. There’s also a key takeaway for my environment: we cannot connect other routers to F2e ports when the M1 cards are doing proxy routing.
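To make the module mix concrete, here’s a minimal sketch (entered from the admin/default VDC) of the kind of VDC definition involved. The VDC name and interface ranges are hypothetical, not our actual config, and the M1/F2e pairing requires NX-OS 6.2:

```
! Hedged sketch from the admin (default) VDC -- "PROD" and the
! interface ranges are made up for illustration.
vdc PROD
  ! Permit both M1 and F2e modules in the same VDC (NX-OS 6.2+);
  ! the F2e ports proxy their L3 lookups to the M1 cards.
  limit-resource module-type m1 f2e
  allocate interface Ethernet1/1-48   ! M1 48-port 1G copper
  allocate interface Ethernet3/1-48   ! F2e 1/10G SFP+ ports
```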


All our existing routers in this location are 1Gbps only, so they can connect to the M1 cards, but we’ll have to keep this restriction in mind for future connections. We may need to create an F2e-only VDC down the road if we want to terminate 10Gbps routers. I welcome your comments if you have experience with this.
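If we do go that route, a dedicated F2e-only VDC would look something like this sketch (the VDC name and slot number are hypothetical). With no M1 proxying in play, 10Gbps routers could terminate directly on the F2e ports, within the F2e cards’ own L3 limits:

```
! Hedged sketch -- "EDGE10G" and slot 4 are hypothetical.
vdc EDGE10G
  limit-resource module-type f2e    ! F2e-only: no M1 proxy routing
  allocate interface Ethernet4/1-48
```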

The resources I’ve been using include some very smart folks on Twitter such as Ron Fuller (@ccie5851) and David Jansen (@ccie5952). Ron and David, as well as countless others, referred me to the F2e and M-Series Design Guide for NX-OS 6.2. Honestly, I might not have known about this doc had it not been for Ron’s apparent omnipresence on Twitter. Many also made references to Cisco Live and the great presentations there. And here’s a relevant discussion on Cisco’s Support Forums site:

As always, hit me up on Twitter @swackhap if you have questions or comments, or leave them below this post.


Cisco Live Wednesday Lessons Learned

My first session today was BRKARC-3472, NX-OS Routing Architecture and Best Practices presented by Arkady Shapiro, Technical Marketing Engineer (TME) for NX-OS and Nexus 7000. I thought Arkady was very entertaining and engaging as he delved into the depths of L3 on the N7K. Some of my key takeaways (may or may not be important in your line of work):
  1. Routes can be leaked between VRFs by enabling “feature pbr” and setting up route-maps with “match ip address” statements and linking them with “set vrf” commands. (ref: slide 50)
  2. Routes can be leaked with VRF-lite without an MPLS license by redistributing IGP into BGP and using “route-target export” and “route-target import” commands under the BGP routing configuration of each VRF. (ref: slide 52)
  3. Auto-cost reference bandwidth by default is 100Mbps in IOS but 40Gbps in NX-OS.
  4. BGP best practice is to use “aggregate-address a.b.0.0/16” under the BGP routing configuration. Do NOT use “network a.b.0.0/16”, and do NOT use “ip route a.b.0.0/16 Null0” under the VRF. The reason is that if a “network” statement matches a static route to Null0, MPLS traffic to that route may be dropped. (ref: slide 92)
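A rough NX-OS sketch of takeaways 1 through 4, as I understand them from the session. The VRF names, ASN, OSPF process tag, prefixes, and route-map/ACL names are all hypothetical; verify against the referenced slides and the design guide before using anything like this:

```
! Takeaway 1: PBR-based leaking ("match ip address" + "set vrf")
feature pbr
ip access-list LEAK-TO-BLUE
  permit ip any 10.30.0.0/16
route-map TO-BLUE permit 10
  match ip address LEAK-TO-BLUE
  set vrf BLUE
interface Ethernet1/1
  ip policy route-map TO-BLUE

! Takeaway 2: VRF-lite leaking via BGP route-targets (no MPLS license)
vrf context RED
  address-family ipv4 unicast
    route-target export 65000:10
    route-target import 65000:20
router bgp 65000
  vrf RED
    address-family ipv4 unicast
      redistribute ospf 1 route-map OSPF-TO-BGP   ! IGP into BGP
      ! Takeaway 4: summarize with aggregate-address, not a "network"
      ! statement backed by a static route to Null0
      aggregate-address 10.20.0.0/16 summary-only

! Takeaway 3: align the OSPF cost calculation across IOS and NX-OS
router ospf 1
  auto-cost reference-bandwidth 40 Gbps
```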
For lunch I had the opportunity to spend time with some of SolarWinds’ Head Geeks (@headgeeks) for two lunch-and-learn-style presentations. The first session, called “Don’t Forget The Superglue,” was introduced by Carlos Carvajal (Market Strategy) and presented mainly by Patrick Hubbard (The Head Geek). The reference to “superglue” alluded to the tools SolarWinds offers to help with the day-to-day running of the network and IT in general. Tools mentioned included:
  1. Web Help Desk – automated ticketing, asset management, knowledge base, communication
  2. Network Configuration Manager (NCM) – automatic config backup, realtime change alerts, compliance reporting
  3. Firewall Security Manager (FSM) – Java-based, runs on workstation, automated security and compliance audits, firewall change impact modeling, rule/object cleanup and optimization, can download configs from firewalls directly or from NCM
  4. Network Topology Mapper (NTM) – successor to LanSurveyor – network discovery, mapping, reporting, can export maps to Orion and open them in Orion Atlas
The second session covered some recent updates to Orion Network Performance Monitor (NPM) v10.5. Again introduced by Carlos Carvajal, it was presented by Michal Hrncirik, Product Manager for several of SolarWinds’ applications. A couple of key items that interested me:
  1. Interface discovery can be filtered for import – for instance, you can tell it to only select trunk ports and not access ports on switches, then it will show you a list of all ports and the devices they belong to so you can manually uncheck ones you don’t want to import.
  2. Route monitoring – NPM will poll routes from the routing table. Although Michal said EIGRP isn’t yet supported, I have actually seen EIGRP routes pulled from my IOS and NX-OS routers. The IOS routers showed them labeled as EIGRP (I think) and NX-OS showed them as “Cisco IGRP” in Orion. I’m pretty excited about the possible alerts we can set up with this type of monitoring.
Many thanks to Kellen Christensen (@ChrisTekIT) for taking the time to talk with me about his experience with Palo Alto firewalls.