Cisco Live Thursday Lessons Learned

My first session today was BRKRST-3114, The Art of Network Architecture, presented by Denise Donohue (@denise_donohue), Russ White, and Scott Morris (@ScottMorrisCCIE). They talked about how architecture is “the intersection of business and technology” and went into detail about how to better understand a customer by doing a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats). Having been in the Air Force for over 5 years, I really appreciated that Russ, who is also an Air Force veteran, introduced the audience to the concept of the OODA loop (Observe, Orient, Decide, Act). In the military, we were taught that you want to shrink your OODA loop to be smaller than your enemy’s in order to defeat them. Similarly, in business, you want to shrink your OODA loop smaller than your competition’s by best employing IT resources to help your customer succeed.
 
I was able to spend some more time in the World of Solutions expo, where I visited some areas of the Cisco booth. I’m working on a project to replace some access switches as well as their aggregation point. When I mentioned the plan to use Catalyst 3750X switches for access, I was asked, “Why not 3850s?” Based on my conversation with the engineer, the Catalyst 3850s (see data sheet here) come in 24- and 48-port variants and have three uplink module options: 4x1G, 2x10G, and 4x10G. The 3850 is the same price as the 3750X and has better performance, with these caveats:
  1. Can currently stack only up to 4 switches (expected to be addressed in Fall 2013)
  2. Not every feature supported by the 3750X is supported by the 3850 yet
  3. The 3850 runs IOS XE whereas the 3750X runs IOS
For the aggregation, I believe the best option to support 27 network closets, each with 2x10Gbps uplinks, would be a pair of 4500X switches (see data sheet here) configured as a VSS pair. Each 4500X can be ordered with either 16 or 32 onboard 10G ports and includes an expansion slot supporting an additional 8x10G ports, for a maximum of 40 10G ports. Each 4500X would be ordered with 32 ports (and no expansion module) to support the 27 closets plus 2x10G uplinks to the core Nexus 7k. This is another great example of how spending 10 minutes at Cisco Live can save literally hours of research online and/or discussion with my account team.
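
For the curious, the VSS conversion on the 4500Xs would look roughly like this. This is a minimal sketch based on my reading so far, not a tested config; the domain number and VSL port choices are placeholders:

```
! On the first 4500X (repeat on the second, substituting "switch 2")
switch virtual domain 100
 switch 1
 exit
!
! Dedicate two 10G ports as the Virtual Switch Link (VSL)
interface port-channel 10
 switch virtual link 1
 no shutdown
 exit
interface range tengigabitethernet 1/31 - 32
 channel-group 10 mode on
 no shutdown
 exit
!
! Converts to virtual mode and reloads the switch
switch convert mode virtual
```

Each closet would then terminate on a Multichassis EtherChannel, one 10G link to each chassis, so losing an entire 4500X wouldn’t take down any closet.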
 
My last session of Cisco Live was the annual end-of-the-week panel presentation and discussion with the NOC team. Session PNLNMS-3000, titled Cisco Live Network and NOC, was moderated by Jimmy-Ray Purser (@JimmyRay_Purser) of Techwise TV. I took the opportunity to live-blog the event using the hashtags #clus and #noc. Below is a transcript of the live tweets in reverse chronological order. (Sorry, I couldn’t figure out an easy way to reverse them.) This year’s show went VERY well for the NOC team, particularly for wireless. Well done, Cisco Live! Thanks to Keith Parsons (@KeithRParsons) for referring me to http://allmytweets.net to easily copy and paste them here.
  • .@JimmyRay_Purser did a great job moderating this panel #clus #noc 
  • Applause for the question managers that have been answering questions in the #clus app #noc 
  • Q: How many boxes got stolen this year? A: 1 classroom switch and an AP and switch loaned to vendor #clus #noc 
  • Question: was there a noticeable uptick in HTTPS over HTTP over last year? Answer: Yes #clus #noc 
  • They used @Splunk to help with security analysis of firewall logs, etc. #clus #noc 
  • The esteemed #clus #noc panel http://t.co/ho2jPjDpZg 
  • Mobile app developed outside of Cisco, delay due to CA cert used and not the network (maybe a cert check?) #clus #noc 
  • Things were rushed with the mobile app, lessons learned, they plan to make experience smoother next year #clus #noc 
  • HTTP data is still being processed for top websites used, NetMan might publish blogpost about it when done #clus #noc 
  • All other controllers for session rooms and hallways ran v7.3MR #clus #noc 
  • WoS controllers started on v7.3, needed more tweaks based on devices seen, so moved down to v7.2, which gave the “knob” needed #clus #noc 
  • They have months of WebEx sessions in advance to prep for show #clus #noc 
  • Collaboration done over Google Docs in many cases to share IP address info, etc; used Push-to-talk radio to communicate on-site #clus #noc 
  • IPv4 used exclusively for NetMan, IPv6 only used for DHCP #clus #noc 
  • no IPv6 was provided on WoS wireless, to ensure stability and avoid the load that IPv6 multicast would have added #clus #noc 
  • Jimmy-Ray is taking questions. Anybody? #clus #noc 
  • “Thank you for exercising our network and attending Cisco Live” #clus #noc 
  • Network was 100% reliable for the duration of the show #clus #noc #applause 
  • video streaming exceeded HTTP for traffic breakdown #clus #noc 
  • Vendors would sometimes shut off things, including switches in rooms, to help save power #oops #clus #noc 
  • Intelligent Automation – allowed users to use web portal to switch a port to a particular vlan without knowing details #clus #noc 
  • switches would use EEM to figure out themselves what VLAN they were on by pinging all possible gateways then self-configure #clus #noc 
  • Used EEM to set port descriptions based on CDP neighbors plugged in (embedded automation) #clus #noc 
  • used Cisco Prime LMS to help provision IDF and room switches #clus #noc 
  • …Prime Infrastructure, StealthWatch, Plixer; syslog also sent to FreeBSD and forwarded to interested parties #clus #noc 
  • Flex Netflow sent from 6500 core and dist switches to FreeBSD VM “exploder” which forwarded to other collectors… #clus #noc 
  • SNMPv3 authPriv (SHA/DES) with ACLs, NAM 2304 appliance used to monitor traffic volume and utilization #clus #noc 
  • Joe Clarke – Network Mgmt – very impressed with a lot of the Networking Academy folks he worked with #clus #noc 
  • peak 10k IOPs, peak data rate 140MB/s #clus #noc 
  • Colo storage: Sunnyvale NetApp FAS2240-4 26 TB total cap, mirrored to it from local DC each night for backups #clus #noc 
  • 12 TB provisioned to VMware x2 mirrored to HA partner, 28% saved on dedup, 8.6TB used on disk #clus #noc 
  • 18TB provisioned to VMs (mostly thick provisioned); 6TB saved by thin provisioning; 14TB physical capacity avail #clus #noc 
  • Self-paced labs used virtual desktops running on NetApp storage with UCS #clus #noc 
  • All recordings from all sessions go to this storage, higher workload than last year, video surveillance stored on UCS local disk #clus #noc 
  • NetApp FAS3240 HA Pair, 2x DS2246 Disk Shelves, same equipment as last year #clus #noc 
  • Patrick Strick – NetApp in Datacenter #clus #noc 
  • Physical safety and security – 6001 events consumed, 12 physec tickets, monitoring based on motion detection #clus #noc 
  • security analytics: 1.2B events syslogged; 12 events resulted in FW blocks #clus #noc 
  • Adam Baines – remote monitoring services: core fault mgmt, security event, physical safety and security video #clus #noc 
  • Bus cams used DMVPN over LTE, worked very well #clus #noc 
  • He has some interesting footage of us coming back from CAE last night on the buses #clus #noc 
  • Able to analyze lines of people to help optimize for future events #clus #noc 
  • 6TB data storage consumed for video surveillance, 35 mobile cams on hotel shuttles, running on UCS in DC #clus #noc 
  • Physical Security with Lionel Hunt, worked with John Chambers’ head of security, 45 cameras deployed, 2Mbps per camera #clus #noc 
  • Some people doing call-home to botnets – check your stuff #clus #noc 
  • maxed around 1000 conns/sec, FWs never passed 7% CPU #clus #noc 
  • 26.5 TB transferred through firewalls through the week #clus #noc 
  • No firewall failover even when cables were removed and replaced during full production at 800Mbps of throughput #clus #noc 
  • Secure Edge Architecture, ASAs deployed in transparent mode active/standby HA, failover only occurs when 2 ints failed #clus #noc 
  • ASA5585-X SSP-60, 2 pair, IPS-SSP-60 (4) for IPv4; ASA5585-X SSP-20, 1 pair, IPS-SSP-20 (2), for IPv6 #clus #noc 
  • Security – Per Hagen; CSM 4.4, Cisco Cyber Threat Defense #clus #noc 
  • Apple 6K clients, Intel 2k clients, Samsung 953 clients total for week #clus #noc 
  • 60% clients on 2.4GHz, 1 on 802.11b, 171 on 802.11a, 300 on 802.11g #noc #clus 
  • Peaked at 13.4K clients Tues and Wed, today crossed 10K clients on wireless, 293 per AP for the big rooms #clus #noc 
  • 180x3502P w/AIR-ANT25137NP-R stadium antennas to cover keynote and WoS #clus #noc 
  • 300x3602 APs in hallways/session rooms in OCCC, 110x3602 APs in Peabody, 87 in-house APs for some coverage in OCCC #clus #noc 
  • 7x5508 controllers for session rooms, hallways, and Peabody; 3x5508 controllers for Keynote and WoS areas; 4xMSE 7.5 for location #clus #noc 
  • Mir Alami – wireless – TME, very happy about how well things went this year #clus #noc 
  • EEM scripts and Twitter’s API were used to tweet from the @CiscoLive2013 account from the distribution switch #clus #noc 
  • Quad redundancy with Quad-Sup SSO, new feature as of May, 15.7K unique IPv4 MACs, 7.8K unique IPv6 MACs #clus #noc 
  • …Flex Netflow on Sup2T for IPv4 and IPv6 traffic; 1TB of multicast traffic during show #clus #noc 
  • VSS Quad-Sup SSO and Multichassis Etherchannel, OSPF and BGP for IPv4 and IPv6, SNMPv3, CoPP, Syslog, etc for NetMan…#clus #noc 
  • Connection was also provided to Peabody’s 4500 switch(es) for their meeting rooms #clus #noc 
  • 2x6509E VSS, Sup2T, 40G backbone; Dist: 2x6513E + 2x6504E, Sup2T, 40G Ethernet #clus #noc 
  • Divya has done several shows over the last few years, including the Interop core #clus #noc 
  • Next up: Divya Rao, Switching Backbone #clus #noc 
  • Multi-hop FCoE used in DC with N7004 pair but ran into problems…solution was multiple VDCs #clus #noc cc/ @drjmetz @ccie5851 
  • IPv4 220K PPS Denver, 74K PPS Sunnyvale; IPv6 12.7K PPS…8% traffic was IPv6 on avg #clus #noc 
  • Local AS 64726…”thank you for stressing my network”…940Mbps from Denver, 615Mbps from Sunnyvale peaks #clus #noc 
  • RPKI validation tested this year with SoBGP for IPv4 and IPv6 for full Internet routing table #clus #noc 
  • Sunnyvale, Denver uplink sites for CenturyLink #clus #noc 
  • Networking Academy had 40 people here all week #clus #noc 
  • CenturyLink ISP had rep on-site all week. Savvis provided DC services #clus #noc 
  • Routing and DC: Patrick Warichet #clus #noc 
  • 8 panelists will each present for 7 mins #clus #NOC 
  • PNLNMS-3000 Cisco Live Network and NOC, with Jimmy-Ray Purser #clus 
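
A couple of the EEM tweets above deserve elaboration. The NOC team didn’t share their actual scripts, so here’s my own rough sketch of how the port-description trick could work as an EEM applet (the applet name and regular expressions are mine, not theirs):

```
event manager applet AUTO-PORT-DESC
 ! Fire whenever a port's line protocol comes up
 event syslog pattern "%LINEPROTO-5-UPDOWN: Line protocol on Interface .*, changed state to up"
 ! Pull the interface name out of the syslog message
 action 1.0 regexp "Interface ([^,]+)," "$_syslog_msg" match intf
 action 2.0 cli command "enable"
 ! Ask CDP who is plugged into that port
 action 3.0 cli command "show cdp neighbors $intf detail | include Device ID"
 action 4.0 regexp "Device ID: (.*)" "$_cli_result" match neighbor
 ! Write the neighbor's name into the port description
 action 5.0 cli command "config t"
 action 6.0 cli command "interface $intf"
 action 7.0 cli command "description $neighbor"
 action 8.0 cli command "end"
```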
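
Likewise, the Flexible NetFlow export from the Sup2T switches to the FreeBSD “exploder” VM would look something like this (the collector address, port, and interface are placeholders of mine); the exploder then re-sends each datagram to Prime Infrastructure, StealthWatch, and Plixer, so the switches only have to export once:

```
flow exporter TO-EXPLODER
 destination 192.0.2.100
 transport udp 2055
 source Loopback0
!
! Classic 5-tuple record with byte/packet counters
flow record BASIC-V4
 match ipv4 source address
 match ipv4 destination address
 match ipv4 protocol
 match transport source-port
 match transport destination-port
 collect counter bytes
 collect counter packets
!
flow monitor MON-V4
 record BASIC-V4
 exporter TO-EXPLODER
!
interface TenGigabitEthernet1/1
 ip flow monitor MON-V4 input
```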

Swack’s Cisco Live To-Do List

My company pays a lot of money to send me here to Cisco Live. That’s likely the case for you as well (if you’re also here). I’ve had a list at past conferences of what I wanted to accomplish but never really published it outside my head. This year I’m holding myself more accountable and putting it here. Many are things I could do quite easily back in the office if I didn’t have distractions. Now I can focus AND talk to the smartest folks in the industry about how they do business. Here are some of the many things I hope to accomplish this year.

1. Better understand the Catalyst 4500 series and how I can use them as an aggregation point for 10-gig-connected closet switches. I’ve never really worked with them, so getting a better idea of how they work, their benefits and drawbacks, and their deployment options is key. How else could I provide resilient aggregation for 27 network closets with 2x10G links each?

2. Learn AMAP (as much as possible) about 802.1X and how Cisco switches and phones handle it. What are the deployment methods and models? How can we use certificates, or other methods like MAC Authentication Bypass (MAB), for Cisco VoIP phones where we have a client connected behind the phone? What are the capabilities of Cisco Secure ACS and Cisco Identity Services Engine (ISE), and how do they compare with other RADIUS options such as Aruba Networks ClearPass Policy Manager (CPPM) or just a simple Windows RADIUS server? (I’ve sketched my current mental model of a port config after this list.)

3. Talk in more detail with SolarWinds Head Geeks and other smart engineers about how route polling works in the latest version of Orion NPM. How can we map over 1200 locations in Orion so our retail support teams can better take advantage of Orion’s power and knowledge? How can we use Orion NPM and NCM to possibly replace our existing legacy Linux-based config generation tool for store routers and provision them in an automated way?

4. How should I troubleshoot high received-error counts on ASA and router interfaces (specifically the 7200 series)? (My starting point is sketched after this list.)

5. What are my options for expanding a pair of 5548UP Nexus switches as I keep adding FEX and running out of ports? If I add another pair I add another point of management (boo!). If I replace with 5596s how do I handle the transition and what can I get for trading in the 5548s?

6. How can I get our NX-OS gear properly sending syslogs to our syslog server? (I already know this is a great question for the TAC folks who are here; my baseline config is sketched after this list.)

7. Learn more about how IP Address Management (IPAM) vendors can prepare us for an 802.1X deployment, especially in terms of learning our existing MAC addresses for a MAB table. I’ve heard of Infoblox and BlueCat. Any others worth looking at?

8. Get familiar with Cisco’s Next-Gen Firewall capabilities and how they compare with competitors, particularly Palo Alto Networks.
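
As promised above, here’s my current mental model of a phone-plus-PC port in classic IOS, for item 2. The VLANs, server address, and key are placeholders, and finding out what this sketch gets wrong is exactly why I want the conversations:

```
! Global RADIUS plumbing (server address and key are placeholders)
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
dot1x system-auth-control
radius-server host 192.0.2.10 key SuperSecret
!
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 ! One authenticated device in the voice domain (phone), one in data (PC)
 authentication host-mode multi-domain
 ! Try 802.1X first, fall back to MAB for devices without supplicants
 authentication order dot1x mab
 authentication port-control auto
 mab
 dot1x pae authenticator
```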
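
For item 4, my starting point so far is just watching the counters; the real question is what to do once they’re climbing:

```
! Snapshot the counters, clear them, then re-check after a fixed interval
show interfaces GigabitEthernet0/1
clear counters GigabitEthernet0/1
! Rising CRC/frame errors usually point at cabling, optics, or a duplex
! mismatch; overruns suggest the receive path is being overwhelmed
show interfaces GigabitEthernet0/1 | include errors|CRC|overrun
```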
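
And for item 6, this is the minimal NX-OS config I believe should work (collector address and VRF are placeholders); the TAC question is why it doesn’t in our environment:

```
! Send severity 0-6 messages to the collector via the management VRF
logging server 192.0.2.50 6 use-vrf management
! Verify what the switch thinks it's doing
show logging server
```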

I welcome your comments/feedback below or directly on Twitter (@swackhap).

-Swack

The Way Of The Dinosaur


It’s been a long time. I can’t remember how long, and I’m too lazy/busy to look it up. But somewhere around two (yep, count ’em, TWO!) years ago we had a major problem at work: one of our Cisco Catalyst 6509 core Ethernet switches turned out to have bent pins on the backplane in slot 2. In layman’s terms, the place where you plug the brains into the switch was broken. We still had one “brain” (a.k.a. supervisor module), but the redundant one couldn’t be used. The only solution to get our redundancy back? Replace the whole chassis.


Replacing an entire switch chassis is NOT a small job. There were literally hundreds of servers connected to this switch in the data center. So we set out on a very. long. journey. We got a replacement chassis from Cisco and sloooooooowly began moving one server network connection at a time from the old switch to the new switch.

Fast forward to today. Thanks to a big push in the last few days by some coworkers and me, we currently have only 7 more connections on this switch. And if things go according to plan, they’ll all be changed to the new switch by Saturday afternoon. (Yeah, I have to go to work on Saturday. And it’s supposed to be nice weather, too! Bummer…)

Some might not see the significance of this accomplishment, but those of us who have worked on it over these many months are psyched! We’ve scheduled a ceremonial power-off for Monday afternoon. Two of us will switch off the dual redundant power supplies, and everyone present will have the opportunity to disconnect one of the many ancient RJ-21 Ethernet cable connections. It will be stupendous when this switch finally goes extinct and we can get on with our other, more exciting, less mundane projects.