4. What is DevOps all about?
Cloud Consumers want the following, and these are driving network virtualization:
- Ability to deploy apps at scale and with little preplanning (provisioning speed and efficiency)
- Mobility to move workloads between different geographies and providers (investment protection and choice)
- Flexibility to create more diverse architectures in a self service manner (rich L3-L7 network services)
- Management Plane = NSX Manager – programmatic web services API to define logical networks
- Control Plane = Control Cluster
- Clustered App runs on x86 servers, controls and manages 1000s of edge switching devices, does NOT sit in data plane
- Data Plane = OVS/NVS
- Open vSwitch (OVS) is a VMware-led open source project
- NSX vSwitch (NVS) is a software vSwitch in ESXi kernel
- Switch software designed for remote control and tunneling installed in hypervisors, NSX gateways or hardware VTEP devices
- Can work with vSphere, KVM, XenServer
- vSwitch in each hypervisor controlled through API by Controller Cluster
- NSX Manager uses this API, as do CloudStack, OpenStack, other CMS/CMP platforms, and VMware
- To get between physical and virtual networks, Open vSwitch NSX Gateway or HW Partner VTEP Device is used
- NSX Controller Cluster establishes an overlay network
- Multiple tunneling protocols including STT, GRE, VXLAN
- Packets are encapsulated with logical switch information
- The tunneling protocol is NOT network virtualization, rather, it is a component of it
- Automated network provisioning
- Inter-rack or inter-DC connectivity
- P2V and V2V migration
- Burst or migrate enterprise to cloud
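The encapsulation step described above can be sketched in a few lines. This is a simplified illustration of a VXLAN-style header (per RFC 7348), not NSX's actual implementation; the VNI value and the inner frame bytes are made up for the example:

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the logical switch's VNI.

    In a real deployment this payload would itself be wrapped in outer
    UDP/IP headers addressed between the hypervisors' tunnel endpoints
    (e.g. 172.16.20.11 -> 172.16.30.11 in the whiteboard example below).
    """
    # Header layout: 1 flags byte, 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    header = struct.pack("!B3xI", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

# A 24-bit VNI identifies the logical switch, keeping tenants separate.
pkt = vxlan_encap(5001, b"\xde\xad\xbe\xef")  # dummy inner Ethernet frame
assert len(pkt) == 8 + 4
assert pkt[0] == VXLAN_FLAGS
assert int.from_bytes(pkt[4:7], "big") == 5001
```

The point of the hedge in line 20 of the notes holds here too: this framing step is only the transport detail; the network virtualization itself lives in the controller logic that decides which VNI and which tunnel endpoint to use.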
The whiteboard snapshot above was drawn to demonstrate the basic components of NSX and how VMs communicate using the virtual overlay network.
The example uses ESXi on the left and a KVM hypervisor on the right (HV1 and HV2)
- Each connected to IP fabric
- 3 controllers drawn in the middle
- Intelligent Edge NVS installed on ESXi and OVS installed on KVM
- Controllers talk with ESXi over the vmkernel management interface; something similar applies for KVM
- Addresses are assigned that are used for encapsulation and direct communication between hypervisors: 172.16.20.11/24 on left, 172.16.30.11/24 on right
- Customer A is green, they have a VM on each hypervisor (192.168.1.11 on left, 192.168.1.12 on right)
- Customer B is red, they have VM on each hypervisor with SAME IP ADDRESSES – logically separated similar to VRFs (I didn’t get a picture of this–sorry)
- The controller cluster controls the virtual ports, so it can programmatically control QoS, security, and distributed routing
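The tenant separation in the whiteboard example can be illustrated with a toy controller lookup table. The VNI numbers here are hypothetical; the point is that the (logical switch, IP) pair, not the inner IP alone, selects the destination hypervisor, which is why Customer A and Customer B can reuse identical addresses:

```python
# Toy controller-style forwarding table: (VNI, inner IP) -> hypervisor VTEP.
# VNIs 5001/5002 are made up; the VTEP and VM addresses come from the
# whiteboard example (HV1 = ESXi on the left, HV2 = KVM on the right).
FORWARDING = {
    (5001, "192.168.1.11"): "172.16.20.11",  # Customer A (green), VM on HV1
    (5001, "192.168.1.12"): "172.16.30.11",  # Customer A (green), VM on HV2
    (5002, "192.168.1.11"): "172.16.20.11",  # Customer B (red) reuses the
    (5002, "192.168.1.12"): "172.16.30.11",  # same IPs on its own segment
}

def lookup_vtep(vni: int, inner_ip: str) -> str:
    """Return the tunnel endpoint hosting this (logical switch, IP) pair."""
    return FORWARDING[(vni, inner_ip)]

# The same inner IP resolves independently per logical switch, much like
# overlapping address space in separate VRFs:
assert lookup_vtep(5001, "192.168.1.12") == "172.16.30.11"
assert lookup_vtep(5002, "192.168.1.12") == "172.16.30.11"
```

This mirrors the VRF analogy in the notes: each logical switch is its own forwarding context, populated and kept consistent by the controller cluster rather than by flooding.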
Started out a productive day with my first-ever frittata and some delicious croissants at breakfast in Moscone South. Having seen the debacle of “breakfast” at last year’s VMworld, the seating this year was at least an improvement with areas available in both Moscone South and West.
I went to the General Session at 9am, but as I was seated towards the back I couldn’t see the bottom of the screens. There were no screens overhead, only 3 or 4 large screens up front. In addition, the vmworld2013 wireless SSID was nowhere to be seen. The Press SSID (vmwaremedia) was available but locked down. Attempts to use my AT&T MiFi were stifled by the overwhelming RF interference in the area. And I had AT&T cell coverage but no throughput. Having seen how well wireless CAN be delivered at Cisco Live, even in this kind of space for 20,000+ people, I was very disappointed. I decided to go watch the keynote from the Hang Space, but that was full to capacity with a line waiting to get in. I finally gave up and walked over to Moscone West, 3rd floor, and sat at a charging station watching the live stream while waiting for my first breakout session. (Kudos at least for the stream working.)
My first session was “Moving Enterprise Application Dev/Test to VMware’s internal Private Cloud — Operations Transformation (OPT5194).” This was a great story of how leadership from the top pushed VMware to implement Infrastructure as a Service (IaaS). Kurt Milne (@kurtmilne) (VMware Director of CloudOps) and Venkat Gopalakrishnan (VMware Director of IT) shared lessons learned during VMware’s internal implementation of a service catalog and the automation of processes which used to require manual intervention by cross-functional teams over the course of weeks. The process of standing up a new Software Development Life Cycle (SDLC) series of dev/test/uat/stage/prod environments has been greatly automated, with provisioning time reduced from 4 weeks to 36 hours, and they plan to reduce it to 24 hours in the near future. If you’re going through a similar journey in your organization, this session is a must-see when recordings and slides are released after the conference. I believe the session was also live-tweeted by @vmwarecloudops.
The other session I attended today was the very popular “What’s New in VMware vSphere” presented by Mike Adams (http://blogs.vmware.com/vsphere/author/madams). We reviewed some of the new features released in vSphere 5.1 last year as well as some of the changes made for vSphere 5.5 this year. Some key takeaways for me (your mileage may vary):
- vSphere is now wrapped up with Operations Management, i.e., vCenter Operations Manager (vCOPS). Referred to as “vSphere with Operations Management” it’s now available in the Standard, Enterprise, and Enterprise+ flavors, each of which includes vCOPS Standard. See snapshot of feature breakout and license cost.
- vCloud Suite variations all include vSphere Enterprise+, vCloud Director (vCD), and vCloud Networking and Security (vCNS). The individual flavors depend on the version of vCOPS and vCloud Automation Center (vCAC) which are Standard, Advanced, and Enterprise. In addition, the Enterprise SKU also includes vCenter Site Recovery Manager (vC SRM).
- vSphere Web Client is replacing vSphere Windows Client, so we “better get comfortable with it.” If I understand correctly, vSphere 5.5 includes support for all functionality in the Web Client now but not the Windows Client.
- New features in vSphere 5.5 include: VMDK file support up to 62TB, 4TB memory per host, 4096 vCPUs per host.
- vSphere Replication allows full copying of workloads, including the VMDK files, without shared storage. This perhaps saves the cost of more expensive synchronous or asynchronous storage replication, but has a somewhat limited Recovery Point Objective (RPO) of about 15 minutes. Still, this may be a good fit for some organizations for DR (including mine).
In addition to the sessions I was able to complete three labs (between yesterday and today) all related to VMware’s recently announced vCloud Hybrid Service (vCHS). HOL-HBD-1301, HOL-HBD-1302, and HOL-HBD-1303 give a good introduction to the components and steps necessary to migrate workloads from a vSphere or vCloud Director environment in your own datacenter to the vCHS environment, as well as networking & security components and managing the service.
One big announcement during the morning General Session/Keynote was the release of VMware’s network virtualization product called NSX. This is the marriage of Nicira (an earlier VMware acquisition) and vCNS/vShield in a new product. As a network engineer by background and training, this is particularly interesting to me. I was able to start the NSX lab (HOL-SDC-1303) but couldn’t yet finish as I ran out of time. I plan to finish tomorrow. More to come on that.
I have to give a big thumbs-down to VMworld’s requirement that we all get our badges scanned as we enter lunch. I don’t remember this last year, nor have I ever seen this at any other conference I’ve attended. What gives? It’s hard to hold a herd of hungry humans back from the food!
Finally, I visited with some fine folks at the Rackspace booth in the Solutions Exchange, including Waqas Makhdum (@waqasmakhdum). I now understand that Rackspace’s OpenStack platform uses a different hypervisor solution than VMware or Amazon EC2. They offer guaranteed uptime with a phone number to call for support, and apparently pretty reasonable costs for running a VM you control, or even hosting the VM and just having you run your application on it. Also, I learned they offer VMware-based Managed Virtualization to allow you to “Set up a single-tenant VMware environment at our data center, rapidly provision VMs, and retain full control using the orchestration tools you’re familiar with.” (Ref: http://www.rackspace.com/managed-virtualization/)
I’m failing to mention all the great people I met and conversations I had, but one would expect nothing less from a great conference!
In case you hadn’t heard, VMworld became “VMwait” today as I, along with quite a few other strong-willed geeks, waited well over seven (yes, that’s SEVEN) hours before being seated for our first Hands-On Lab (HoL). Despite the hardships sustained by all, including the folks in green shirts running the labs, we all came through it alive and stronger for it. To make it up to us, they decided to stay open until 10pm at which time no new folks could enter but those of us that were there could finish what we started. Proudly, I managed to get three labs in (at least most of them) before heading back to my hotel for the night (sorry v0dgeball and VMunderground, I couldn’t make it…maybe next year). Unfortunately, I heard some labs were still having problems even once they got the environment up and running. But luckily for me, I had fairly minimal issues and was able to learn lots!
I want to give a HUGE shout-out to Mr. Irish Spring who did an outstanding job listening to our feedback today and made sure we were supplied with refreshments when we got hungry and kept us informed.
Also many thanks to Ms. Jennifer Galvin who spent some time chatting with some of us, listening to our (mostly justifiable) grumbling about the experiences of the day.
In all fairness to VMware, I understand that some of the back-end tech being used this year is different from last year’s (indeed, some of it isn’t even being announced until Monday morning’s keynote). They took a risk and ended up having some problems. It’s certainly happened to me. I’m betting it has (or will) happen to you.
Hopefully tomorrow will be a better day for everyone involved with the labs.
My Sunday here at VMworld began with a good breakfast at a local bakery. I then headed to Moscone West shortly before the Hands-On Labs (HoL) were scheduled to open at 11am and was greeted with this scene:
I was able to navigate through the Traditional HoL crowd to the slightly shorter Bring Your Own Device (BYOD) line, indicated by this nice guy:
After the doors opened, I followed the line inside to the BYOD Check-In Desk. While in line, some very helpful green-shirted VMware folks explained how to prepare our machines for the HoL. After handing over my conference badge to the folks at the table, they entered me into the system and I proceeded to the BYOD Configuration Desk across the room:
I’d decided to take someone else’s advice and use my iPad to log in to the http://vmwarecloud.com site from the HOL wifi (only available inside the HoL area). That way I could access the lab guide instructions on my tablet and use my MacBook Pro to connect to the lab environment with the View Client. The waiting continued in the “Holding Tank” where I hung out with about 100 other folks waiting for my name to proceed up to the top of this screen:
While waiting, they had a small seating area set up where the folks that wrote the labs were presenting whiteboard sessions:
Once my name reached the top of the screen I headed to the Seating Desk where I obtained my password and access code to log in to the HoL site.
With this single-use code in hand I was guided to the BYOD HoL seating area where I set up to do my first lab! Based on what I’ve heard from previous VMworlds, I think it’ll all be worth the wait.