Runt Post: HP Discover Notes

Last week I had the privilege of attending HP Discover in Barcelona and thought I’d hit the highlights while they were still fresh in my mind.

HP’s continued OpenFlow work stood out, as HP has several applications leveraging OpenFlow available to customers now. One of the things I find most appealing is that there are actually *campus* SDN applications, not just data center applications. Recently, the data center has received all the love in networking innovation, leaving campus networks to the same old same old. Campus networks, however, represent a wide range of potential for SDN, so it’s nice to see some OpenFlow applications focused in that direction. The Network Optimizer application that dynamically allocates bandwidth for Lync traffic and the Network Protector app that leverages a TippingPoint reputation database are the HP SDN applications I’ve heard the most about, but a look at the SDN App Store shows there are quite a few others out there, customer ready and available.

HP’s Intelligent Management Center (IMC) first caught my attention when I listened to the Packet Pushers episode on the platform, and after talking to Chris Young about the product at Discover, I am all the more curious to get a demo up and running. IMC not only allows for your basic network management and monitoring tasks, but also offers advanced features such as config validation and device configuration from a centralized management console. It won’t get rid of all your other single pains – err – panes of glass, but it does look quite promising for centralizing network management in a way that doesn’t suck your will to live the way some of the larger, more bloated platforms tend to do. Also, support for third-party devices is big for anyone not running HP gear exclusively, or even at all. The insight into ESXi servers also struck me as super cool – a way to see what those wily sysadmins have done with their virtual switches while they blame your physical switches for the problem.

I also found the work HP Labs is doing to be quite fascinating. With an increased R&D budget as of late, the HP lab geeks are taking on some pretty cool projects. Much of their energy is being funneled into photonics and memristor technology projects, collectively referred to as The Machine. Personally, I think the name “The Machine” sounds a bit over the top, but there is some serious science going on in this line of research, and my geek DNA can’t wait to see what develops from these endeavors.

I had a highly enjoyable experience overall and loved getting to geek out over tech with some other seriously fabulous nerds – you should check them out as well because they are *awesome*.

Published: 12/9/2014


Disclaimer: While HP Networking was very generous to invite me to this fantastic event and pay my expenses, and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.




Runt Post: Unity Connection 8.6 and SU5, there’s a .cop file for that

I ran into a fun* issue last week while upgrading Unity Connection from 8.6(2a) to the latest patch, SU5, and I would tell you to be sure to read the release notes on the subject, but as I found when calling TAC at 2am, the release notes don’t warn you about this particular issue. See, after the upgrade, all those precious call handlers you painstakingly configured and tested over the past few years just don’t work. At all. Not even a little. In fact, the system just routes these calls to the Opening Greeting. Much to your frustration, you can see that the extensions are still defined on the call handlers just like before the upgrade, but after the patch the system rudely snubs your call handler configuration entirely.

As much as I am totally ruining the surprise of your discovering and experiencing this precious jewel of an error on your own and wondering what the heck to do about it, there is a .cop file that fixes the issue: ciscocm.cuc_86x_cdl3.cop.sgn (CCO login required). After uploading the file to both servers (assuming an HA environment), a reboot of the primary and then the secondary is all that’s left to do.

And people wonder why voice engineers drink…


Published 11/03/2014

*voice engineers have a slightly skewed definition of fun, precisely because of issues like this…

Note: there is a bug ID for this, CSCuq63776 (CCO login also required), in case you are interested. I have no idea if every upgrade from 8.6(2a) will hit this particular bug, but at least now you won’t be surprised if you do…



The 8945 firmware upgrade dance

For a while now I’ve been hitting the fabulous issue mentioned in this Cisco forum post for 8945 phones, in which the phone basically stops incrementing the time display. Since I only had a few of these deployed and unplugging/replugging them would fix the issue, I had put off upgrading them. I had planned to put off upgrading the firmware until the proverbial upgrade cows came home and sat around my desk mooing their demands for attention, but unfortunately my plans were udderly derailed*.

I discovered while reading various release notes that the usually cumbersome but predictable upgrade process would be a bit more involved and would need to be done before I started major version upgrades, otherwise my phones would be shiny dialtone-less bricks with no usable firmware to make their happy little phone lives better.

Cutting to the chase, here’s what you have to do if you happen to be in the same series of release boats that I’m paddling around in.  Keep in mind that in this particular situation I am starting with 8945 phones with SCCP894x.9-2-3-5 installed and CUCM version, your mileage may vary:

  • You must have the minimum Device Package 8.6.2(22030-1), which in this case I didn’t have. 8941 and 8945 phones won’t register at all if you try to install 9.3(1) or later without this device pack installed first. Yay.
  • Next, you’ve gotta do a step upgrade through 9.3(4) before going to 9.4(x), because in the grand tradition of voice upgrades being about as much fun as root canals, you can’t just upgrade straight from a 9.2 release to a 9.4 release. Double Yay.
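Because upgrade-path rules like this are easy to forget at 2am, here’s a tiny Python sketch that encodes the constraint. The hop table reflects only the path described in this post; check the release notes for your actual versions:

```python
# Allowed firmware hops, per the steps above: 9.2 must step through
# 9.3(4) before 9.4(x) is reachable. This table is illustrative only.
ALLOWED_HOPS = {
    "9.2": "9.3(4)",
    "9.3(4)": "9.4(x)",
}

def upgrade_path(current, target):
    """Return the list of releases to install in order, or None if
    there is no supported route from `current` to `target`."""
    path = [current]
    while path[-1] != target:
        nxt = ALLOWED_HOPS.get(path[-1])
        if nxt is None:
            return None  # dead end: no supported hop from here
        path.append(nxt)
    return path

print(upgrade_path("9.2", "9.4(x)"))    # ['9.2', '9.3(4)', '9.4(x)']
print(upgrade_path("9.3(4)", "9.4(x)"))  # ['9.3(4)', '9.4(x)']
```

Nothing fancy, but it makes the “no direct 9.2 to 9.4” rule explicit instead of buried in release notes.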

There are a few key things to keep in mind when doing device pack and firmware installations.  First and foremost, always always always read the release notes. If something doesn’t seem clear or make sense, open a TAC case to clarify. This is way better than blowing up your server or phones because you assumed something the release notes didn’t cover or explain properly.

It should go without saying that you should always confirm you have a good backup before doing any file installations at all. I’m saying it anyway: check your backups, and never assume they ran. Trust but verify, with the emphasis on verify.

Remember to copy down the values on the Device Defaults page before the upload of the files and to paste in old values if necessary directly after the upload. Check out this previous post for why you will thank me for this later.

Lastly, always remember to reboot after device package installs and to stop/start the TFTP service after firmware file installs.  These processes will save you some heartache and potentially a bruised skull from repeated head-desks when you finally realize you never did stop and start that service and the last 45 minutes of troubleshooting was for nothing.

*Anyone with a beef about my cow puns probably shouldn’t follow me on twitter either; the puns only get worse – err – more fabulous from here…

Published: 10/23/2014



Getting to know Cisco ACI…

Watching Cisco present on ACI at Networking Field Day 8 was a nice expansion to the introduction I received on the product almost a year ago at the ACI launch event in New York.  Now that APIC is shipping, companies can swap over from NX-OS mode to ACI mode and start playing with the magic that is application network profiles.

The basic components of ACI are the fashionable spine/leaf architecture that is all the rage these days, an APIC controller talking OpFlex southbound, and a switch operating system on Nexus 9Ks that interprets policy and pushes it down to the endpoints. Underneath the covers, each switch uses IS-IS to build a routed topology from any VTEP (virtual tunnel endpoint) to any VTEP (basically from any leaf to any leaf). Rather than the controller programming routes and handling traffic forwarding, the controller focuses on pushing down policies that are understood and then implemented by the switches.

The concept of a self provisioning network also comes into play with the ACI solution, as does the concept of one big fabric to rule them all. The fabric can be zero touch provisioned, with the controller finding new switches brought online. The controller also acts as a single point of policy provisioning – the fabric itself scaling up to 12 spines, with multiple active controllers all sharing data for redundancy.

The heart of ACI really lies with the policy model and associated concepts. ACI works by putting things into groups – usually done with identifiers like VLAN/VXLAN ID, subnet, 802.1Q tag, or physical/virtual port – and then assigning these groups policy contracts that basically “turn on” connectivity between them, according to the rules of the assigned policy. The level of abstraction inherent in these contracts lends itself well to automation and consistency in network policies, as well as allowing for a clean-up process as applications are removed – thereby solving some of the what-the-heck-was-this-thing-nobody-remembers problems we engineers often encounter.
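For the API-minded, the groups-plus-contracts idea sketches out nicely in code. This is a rough illustration only: the class names (fvTenant, fvAp, fvAEPg, vzBrCP) come from the APIC object model, but the attributes are simplified and the tenant/EPG names are made up, so treat it as a sketch rather than a working APIC config:

```python
import json

def tenant_policy(tenant, app, web_epg, db_epg, contract):
    """Build an ACI-style policy tree: two endpoint groups (EPGs)
    joined by a contract that 'turns on' connectivity between them."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                # the contract itself lives under the tenant
                {"vzBrCP": {"attributes": {"name": contract}}},
                {"fvAp": {
                    "attributes": {"name": app},
                    "children": [
                        # the web EPG consumes the contract...
                        {"fvAEPg": {
                            "attributes": {"name": web_epg},
                            "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}}]}},
                        # ...and the db EPG provides it, enabling web-to-db traffic
                        {"fvAEPg": {
                            "attributes": {"name": db_epg},
                            "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": contract}}}]}},
                    ],
                }},
            ],
        }
    }

payload = tenant_policy("DemoTenant", "WebApp", "web", "db", "web-to-db")
print(json.dumps(payload, indent=2))
```

On a real system you would authenticate to the APIC and POST JSON like this to its REST interface; here we just build and print the policy tree to show how the grouping and contract pieces fit together.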

Application network profiles can be home brewed as well as provided in the form of device packages from vendors.  These device packages will automagically roll out the best practices for the application at hand, and if they are from an official partner, TAC will even handle support issues that arise from using the package. As Joe Onisick put it, “think of it as an automatically deployed Cisco validated design.”

There’s much more covered in the Networking Field Day 8 presentations: service graphs (think service chaining, but with flexibility for differing behaviors for various traffic groups), an API inspector that lets you see the API calls behind your GUI clicks so you can create automation scripts from them, and atomic counters that allow for detailed health scores and packet tracking. But as I’m a sucker for a good demo, I’ll leave you with this: Paul Lesiak showing off APIC’s mad programmability skillz.


Published: 10/14/2014

For more links to ACI resources, you can check out my previous post, check out some excellent videos by both Lilian Quan and Joe Onisick on the subject (just go to YouTube and search for Cisco ACI), or check out Lauren Malhoit’s blog, where there are some good posts on getting to know ACI as well.

Also, mucho bonus points to Lauren for not only being generally awesome all the time, but also for providing this ginger with a desperately needed Diet Coke as a caffeine source at this 8am presentation – an un-caffeinated ginger is a scary, scary thing.

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.



Posted on 2014/10/14 in ACI



Server, meet switch: a brief introduction to Pluribus Networks

When I think of what Pluribus Networks is doing, I get this image of a high performance server and switch wrapped together, with the bow on top being this extremely clever hypervisor, called Netvisor, talking directly down to the chips in the switches. These server-switches allow Pluribus to do some pretty nifty things when it comes to networking.

This slide from Alessandro Barbieri’s presentation at Networking Field Day 8 helps me visualize what they are doing in comparison to traditional switch architecture:



Network design-wise you will find that Pluribus is using the spine/leaf architecture that we are all coming to know and love. While there is no single recommended pod size, 12 to 24 racks were mentioned throughout their Network Field Day 8 presentations as the most commonly seen deployments.

These server-switches all make up a cluster – each with the same view of the network, talking to the others over TCP connections in a peer-to-peer fashion. There is no centralized controller in this architecture, and each node in the cluster uses a three-phase commit process to keep information synchronized with its peers. This means that either all the nodes are on board with a change, or the change just doesn’t happen. Much better than trying to roll back a change that wasn’t 100% successful across all nodes. The cluster is managed as, and appears as, one big fabric – again a common theme that SDN is delivering on.
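To make that all-or-nothing behavior concrete, here’s a toy model in Python. Netvisor’s actual internals aren’t public in this post, so the node and phase names below are invented; the point is simply that a change either lands on every node or on none of them:

```python
class Node:
    """A toy cluster peer with a config store and a staging area."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = {}      # committed configuration
        self.pending = None  # staged (pre-committed) change

    def can_commit(self, change):  # phase 1: will this peer accept it?
        return self.healthy

    def pre_commit(self, change):  # phase 2: stage the change
        self.pending = change

    def commit(self):              # phase 3: apply the staged change
        self.state.update(self.pending)
        self.pending = None

def apply_change(nodes, change):
    """Apply `change` to every node, or to none of them."""
    if not all(n.can_commit(change) for n in nodes):
        return False  # one peer said no, so nothing happens anywhere
    for n in nodes:
        n.pre_commit(change)
    for n in nodes:
        n.commit()
    return True

cluster = [Node("leaf1"), Node("leaf2"), Node("spine1")]
ok = apply_change(cluster, {"vlan10": "created"})
print(ok, cluster[0].state)  # True {'vlan10': 'created'}
```

If any node reports unhealthy in phase one, the function bails before anything is staged, which is exactly the “no partial rollback headaches” property described above.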

One of the cool things Pluribus has focused their technology on is real-time and stored analytics – the ability to “record” the network traffic – and, in Pluribus’ case, this doesn’t require a separate fabric or taps. This is a pretty huge distinction considering the cost of monitoring alternatives and the time/effort that would be spent maintaining a separate monitoring fabric.

Pluribus’ technology does allow you to slice up your network into tenants and there is a strong emphasis on programmability, automation, and integration with OpenStack. Your basic “cloud in a box” – a box you will have plenty of resources to run advanced L4-7 services on.

This video from Networking Field Day 8 demonstrates just some of the analytics possible, but I’d also recommend checking out the demos from Networking Field Day 7 to see a few more product-in-action videos. I also really like this write-up by @mrtugs – he does an excellent job further explaining the architecture and exploring the possibilities that stem from this server-switch goodness.


Published: 10/7/2014


Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.



Posted on 2014/10/07 in Pluribus



Runt Post: Big Tap Monitoring and its Wireshark goodness


Anyone reading my blog posts or tweets knows that I am a huge fan of Wireshark and all its packet-capturing greatness, so let me point you to this great Big Tap video from Networking Field Day 8, where Sunit Chauhan demonstrates how you can troubleshoot a client issue using Big Tap Monitoring Fabric, even generating an impromptu packet capture in the process. The ease of the process is beautiful, just beautiful.



Skip to the 13 min mark to start the troubleshooting fun. After that, you’ll want to watch those first 13 minutes to find out how the magic is done.

Big Switch Networks Big Tap Monitoring Fabric from Stephen Foskett on Vimeo.


Published: 10/3/2014

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.



The Big Picture – Big Cloud Fabric

I will readily admit that the use of SDN as a buzzword makes me want to drive into oncoming traffic. Having endured so much SDN hype, I’m practically giddy when I get to see SDN fleshed out with actual real-life-you-can-touch-it’s-not-just-vaporware products.

Big Switch’s Networking Field Day 8 presentations focused on showing off Big Cloud Fabric as practical, usable, and, even better, here today – well, shipping at the end of the month, but you get the idea. I loved that not only did Big Switch give all the delegates access to an invite-only beta lab, but that the lab totally rocked for getting to know the product and for laying hands on the technology that would be presented.*

Now, onto the Big basic ideas of Big Cloud Fabric (pun intended, of course): the main components include white box switches running Switch Light OS code, controllers speaking OpenFlow** with proprietary extensions down to these switches, and a REST API that sits at the heart of where the magic happens.

Big Cloud Fabric has purposely narrowed its focus to data center implementations – more specifically, a pod design that scales up to 16 racks. Each server connects to each of two TOR leaf switches, and those leaves in turn connect to each of the spine switches, with up to 6 spines. These spine switches can be connected to upstream services like firewalls, load balancers, and IPS devices. A primary and a backup controller run the show, and all the switches, preloaded with ONIE as a bootstrap image, reach out and get their OS from the controller, much like the concept of PXE boot.
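As a back-of-the-envelope sanity check on that pod design, here’s the wiring math in a few lines of Python. The 40-servers-per-rack figure is my own assumption for illustration; the two-leaves-per-rack and up-to-6-spines numbers come from the description above:

```python
def pod_links(racks, spines, servers_per_rack):
    """Count the links in a leaf/spine pod where every leaf
    uplinks to every spine and every server is dual-homed."""
    leaves = racks * 2                            # two TOR leaves per rack
    fabric_links = leaves * spines                # full leaf-to-spine mesh
    server_links = servers_per_rack * racks * 2   # each server cabled to both TORs
    return leaves, fabric_links, server_links

leaves, fabric, server = pod_links(racks=16, spines=6, servers_per_rack=40)
print(leaves, fabric, server)  # 32 192 1280
```

So a maxed-out pod is on the order of a couple hundred fabric links plus whatever the server count demands – the kind of number you want the controller, not a human with a spreadsheet, keeping track of.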

You should also be aware of the Big Switch concepts of Logical Segments (think VLANs for tenants) and Logical Routers (think layer 3 routing within the same tenant). The concept of a System Router allows for tenant-to-tenant communication, but otherwise, tenant A talks to tenant A’s stuff, and tenant B talks to tenant B’s stuff.
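That tenant isolation rule is simple enough to model in a few lines of Python. The function and tenant names are invented for illustration; it just captures the idea that same-tenant traffic flows freely while cross-tenant traffic needs a System Router:

```python
def can_talk(src_tenant, dst_tenant, system_router_links=frozenset()):
    """Toy reachability check for the tenancy model described above."""
    if src_tenant == dst_tenant:
        return True  # tenant A talks to tenant A's stuff
    # cross-tenant traffic only flows where a System Router joins the tenants
    pair = frozenset((src_tenant, dst_tenant))
    return pair in system_router_links

links = {frozenset(("A", "B"))}
print(can_talk("A", "A"))         # True: same tenant
print(can_talk("A", "B", links))  # True: a System Router connects A and B
print(can_talk("A", "C", links))  # False: no path between A and C
```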

Okay, now for the “you had my curiosity. But now you have my attention” part – what does it look like, how do I make it work? What cool features do I get with this?

Well, Rob Sherwood, CTO, classifies the world into three types of engineers, and Big Switch tailored the product to address each:

  • The “I am a network admin; you can take my CLI when you pry it from my cold dead hands.”
  • The “I have been working with vCenter most of my life; I don’t understand this CLI stuff, just give me a GUI to make this work.”
  • The “If there is not an API, then I don’t want to care; I’m from the DevOps end of the world.”

I tend to fall into category one, so I really enjoyed working the beta lab from the CLI. Here’s a couple of screenshots of what common tasks look like – notice nothing too scary or outlandish, pretty human-understandable. If that human happens to be a network engineer.

show link
Big Switch clearly spent some time on GUI design as well, below are examples of common tasks from that angle. I also recommend watching this video to get a good idea of product features and layout – as a bonus you even get to see how the product helps Rob troubleshoot while he’s doing the demo.


Lastly, also in the words of Rob Sherwood, “the REST API is treated as a first class citizen,” so whatever automation you want to start playing with that leverages the API shouldn’t feel like a bolt-on afterthought.
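Since I can’t reproduce Big Switch’s actual API schema from memory, here’s a hypothetical sketch of what driving a controller’s REST API from Python might look like. The endpoint path, payload fields, and session cookie are all invented for illustration – consult the real Big Cloud Fabric API documentation for the actual schema:

```python
import json
import urllib.request

def build_segment_request(controller, token, tenant, segment):
    """Assemble (but do not send) a POST that would create a logical
    segment. The /api/v1/data/segment path and field names are made up."""
    body = json.dumps({"tenant": tenant, "segment": segment}).encode()
    return urllib.request.Request(
        url=f"https://{controller}/api/v1/data/segment",  # invented path
        data=body,
        headers={"Content-Type": "application/json",
                 "Cookie": f"session={token}"},          # invented auth scheme
        method="POST",
    )

req = build_segment_request("bcf-controller.example.com", "abc123",
                            "tenantA", "web-segment")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would actually send it; omitted in this sketch.
```

The point is less the specific call and more the shape of the workflow: build a JSON body, authenticate, POST to the controller, and script the whole thing instead of clicking through a GUI.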

Below are some cool features of Big Cloud Fabric design that are worth mentioning, some I’ve already alluded to:

  • Hitless upgrades – you lose capacity during upgrades, but not connectivity, and we’re talking minutes, not hours
  • Zero touch switch configuration – no logging in box per box; the GUI will even tell you if you cabled something wrong
  • Control and data plane separation – packets keep moving when controllers go down, but note that switches newly added to the fabric are out of luck when all controllers are down; there is no default forwarding behavior
  • Service chaining to multiple devices – you can send packets out to multiple devices, like a cluster of firewalls or load balancers, and when they come back the controller recognizes that fact and sends them where they need to go next
  • Test path feature – track down the exact logical and physical path packets are taking and which policies they hit
  • OpenStack – plugin integration with OpenStack Neutron

So that’s Big Cloud Fabric in a nutshell. A software defined delicious nutshell. That you could actually eat.


*Never underestimate the power of giving engineers the ability to run your product in a lab scenario; nothing helps engineers “get it” better than doing the configuration themselves and seeing the product in action. The effort Big Switch put into the beta lab demo clearly showed – kudos to them.

**Big Switch spends a good amount of time in their presentations helping people get comfortable with the notion of OpenFlow and their particular architecture, including using comparisons to traditional supervisor/line card models, references to terms like VRFs/VLANs that relate to traditional networking, and debunking some common OpenFlow myths. Definitely worth watching the suite of recorded videos here for more details.

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

Published 9/29/2014


Posted on 2014/09/29 in Big Cloud Fabric, Big Switch



