
The Big Picture – Big Cloud Fabric

I will readily admit that the use of SDN as a buzzword makes me want to drive into oncoming traffic. Having endured so much SDN hype, I’m practically giddy when I get to see SDN fleshed out with actual real-life-you-can-touch-it’s-not-just-vaporware products.

Big Switch’s Networking Field Day 8 presentations focused on showing Big Cloud Fabric as practical, usable, and even better, here today – well, shipping at the end of the month, but you get the idea. I loved that Big Switch not only gave all the delegates access to an invite-only beta lab, but that the lab totally rocked for getting to know the product and laying hands on the technology that would be presented.*

Now, on to the Big basic ideas of Big Cloud Fabric (pun intended, of course): the main components include white box switches running the Switch Light OS, controllers speaking OpenFlow** with proprietary extensions down to those switches, and a REST API that sits at the heart of where the magic happens.

Big Cloud Fabric has purposely narrowed its focus to data center implementations, more specifically a pod design that scales up to 16 racks. Each server connects to two ToR leaf switches, and those leaves in turn connect to each of the spine switches, with up to 6 spines. The spine switches can be connected to upstream services like firewalls, load balancers, and IPS devices. A primary and a backup controller run the show, and all the switches, preloaded with ONIE as a bootstrap image, reach out and get their OS from the controller, much like the concept of PXE boot.

You should also be aware of the Big Switch concepts of Logical Segments (think VLANs for tenants) and Logical Routers (think Layer 3 routing within the same tenant). The concept of a System Router allows for tenant-to-tenant communication, but otherwise, tenant A talks to tenant A’s stuff, and tenant B talks to tenant B’s stuff.

Okay, now for the “you had my curiosity. But now you have my attention” part – what does it look like, how do I make it work? What cool features do I get with this?

Well, Rob Sherwood, CTO, classifies the world into three types of engineers, and Big Switch tailored the product to address each:

The “I am a network admin, you can take my CLI when you pry it from my cold dead hands.”
The “I have been working with vCenter most of my life, I don’t understand this CLI stuff, just give me a GUI to make this work.”
The “If there is not an API, then I don’t want to care, I’m from the DevOps end of the world.”

I tend to fall into category one, so I really enjoyed working the beta lab from the CLI. Here are a couple of screenshots of what common tasks look like – notice nothing too scary or outlandish, pretty human-understandable. If that human happens to be a network engineer.

[Screenshots: “show link” output and CLI configuration]
Big Switch clearly spent some time on GUI design as well; below are examples of common tasks from that angle. I also recommend watching this video to get a good idea of product features and layout – as a bonus you even get to see how the product helps Rob troubleshoot while he’s doing the demo.

[Screenshots: GUI views of the same common tasks]

Lastly, also in the words of Rob Sherwood, “the REST API is treated as a first-class citizen,” so whatever automation you want to start playing with that leverages the API shouldn’t make you feel like it was a bolt-on afterthought.

Below are some cool features of the Big Cloud Fabric design that are worth mentioning, some of which I’ve already alluded to:

  • hitless upgrades – you lose capacity during upgrades, but not connectivity, and we’re talking minutes, not hours
  • zero-touch switch configuration – no logging in box by box; the GUI will even tell you if you cabled something wrong
  • control and data plane separation – packets keep moving when controllers go down, but note that switches newly added to the fabric are out of luck while all controllers are down; there is no default forwarding behavior
  • service chaining to multiple devices – you can send packets out to multiple devices, like a cluster of firewalls or load balancers, and when they come back the controller recognizes that fact and sends them where they need to go next
  • test path feature – track down the exact logical and physical path packets are taking and what policies they hit
  • OpenStack – plugin integration with OpenStack Neutron

So that’s Big Cloud Fabric in a nutshell. A software defined delicious nutshell. That you could actually eat.

 

*Never underestimate the power of giving engineers the ability to run your product in a lab scenario; nothing helps engineers “get it” better than doing the configuration themselves and seeing the product in action. The effort Big Switch put into the beta lab demo clearly showed, kudos to them.

**Big Switch spends a good amount of time in their presentations helping people get comfortable with the notion of OpenFlow and their particular architecture, including using comparisons to traditional supervisor/line card models, references to terms like VRFs/VLANs that relate to traditional networking, and debunking some common OpenFlow myths. Definitely worth watching the suite of recorded videos here for more details.

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

Published 9/29/2014

 

Runt Post: Quality troubleshooting, what it looks like

In my previous post, I shared some of the cool stuff ThousandEyes is doing with VoIP.  I also wanted to draw attention to this cool video of Mohit Lad, co-founder and CTO of ThousandEyes, using his own product to troubleshoot an outage event on the fly: http://vimeo.com/105805525

There are very few ways to show off your product better than this type of demonstration. Mohit troubleshoots with expertise, clearly in his element. The tools cater well to his methodical troubleshooting process and both are quite impressive. Plus the routing loop he finds is just darn cool.


Watch it – you’ll love watching a master at work. I know I did.

Published: 9/26/2014

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

 

 


All Eyes on voice…

ThousandEyes announced something they called “new and shiny” at Networking Field Day 8 and it definitely caught my attention – not just because the word shiny was used* – but also because voice was the target of the announcement. I am quite used to being the literal and figurative red-headed stepchild at networking events due to my involvement in that oh-so-unsavory world of voice** – but this Networking Field Day, voice got some well-deserved attention.

ThousandEyes makes a product that, via the use of Enterprise and Cloud Agents – active probes set out and about in your network, in SaaS networks, and around the globe – lets you gather some extremely detailed information regarding network performance, even when you don’t own all the pieces of the infrastructure along the path.

ThousandEyes is now leveraging that capability to ease the pain that is voice troubleshooting.  Using probes that emulate RTP traffic, you can gather data that can be used for capacity and voice quality planning purposes, as well as for troubleshooting voice performance issues.
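
If you have ever rolled your own version of this idea with IOS IP SLA, it will feel familiar: synthetic, codec-shaped UDP probes marked with the DSCP of your choosing and scored for loss, jitter, and MOS. Purely for comparison, here’s a minimal do-it-yourself sketch – addresses, ports, and timers are made up, and this is the generic IOS feature, not what the ThousandEyes agents actually run under the hood:

! on the far-end router
ip sla responder
!
! on the sending router (tos 184 = DSCP EF)
ip sla 10
 udp-jitter 10.10.20.5 16384 codec g711ulaw
 tos 184
 frequency 60
ip sla schedule 10 life forever start-time now

The obvious difference is that probes like these only see the endpoints you own, while the ThousandEyes agents pair the same style of measurement with hop-by-hop path visibility.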

Say you are planning to bring a new site online and route voice to and from this new branch.  Now you can collect detailed information that shows you how much this will suck (or perhaps not suck) *before* you go and purchase all the equipment for your design.

Say you are having trouble with voice between already established sites. This solution can help you identify capacity issues, jitter issues, and even DSCP remarking issues.  That last one really makes me smile.  How often is voice wrecked just because consistent QoS isn’t applied across all devices in the network?  No need to answer that out loud, we all know…

So here’s an idea of what voice test creation looks like. You can see there is a codec selection option, a DSCP setting, and even a de-jitter buffer option that can be tweaked for the testing.

[Screenshot: voice test creation options]

Below is an idea of what kind of data is being presented back. I really like that you can jump to the BGP path visualization and other layer tools just as usual with the product. Feel free to watch the short video here for the full show and tell.

[Screenshot: voice test results]

Now it’s important to remember that this isn’t actually running tests on “real” calls being made in your network. While ThousandEyes makes a point of crafting probes to look and feel as much like actual application traffic as possible, it’s still not a live call. I did ask about workflow integration with a tool like Wireshark and got a to-be-continued type answer. In my vision, you would set alerts when thresholds were met that would kick off capture processes of live calls. Then you would correlate the .pcap files with this data to get a complete picture of the network. That way when the Director’s call to his beloved Aunt Erna drops and he wants to blame your really expensive phone system, you will have plenty of evidence to suggest that Aunt Erna just hasn’t mastered the art of speaker phone on her cell. Talk about a happy world.

Published 9/22/2014

*using the word shiny is a pretty good way to get my attention.  Using actual things that are shiny, even better.

**hating on voice is a well-known pastime for those engineers too afraid to touch it. ;)

Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

 


LinkSprinter as a learning tool

At Cisco Live 2014, Tech Field Day provided a great opportunity to hear about Fluke Networks’ TruView. Fluke Networks also passed out cute* little LinkSprinter 200s, a tool which I have found to be incredibly handy in teaching IT newbies to troubleshoot basic client connectivity.

The LinkSprinter does all the basic things you could manage to accomplish with a laptop and/or a test phone, but it does so in a convenient little form factor, and more importantly – in a language new help desk interns can understand: light-up, easy-to-read icons.

Now before I make this device sound like it’s designed just for beginners and/or idiots to use, I have to say I find the tool to be a huge help when I need to troubleshoot basic connectivity issues, especially when the network drop in question requires a ladder just to get to it (please hold all short jokes and applause till the end). The LinkSprinter pretty much does away with the need to balance a laptop on a ladder or find a ridiculously long patch cable just to run basic connectivity tests.

What I really like, though, is how this tool helps new-to-IT technicians learn the troubleshooting process with more confidence and certainty. Not surprisingly, interns tend to goof up things like cabling and configuring their laptops correctly for testing IP connectivity. They forget that 169.254.x.x isn’t really a good thing (no matter how many times you tell them), the concept of a default gateway is totally foreign, and PoE just confuses the heck out of them.

The LinkSprinter runs these checks (and more), and by removing the initial confusion of newbies setting up their own test devices correctly, the new guy/gal can focus on correlating actual symptoms with an accurate diagnosis of what is happening, instead of mixing up the initial problem with self-created problems. Now it goes without saying that if you are going to use this tool with green engineers, you should proceed to explain to them what it is that the LinkSprinter is doing and how it is reaching its conclusions**. I have found the ability to connect to the device over Wi-Fi and see the test results to be an excellent tutor in this.

Also, once the intern starts getting their feet under them, showing them how to run the same types of tests with a laptop or other networking tools shouldn’t be overlooked. The LinkSprinter is an excellent primer for these situations but shouldn’t replace good old-fashioned training and know-how.
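
As a starting point for that hand-off, the switch-side equivalents of what the LinkSprinter reports are all standard show commands. Assuming a Cisco switch and a made-up interface as the example:

show interfaces GigabitEthernet1/0/10 status
show interfaces GigabitEthernet1/0/10 switchport
show power inline GigabitEthernet1/0/10

The first gives link state and speed/duplex, the second the access and voice VLAN assignments, and the third whether PoE is being delivered and how much – the same facts the LinkSprinter boils down to icons.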

A few visuals to give you an idea of what this thing looks like physically and from the web interface – I like that it can tell you the switchport, switch name, and VLAN assignments, as well as the DHCP server address:

[Photos: the LinkSprinter itself and its web interface results]

And if you create an account online, you can keep track of your past tests, which is pretty snazzy as well:

[Screenshot: LinkSprinter cloud test history]

*totally valid descriptor of networking gear and far less disturbing than calling network hardware sexy

**you can also tell them it’s dark, voodoo magic, especially if they are going into voice work; it will get them used to standard voice processes.

Disclaimer: While Fluke Networks was very generous to grant me a shiny LinkSprinter200 and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

Published 8/7/2014

 


When simple changes aren’t…tales of MGCP to h.323 conversions

Maintenance windows have a way of reminding you that simple changes aren’t always so simple.

Take a recent after-hours task of switching over some MGCP gateways to h.323. The primary inbound gateway was already h.323 and the MGCP gateways were – well, MGCP – so it made sense to make everything uniform and do the conversion.

So I changed all my calls to route in and out the primary gateway, which was already h.323, and set about making my changes. In case you haven’t done this before, here is a brief outline of the process – not meant to be a step-by-step, all-inclusive list, just a general idea of the process (there’s a rough config sketch after the gateway-side list).

On the gateway side:

-set isdn switch type on the router (you can get this from the MGCP configuration in CUCM)
-bind h323 to source interface
-create inbound and outbound dial peers on the router
-remove MGCP bind command and other MGCP configuration (you will need to shut down the voice port to do this)
-reconfigure the T1 controller
-confirm MULTIPLE_FRAME_ESTABLISHED
-put in place any translation patterns required
-add commands for calling name and/or Facility IE support if required
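
To make that outline a little more concrete, here’s a rough sketch of the router-side pieces. The switch type, interface numbers, IP addresses, dial peer tags, and patterns below are all made up for illustration; use whatever your environment and carrier actually call for.

! clean up the old MGCP configuration (shut the voice port first)
no mgcp
no ccm-manager mgcp
!
isdn switch-type primary-ni
!
controller T1 0/1/0
 pri-group timeslots 1-24
!
! bind H.323 signaling to a consistent source address
interface GigabitEthernet0/0
 h323-gateway voip interface
 h323-gateway voip bind srcaddr 10.1.1.10
!
! outbound to the PSTN
dial-peer voice 100 pots
 destination-pattern 9T
 port 0/1/0:23
!
! inbound to CUCM
dial-peer voice 200 voip
 destination-pattern 555....
 session target ipv4:10.1.1.5
 codec g711ulaw
 dtmf-relay h245-alphanumeric

A quick show isdn status afterward should report MULTIPLE_FRAME_ESTABLISHED on the D-channel before you bother with test calls.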

On the Cisco Unified Call Manager side:

-create the gateway 
-add the gateway to a route group
-add the gateway to a test route list
-create a few route patterns to send calls out the gateway
-add gateway to production route list(s)

My plan was proceeding perfectly up until I needed to send a long distance call out one of the recently converted gateways. The call received a reorder tone from the carrier even though the debug isdn q931 output showed my call going out with the proper digit format. I knew long distance calls had been working out this gateway before, so I was pretty sure it had to be something with my configuration even though it looked like a carrier issue.

After comparing configurations with the current h.323 gateway (a PRI to the same carrier), uttering a lot of fairly creative yet non-repeatable curses, and wasting an hour of my life with a clueless carrier tech who swore I wasn’t sending the 1 required for long distance, the obvious finally hit me. And it was annoyingly painful.

See, I had assumed that outbound long distance via the primary h.323 gateway had been tested at installation, as this is pretty much standard and *should* be part of any voice gateway install process. Once I wised up and tested that theory, however, I realized that all long distance calling to the carrier was broken from every gateway, including the old h.323 one which I hadn’t changed anything on. Knowing this couldn’t be the result of my conversion efforts, I was now able to think through what the real source of the issue could be.

When you send the carrier the right digit format and yet they emphatically insist you most definitely aren’t, you are likely hitting an issue I blogged about in my first ever post. In some cases, a carrier switch reads your ISDN plan type and takes digit-stripping action based on it. They seem to be completely unaware they are even doing it, so don’t expect the carrier to ever discover this is your problem. The solution is simply to set a translation pattern that changes the plan type to UNKNOWN, and then the carrier switch doesn’t try to do you any favors by manipulating the digits. Problem solved.
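
For reference, one gateway-side way to do that looks roughly like this – the rule number and profile name are made up, and you would apply the profile to whichever dial peers send long distance to that carrier:

voice translation-rule 10
 rule 1 /\(.*\)/ /\1/ type any unknown plan any unknown
!
voice translation-profile PLAN-UNKNOWN
 translate called 10
!
dial-peer voice 100 pots
 translation-profile outgoing PLAN-UNKNOWN

The digits pass through untouched; only the type and plan fields on the outgoing call get rewritten to unknown, which keeps the carrier switch from “helpfully” stripping anything.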

I am now adding “test inbound and outbound calling from ALL gateways before making changes” to my checklist. Pass the bourbon please.

Bonus material:
At the time, I didn’t think about the 8945 phones (and newer phones with video capabilities) that often require you to add a specific command to the ISDN voice port; otherwise outbound calls from the device fail. I discovered this while finishing up my testing plan and was able to fix it before anyone noticed. A very good reason to have a thorough testing plan no matter how small the changes being made. Here’s a link to a forum post on this type of issue and the command you need to fix it:

voice-port 0/1/0:23
bearer-cap speech

Also, your lucky day, some commands for calling name when the carrier is using the Facility IE:

voice service voip
 h323
  h225 display-ie ccm-compatible
!
interface Serial0/1/0:23
 isdn supp-service name calling
 isdn outgoing display-ie

Published 7/2/2014

 


The Sparkly Side of Cisco Live 2014

In case you were living under a rock in the networking world, Cisco Live 2014 happened last week and in a big way.  Thousands of geeks took over San Francisco and it’s safe to say a good time was had by all!

A few things made my week in particular quite excellent, one of those being the Cisco Champion(s) Program. The Champion events were excellent opportunities to get to know fellow engineers and have a good time laughing it up with friends.  A special thanks to my fellow ginger and partner in plotting world domination, Amy Lewis (@commsninja), who definitely leveled up her already extraordinary unicorn wrangling skills. Her own talk was also fantastic and you should check it out here.


Tech Field Day also organized some great events, including round table discussions for us blogger/social media types.  These pictures don’t do justice to the staggering amount of brain power gathered around the table, but you can check out the videos here.


I’d be remiss not to give an extra special thanks to these two guys, Tom (@networkingnerd) and Stephen (@sfoskett) for all the work they do to build community in networking.

 


This Cisco Live was also the week of fabulous tweeps bringing me all sorts of fun and shiny presents.  From a Batgirl t-shirt from Jeff (@fryguy_pa) to actual sparkly bats from Tom (@networkingnerd) and Denise (@denisefishburne), I love that this group laughs and jokes together.  A sparkly PVDM from Erik (@ucgod) and a tiara (also from @denisefishburne) rounded off the gifts. The opportunity to dub Pete Lumbis (@tacCCDE) as the king of TAC, complete with a tiara and a sword, certainly made my week.



Last but not least, the awesome events organized by @ciscolive @kathleenmudge @rbakker and the rest of the Cisco Social Media team allowed for some quality time with awesome engineers. These are just a few pics below. From the looks of it, by the end of the week dignity had clearly taken a vacation, not that there was much of it to start with in this crowd.


Good times with truly good friends. Can’t wait till next year!

Special shout out to Kale Blankenship (@vcabbage) for creating a CLUS set of Cards Against Humanity. This one is my favorite:

[Photo: my favorite card from the CLUS Cards Against Humanity deck]

And congratulations to @bcjordo who got his CCIE digits on Monday, way to go!

 

 

Threats of Tornados and Redistribution, Oh my: The final day of Narbik’s Training Class

Today started out as a typical day in Narbik’s class – lots of labbing and mind-bending topologies, today’s subject being QoS. Until today, I have never sat through a lecture on or read a book about QoS that wasn’t just a slight step up from a root canal in terms of pain and frustration. However, today’s lab demos really helped clarify what the various methods of QoS were actually doing, and as a bonus, I didn’t feel an overwhelming desire to be anesthetized while listening to the explanation. My loose grasp on previously confusing terms like single-rate, two-color policers versus dual-rate, three-color policers was significantly strengthened.
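
If those policer terms are new to you, here’s a bare-bones MQC illustration of the difference – the rates and marking choices are arbitrary. A single-rate, two-color policer only knows conform and exceed, while a dual-rate, three-color policer adds a second rate (PIR), so traffic between CIR and PIR can be marked down instead of dropped and only traffic above PIR hits the violate action:

policy-map POLICE-1R2C
 class class-default
  police cir 2000000
   conform-action transmit
   exceed-action drop
!
policy-map POLICE-2R3C
 class class-default
  police cir 2000000 pir 4000000
   conform-action transmit
   exceed-action set-dscp-transmit af11
   violate-action drop
!
interface GigabitEthernet0/1
 service-policy output POLICE-2R3C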

We then made an attempt to talk redistribution. Now apparently Narbik has a history with redistribution: when he tried to talk about it in Poland, they ended up with a fresh coat of snow on the ground, and today Dallas nearly took a tornado for it. The warning sirens went off, literally, and we all got to trudge down to the first floor of the hotel and cram into a small space with a few hundred other guests. As Narbik himself said, “sh** happens when you do redistribution.” Hilarious and so true! When we got back to class, we looked at about a half dozen scenarios where redistribution sent a perfectly good network into the crapper. Incredibly fascinating to see, and a start toward really understanding the vulnerabilities inherent in the process.
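
For the curious, the classic guard-rail in those scenarios is tagging redistributed routes so they can never be redistributed back into the protocol they came from at a second boundary point. A generic sketch of the idea – the protocol instances, tags, and metrics here are mine, not one of Narbik’s lab topologies:

route-map EIGRP-TO-OSPF deny 10
 match tag 10
route-map EIGRP-TO-OSPF permit 20
 set tag 20
!
route-map OSPF-TO-EIGRP deny 10
 match tag 20
route-map OSPF-TO-EIGRP permit 20
 set tag 10
!
router ospf 1
 redistribute eigrp 100 subnets route-map EIGRP-TO-OSPF
!
router eigrp 100
 redistribute ospf 1 metric 10000 100 255 1 1500 route-map OSPF-TO-EIGRP

Routes get tagged on the way into the other protocol and refused re-entry on the way back, which takes care of the most common feedback loops those scenarios demonstrate.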

Now a typical Narbik class would not have ended after that; stories of Thursday nights are legendary, the night usually ending about 4:30am or so I am told. Since we missed our window for the last mock lab attempts while slumming around in the tornado shelter, however, we decided to finish the lectures for the entire week, and class came to an early close at a mere 7pm. Fortunately all the students get an opportunity to make up the mock lab attempt on their own within the next couple of months, so it all works out.

I can honestly characterize this class as one of the best educational experiences I have had. Much of this stems from the philosophy of learning espoused by Narbik himself, and it shows in the quality of his teaching and methods. His pronounced idea that he is creating and shaping better engineers, not just folks who can pass a test, really resonates with me. He talked about how engineers are like artists and how the way we do our work is our signature. He went on to say how we should pride ourselves in the quality of our work even when no one is watching or grading it. I just couldn’t agree more.

On that note, I’d like to thank Eman Conde and CCIE Flyer one last time for giving me the opportunity to sit the class – I couldn’t be more grateful for the experience!

Published 5/8/2014

Disclaimer: While Eman and CCIE Flyer were very generous to grant me a seat in this class and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.

 

 