Seeing Tetration in action – NFD16

One of the highlights of Network Field Day 16 was a Cisco Tetration presentation by Tim Garner. Launched by Cisco last June, Tetration is a heavy-lifting, data-crunching platform that soaks up telemetry on all your network packets, runs machine learning algorithms on that data, and produces security policy templates based on the flow information received. This process gives engineers in-depth analytics and an impressive level of visibility, and supplies automagically crafted baseline security policies. The latter truly shines when you are working with developers and application owners who have absolutely no clue what server needs to talk to what other server(s), much less what ports are required to do so securely.

With Tetration, you can use hardware sensors in the form of Nexus 9K switches with an -X in the SKU, software agents that can be installed just about anywhere, or a combination of both. These sensors look at every single packet going in and out and generate telemetry packets that get shuffled off to Tetration, where the real magic happens.

In addition to software agents and hardware sensors that natively generate Tetration metadata packets, you can also stream data from load balancers, firewalls, and other networking devices. Some devices, such as those from Citrix and F5, are natively supported, but others might require a little work on your part to get the data into a format that Tetration will accept – JSON being one of the acceptable formats.
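
If you're curious what that reshaping might look like, here is a minimal Python sketch that turns a device's flow data into a JSON record. The field names are purely illustrative assumptions on my part, not Tetration's actual ingest schema.

```python
import json

# Hypothetical flow record pulled from a load balancer or firewall.
# Field names are illustrative only, not Tetration's actual schema.
raw_flow = {
    "src": "10.1.20.15",
    "dst": "10.1.30.40",
    "sport": 51744,
    "dport": 443,
    "proto": "tcp",
    "bytes": 18342,
}

# Reshape into a JSON document before handing it off for ingestion.
record = json.dumps({
    "source_ip": raw_flow["src"],
    "destination_ip": raw_flow["dst"],
    "source_port": raw_flow["sport"],
    "destination_port": raw_flow["dport"],
    "protocol": raw_flow["proto"],
    "byte_count": raw_flow["bytes"],
})

print(record)
```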

Another interesting option for getting metadata into Tetration is the use of virtual machines set up as ERSPAN destinations. Each VM can take in up to 40 gig of traffic, generate telemetry data for that traffic, and stream the data to the Tetration cluster. Tetration can also ingest NetFlow data by using this VM method as a NetFlow receiver. NetFlow data is sampled, though, so Tetration would not see metadata on every packet as it does with the other options listed.

Once the data gets to the Tetration cluster, the snazzy machine learning algorithms built into the box start telling you cool things like which hosts are talking to which hosts, what “normal” network behavior looks like, and, by extension, what abnormal network behavior would look like.

If your development servers should never be talking to your production servers, Tetration can tell you not only whether that's happening now, but also whether that behavior changes in the future. Using a Kafka broker,* you can have Tetration feed notifications to applications such as Splunk or Phantom, which can in turn communicate with hardware and software devices that perform actions such as host isolation when anomalous traffic is detected.
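
For a rough idea of what consuming those notifications could look like, here is a small sketch using the kafka-python library. The broker address, topic name, and alert fields are assumptions for illustration only, not Tetration's documented output.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Broker address and topic name are assumptions for this sketch,
# not Tetration's documented configuration.
consumer = KafkaConsumer(
    "tetration-alerts",
    bootstrap_servers="kafka.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    alert = message.value
    # The alert fields below are hypothetical; adjust to whatever
    # your broker actually publishes.
    if alert.get("severity") == "high":
        print(f"Anomalous flow: {alert.get('src')} -> {alert.get('dst')}")
        # Hand off to Splunk, Phantom, or your own isolation workflow here.
```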

The automatic whitelists built by Tetration will require some care and feeding by an engineer. Importing policies from ACI is also an option. Tetration-generated whitelists can be reviewed and tweaked, and auditing what will be blocked before implementing or changing a policy is an excellent, job-preserving idea. Checking policies against the four to six months of network traffic data stored by the cluster gives you a good sense of what to expect when enforcement is actually turned on. That said, you can also run your policies in audit mode for a few months to see what traffic hits the crafted policies.
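
To illustrate what that kind of audit is conceptually doing (a generic sketch, not how Tetration actually implements it), here is a bit of Python that checks historical flow records against a proposed whitelist and reports what would have been blocked.

```python
# A proposed whitelist: (source group, destination group, destination port)
# tuples that should be allowed. Groups and flows are made up for illustration.
whitelist = {
    ("web", "app", 8080),
    ("app", "db", 5432),
}

# Historical flow records, as flow telemetry might summarize them.
historical_flows = [
    {"src_group": "web", "dst_group": "app", "dport": 8080},
    {"src_group": "app", "dst_group": "db", "dport": 5432},
    {"src_group": "dev", "dst_group": "db", "dport": 5432},  # uh oh
]

# Report anything the proposed policy would have blocked.
for flow in historical_flows:
    key = (flow["src_group"], flow["dst_group"], flow["dport"])
    if key not in whitelist:
        print(f"Would be blocked: {key}")
```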

If you want to see Tetration in action, I highly recommend the video below. The demo starts at about the 16-minute mark, but Tim Garner is such an excellent presenter that you'll be glad you watched the whole thing.


*The Kafka broker service was new to me; basically, it's a notification message bus. I used the links below to get the idea:

https://sookocheff.com/post/kafka/kafka-in-a-nutshell/

https://kafka.apache.org/quickstart

https://www.cloudkarafka.com/blog/2016-11-30-part1-kafka-for-beginners-what-is-apache-kafka.html


Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.


Published 10/6/2017

Preserving and managing intent using Apstra AOS

Apstra’s Networking Field Day 16 presentations highlighted key issues engineers face every day. The traditional ways of spinning up configurations and validating expected forwarding behavior fall short of the needs of networks of any size. For anyone who has encountered a network where the documentation was woefully outdated or practically non-existent, and whose design requirements and purpose were deduced purely from rumors and vague supposition, Apstra offers AOS and its Intent Store.

Apstra's AOS is more than configuration management: the idea is not just to build consistent configurations abstracted from the specific hardware, but also to provide a controlled way to address and manage network design revisions throughout the network's life cycle.

Changing business needs force modifications to device dependencies and performance requirements. Apstra addresses these revisions by maintaining a single source of truth – the documented intent* – and providing tools to validate that intent. As Derick Winkworth said, “it's about moving the network [design] from being all in someone's head” to making the design something consistent, tangible, and, best of all, something that can be queried and, by extension, verified.

Under the covers, Apstra makes use of graph theory, and for those who'd rather not Google that, the upshot is that nodes get added and relationships get tied to those nodes. This structure allows for a flexible schema that lends itself to ever-changing quantities of connections and to new types of interdependencies between objects.

For example, Apstra added the ability to create links between network nodes and the applications that run through them. This is done through some DevOps wizardry, which this video highlights well, and the additional relationship mappings allow the network operator to query for application paths and diagnose traffic flow issues.
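
To make the graph idea a little more concrete, here is a tiny sketch using the networkx library. The nodes and relationships are invented, and this is not Apstra's actual data model or API; the point is simply that once the design lives in a graph, “which servers does this app touch?” and “what path does its traffic take?” become queries instead of tribal knowledge.

```python
import networkx as nx  # pip install networkx

# An invented slice of a design graph: an application, the servers it
# runs on or depends on, and the switches those servers attach to.
g = nx.Graph()
g.add_edge("app:billing", "server:web01", relationship="runs_on")
g.add_edge("app:billing", "server:db01", relationship="depends_on")
g.add_edge("server:web01", "leaf1", relationship="attached_to")
g.add_edge("leaf1", "spine1", relationship="uplink")
g.add_edge("spine1", "leaf2", relationship="uplink")
g.add_edge("leaf2", "server:db01", relationship="attached_to")

# Which servers does the billing application touch?
print(sorted(g.neighbors("app:billing")))  # ['server:db01', 'server:web01']

# What network path would its traffic take between those servers?
# Query only the infrastructure nodes so the app node isn't used as a shortcut.
infra = g.subgraph(n for n in g if not n.startswith("app:"))
print(" -> ".join(nx.shortest_path(infra, "server:web01", "server:db01")))
# server:web01 -> leaf1 -> spine1 -> leaf2 -> server:db01
```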

For a short how-is-this-useful-to-me take, I highly recommend this explanation by Damien Garros on using Apstra to shorten the time it takes to deploy crazy numbers of security zones, validate them, and monitor them. Snazzy stuff for any engineer who has ever faced a security auditor.


Disclaimer: While Networking Field Day, which is sponsored by the companies that present, was very generous to invite me to this fantastic event and I am very grateful for it, my opinions are totally my own, as all redheads are far too stubborn to have it any other way.


*Intent is all the buzz these days; back in my day we called it policy or design requirements. But I'll try to avoid waving my arms and shouting “get off my LAN”… 🙂

Published 9/25/2017