Thursday, February 16, 2017

Test Driven Infrastructure - Validating Layer 1 Networking with Nagios

Previously we've talked about how we use Nagios / Icinga for three broad types of monitoring at LMAX: alerting, metrics, and validation. The difference between our definitions of alerting and validation is a fine one; it has more to do with the importance of the state of the thing we are checking and the frequency with which we check it. An example of what I consider an "Alert" is whether Apache is running on a web server. The version of Apache, on the other hand, is something I might "Validate" with Nagios as well, but I wouldn't bother checking it every few minutes, and if there was a discrepancy I wouldn't react as fast as if the entire Apache service was down. It's a loose distinction, but a distinction nonetheless.

The vast majority of our network infrastructure is implemented physically in a data centre by a human being. Someone has to go and plug in all those cables, and there's usually some form of symmetry, uniformity and standard to how we patch things that gives Engineers like me warm fuzzy feelings. Over many years of building our Exchange platforms we've found that going back to correct physical work costs a lot of time, so we like to get it right the first time, or be told very quickly if something is not where it's expected to be. Thus enters our Test Driven Networking Infrastructure: our approach uses Nagios / Icinga as the validation tool, Puppet as the configuration and deployment engine, LLDP as the protocol everything runs on top of, and Patch Manager as the source of truth.

Validating Network Patching

I've written about our Networking Puppet module before and how we use it to separate our logical network design from its physical implementation. The same Puppet Networking module also defines the monitoring and validation for our network interfaces. Specifically, this is defined inside the Puppet class networking::monitoring::interface, which has a hard dependency on LMAX's internal Nagios module, which unfortunately is not Open Source at this time (and would take a long blog post of its own to explain).

Since you can't see the code, I'll skip over the implementation and go straight to the result. Here is what our Puppet Networking module gives us in terms of alerts:



Pretty self-explanatory. Here's the end result of our networking infrastructure validation, with server names and switch names obfuscated:



However a green "everything-is-ok" screenshot is probably not a helpful example of why this is so useful, so here are some examples of failing checks from our build and test environments:




To summarise the above, our validation fails when:
  • we think an interface should be patched somewhere, but it's not up or not configured
  • an interface is patched into something different from what it should be
  • an interface is up (and maybe patched into something) but not in our source of truth
Next I'll describe how the Nagios check works. Combined with the provisioning process described below, these checks give us Test Driven Infrastructure that helps us quickly correct physical patching errors.

How The Nagios Check Actually Works

The idea behind the check is for the Nagios server to first retrieve what the server says the LLDP neighbour of each interface is, then compare this with its own source of truth and raise an appropriate OK, WARNING or CRITICAL check result.

Nagios knows which interfaces to check because Puppet describes every interface we want to monitor. Nagios makes an SNMP call to the server and gets back CSV output that looks like this:

em1,yes,switch01,1/31,10,Brocade ICX6450-48
em2,yes,switch02,1/31,10,Brocade ICX6450-48

The fields are:
  1. interface name
  2. link
  3. remote LLDP device name
  4. remote LLDP device port
  5. VLAN
  6. remote LLDP device model
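The check script itself isn't reproduced in this post, but as a rough illustration, a row in that format could be parsed into named fields with something like the sketch below (the function and field names are mine, not from the actual LMAX code):

from collections import namedtuple

# One LLDP neighbour record, in the field order listed above.
LldpNeighbour = namedtuple(
    "LldpNeighbour",
    ["interface", "link", "remote_name", "remote_port", "vlan", "remote_model"],
)

def parse_lldp_row(row):
    """Turn 'em1,yes,switch01,1/31,10,Brocade ICX6450-48' into a record."""
    fields = row.strip().split(",")
    if len(fields) != 6:
        raise ValueError("unexpected LLDP CSV row: %r" % row)
    return LldpNeighbour(*fields)

print(parse_lldp_row("em1,yes,switch01,1/31,10,Brocade ICX6450-48"))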
A version of this script is up on GitHub here. It contains a lot of conditional logic to handle the LLDP information from different vendors' hardware. For example, certain Brocade switches don't mention the word "Brocade" anywhere, so we infer that from the MAC address. Different switches also use different fields for the same information, and the script parses the right field based on the remote side's model type, e.g. Brocades and Linux kernels put the Port ID in the "descr" field but other devices put it in the "id" field.
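To give a flavour of that conditional logic, here is a hedged sketch - not the real script, and the dictionary keys, model strings and MAC prefix are illustrative assumptions - of how the remote port might be chosen based on what the remote device reports:

# Example OUI prefix used to recognise a Brocade chassis by MAC address;
# treat this value as an assumption, not a definitive list.
BROCADE_OUI_PREFIXES = ("74:8e:f8",)

def remote_port(neighbour):
    """Pick whichever LLDP field holds the port, based on the remote model.

    'neighbour' is assumed to be a dict of raw LLDP attributes such as
    'model', 'chassis_mac', 'port_descr' and 'port_id'.
    """
    model = neighbour.get("model", "")
    chassis_mac = neighbour.get("chassis_mac", "").lower()

    # Some switches don't mention their vendor, so fall back to the MAC.
    if not model and chassis_mac.startswith(BROCADE_OUI_PREFIXES):
        model = "Brocade (inferred from MAC)"

    # Brocades and Linux kernels report the port in the "descr" field,
    # while other devices report it in the "id" field.
    if "Brocade" in model or "Linux" in model:
        return neighbour["port_descr"]
    return neighbour["port_id"]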

The Nagios check cross-references this data against its own records, the "source of truth" file, which looks like this:

server01,em1,switch01,0/31
server01,em2,switch02,0/31
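Bringing the two sources together, the comparison boils down to something like the sketch below. The exact message wording, and which cases are WARNING versus CRITICAL, are my assumptions, but the three failure modes match the list earlier in this post:

# Standard Nagios plugin exit codes.
OK, WARNING, CRITICAL = 0, 1, 2

def check_interface(iface, actual, expected):
    """Compare one interface; actual/expected are (switch, port) or None."""
    if expected and not actual:
        return CRITICAL, "%s should be patched to %s %s but has no LLDP neighbour" % (
            iface, expected[0], expected[1])
    if expected and actual != expected:
        return CRITICAL, "%s is patched to %s %s, expected %s %s" % (
            iface, actual[0], actual[1], expected[0], expected[1])
    if actual and not expected:
        return WARNING, "%s sees %s %s but is not in the patch plan" % (
            iface, actual[0], actual[1])
    return OK, "%s is patched as expected" % iface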

The Nagios check script has some smarts built in to handle logical implementations that don't model well in Patch Manager. One of the complexities is stacked switches. The LLDP information from the server will describe a stacked switch port as something like "3/0/10", where 3 is the Stack ID. In Patch Manager it would get confusing if we labelled every device in a stack the same, so instead we name them switch1-3, where the "-3" indicates the stack number. The Nagios script looks for this naming convention and parses it as the Stack ID.
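A minimal sketch of that stack handling, assuming the conventions described above, might look like this:

def split_stacked_port(lldp_port):
    """Split a stacked port like '3/0/10' into (stack_id, port)."""
    parts = lldp_port.split("/")
    if len(parts) == 3:
        return parts[0], "/".join(parts[1:])
    return None, lldp_port

def stack_id_from_name(patch_manager_name):
    """Split a Patch Manager name like 'switch1-3' into (base_name, stack_id)."""
    if "-" in patch_manager_name:
        base, stack = patch_manager_name.rsplit("-", 1)
        if stack.isdigit():
            return base, stack
    return patch_manager_name, None

print(split_stacked_port("3/0/10"))      # ('3', '0/10')
print(stack_id_from_name("switch1-3"))   # ('switch1', '3')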

Our TDI Workflow

The Nagios checks are the critical part of a much larger workflow which gives us Test Driven Infrastructure when we provision new machines. The workflow roughly follows the steps below, and I go into each step in more detail in the following sections:
  1. Physical design is done in Patch Manager, including placement in the rack and patching connections
  2. Connections are exported from Patch Manager into a format that our Nagios servers can parse easily
  3. Logical design is done in Puppet - Roles are assigned and necessary data is put in Hiera
  4. Hardware is physically racked and the management patches are put in first
  5. Server is kickstarted and does its first Puppet run; Nagios updates itself and begins to run checks against the new server
  6. Engineers use the Nagios checks as their test results, fixing any issues
As you might have deduced already, the workflow is not perfectly optimised: the "tests" (Nagios checks) come from Puppet, so a machine needs to be installed before you get any test output. We also need at least some patching done in order to kickstart the servers before we can get feedback on any of the other patching.

Physical Design in Patch Manager

We use Patch Manager's Software-As-A-Service solution to model our physical infrastructure in our data centres. It is our source of truth for what's in our racks and which connections exist between devices. Here's an example of a connection (well, two connections really) going from Gb1 in a server, through a top of rack patch panel, and into a switch:



Exporting Patch Manager Connections

Having all our Nagios servers continually reach out to the Patch Manager API to search for connections would be wasteful, considering that day to day the data in Patch Manager doesn't change much. Instead we export the connections from Patch Manager and at the same time filter out any intermediate patch panels or devices we don't care about - we only want to know about the two ends of each connection. Each Nagios server has a copy of the "patchplan.txt" file, an easy-to-parse CSV that looks like this:

server01,em1,switch01,0/31
server01,em2,switch02,0/31
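The real export format from Patch Manager isn't shown here, so the sketch below just assumes each connection is available as an ordered list of hops; the filtering step keeps the two end devices and drops anything in between, producing rows in the patchplan.txt format:

def collapse_connection(hops, ignore_types=("patch panel",)):
    """hops: ordered [(device, port, device_type), ...] for one connection."""
    endpoints = [h for h in hops if h[2].lower() not in ignore_types]
    first, last = endpoints[0], endpoints[-1]
    return "%s,%s,%s,%s" % (first[0], first[1], last[0], last[1])

hops = [
    ("server01", "em1", "server"),
    ("panel-a01", "12", "patch panel"),
    ("switch01", "0/31", "switch"),
]
print(collapse_connection(hops))  # server01,em1,switch01,0/31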


Logical Design In Puppet

As part of creating the new server in Puppet, the networking configuration is defined and modelled in line with what has been planned in Patch Manager. So, for example, if a Dell server has its first two onboard NICs connected to management switches in Patch Manager, somewhere in Puppet a bonded interface will be defined with NICs em1 and em2 as slaves (em1 and em2 being the default onboard NIC names on a Dell server).

How we model our logical network design in Puppet is covered in much more detail here.

Hardware is Physically Racked

Obviously someone needs to go to the data centre and rack the hardware. If it's a large build it can take several days, or weeks if there's restricted time in which we can work in the data centre (for example, only on weekends). We try to prioritise the management patching first so we're able to kickstart machines as quickly as possible.

Kickstarts and Puppet Runs

Once a new server has done its first Puppet run and its catalog has been compiled, a set of Exported Puppet Resources describing the Nagios checks for that server is available for collection. The Puppet runs on our Nagios servers collect these resources, turn them into the relevant Nagios configuration files, and begin running the new service checks.

Make the Red and Yellow go Green

Since this is a newly built server, it's expected that a lot of the validation-style Nagios checks will fail, especially if only the management networks are patched but our Puppet code and Patch Manager are expecting other NICs to be connected. Our engineers use the Nagios check results for the new server as the feedback loop in our Test Driven Infrastructure approach to provisioning: make the tests pass (make the red and yellow go green) and the server is ready for production.
