Tuesday, August 17, 2010

Episode #108: Access List Listing

Hal's turn in the mailbag

Loyal reader Rick Miner sent us an interesting challenge recently. He's got several dozen Cisco IOS and PIX configuration files containing access-list rules. He'd like to have an easy way to audit the access-lists across all the files and see which rules are common to all files and where rules might be missing from some files.

Basically we're looking through the files for lines that start with "access-list", like this one:

access-list 1 deny   any log

However, access-lists can also contain comment lines (indicated by the keyword "remark"), and we don't care about these:

access-list 1 remark This is a comment

We also want to be careful to ignore any extra spaces that may have been introduced for readability. So the following two lines should be treated as the same:

access-list 1 deny   any log
access-list 1 deny any log

Rick sent us a sample access-list, which he'd sanitized with generic IP addresses, etc. I created a directory with a few slightly modified versions of his original sample, giving me 5 different files to test with.

Now I love challenges like this, because they always allow me to construct some really fun pipelines. Here's my solution:

$ grep -h ^access-list rules0* | grep -v remark | sed 's/  */ /g' | 
sort | uniq -c | sort -nr

5 access-list 4 permit any
...
4 access-list 1 deny any log
...

First I use grep to pull the access-list lines out of my sample files (named rules01, rules02, ...). Normally when you run grep against multiple files it will prepend the file name to each matching line, but I don't want that because I plan on feeding the output to sort and uniq later. So I use the "-h" option with grep to suppress the file names.
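
To see why that matters, try the same grep without "-h" (a quick illustration; the output below is from my sample files, so yours will differ). Each matching line picks up a "filename:" prefix, which would make otherwise-identical rules look different to sort and uniq:

$ grep ^access-list rules0* | head -1
rules01:access-list 1 deny   any log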

Next we have a little cleanup action. I use a second grep command to strip out all the "remark" lines. The output then goes to sed to replace instances of multiple spaces with a single space. Note that the sed substitution is "s/<space><space>*/<space>/g", though it's a little difficult to read in this format.
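
If you want a quick sanity check of that substitution, push a sample line through the same sed expression by itself. Here's the multi-space rule from earlier collapsing down to single spaces:

$ echo 'access-list 1 deny   any log' | sed 's/  */ /g'
access-list 1 deny any log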

Finally we have to process our output to answer Rick's question. We sort the lines and then use "uniq -c" to count the number of occurrences of each rule. The second sort gives us a descending numeric sort of the lines, using the number of instances of each rule as the sort criterion. Since I'm working with five sample files, rules with a count of five, like "access-list 4 permit any", must appear in every file (assuming no file contains the same rule twice, which seems unlikely). On the other hand, "access-list 1 deny any log" appears to be missing from one file.
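
Incidentally, if all you care about are the rules missing from at least one file, you can swap the final sort for a quick awk filter that keeps only the counts below your file total. This is just a sketch assuming my five-file test directory; adjust the 5 for your own file count:

$ grep -h ^access-list rules0* | grep -v remark | sed 's/  */ /g' |
sort | uniq -c | awk '$1 < 5'
4 access-list 1 deny any log
...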

But which file is our rule missing from? One way to answer this question is to look for the files where the rule is present:

$ grep -l 'access-list 1 deny any log' rules0*

Wait a minute! What just happened here? We should have gotten four matching files! Remember how we canonicalized the lines by converting multiple spaces to a single space? That only cleaned up our pipeline's output; the files themselves still contain the extra spaces, so our single-space search string doesn't match them. Let's try making our rule a bit more flexible:

$ grep -l 'access-list *1 *deny *any *log' rules0*
rules01
rules02
rules03
rules05

That's better! We use the stars to match any number of spaces and we find our rules. "grep -l" (that's an "el", not a "one") displays just the names of the matching files rather than the matching lines, so we can easily see that "rules04" is the file missing the rule.

But what if you were Rick, with dozens of files to sort through? It wouldn't necessarily be clear which files were missing from the output of our grep command. It would be nicer to output the names of the files that are missing the rule, rather than listing the files that contain it. Easier done than said:

$ sort <(ls) <(grep -l 'access-list *1 *deny *any *log' rules0*) | uniq -u
rules04

"<(...)" is an output substitution that allows you to insert the output of a command in a spot where you would normally expect to use a filename. Here I'm using sort to merge the output of ls, which gives me a list of all files in the directory, with our previous command for selecting the files that contain the rule we're interested in. "uniq -u" gives you the lines that only appear once in the output (the unique lines). Of course these are the files that appear in the ls output but which are not matched by our grep expression, and thus they're the files that don't contain the rule that we're looking for. And that's the answer we wanted.

You can do so much with sort and uniq on the Unix command line. They're some of my absolute favorite utilities. I've laid off the sort and uniq action because Windows CMD.EXE didn't have anything like them and it always made Ed grumpy when I pulled them out of my tool chest. But now that we've murdered Ed and buried him in a shallow grave out back (er, bid farewell to Ed), I get to bring out more of my favorites. Still, I fear this may be another one of those "character building" Episodes for Tim. Let's watch, shall we?

Tim's got plenty of character:

Alright Hal, you used to push Ed around, but you are going to have a slightly tougher time pushing me around. And not just because I wear sticky shoes.

This is going to be easy. Ready to watch, Hal?

PS C:\> ls rules* | ? { -not (Select-String "access-list *1 *deny *any *log" $_) }

Directory: C:\temp

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---          8/16/2010   9:44 PM      1409 rules05.txt

Files whose names begin with "rules" are piped into our filter. The Where-Object filter (alias ?) uses a logical -not in conjunction with Select-String to find files that do not contain our search string. The search string is the same one Hal used.
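
And if you want the complement, the files that do contain the rule (the equivalent of Hal's "grep -l"), just drop the -not. On my test files this lists everything except rules05.txt:

PS C:\> ls rules* | ? { Select-String "access-list *1 *deny *any *log" $_ }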

Now to crank it up a few notches...

But what if we had a file containing our gold standard, and we wanted to compare it against all of our config files to find the ones that don't comply with our standard? Your wish is my command (unless your wish involves a water buffalo, a nine iron, and some peanut butter).

PS C:\> cat gold.txt | select-string -NotMatch '^ *$' | % { $_ -replace "\s+", "\s+" } | % {
    $a = $_;
    ls rules* | ? { -not (select-string $a $_) }
} | get-unique


Directory: C:\temp

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---          8/16/2010   9:44 PM      1409 rules02.txt
-a---          8/16/2010   9:44 PM      1337 rules05.txt

In the command above, the first stage of the pipeline takes each non-blank line of our gold config and changes every run of spaces into \s+ for use as a search pattern. The \s+ is the regular expression equivalent of "one or more whitespace characters". Now that we have generated our search patterns, we search each file like we did earlier. Finally, we use the Get-Unique cmdlet to remove duplicate file names.
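
If you want to see what that -replace is doing, run a single line through it by hand. Here's the multi-space rule from Hal's example being turned into a whitespace-tolerant pattern:

PS C:\> "access-list 1 deny   any log" -replace "\s+", "\s+"
access-list\s+1\s+deny\s+any\s+log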

Hal, you may have buried Ed, but you haven't killed me off...yet.