Wednesday, December 31, 2014

Episode #180: Open for the Holidays!

Not-so-Tiny Tim checks in with the ghost of Christmas present:

I know many of you have been sitting on Santa's lap wishing for more Command Line Kung Fu. Well, we've heard your pleas and are pushing one last Episode out before the New Year!

We come bearing a solution for a problem we've all encountered. Ever try to delete or modify a file and receive an error message that the file is in use? Of course you have! The real problem is trying to track down the user and/or process that has the file locked.

I have a solution for you on Windows: "openfiles". Well, sorta. This time of year I can't risk getting on Santa's bad side, so let me add the disclaimer that it is only a partial solution. Here's what I mean; let's look for open files:

C:\> openfiles

INFO: The system global flag 'maintain objects list' needs
      to be enabled to see local opened files.
      See Openfiles /? for more information.


Files opened remotely via local share points:
---------------------------------------------

INFO: No shared open files found.

By default, when we run this command it gives us an error because the feature hasn't been enabled. Wouldn't it be nice if we could simply turn it on and then look at the open files? Yes, it would be nice...but no. You have to reboot. This present is starting to look a lot like a lump of coal. You need to know that you'll hit this problem before it happens, so you can enable the feature ahead of time. Bah-Humbug!

To enable "openfiles", run this command:

C:\> openfiles /local on

SUCCESS: The system global flag 'maintain objects list' is enabled.
         This will take effect after the system is restarted.

...then reboot.

Of course, now that we've rebooted, the file will be unlocked, but we are prepared for next time. When a file is locked again, we can run this command to see the results (note: if you don't specify a switch, /query is implied):

C:\> openfiles /query

Files Opened Locally:
---------------------

ID    Process Name         Open File (Path\executable)                       
===== ==================== ==================================================
8     taskhostex.exe       C:\Windows\System32
224   taskhostex.exe       C:\Windows\System32\en-US\taskhostex.exe.mui
296   taskhostex.exe       C:\Windows\Registration\R00000000000d.clb
324   taskhostex.exe       C:\Windows\System32\en-US\MsCtfMonitor.dll.mui
752   taskhostex.exe       C:\Windows\System32\en-US\winmm.dll.mui
784   taskhostex.exe       C:\..\Local\Microsoft\Windows\WebCache\V01tmp.log
812   taskhostex.exe       C:\Windows\System32\en-US\wdmaud.drv.mui
...

Of course, this is quite a long list. You can use "find" or "findstr" to filter the results, but be aware that long file names are truncated (see ID 784). You can get the full names by changing the format with "/fo LIST". However, the file name will then be on a separate line from the owning process, and neither "find" nor "findstr" supports context.

Another oddity is that there seem to be duplicate IDs.

C:\> openfiles /query | find "888"
888   chrome.exe           C:\Windows\Fonts\consola.ttf
888   Lenovo Transition.ex C:\..\Lenovo\Lenovo Transition\Gui\yo_btn_g3.png
888   vprintproxy.exe      C:\Windows\Registration\R00000000000d.clb

Different processes with different files, all with the same ID. This means that when you disconnect an open file, you had better be careful.

Speaking of disconnecting files, we can do just that with the /disconnect switch. We can disconnect by ID (ill-advised, given the duplicates) with the /id switch. We can also disconnect all the files opened by a given user:

C:\> openfiles /disconnect /a jacobmarley

Or the file name:

C:\> openfiles /disconnect /op "C:\Users\tm\Desktop\wishlist.txt" /a *

Or even the directory:

C:\> openfiles /disconnect /op "C:\Users\tm\Desktop\" /a *

We can even run this against a remote system with the /s SERVERNAME option.

This command is far from perfect, but it is pretty cool.

Sadly, there is no built-in capability in PowerShell to do this same thing. With PowerShell v4 we get Get-SmbOpenFile and Close-SmbOpenFile, but they only work on files opened over the network, not on files opened locally.

Now it is time for Mr. Scrooge Pomeranz to ruin my day by using some really useful, built-in, and ENABLED features of Linux.

It's a Happy Holiday for Hal:

Awww, Tim got me the nicest present of all-- a super-easy Command-Line Kung Fu Episode to write!

This one's easy because Linux comes with lsof, a magical tool surely made by elves at the North Pole. I've talked about lsof in several other Episodes already but so far I've focused more on network and process-related queries than checking objects in the file system.

The simplest usage of lsof is checking which processes are using a single file:

# lsof /var/log/messages
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
rsyslogd  1250 root    1w   REG    8,3 13779999 3146461 /var/log/messages
abrt-dump 5293 root    4r   REG    8,3 13779999 3146461 /var/log/messages

Here we've got two processes that have /var/log/messages open-- rsyslogd for writing (see the "1w" in the "FD" column, where the "w" means writing), and abrt-dump for reading ("4r", "r" for read-only).
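
If you only care about the writers, that FD column is easy to filter on. Here's a sketch using the sample output above as canned input (so it runs anywhere, with no live lsof needed):

```shell
# Keep only files open for writing: field 4 is the FD column, and a
# trailing "w" (or "u", read/write) marks a writer. Canned sample input
# stands in for "lsof /var/log/messages".
printf '%s\n' \
    'rsyslogd  1250 root    1w   REG    8,3 13779999 3146461 /var/log/messages' \
    'abrt-dump 5293 root    4r   REG    8,3 13779999 3146461 /var/log/messages' |
    awk '$4 ~ /[wu]$/ { print $1, $NF }'
```

This prints only the rsyslogd line; pipe real lsof output (minus its header) through the same awk to get the writers on a live system.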

You can use "lsof +d" to see all open files in a given directory:

# lsof +d /var/log
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
rsyslogd  1250 root    1w   REG    8,3 14324534 3146461 /var/log/messages
rsyslogd  1250 root    2w   REG    8,3   175427 3146036 /var/log/cron
rsyslogd  1250 root    5w   REG    8,3  1644575 3146432 /var/log/maillog
rsyslogd  1250 root    6w   REG    8,3     2663 3146478 /var/log/secure
abrt-dump 5293 root    4r   REG    8,3 14324534 3146461 /var/log/messages

The funny thing about "lsof +d" is that it only shows you open files in the top-level directory, but not in any sub-directories. You have to use "lsof +D" for that:

# lsof +D /var/log
COMMAND     PID   USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
rsyslogd   1250   root    1w   REG    8,3 14324534 3146461 /var/log/messages
rsyslogd   1250   root    2w   REG    8,3   175427 3146036 /var/log/cron
rsyslogd   1250   root    5w   REG    8,3  1644575 3146432 /var/log/maillog
rsyslogd   1250   root    6w   REG    8,3     2663 3146478 /var/log/secure
httpd      3081 apache    2w   REG    8,3      586 3146430 /var/log/httpd/error_log
httpd      3081 apache   14w   REG    8,3        0 3147331 /var/log/httpd/access_log
...

Unix-like operating systems track open files on a per-partition basis. This leads to an interesting corner case with lsof: if you point lsof at a partition's mount point, you get a list of all open files on that partition:

# lsof /
COMMAND     PID      USER   FD   TYPE DEVICE SIZE/OFF     NODE NAME
init          1      root  cwd    DIR    8,3     4096        2 /
init          1      root  rtd    DIR    8,3     4096        2 /
init          1      root  txt    REG    8,3   150352 12845094 /sbin/init
init          1      root  DEL    REG    8,3           7340061 /lib64/libnss_files-2.12.so
init          1      root  DEL    REG    8,3           7340104 /lib64/libc-2.12.so
...
# lsof / | wc -l
3500

Unlike Windows, Linux doesn't really have a notion of disconnecting processes from individual files. If you want a process to release a file, you kill the process. lsof has a "-t" flag for terse output. In this mode, it only outputs the PIDs of the matching processes. This was designed to allow you to easily substitute the output of lsof as the arguments to the kill command. Here's the little trick I showed back in Episode 22 for forcibly unmounting a file system:

# umount /home
umount: /home: device is busy
# kill $(lsof -t /home)
# umount /home

Here we're exploiting the fact that /home is a partition mount point so lsof will list all processes with files open anywhere in the file system. Kill all those processes and Santa might leave coal in your stocking next year, but you'll be able to unmount the file system!

Monday, June 30, 2014

Episode #179: The Check is in the Mail

Tim mails one in:

Bob Meckle writes in:

I have recently come across a situation where it would be greatly beneficial to build a script to check revocation dates on certificates issued using a certain template, and send an email to our certificate staff letting them know which certificates will expire within the next 6 weeks. I am wondering if you guys have any tricks up your sleeve in regards to this situation?

Thanks for writing in Bob. This is actually quite simple on Windows. One of my favorite features of PowerShell is that dir (an alias for Get-ChildItem) can be used on pretty much any hierarchy, including the certificate store. After we have a list of the certificates we simply filter by the expiration date. Here is how we can find expiring certificates:

PS C:\> Get-ChildItem -Recurse cert: | Where-Object { $_.NotAfter -le (Get-Date).AddDays(42) -and $_.NotAfter -ge (Get-Date) }

We start off by getting a recursive list of the certificates. Then the results are piped into the Where-Object cmdlet to filter for the certificates that expire between today and six weeks (42 days) from now (inclusive). Additional filtering can be added by simply modifying the Where-Object filter, per Bob's original request. We can shorten the command using aliases and shortened parameter names, as well as store the output to be emailed.

PS C:\> $tomail = ls -r cert: | ? { $_.NotAfter -le (Get-Date).AddDays(42) -and $_.NotAfter -ge (Get-Date) }

Emailing is quite simple too, thanks to the Send-MailMessage cmdlet. We can send the certificate information like this:

PS C:\> Send-MailMessage -To me@blah.com -From cert@blah.com -Subject "Expiring Certs" -Body ($tomail | Out-String)

Pretty darn simple if you ask me. Hal, what do you have up your sleeve?

Hal relies on his network:

Tim, you're getting soft with all these cool PowerShell features. Why, when I was your age... Oh, never mind! Kids these days! Hmph!

If I'm feeling curmudgeonly, it's because this is far from a cake-walk in Linux. Obviously, I can't check a Windows certificate store remotely from a Linux machine. So I thought I'd focus on checking remote web site certificates (which could be Windows, Linux, or anything else) from my Linux command line.

The trick for checking a certificate is fairly well documented on the web:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
     openssl x509 -noout -dates
notBefore=Jun  4 08:58:29 2014 GMT
notAfter=Sep  2 00:00:00 2014 GMT

We use the OpenSSL built-in "s_client" to connect to the target web server and dump the certificate information. The "2>/dev/null" drops some extraneous standard error logging. The leading "echo" piped into the standard input makes sure that we close the connection right after receiving the certificate info. Otherwise our command line will hang and never return to the shell prompt. We then use OpenSSL again to output the information we want from the downloaded certificate. Here we're just requesting the "-dates"-- "-noout" stops the openssl command from displaying the certificate itself.

However, there is much more information you can parse out. Here's a useful report that displays the certificate issuer, certificate name, and fingerprint in addition to the dates:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
     openssl x509 -noout -issuer -subject -fingerprint -dates
issuer= /C=US/O=Google Inc/CN=Google Internet Authority G2
subject= /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
SHA1 Fingerprint=F4:E1:5F:EB:F1:9F:37:1A:29:DA:3A:74:BF:05:C5:AD:0A:FB:B1:95
notBefore=Jun  4 08:58:29 2014 GMT
notAfter=Sep  2 00:00:00 2014 GMT

If you want a complete dump, just use "... | openssl x509 -text".

But let's see if we can answer Bob's question, at least for a single web server. Of course, that's going to involve some date arithmetic, and the shell is clumsy at that. First I need to pull off just the "notAfter" date from my output:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
     openssl x509 -noout -dates | tail -1 | cut -f2 -d=
Sep  2 00:00:00 2014 GMT

You can convert a time stamp string like this into a "Unix epoch" date with the GNU date command:

$ date +%s -d 'Sep  2 00:00:00 2014 GMT'
1409616000

To do it all in one command, I just use "$(...)" for command output substitution:

$ date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
                     openssl x509 -noout -dates | tail -1 | cut -f2 -d=)"
1409616000

I can calculate the difference between this epoch date and the current epoch date ("date +%s") with some shell arithmetic and more command output substitution:

$ echo $(( $(date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
                                openssl x509 -noout -dates | tail -1 | cut -f2 -d=)") 
            - $(date +%s) ))
5452606

Phew! This is already getting pretty long-winded, but now I can check my result against Bob's 6 week threshold (that's 42 days or 3628800 seconds):

$ [[ $(( $(date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
                              openssl x509 -noout -dates | tail -1 | cut -f2 -d=)") - $(date +%s) )) 
    -gt 3628800 ]] && echo GOOD || echo EXPIRING
GOOD
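
If you're going to do this more than once, the repeated date arithmetic can be factored into a small helper. Here's a sketch (GNU date's -d flag assumed; the cert-fetching pipeline above supplies the date string):

```shell
# Seconds until a "notAfter"-style date string; negative means expired.
# Assumes GNU date (-d).
secs_left() {
    echo $(( $(date +%s -d "$1") - $(date +%s) ))
}

secs_left 'Sep  2 00:00:00 2014 GMT'    # long negative by now
```

Then the 6-week test is just: [[ $(secs_left "$expiry") -gt 3628800 ]].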

Clearly this is straying all too close to the borders of Scriptistan, but if you wanted to check several web sites, you could add an outer loop:

$ for d in google.com sans.org facebook.com twitter.com; do 
    echo -ne $d\\t; 
    [[ $(( $(date +%s -d "$(echo | openssl s_client -connect www.$d:443 2>/dev/null | 
                            openssl x509 -noout -dates | tail -1 | cut -f2 -d=)") - $(date +%s) ))
       -gt 3628800 ]] && echo GOOD || echo EXPIRING; 
done
google.com	GOOD
sans.org	GOOD
facebook.com	GOOD
twitter.com	GOOD

See? It's all good, Bob!

Monday, May 26, 2014

Episode #178: Luhn-acy

Hal limbers up in the dojo

To maintain our fighting trim here in the Command Line Kung Fu dojo, we like to set little challenges for ourselves from time to time. Of course, we prefer it when our loyal readers send us ideas, so keep those emails coming! Really... please oh please oh please keep those emails coming... please, please, please... ahem, but I digress.

All of the data breaches in the news over the last year got me thinking about credit card numbers. As many of you are probably aware, credit card numbers have a check digit at the end to help validate the account number. The Luhn algorithm for computing this digit is moderately complicated and I wondered how much shell code it would take to compute these digits.

The Luhn algorithm is a right-to-left calculation, so it seemed like my first task was to peel off the last digit and be able to iterate across the remaining digits in reverse order:

$ for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do echo $d; done
8
7
6
5
4
3
2
1

The "rev" utility flips the order of our digits, and then we just grab everything from the second digit onwards with "cut". We use a little "sed" action to break the digits up into a list we can iterate over.

Then I started thinking about how to do the "doubling" calculation on every other digit. I could have set up a shell function to calculate the doubling each time, but with only 10 possible outcomes, it seemed easier to just create an array with the appropriate values:

$ doubles=(0 2 4 6 8 1 3 5 7 9)
$ for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do echo $d ${doubles[$d]}; done
8 7
7 5
6 3
5 1
4 8
3 6
2 4
1 2

Then I needed to output the "doubled" digit only every other digit, starting with the first from the right. That means a little modular arithmetic:

$ c=0
$ for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do 
    echo $(( ++c % 2 ? ${doubles[$d]} : $d )); 
done
7
7
3
5
8
3
4
1

I've introduced a counting variable, "$c". Inside the loop, I'm evaluating a conditional expression to decide if I need to output the "double" of the digit or just the digit itself. There are several ways I could have handled this conditional operation in the shell, but having it in the mathematical "$((...))" construct is particularly useful when I want to calculate the total:

$ c=0; t=0; 
$ for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do 
    t=$(( $t + (++c % 2 ? ${doubles[$d]} : $d) )); 
done
$ echo $t
38

We're basically done at this point. Instead of outputting the total, "$t", I need to do one more calculation to produce the Luhn digit:

$ c=0; t=0; 
$ for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do 
    t=$(( $t + (++c % 2 ? ${doubles[$d]} : $d) )); 
done
$ echo $(( ($t * 9) % 10 ))
2

Here's the whole thing in one line of shell code, including the array definition:

doubles=(0 2 4 6 8 1 3 5 7 9); 
c=0; t=0; 
for d in $(echo 123456789 | rev | cut -c2- | sed 's/\(.\)/\1 /g'); do 
    t=$(( $t + (++c % 2 ? ${doubles[$d]} : $d) )); 
done; 
echo $(( ($t * 9) % 10 ))

Even with all the extra whitespace, the whole thing fits in under 100 characters! Grand Master Ed would be proud.
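
For reuse, the same logic can be wrapped in a function. Here's a pure-bash sketch that skips the rev/cut/sed pipeline and walks the string with ${var:i:1} instead:

```shell
# Hal's Luhn calculation as a reusable function (pure bash sketch).
luhn_digit() {
    local payload=${1%?}                    # peel off the existing last digit
    local doubles=(0 2 4 6 8 1 3 5 7 9)     # doubled-digit lookup table
    local i c=0 t=0 d
    for (( i=${#payload}-1; i>=0; i-- )); do   # right-to-left
        d=${payload:i:1}
        t=$(( t + (++c % 2 ? doubles[d] : d) ))
    done
    echo $(( (t * 9) % 10 ))
}

luhn_digit 123456789    # prints 2, matching the result above
```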

I'm not even going to ask Tim to try and do this in CMD.EXE. Grand Master Ed could have handled it, but we'll let Tim use his PowerShell training wheels. I'm just wondering if he can do it so it still fits inside a Tweet...

Tim checks Hal's math

I'm not quite sure how Hal counts, but when I copy/paste and then use Hal's own wc command I get 195 characters. That is less than *2*00 characters, but still too long to tweet.

Here is how we can accomplish the same task in PowerShell. I'm going to use a slightly different method than Hal. First, I'm going to borrow his lookup method, as it is more terse than an if/then branch. In addition, I am going to extend his method a little to save space.

PS C:\> $lookup = @((0..9),(0,2,4,6,8,1,3,5,7,9));

This multi-dimensional array contains a lookup for the number as well as the doubled number. That way I can index the value without an if statement. Here is an example:

PS C:\> $isdoubled = $false
PS C:\> $lookup[$isdoubled][6]
6
PS C:\> $isdoubled = $true
PS C:\> $lookup[$isdoubled][7]
5

The shortest way to get each digit, from right to left, is a regex (regular expression) match working right to left. A longer way would be to take the string, convert it to a char array, and reverse it, but that is long, ugly, and requires an additional variable.

The results are fed into a ForEach-Object loop. Before the objects (the digits) passed down the pipeline are handled, we initialize a few variables in -Begin: the running total $sum and the boolean $isdoubled flag. Next, we add up the digits by indexing into our lookup array and toggling $isdoubled. Finally, we use ForEach-Object's -End to output the final value of $sum.

PS C:\> ([regex]::Matches('123456789','.','RightToLeft')) | ForEach-Object 
-Begin { $sum = 0; $isdoubled = $false } -Process { $sum += $lookup[$isdoubled][[int]$_.Value]; $isdoubled = -not $isdoubled } 
-End { $sum }

We can shorten the command to this to save space.

PS C:\> $l=@((0..9),(0,2,4,6,8,1,3,5,7,9));
([regex]::Matches('123456789','.','RightToLeft')) | %{ $s+=$l[$d][$_.value];$d=!$d} -b{$s=0;$d=0} -en{$s}

According to my math this is exactly 140 characters. I could trim another 2 by removing a few spaces too. It's tweetable!

I'll even throw in a bonus version for cmd.exe:

C:\> powershell -command "$l=@((0..9),(0,2,4,6,8,1,3,5,7,9));
([regex]::Matches('123456789','.','RightToLeft')) | %{ $s+=$l[$d][$_.value];$d=!$d} -b{$s=0;$d=0} -en{$s}"

Ok, it is a bit of cheating, but it does run from CMD.

Hal gets a little help

I'm honestly not sure where my brain was at when I was counting characters in my solution. Shortening variable names and removing all extraneous whitespace, I can get my solution down to about 150 characters, but no smaller.

Happily, Tom Bonds wrote in with this cute little blob of awk which accomplishes the mission:

awk 'BEGIN{split("0246813579",d,""); for (i=split("12345678",a,"");i>0;i--) {t += ++x % 2 ? d[a[i]+1] : a[i];} print (t * 9) % 10}'

Here it is with a little more whitespace:

awk 'BEGIN{ split("0246813579",d,""); 
            for (i=split("12345678",a,"");i>0;i--) {
                t += ++x % 2 ? d[a[i]+1] : a[i];
            } 
            print (t * 9) % 10
          }'

Tom's getting a lot of leverage out of the "split()" function and using his "for" loop to count back down the string. awk automatically initializes his t and x variables to zero each time the program runs, whereas in the shell I have to explicitly set them to zero or the values from the last run will be reused.
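
To run Tom's version against an arbitrary payload, the number can be passed in with awk's -v flag instead of being hard-coded. A sketch (the empty-string split() is a GNU awk-ism, just as in Tom's original):

```shell
# Tom's algorithm with the payload supplied via -v.
awk -v n=12345678 'BEGIN {
    split("0246813579", d, "")                # doubled-digit lookup, 1-indexed
    for (i = split(n, a, ""); i > 0; i--)     # walk the digits right-to-left
        t += ++x % 2 ? d[a[i]+1] : a[i]
    print (t * 9) % 10
}'
```

For the payload 12345678 this prints 2, matching Hal's result above.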

Anyway, Tom's original version definitely fits in a tweet! Good show, Tom!

Wednesday, April 30, 2014

Episode #177: There and Back Again

Hal finds some old mail

Way, way back after Episode #170 Tony Reusser sent us a follow-up query. If you recall, Episode #170 showed how to change files named "fileaa", "fileab", "fileac", etc to files named "file.001", "file.002", "file.003". Tony's question was how to go back the other way-- from "file.001" to "fileaa", "file.002" to "fileab", and so on.

Why would we want to do this? Heck, I don't know! Ask Tony. Maybe he just wants to torture us to build character. Well we here at Command Line Kung Fu fear no characters, though we may occasionally lose reader emails behind the refrigerator for several months.

The solution is a little scripty, but I did actually type it in on the command line:

    c=1
    for l1 in {a..z}; do 
        for l2 in {a..z}; do 
            printf -v ext %03d $(( c++ ))
            [[ -f file.$ext ]] && mv file.$ext file$l1$l2 || break 2
        done
    done

There are two nested loops here which work together to create our alphabetic file extensions. $l1 represents the first of the two letters, ranging from 'a' to 'z'. $l2 is the second letter, also ranging from 'a' to 'z'. Put them next to each other and you get "aa", "ab", "ac", etc.

Like Episode #170, I'm using a counter variable named $c to track the numeric file extension. So, for all you computer science nerds, this is a rather weird looping construct because I'm using three different loop control variables. And weird is how I roll.

Inside the loop, I re-use the code from Episode #170 to format $c as our three-digit file extension (saved in variable $ext) and auto-increment $c in the same expression. Then I check to see if "file.$ext" exists. If we have a "file.$ext", then we rename it to "file$l1$l2" ("fileaa", "fileab", etc). If "file.$ext" does not exist, then we've run out of "file.xxx" pieces and we can stop looping. "break 2" breaks out of both enclosing loops and terminates our command line.
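
As a dry run, the same nested loops can print the mapping instead of renaming anything (a sketch; handy for convincing yourself the extensions line up before touching real files):

```shell
# Print the numeric-to-alphabetic mapping without touching any files.
c=1
for l1 in {a..z}; do
    for l2 in {a..z}; do
        printf -v ext %03d $(( c++ ))
        echo "file.$ext -> file$l1$l2"
    done
done | head -5
```

The first lines read "file.001 -> fileaa", "file.002 -> fileab", and so on.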

And there you go. I hope Tim has as much fun as I did with this. I bet he'd have even more fun if I made him do it in CMD.EXE. Frankly all that PowerShell has made him a bit sloppy...

Tim mails this one in just in time from Abu Dhabi

Wow, CMD huh? That trip to Scriptistan must have made him crazy. I think a CMD version of this would violate the laws of physics.

I'm on the other side of the world looking for Nermal in Abu Dhabi. Technically, it is May 1 here, but since the publication date shows April I think this counts as our April post. At least, that is the story I'm sticking to.

Sadly, I too will have to enter Scriptistan. I have a visa for a long stay on this one. I'll start by using a function to convert a number to Base26. The basis of this function is taken from here. I modified the function so we could add leading A's to adjust the width.

function Convert-ToLetters ([parameter(Mandatory=$true,ValueFromPipeline=$true)][int] $Value, [int]$MinWidth=0) {
    $currVal = $Value
    if ($MinWidth -gt 0) { $currVal = $currVal + [int][Math]::Pow(26, $MinWidth) }
    $returnVal = '';
    while ($currVal -ge 26) {
        $returnVal = [char](($currVal) % 26 + 97) + $returnVal;
        $currVal =  [int][math]::Floor($currVal / 26)
    }
    $returnVal = [char](($currVal) + 64) + $returnVal;
     
    return $returnVal
}

This allows me to greatly simplify the renaming process.

PS C:\> ls file.* | % { move $_ "file.$(Convert-ToLetters ([int]$_.Extension.Substring(1)) -MinWidth 3)" }

This command will read the files starting with "file.", translate the extension into Base26 (letters), and rename the file. The minimum width is configurable as well, so file.001 could become file.a, file.aa, file.aaa, etc. Also, this version supports more than 26^2 files.

Monday, March 31, 2014

Episode #176: Step Up to the WMIC

Tim grabs the mic:

Michael Behan writes in:

Perhaps you guys can make this one better. Haven’t put a ton of thought into it:

C:\> (echo HTTP/1.0 200 OK & wmic process list full /format:htable) | nc -l -p 3000

Then visit http://127.0.0.1:3000

This could of course be used to generate a lot more HTML reports via wmic that are quick to save from the browser. The downside is that, in its current state, the page can only be visited once. Adding something like /every:5 just pollutes the web page with mostly duplicate output.

Assuming you already have netcat (nc.exe) on the system, the command above will work fine, but only once. After the browser receives the data, the connection has been used up and the command is done. To serve the page multiple times you must wrap it in an infinite For loop.

C:\> for /L %i in (1, 0, 2) do (echo HTTP/1.0 200 OK & wmic process list full /format:htable) | nc -l -p 3000

This will count from 1 to 2 by steps of 0, which will never finish (except for very large values of 0). Alternatively, we could use the wmic command to request this information from the remote machine and view it in our browser. This method authenticates to the remote machine instead of allowing anyone to access the information.

C:\> wmic /node:joelaptop process list full /format:htable > joelaptopprocesses.html && start joelaptopprocesses.html

This will use your current credentials to authenticate to the remote machine, request the remote process in html format, save it to a file, and finally open the file in your default viewer (likely your browser). If you need to use separate credentials you can specify /user:myusername and /password:myP@assw0rd.

Hal, your turn, and I want to see this in nice HTML format. :)

Hal throws up some jazz hands:

Wow. Tim seems a little grumpy. Maybe it's because he can make a simple web server on the command line but has no way to actually request data from it via the command line. Don't worry Little Tim, maybe someday...

Heck, maybe Tim's grumpy because of the dumb way he has to code infinite loops in CMD.EXE. This is a lot easier:

$ while :; do ps -ef | nc -l 3000; done

Frankly, most browsers will interpret this as "text/plain" by default and display the output correctly.
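
If you'd rather not rely on the browser guessing, you can emit a real header block yourself. A single-shot sketch (wrap it in the same while loop and pipe it to "nc -l 3000" to actually serve it):

```shell
# One response with a proper status line and Content-Type header.
# Inside the loop this would be: { ...this...; } | nc -l 3000
{ printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n'; ps -ef; }
```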

But the above loop got me thinking that we could actually stack multiple commands in sequence:

while :; do
    ps -ef | nc -l 3000
    netstat -anp | nc -l 3000
    df -h | nc -l 3000
    ...
done

Each connection will return the output of a different command until you eventually exhaust the list and start all over again with the first command.

OK, now let's deal with grumpy Tim's request for "nice HTML format". Nothing could be easier, my friends:

$ while :; do (echo '<pre>'; ps -ef; echo '</pre>') | nc -l 3000; done

Hey, it's accepted by every major browser I tested it with! And that's the way we do it downtown... (Hal drops the mic)

Friday, February 28, 2014

Episode #175: More Time! We Need More Time!

Tim leaps in

Every four years (or so) we get an extra day in February, leap year. When I was a kid this term confused me. Frogs leap, they leap over things. A leap year should be shorter! Obviously, I was wrong.

This extra day can give us extra time to complete tasks (e.g. write blog post), so we are going to use our shells to check if the current year is a leap year.

PS C:\> [DateTime]::IsLeapYear(2014)
False

Sadly, this year we do not have extra time. Let's confirm that this command does indeed work by checking a few other years.

PS C:\> [DateTime]::IsLeapYear(2012)
True
PS C:\> [DateTime]::IsLeapYear(2000)
True
PS C:\> [DateTime]::IsLeapYear(1900)
False

Wait a second! Something is wrong. The year 1900 is a multiple of 4, why is it not a leap year?

The Earth does not take exactly 365.25 days to get around the sun; it actually takes 365.242199 days. This means that if we leaped every four years without exception, we would slowly drift off course. So every 100 years we skip the leap year.

Now you are probably wondering why 2000 had a leap year. That is because it is actually the exception to the exception. Every 400 years we skip skipping the leap year. What a cool bit of trivia, huh?

Hal, how jump is your shell?

Hal jumps back

I should have insisted Tim do this one in CMD.EXE. Isn't it nice that PowerShell has an IsLeapYear() built-in? Back in my day, we didn't even have zeroes! We had to bend two ones together to make zeroes! Up hill! Both ways! In the snow!

Enough reminiscing. Let's make our own IsLeapYear function in the shell:

function IsLeapYear {
    year=${1:-$(date +%Y)};
    [[ $(($year % 400)) -eq 0 || ( $(($year % 4)) -eq 0 && $(($year % 100)) -ne 0 ) ]]
}

There's some fun stuff in this function. First we check to see if the function was called with an argument ("${1:-...}"). If so, then that's the year we'll check. Otherwise we check the current year, which is the value returned by "$(date +%Y)".

The other line of the function is the standard algorithm for figuring leap years. It's a leap year if the year is evenly divisible by 400, or divisible by 4 and not divisible by 100. Since shell functions return the value of the last command or expression executed, our function returns whether or not it's a leap year. Nice and easy, huh?

Now we can run some tests using our IsLeapYear function, just like Tim did:

$ IsLeapYear && echo Leaping lizards! || echo Arf, no
Arf, no
$ IsLeapYear 2012 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 2000 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 1900 && echo Leaping lizards! || echo Arf, no
Arf, no

Assuming the current year is not a Leap Year, we could even wrap a loop around IsLeapYear to figure out the next leap year:

$ y=$(date +%Y); while :; do IsLeapYear $((++y)) && break; done; echo $y
2016

We begin by initializing $y to the current year. Then we go into an infinite loop ("while :; do..."). Inside the loop we add one to $y and call IsLeapYear. If IsLeapYear returns true, then we "break" out of the loop. When the loop is all done, simply echo the last value of $y.

Stick that in your PowerShell pipe and smoke it, Tim!

Tuesday, January 28, 2014

Episode #174: Lightning Lockdown

Hal firewalls fast

Recently a client needed me to quickly set up an IP Tables firewall on a production server that was effectively open on the Internet. I knew very little about the machine, and we couldn't afford to break any of the production traffic to and from the box.

It occurred to me that a decent first approximation would be to simply look at the network services currently in use and create a firewall policy based on that. The resulting policy would probably be a bit looser than it needed to be, but it would be infinitely better than no firewall at all!

I went with lsof, because I found the output easier to parse than netstat:

# lsof -i -nlP | awk '{print $1, $8, $9}' | sort -u
COMMAND NODE NAME
httpd TCP *:80
named TCP 127.0.0.1:53
named TCP 127.0.0.1:953
named TCP [::1]:953
named TCP 150.123.32.3:53
named UDP 127.0.0.1:53
named UDP 150.123.32.3:53
ntpd UDP [::1]:123
ntpd UDP *:123
ntpd UDP 127.0.0.1:123
ntpd UDP 150.123.32.3:123
ntpd UDP [fe80::baac:6fff:fe8e:a0f1]:123
ntpd UDP [fe80::baac:6fff:fe8e:a0f2]:123
portreser UDP *:783
sendmail TCP 150.123.32.3:25
sendmail TCP 150.123.32.3:25->58.50.15.213:1526
sendmail TCP *:587
sshd TCP *:22
sshd TCP 150.123.32.3:22->121.28.56.2:39054

I could have left off the process name, but it helped me decide which ports were important to include in the new firewall rules. Honestly, the output above was good enough for me to quickly throw together some workable IP Tables rules. I simply saved the output to a text file and hacked things together with a text editor.
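For illustration, hand-translating that output might produce an iptables-restore fragment along these lines (a sketch, assuming a default-drop INPUT policy; it covers only a representative subset of the listeners above, and loopback-bound services like named on 953 are handled by the lo rule rather than per-port rules):

```
# Sketch: filter table built from the lsof listing above
*filter
:INPUT DROP [0:0]
# Loopback and established/related traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# sshd (22), httpd (80), sendmail (25, 587)
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 25 -j ACCEPT
-A INPUT -p tcp --dport 587 -j ACCEPT
# named (53 TCP and UDP), ntpd (123 UDP)
-A INPUT -p tcp --dport 53 -j ACCEPT
-A INPUT -p udp --dport 53 -j ACCEPT
-A INPUT -p udp --dport 123 -j ACCEPT
COMMIT
```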

But maybe you only care about the port information:

# lsof -i -nlP | awk '{print $9, $8, $1}' | sed 's/.*://' | sort -u
123 UDP ntpd
1526 TCP sendmail
22 TCP sshd
25 TCP sendmail
39054 TCP sshd
53 TCP named
53 UDP named
587 TCP sendmail
783 UDP portreser
80 TCP httpd
953 TCP named
NAME NODE COMMAND

Note that I inverted the field output order just to make my sed a little easier to write.

If you wanted to go really crazy, you could even create and load the actual rules on the fly. I don't recommend this at all, but it will make Tim's life harder in the next section, so here goes:

lsof -i -nlP | tail -n +2 | awk '{print $9, $8}' | 
    sed 's/.*://' | sort -u | tr A-Z a-z | 
    while read port proto; do ufw allow $port/$proto; done

I added a "tail -n +2" to get rid of the header line. I also dropped the command name from my awk output. There's a new "tr A-Z a-z" in there to lower-case the protocol name. Finally we end with a loop that takes the port and protocol and uses the ufw command line interface to add the rules. You could do the same with the iptables command and its nasty syntax, but if you're on a Linux distro with UFW, I strongly urge you to use it!
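If you would rather eyeball the rules before loading anything, a dry-run variant just prints the ufw commands instead of running them. The snippet below sketches this against a canned three-column sample of lsof output (real lsof output varies per host, which is why the awk fields here are $3 and $2 instead of the $9 and $8 used above):

```shell
# Canned stand-in for `lsof -i -nlP` output: header plus two listeners
sample='COMMAND NODE NAME
sshd TCP *:22
named UDP 127.0.0.1:53'

# Same pipeline as above, but echoing the ufw commands instead of running them
rules=$(echo "$sample" | tail -n +2 | awk '{print $3, $2}' |
    sed 's/.*://' | sort -u | tr A-Z a-z |
    while read port proto; do echo "ufw allow $port/$proto"; done)
echo "$rules"
```

Once the printed commands look sane, swap the echo back out for the real ufw invocation.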

So, Tim, I figure you can parse netstat output pretty easily. How about the command-line interface to the Windows firewall? Remember, adversity builds character...

Tim builds character

When I first saw this I thought, "Man, this is going to be easy with the new cmdlets in PowerShell v4!" Windows 8.1 and Server 2012 R2 both ship with PowerShell version 4, and it is also available for Windows 7 SP1 (and later) and Windows Server 2008 R2 SP1 (and later).

The first cmdlet that will help us out here is Get-NetTCPConnection. According to the help page this cmdlet "gets current TCP connections. Use this cmdlet to view TCP connection properties such as local or remote IP address, local or remote port, and connection state." This is going to be great! But...

It doesn't mention the process ID or process name. Nooooo! This can't be. Let's look at all the properties of the output objects.

PS C:\> Get-NetTCPConnection | Format-List *

State                    : Established
AppliedSetting           : Internet
Caption                  :
Description              :
ElementName              :
InstanceID               : 192.168.1.167++445++10.11.22.33++49278
CommunicationStatus      :
DetailedStatus           :
HealthState              :
InstallDate              :
Name                     :
OperatingStatus          :
OperationalStatus        :
PrimaryStatus            :
Status                   :
StatusDescriptions       :
AvailableRequestedStates :
EnabledDefault           : 2
EnabledState             :
OtherEnabledState        :
RequestedState           : 5
TimeOfLastStateChange    :
TransitioningToState     : 12
AggregationBehavior      :
Directionality           :
LocalAddress             : 192.168.1.167
LocalPort                : 445
RemoteAddress            : 10.11.22.33
RemotePort               : 49278
PSComputerName           :
CimClass                 : ROOT/StandardCimv2:MSFT_NetTCPConnection
CimInstanceProperties    : {Caption, Description, ElementName, InstanceID...}
CimSystemProperties      : Microsoft.Management.Infrastructure.CimSystemProperties

Dang! This will get most of what we want (where "want" was defined by that Hal guy), but it won't get the process ID or the process name. So much for rubbing the new cmdlets in his face.

Let's forget about Hal for a second and get what we can with this cmdlet.

PS C:\> Get-NetTCPConnection | Select-Object LocalPort | Sort-Object -Unique LocalPort
LocalPort
---------
    135
    139
    445
   3587
   5357
  49152
  49153
  49154
  49155
  49156
  49157
  49164

This is helpful for getting a list of ports, but not useful for making decisions about what should be allowed. Also, we would need to run Get-NetUDPEndpoint to get the UDP connections. This is so close, yet so bloody far. We have to fall back on the old-school netstat command and its -b option to get the executable name. In episode 123 we needed parsed netstat output, and I recommended the Get-Netstat script available at poshcode.org. Sadly, we are going to have to resort to that again. With this script we can quickly get the port, protocol, and process name.

PS C:\> .\get-netstat.ps1 | Select-Object ProcessName, Protocol, LocalPort | 
   Sort-Object -Unique LocalPort, Protocol, ProcessName

ProcessName   Protocol      Localport
-----------   --------      ---------
svchost       TCP           135
System        UDP           137
System        UDP           138
System        TCP           139
svchost       UDP           1900
svchost       UDP           3540
svchost       UDP           3544
svchost       TCP           3587
dasHost       UDP           3702
svchost       UDP           3702
System        TCP           445
svchost       UDP           4500
...

It should be pretty obvious that ports 135-139 and 445 should not be accessible from the Internet, so we can filter them out before allowing anything through the firewall.

PS C:\> ... | Where-Object { (135..139 + 445) -NotContains $_.LocalPort }
ProcessName   Protocol      Localport
-----------   --------      ---------
svchost       UDP           1900
svchost       UDP           3540
svchost       UDP           3544
svchost       TCP           3587
dasHost       UDP           3702
svchost       UDP           3702
svchost       UDP           4500
...

Now that we have the ports and protocols we can create new firewall rules using the new New-NetFirewallRule cmdlet. Yeah!

PS C:\> .\get-netstat.ps1 | Select-Object Protocol, LocalPort | Sort-Object -Unique * | 
 Where-Object { (135..139 + 445) -NotContains $_.LocalPort } | 
 ForEach-Object { New-NetFirewallRule -DisplayName AllowedByScript -Direction Inbound 
 -Action Allow  -LocalPort $_.LocalPort -Protocol $_.Protocol }
Name                  : {d15ca484-5d16-413f-8460-a29204ff06ed}
DisplayName           : AllowedByScript
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local
...

These new firewall cmdlets really make things easier, but if you don't have PowerShell v4 you can still use the old netsh command to add the firewall rules. The Get-Netstat script supports older versions of PowerShell as well, so this approach is nicely backward compatible. All we need to do is replace the command inside the ForEach-Object cmdlet's script block. Note the subexpressions, $(...), which are needed for the property accesses to expand correctly inside netsh's arguments:

PS C:\> ... | ForEach-Object { netsh advfirewall firewall add rule 
 name="AllowedByScript" dir=in action=allow protocol=$($_.Protocol) 
 localport=$($_.LocalPort) }