Tuesday, February 23, 2010

Episode #83: Faster. Higher. Stronger.

Tim goes for Gold:

The Olympics are in full swing. The world's finest athletes are doing everything they can to shave off fractions of a second and not get caught doping. This episode we try to shave precious seconds off our tasks, and not get caught with our (Irish) coffee at work (again).

We all have those commands that we use regularly, and accessing those commands more quickly would save some time. With PowerShell we can do just that by using Set-Alias.

PS C:\> Set-Alias -Name ss -Value Select-String


The Name and Value parameters are positional, so we don't have to use the parameter names. This command does the same thing:

PS C:\> Set-Alias ss Select-String


We have created our alias; now how can we use it? If we wanted to search all the files in a directory for the word "test", we previously had to use this command:

PS C:\> gci | Select-String test


...but now we can use this shorter command:

PS C:\> gci | ss test


The second command is half the length of the first. That is a nice efficiency gain.

What if we regularly checked the event log to see the five latest items? Obviously a short command would save us some time, but see what happens when we try to create an alias.

PS C:\> Set-Alias g5 Get-WinEvent -MaxEvents 5
Set-Alias : A parameter cannot be found that matches parameter name 'MaxEvents'.


That didn't work, since an alias can't include parameters, but we can still get there; we just need to use a function.

PS C:\> function Get-Last5Events { Get-WinEvent -MaxEvents 5 }
PS C:\> Get-Last5Events

TimeCreated ProviderName Id Message
----------- ------------ -- -------
2/23/2010 8:12:1... Service Control ... 7000 The Diagnostic S...
2/23/2010 8:12:1... Microsoft-Window... 135 The Diagnostic P...
2/23/2010 8:12:1... Service Control ... 7000 The Diagnostic S...
2/23/2010 8:12:1... Microsoft-Window... 135 The Diagnostic P...
2/23/2010 8:11:4... Service Control ... 7036 The Computer Bro...


The name we picked for our function is a bit long, so let's use Set-Alias to create an alias for the function.

PS C:\> Set-Alias g5 Get-Last5Events


So we've shaved a few seconds off of our commands; now on to the doping.

In PowerShell we can use snap-ins and modules to extend the shell. There are modules and snap-ins for managing Active Directory, Group Policy, Diagnostics, Exchange 2007 and 2010, SharePoint 2010, IIS 7, VMware, and many more servers and services.

Snap-ins load sets of cmdlets and providers. Modules, which are only available in v2, can include cmdlets, providers, functions, variables, aliases, and much more. Modules are easier to create than snap-ins, and they appear destined to replace snap-ins as the main way to extend PowerShell. You programmers can think of a module as a "class" while a snap-in is just a collection of functions.
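
Not sure what's available on a given box? Both can be enumerated before loading anything (registered snap-ins and installed modules, respectively):

PS C:\> Get-PSSnapin -Registered
PS C:\> Get-Module -ListAvailable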

Here is how a snap-in is loaded:

PS C:\> Add-PSSnapin VMware.VimAutomation.Core


Wildcard characters can be used in the snap-in name. I use them since I can't ever remember how to type VMware.VimAutomation.Core, so I just type *vmware*. Let's see what cmdlets have been added.

PS C:\> Get-Command -Module *vmware*

CommandType Name Definition
----------- ---- ----------
Cmdlet Add-VMHost Add-VMHost [-Name] <String> ...
Cmdlet Add-VMHostNtpServer Add-VMHostNtpServer [-NtpSer...
Cmdlet Apply-VMHostProfile Apply-VMHostProfile [-Entity...
Cmdlet Connect-VIServer Connect-VIServer [-Server] <...
...


Now, let's load a module and then see what new cmdlets are available:

PS C:\> Import-Module GroupPolicy
PS C:\> Get-Command -Module GroupPolicy

CommandType Name Definition
----------- ---- ----------
Cmdlet Backup-GPO Backup-GPO -Guid <Guid> -Pat...
Cmdlet Copy-GPO Copy-GPO -SourceGuid <Guid> ...
Cmdlet Get-GPInheritance Get-GPInheritance [-Target] ...
Cmdlet Get-GPO Get-GPO [-Guid] <Guid> [[-Do...
...


We have these new aliases, functions and cmdlets, but we don't want to go through the same setup every time. We can automatically load these by editing our profile, but where is it?

There are four profiles, but we typically only work with the one stored in $profile. This file may not exist, so you may need to create it.

PS C:\> $profile
C:\Users\tim\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
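
If the file doesn't exist yet, one common idiom is to create it (along with any missing parent directories) like so:

PS C:\> New-Item -Path $profile -ItemType File -Force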


We can edit the file with good ol' notepad.

PS C:\> notepad $profile


Once we have it open we can add all of our aliases, functions, snap-ins and modules.

Set-Alias -Name ss -Value Select-String
function Get-Last5Events { Get-WinEvent -MaxEvents 5 }
Add-PSSnapin VMware.VimAutomation.Core
Import-Module GroupPolicy
Clear-Host
Write-Host "Welcome to Tim's shell..."


This is a good spot to add those commands and functions that aren't built into PowerShell such as Test-Hash and Get-Netstat.

Let's see how the other shells compete.

Hal's Been There, Done That

"See how other shells compete"? Kid, Unix shells were medalling in the Olympic Aliasing event before Powershell was even a gleam in some demented Microsoft developer's eye.

Setting up aliases in bash is straightforward. Here's how I make it easy to access the history command in my shell:

$ alias h=history
$ h
...
501 alias h=history
502 h
$ type h
h is aliased to `history'

Notice that the type command will tell you when a given command is an alias (or a shell built-in, or a regular command, or...), and thus is preferable to commands like which.

Aside from using aliases to shorten commands that I use frequently, I'll often use aliases as a way of making sure I get the program I want, even when I type the wrong thing:

alias more=less
alias mail=mutt

And, yes, I still regularly use mutt to read my email. You got a problem with that?

Alias definitions can contain multiple arguments and commands; you just need to be careful with your quoting:

alias clean='rm -f *~ .*~'
alias l='ls -AF'
alias tless='less +G'


Aliases can do pretty much any shell code you can imagine, but they do have one shortcoming: any variables you put into an alias are expanded when the alias is read during the start-up of your shell. You can't generally have variables that get interpolated when the alias is executed. For example, I wanted to create an alias that did a "cd" to whatever directory the last file I accessed lives in. But that requires run-time interpolation in order to know the last file accessed, so I can't do it in an alias.

However, you can use shell functions for this:

$ function f { cd `dirname $_`; }
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ f
$ pwd
/home/hal/stuff/testing/src/clkf/aliases

I called the function "f" for "follow"-- as in, "copy that file to a directory and then follow it over there". It's hugely useful.

You'll notice that the function uses the magical bash builtin variable "$_", which is always set to the last argument of the previous command. So my function only works if the last argument of the last command is a file name, but that's good enough for my purposes. If I had specified a directory instead of a file name in the cp command, then I could have just used "cd !$" instead of "f":

$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases
$ cd !$
cd ~/stuff/testing/src/clkf/aliases

If you're wondering what the heck that "!$" is all about, you need to go back and check out Episode #14.

The best place to put your aliases and shell functions is in your ~/.bashrc file. That way, they'll get read every time you fire off a shell.
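
For example, a ~/.bashrc that collects the definitions from this Episode might look like this (just a sketch):

alias h=history
alias more=less
alias clean='rm -f *~ .*~'
alias l='ls -AF'
function f { cd `dirname $_`; }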

Hmmm, I hope this isn't another one of those "easy for Unix (and PowerShell), hard for CMD.EXE" kinds of things. Ed always gets so cranky about those...

Ed's Not Overly Cranky Today
Cranky? Moi? Nevah. Well, mostly never. Ok... sometimes.

Despite my non-cranky demeanor, support for aliases in the cmd.exe shell isn't particularly strong. In this corner of the shell-o-sphere, we typically apply two common approaches to mimicking aliases: doskey macros and small batch files. Let's look at each.

Didja shudder just a little bit when I mentioned doskey? Yeah, it's very old skool, but it can help us get some work done. To define a macro, you could simply run:
C:\> doskey <MacroName>=<Macro>
For example, if you want to display your running processes by simply running "ps", you could run:
C:\> doskey ps=wmic process list brief
C:\> ps

HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
426 System 8 4 97 581632
---SNIP---
That simple macro is nice, but it's got a big limitation we've got to overcome by expanding our macro knowledge. To see this limitation, let's try running our new macro and searching its output for cmd.exe:
C:\> ps | find "cmd.exe"
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
426 System 8 4 97 581632
---SNIP---
Doh! We only got the full output of our ps macro, without our search taking effect. What happened? Well, the macro substitution by default just ignores anything that follows it on the command line, unless we define the macro to end with $*, which holds the remainder of the command line typed in after the macro. By putting it at the end of our macro, everything typed after the macro will be executed after macro expansion. So, a better ps would be:
C:\> doskey ps=wmic process list brief $*
C:\> ps | find "cmd.exe"
23 cmd.exe 8 676 1 1462272
28 cmd.exe 8 4008 1 2785280
That's what Momma likes.

Next, to more closely mimic what Hal does above with shell history by simply running the "history" command or the "h" command, you could use:
C:\> doskey history=doskey /history $*
C:\> doskey h=doskey /history $*

C:\> h
Note that I did use $* here, in case someone wants to pipe our output to another command or to redirect it into a file. If you just run the macro with nothing after it, $* contains nothing, so it executes as you'd expect... no harm, no foul. You can even split up the command line options following your macro, referring to each component as $1 up to $9, using each independently. And, if you want to have multiple commands in a macro, enter them as command1$Tcommand2, rather than using & between them. But, if you really want to use &, modern versions of Windows allow you to define the macro as command1 ^& command2. Likewise, if you want to do any piping in doskey macros, make sure you use a ^|.
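
For instance, here are a couple of toy macros (the names are made up, purely to illustrate $1 and $T):

C:\> doskey np=notepad $1
C:\> doskey rootdir=cd \$Tdir

The first passes its first argument along to notepad; the second changes to the root directory and then runs dir.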

If you ever want to list all of the macros you've created, run:
C:\> doskey /macros
It should be noted that these macros take precedence over any normal commands you type at cmd.exe, so you could really make your shell useless with them. For example, if you run:

c:\> doskey dir=echo No Way Dude!

c:\> dir
No Way Dude!
You've lost access to the dir command. To undefine a macro, simply use doskey to redefine a macro name with nothing after the equals sign:
C:\> doskey dir=
C:\> dir
Volume in drive C has no label.
Volume Serial Number is 442A-03DE
---SNIP---
Now, these little toy macros we've defined above are cute, but let's get into some macros that really save us some serious time. Faithful readers of this blog (Hi Mom!) know that many of my episodes include certain constructs, like making a command run forever or invoking delayed variable expansion. Let's look at some macros for those:
C:\> doskey forever=for /L %z in (1,0,2) do @$*

C:\> forever ipconfig /displaydns & ping -n 2 127.0.0.1 > nul
Here, I've created a macro called "forever" that invokes a FOR /L loop, set to count from 1 to 2 in steps of zero. That way, it'll run forever. In the DO clause of my loop, I turn off the display of commands (@) and run whatever follows the invocation of forever. Note that I have to define a variable for my FOR loop, and I've chosen %z. That's because I want to minimize the chance I'll have a collision with any variables I might choose for my command to forever-ize. If I had used a %i in my macro, I'd really be thrown for a loop (no pun intended) if I used %i in my follow-up command. With FOR /F loops that have multiple tokens, variables are dynamically allocated alphabetically, so I hang out at the end of the alphabet here to lower the chance of collision.

You could put your ping delay inside the forever macro, but I find it best to keep it out, giving me more flexibility as I define my commands. Note that the command I'm running after forever here will run ipconfig to dump the DNS cache, followed by a one-second delay introduced by pinging myself twice (first ping happens immediately, followed by another ping one second later).

And, to perform delayed variable expansion (described in Episode #12), I could define a macro of:
C:\> doskey dve=cmd.exe /v:on /c "$*"
Note that you can put stuff in a macro _after_ the $* remainder of your command-line invocation, as I'm putting in a close quotation mark here to make sure that my full command gets executed in the context of delayed variable expansion. Now, I can do crazy stuff like this from Episode 58, filling the screen with random numbers to look a little like the Matrix:
C:\> dve for /L %i in (1,0,2) do @set /a !random!
It's important to note that any environment variables whose value changes in your command must be referred to using !variable_name!, instead of the more traditional %variable_name%.

Ahhh... "But wait," you say. After running dve in this example, I manually typed out a FOR /L loop that was simply my "forever" macro from before. Couldn't I just do a dve followed by a forever? Let's try it:
C:\> dve forever set /a !random!
'forever' is not recognized as an internal or external command, operable program
or batch file.
No sir. You can't nest these suckers. Also, they have a problem if they aren't the first command included on your command line. Consider:
C:\> echo hello & ps
hello
'ps' is not recognized as an internal or external command,
operable program or batch file.
So, we've got some pretty serious limitations here*, but, as Tim points out, they can shave off a precious few seconds, especially for things we often start our command-lines with, such as the forever FOR loop or delayed variable expansion.

Wanna see a really whacked out macro? This one was provided by Brian Dyson, a cmd.exe warrior like no other (and a long-lost twin of my friend Mike Poor, but that is a story for another day). Brian showed me a macro in which he emulates the !command feature of bash, which Hal alludes to above. Check out this amazing action, which I quote from Brian:

For the hard-core DOSKEY macro lover, a '!' DOSKEY macro that acts (somewhat like) Bash, running the last command:
c:\> doskey !=doskey /history ^| findstr /r /c:"^$*" ^| findstr /rv /c:"^!" ^>
"%TEMP%\history.txt" ^&^& ((for /f "tokens=*" %a in (%TEMP%\history.txt) do
@set _=%a) ^&^& call echo.%_^% ^& call call ^%_^%) ^& if exist "%TEMP%\history.txt"
del "%TEMP%\history.txt"
Here we get the history and pipe it into a `findstr` command searching
for commands that begin with the arguments to `!`. We remove any
previous `!` command and redirect everything into a temporary file. (I
couldn't find a work-around for the temporary file). If the final
findstr was successful, parse through the temporary file and set '_' to
the last command (like Bash). If this was successful, then echo out the
command and call it via double `call`. Finally, clean up the temporary
history file.

Dude! Nice. Now, to invoke Brian's macro, you have to run ! followed by a space, followed by a command or letters and it will invoke the last instance of whatever previous command started with the letters you type. Check it out:
C:\> ! di
dir
Volume in drive C has no label.
Volume Serial Number is 442A-03DE

Directory of c:\
---SNIP---
That space between the ! macro and the first part of the command we want to run is very important. Without it, cmd.exe tries to find a command called !di and bombs. With it, Brian's macro kicks in and expands it to the most recent command that starts with those letters.

Note that a given set of macros only applies to the shell in which it is created. Your macros aren't carried to other currently running shells, nor do they even apply to child shells that you spawn. If you exit your shell, they are gone. If you want to make your macros permanent, first define them as we show above, and then export all those definitions into a file by running:
C:\> doskey /macros > macros.txt
You can call the file anything you want, but I like calling it macros.txt because it's easy to remember. Then, at any time, you can import these macros by running:
C:\> doskey /macrofile=macros.txt
If you want to apply your macros to every cmd.exe you launch going forward, you can place your macros.txt in a convenient place on your system (such as your common user home directory or even in system32). Then, put a command like the following into any of your autostart locations:

%systemroot%\system32\doskey.exe /macrofile=%systemroot%\system32\macros.txt

For macros, I typically put them in the autostart entry associated with the command shell itself, namely the Autorun Registry key at HKCU\Software\Microsoft\Command Processor. You can define this key by running:
C:\> reg add "hkcu\software\microsoft\command processor" /v Autorun /t reg_sz /d
"%systemroot%\system32\doskey.exe /macrofile=%systemroot%\system32\macros.txt"
Be careful! If you already have a command set to autorun via this key, you may want to append the command to your already-existing one by simply inserting an & between the two commands. The reg command will prompt you to confirm or reject the overwrite if you already have something there.

*To get around some (but not all) of the limitations of doskey macros, you could alternatively use small bat files placed in %systemroot%\system32 to kick off familiar commands. For example, you could run:
C:\> echo @wmic process list brief > %systemroot%\system32\ps.bat
C:\> ps
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
404 System 8 4 97 774144
---SNIP---
The advantage of doing your command-line shrinkage with bat files is that you can now run them as little commands in themselves, anywhere in your command invocation:
C:\> echo hello & ps
hello
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
404 System 8 4 97 745472
---SNIP---
The downside of this approach is that your bat file must contain whole commands, not just the start of a command, like we can do with macros. That's why my forever and dve examples above work so well as macros and not as bat files. They are the starters of something else, not whole commands unto themselves, as are the ps and history examples we've touched on here.

So, back to Tim's juicing metaphor. You get to pick your poison with cmd.exe and alias-like behavior. While both methods have some limitations, shell jocks can use macros or bat files to shave off a few precious seconds and boost their performance.

Seth (our Go-To-Mac Guy) Shoots & Scores:
Saving time? Who wants to save time? I thought the whole point of CLI tricks was to make things as painful and as drawn out as possible? Oh wait, that's the job of some of the curlers in Vancouver.

When you start talking about an alias on a Mac, the first thing any Mac user is going to think of is a GUI pointer, otherwise known in Mac land as an Alias. These have existed since long before the days of OS X, and while not as powerful as, say, Unix symbolic links, they do have some nice features. For example, if you move the target file of an alias, the alias will still work as long as the target is on the same file system. Sadly though, they don't work from the command line. Use good old ln for that.

But I get off topic. I'm disappointed in Hal; for all his talk about being quick on the draw and not letting the boss see you enjoying that wonderful concoction of roasted bean water and barley, he doesn't take his shell fu to the next level.

Hal left us with:
$ function f { cd `dirname $_`; }
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ f
$ pwd
/home/hal/stuff/testing/src/clkf/aliases
Great! But who's to say we can't use aliases?
$ alias lastdir='function f { cd `dirname $_`; }; f; echo Your working directory is now $PWD'
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ lastdir
Your working directory is now /home/hal/stuff/testing/src/clkf/aliases
Now you might ask, what's the point of wrapping a function into an alias? It's extra unneeded text in an already beautifully simple command! True, but it shows the flexibility of one-line commands (I love the semi-colon) and saves us an extra step when we load it into our Mac ~/.bash_profile file.

Aliases also allow us to use variables; you just have to remember to single quote, not double quote, the command. Otherwise the variable will be resolved when you set the alias.
$ pwd
/Users/seth
$ alias shellfu="echo O Canada! Our $HOSTNAME and native $PWD; echo True patriot love in all they sons $SHELL"
$ shellfu
O Canada! Our HomePC and native /Users/seth
True patriot love in all they sons /bin/bash
$ cd ~/Desktop
$ shellfu
O Canada! Our HomeMac and native /Users/seth
True patriot love in all they sons /bin/bash

So even though we changed the current working directory, our alias is reporting the path that was active when we set the alias. But when we single quote the command:
$ pwd
/Users/seth
$ alias shellfu='echo O Canada! Our $HOSTNAME and native $PWD; echo True patriot love in all thy sons $SHELL'
$ shellfu
O Canada! Our HomeMac and native /Users/seth
True patriot love in all thy sons /bin/bash
$ cd ~/Desktop
$ shellfu
O Canada! Our HomeMac and native /Users/seth/Desktop
True patriot love in all thy sons /bin/bash

Ok, so what have we learned? First, you can use variables in aliases. Secondly, you can use multiple arguments and commands in aliases. Let's make this even more interesting.

Suppose it's 2 am. You've just gotten word that a host is down on your network. What's the first thing you do? Personally I ping it.
$ ping -c 3 curlingrocks.ohcanada.com
PING curlingrocks.ohcanada.com (192.168.128.10): 56 data bytes
64 bytes from 192.168.128.10: icmp_seq=0 ttl=128 time=0.991 ms
64 bytes from 192.168.128.10: icmp_seq=1 ttl=128 time=0.558 ms
64 bytes from 192.168.128.10: icmp_seq=2 ttl=128 time=0.403 ms
...

Ok, so I know that 192.168.128.10 is the correct IP address for that server, so DNS is working and the host is responding to pings. I'm not sure what this server does or where it lives on my company network, but it's really important and the Boss wants it back up ASAP. So what do I do? Run some more commands to see what services are running, etc. That's probably faster than trying to dig through crummy documentation.

So with my nice little scan alias:

$ alias scan='hping --count 3 --fast --rawip $_; nslookup $_; echo Results 
from $HOSTNAME to $_; traceroute $_; nmap -n -A -PN $_'

After I ping it, I can immediately find out lots of other information about the host, without having to wait, by just typing "scan". Now, the reason the above works, and we don't have to get into functions like Hal did, is that the last argument of each command is the IP that we want, and it doesn't change. $_ in the last nmap command actually refers to the last argument of the traceroute command, and so on. If this weren't the case, we'd have to use a function like Hal showed us, or assign our target IP its own variable.

$ alias scan='TARGET=($_); echo Results from $HOSTNAME to $TARGET; hping 
--count 3 --fast --rawip $TARGET; nslookup $TARGET; traceroute $TARGET;
nmap -n -A -PN $TARGET'

Also remember, if you alias your favorite command and happen to name it after something else useful (top, perhaps), you can always bypass the alias with a backslash.

$ \top

Talk about an easy way to disguise the up'ers!

Tuesday, February 16, 2010

Episode #82: Hippy Barfday Spew Do You?

Ed's Got Sed (well, a little bit of it at least):

In a celebratory mood, I belt out:
C:\> cmd.exe /v:on /c "for /f "delims=" %i in ('echo Hippy barfday spew do you!')
do @set stuff=%i& echo !stuff! & set stuff=!stuff:i=a! & echo !stuff! & set
stuff=!stuff:arf=irth! & echo !stuff! & set stuff=!stuff:spew=to! & echo !stuff!
& set stuff=!stuff: do=! & echo !stuff! & set stuff=!stuff:yo=f! & echo. & echo
!stuff!"
Hippy barfday spew do you!
Happy barfday spew do you!
Happy birthday spew do you!
Happy birthday to do you!
Happy birthday to you!

Happy birthday to fu!

If you couldn't tell, we're celebrating the FIRST BIRTHDAY of this blog! Yes, we've already made it through one orbit of that big ball of gas at the center of our Solar System, and we're looking forward to even more. It was one year ago today that three merry command line bandits decided to take a fun little brawl of command-line one-upmanship from Twitter to blog format, so we could get into deeper fu. As a birthday present, my command above shows how we can perform string substitutions at the Windows cmd.exe command line.

I've never hidden the fact that I've often longed for the sed command, which allows for nifty stream editing. Although it's got a ton of flexible features, one of sed's most common uses is replacing a string with another string in a stream of data, such as Standard Output. What's not to like? Well, the fact that we don't have a built-in equivalent in cmd.exe is one thing that's a bummer.

So, I got to thinking about this problem the other day, when I realized that I could do the substitution and replacement thing using string altering options in cmd.exe. I could take the data I want to alter, put it in a variable, and then change all occurrences of given strings to other strings using the notation:
set string=%string:original=replacement%

Or, if we use delayed environment variable expansion, we rely on:
set string=!string:original=replacement!

I've done just that above, starting by turning on delayed environment variable expansion (cmd.exe /v:on) to execute the command (/c) of FOR /F. My FOR /F command is designed to take the output of my echo command and put it in the variable %i. Alternatively, if I wanted to change text inside of a file, I could have used the "type filename" command instead of echo, iterating through each line of the file making substitutions. I turn off default parsing on spaces and tabs using "delims=", so that my whole line of data gets shoved into %i. This is all very routine stuff for life inside of cmd.exe, even on your barf^h^h^h^hbirthday.

Then, I move my iterator value into a variable that I can take action on (set stuff=%i). I now can use my string replacement technique to start altering that variable, in a shallow and pale (but useful) mimicking of but one of the features of sed.

I can change individual characters into other characters, such as "i" to "a":
set stuff=!stuff:i=a!

I can change multi-character substrings like barf into birth:
set stuff=!stuff:arf=irth!

I can replace whole words, changing "spew" into "to":
set stuff=!stuff:spew=to!

I can delete whole words, like " do":
set stuff=!stuff: do=!

This one might be worth a note. Here, I'm replacing something with nothing by placing nothing after the equals sign. The item I'm replacing is " do", with a space in it. Otherwise, I'd have a double space left behind. And, yes, I could have alternatively replaced "do " (with a space after it) with nothing. Or, I could have replaced " do " with " " using !stuff: do = !. There are many options.

And I can even take bigger strings and replace them with smaller strings:
set stuff=!stuff:yo=f!

This is really cool and useful, but it does have some limitations. Note that every instance of the substring I specify is replaced, and there really is no means for just changing, for example, the first occurrence of the substring. Also, this doesn't work for non-printable ASCII characters. You have to be able to type it to get it into that syntax. I've also gotta shove everything into a string to make this work, but that's not so bad.
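
For example, here's a quick, contrived demonstration of that every-instance behavior:

C:\> cmd.exe /v:on /c "set stuff=foo boo zoo& echo !stuff:o=0!"
f00 b00 z00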

So, there you have it... a highly obfuscated command for wishing ourselves Happy Birthday.

Whatcha got Hal & Tim?

Tim blows out the candles:

Tim lets it rip:
PS C:\> "Hippy barfday spew do you!" | Tee-Object -Variable stuff; 
$stuff -replace "i","a" | Tee-Object -Variable stuff;
$stuff -replace "arf","irth" | tee -var stuff;
$stuff -replace "spew","to" | tee -va stuff;
$stuff -replace " do","" | tee -va stuff;
Write-Output ""; $stuff -replace "yo","f"


Hippy barfday spew do you!
Happy barfday spew do you!
Happy birthday spew do you!
Happy birthday to do you!
Happy birthday to you!

Happy birthday to fu!


One orbit around that big ball of gas, huh? I'm sure there is a joke related to Ed and his love of beans, but we are here to celebrate, not disgust.

For those of you who have followed the blog since the beginning, the original third bandit was Paul Asadoorian, not me. For the Three Stooges fans out there, I guess you'd call me Curly, and Paul would be Shemp. Although, I can't decide if that would make Hal Moe or Larry. Let's get back to business and this week's PowerShell nyuk, nyuk, nyuk.

It is rather fitting that PowerShell is second this week. Cmd's string replacement is pretty weak and the syntax is terrible, while Linux is the opposite. PowerShell is somewhere in between, but much closer to the Linux side of things.

Before we dig into the entire command above, we'll first do the string substitution without all the extra output.

PS C:\> "Hippy barfday spew do you!" -replace "i","a" 
-replace "arf","irth" -replace "spew","to" -replace " do",""
-replace "yo","f"


Happy birthday to fu!


The Replace operator is used to replace strings, duh! By default, the Replace operator is case insensitive; to be explicitly case insensitive, use the IReplace operator. For a case sensitive replace, use the CReplace operator.
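
For example, here's a quick look at the case sensitivity difference:

PS C:\> "Happy" -replace "h","J"
Jappy
PS C:\> "Happy" -creplace "h","J"
Happy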

Now, let's do all of Ed's tricks:

Change individual characters into other characters, such as "i" to "a":
... -replace "i","a"

Change multi-character substrings like barf into birth:
... -replace "arf","irth"

Replace whole words, changing "spew" into "to":
... -replace "spew","to"

Delete whole words, like " do":
... -replace " do",""


We can even do tricks that Ed can't, by using regular expressions:
Replace "y" with "f" but only if it is the first character in a word:
PS C:\> "Happy birthday to do yu!" -replace "\sy"," f"
Happy birthday to do fu!

Swap the first two words in a line:
PS C:\> "birthday Happy to fu!" -replace "^(\w+)\s(\w+)","`$2 `$1"
Happy birthday to fu!


The last command uses regular expression groups. We won't go into the depths of regex, but in short, "\w+" will grab a word and "\s" will grab a space. The caret (^) is used to anchor the search to the beginning of the string, and the parentheses are used to define the groups. In the replacement portion we use `$1 and `$2 to represent (respectively) the first and second groups (words) found. Since we want to output them in reverse order, we use "`$2 `$1" to put the second word before the first.

Back to the original command:

PS C:\> "Hippy barfday spew do you!" | Tee-Object -Variable stuff; 
$stuff -replace "i","a" | Tee-Object -Variable stuff;
$stuff -replace "arf","irth" | tee -var stuff;
$stuff -replace "spew","to" | tee -va stuff;
$stuff -replace " do","" | tee -va stuff;
Write-Output ""; $stuff -replace "yo","f"


We want to display each change as it happens. To pull this off we have to use the Tee-Object cmdlet. Similar to Linux's tee command, Tee-Object takes the command output and saves it in a file or variable, as well as sending it down the pipeline or to the console.

If we break it down, this command has three parts that are repeated.

<input object> | Tee-Object -Variable stuff;
$stuff -replace <original>,<replacement>

We start with the input object "Hippy barfday spew do you!" and pipe it into Tee-Object (alias tee). The only reason we use Tee-Object is so we can display the output and work with it further down the pipeline. After tee, we do the replace. The output of the previous portion becomes the input for the next. Rinse and repeat.

Towards the end of the command we throw in the Write-Output cmdlet (alias write, echo) with an empty string to add the extra line break.

One quick thing to note, when using the Tee-Object cmdlet's Variable parameter, do not use a $. The parameter accepts a string, which is the name of the variable.
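
A quick illustration of that parameter in isolation:

PS C:\> "fu" | Tee-Object -Variable stuff
fu
PS C:\> $stuff
fu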

So that's a more lucid version of Ed's highly obfuscated command, and now it is time for Hal to hand out the birthday spankings.

MoeHal sedsSays

Huh, I was sure Ed was Curly. At Ed's current rate of hair loss, he's going to resemble Curly before too much longer.

Hmmm, my Windows colleagues are desperately trying to achieve some of the functionality of sed in their silly little command shells. Here's a hint guys: Cygwin is your friend. Then you could do things like this:

$ echo 'Hippy Barfday Spew Do You!' | 
sed 's/\(H\)i\(ppy B\)arf\(day \)Spew D\(o \)Yo\(u!\)/\1a\2irth\3T\4F\5/'

Happy Birthday To Fu!

The whole trick here is leveraging sub-expressions-- the text enclosed in "\(...\)"-- in the first part of the sed substitution expression. Essentially I'm using the sub-expressions to "save" the bits of the line that I want to keep. You'll notice the bits of our input string that I don't want are carefully kept outside the "\(...\)" boundaries.

You can refer to the contents of the sub-expressions on the righthand side of the substitution using the \1, \2, ... variables. Sub-expressions are numbered left to right by opening parenthesis-- this is an important distinction when you start doing crazy stuff like nested sub-expressions. In this case, however, all I have to do is output the contents of my sub-expressions in order with appropriate text in between them to form our final message.

So really I'm just using sed sub-expressions like a cookie cutter here to chop out the bits of the line I want. This functionality makes sed very useful as a surgical tool for reformatting text into a regular format. Another example of this comes from one of our earliest Episodes where I showed ShempPaul how to bring the sed madness to parse the output of the host command.

Now the problem is that these sed expressions always end up looking awkward because of all of the backwhacks floating around. If you have GNU sed handy, you can use the "-r" (extended regex) option. This allows you to create sub-expressions with "(...)", saving yourself a lot of backwhack abuse:

$ echo 'Hippy Barfday Spew Do You!' | 
sed -r 's/(H)i(ppy B)arf(day )Spew D(o )Yo(u!)/\1a\2irth\3T\4F\5/'

Happy Birthday To Fu!

Still ugly, but definitely more readable.

Thanks everybody for taking time out of your busy lives to keep up with our humble little blog in the past year. We'll save you a bit of the birthday cake!

Tuesday, February 9, 2010

Episode #81: From the Mailbag

Hal checks out the mail

We love getting email from readers of the blog. And we love getting cool shell hacks from readers even more. Recently, loyal reader Rahul Sen sent along this tasty little bit of shell fu:

How to search for a certain text string in a directory and all its subdirectories, but only in files of type text, ascii, script, etc.:

$ grep 9898 `find /usr/local/tripwire -type f -print | xargs file |
egrep -i 'script|ascii|text' | awk -F":" '{print $1}'`

/usr/local/tripwire/te/agent/data/config/agent.properties:tw.local.port=9898
/usr/local/tripwire/te/agent/data/config/agent.properties:tw.server.port=9898


That's totally cool, Rahul!

Honestly, when I first looked at this I thought, "There's got to be a shorter way to do this." But the tricky part is the "only in files of type text, ascii, script, etc" requirement. This basically forces you to do a pass through the entire directory first in order to locate the relevant file types. Thus the complicated pipeline to pass everything through the file command and egrep.

A few minor improvements I might suggest:

  1. I'm worried that for a large directory you'll end up returning enough file names that you exceed the built-in argument list limits in the shell. So it might be better to use xargs again rather than backticks.

  2. I probably would have chosen to use sed at the end of the pipeline rather than awk, just to be more terse.

  3. You don't actually need "-print" on modern versions of find-- it's now the default. Only old-timers like me and Rahul end up doing "-print" all the time because we were trained to do so by old versions of find.


So my revised version would look like:

$ find /usr/local/tripwire -type f | xargs file |
egrep -i 'script|ascii|text' | sed 's/:.*//' | xargs grep 9898

Mmmm-hmmm! That's some tasty fu! Let's see what my Windows brethren have cooking...

Ed Unfurls:
This is a helpful little technique. Now, unfortunately at the Windows command line (man, if I only had a dime for every time I said that phrase), we do not have the "file" command to discern the type of a file. But, fear not! We do have a couple of alternative methods.

For a first option, we could use a nifty feature of the findstr command to ignore files that have non-printable characters. When run with the /p option, findstr will ignore any files that contain high-end ASCII sequences, letting us skip over EXEs, DLLs, and other stuff. It's not as fine-grained a scalpel as scraping the output of the Linux file command for script, ascii, and text, but it'll serve us well as follows:

C:\> findstr /s /p 9898 *

Here, I'm using findstr to recurse the file system (/s) from wherever my current working directory is, skipping files with non-printable characters (/p), looking for the string 9898 in any file (*). If you want to get even closer to the original, we can specify a directory where we want to start the search using the /d: option as follows:

C:\> findstr /d:C:\windows /s /p 9898 *

Now, for our second option, there is another way to refine our search besides the /p option of findstr, getting us a little closer to the file types Rahul specified in Linux using the find command. It turns out that Microsoft actually put an indication of each file's type in the name of the file itself. You see, by convention, Windows file names have a dot followed by three letters that indicate the file type. Who knew?!?! :)

To map the desired functionality to Windows, we'll rely on file name suffixes to look inside of .bat, .cmd, .vbs, and .ps1 files (various scripts), .ini files (which often contain config info), and .txt files (which, uh... you know). What's more, many commands associated with searching files allow us to specify multiple file names, with wild cards, such as the dir command in this example:

C:\> dir *.bat *.cmd *.vbs *.ps1 *.ini *.txt

And, happy to say, dir isn't the only one that lets us look for multiple file names with wildcards. For my second solution to this challenge, I'm going to use a FOR /R loop. These loops recurse through a directory structure (/R, doncha know) setting an iterator variable to the name of each file that is encountered. Thus, we can use the following command as a rough equivalent to Rahul's Linux fu:

C:\> FOR /R C:\ %i in (*.bat *.cmd *.vbs *.ps1 *.ini *.txt) do @findstr 9898 "%i" && echo %i

Here, I'm running through all files found under C:\ and its subdirectories, looking inside of any file that has a suffix of .bat, .cmd, etc, running findstr on each file (whose name is stored in %i, which has to be surrounded in quotes for those cases when the value has one or more spaces in the file name) looking for 9898. And, if I successfully find a match, I echo out the file's name. Now, this output looks a little weird, because the file's name comes after each line that contains the string. But, that is a more efficient way to do the search. Otherwise, I'd have to introduce unnecessary complexity by using a variable and parsing to store the line of the file and print its name first, then print the contents. I'd certainly do that for prettiness in a script. But, at the command line by itself, I'd eschew the complexity and just go with what I've shown above to get the job done.

Now, there's a gazillion other ways to do this as well. For a third possibility, we could take the first option above (findstr) and use the multiple file suffix specification of option 2 (*.bat *.cmd *.vbs *.ps1 *.ini *.txt) to come up with:

C:\> findstr /d:C:\windows /s 9898 *.bat *.cmd *.vbs *.ps1 *.ini *.txt

I actually like this third approach best, because it's relatively easy to type, makes a bunch of sense, has nicer-looking output than the FOR /R option, and has better performance.

Fun, fun, fun! Thanks for the great suggestion, Rahul.

Whatcha got for us, Tim?


Tim delivers:

Sadly, PowerShell is missing the "file" command, just like the standard Windows command line. There isn't a PowerShell cmdlet similar to "findstr /p" either, but of course we could use FindStr, since all the Windows commands are available in PowerShell. Ed already covered FindStr, so we will use just PowerShell cmdlets.

If we know that the files in question are in a specific directory, not subdirectories, there is a pretty simple command to find the files using the Select-String cmdlet.

PS C:\> Select-String 9898 -Path *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-List | Select Path


Path
----
C:\temp\a.txt


According to the documentation "the Select-String cmdlet searches for text and text patterns in input strings and files. You can use it like Grep in UNIX and Findstr in Windows." Whoah, big difference there! Grep has much more robust regular expression support when compared to FindStr, and yes, PowerShell does give us rich regular expressions.

Back to the task at hand. We only care if the file contains the text in question, not how many times the text is found in the file. The List parameter is used as a time saver, since it will stop searching after it finds the first match in a file.

For each match, the default console output displays the file name, line number, and all text in the line containing the match. Of course the output is an object and we just want the file's path, so the results are piped into Select-Object (alias select) in order to get the full file path.

But we want to search through subdirectories too. To do that we have to use Get-ChildItem (alias dir, gci, ls).

PS C:\> Get-ChildItem -Include *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-Recurse | Select-String 9898 -List | Select-Object path


Path
----
C:\temp\subfolder\b.txt
C:\temp\a.txt


The Recurse parameter specifies that the search should recursively search through subdirectories. The Include parameter retrieves only the files that match our filter. We could use the Exclude parameter if we wanted to search all files that aren't exe's or dll's.

PS C:\> Get-ChildItem -Exclude *.exe,*.dll -Recurse |
Select-String 9898 -List | Select-Object path


The command can be shortened even further, since the full parameter name doesn't have to be used. As long as the shortened parameter name isn't ambiguous, the short form works, so this command will do the same thing:

PS C:\> ls -i *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt -r |
Select-String 9898 -L | select path


There is a catch if you are using version 1 of PowerShell: the Select-String cmdlet doesn't natively take the input from Get-ChildItem and use it to specify the path. We have to use a ForEach-Object loop (alias %) in order to accomplish the same task.

PS C:\> Get-ChildItem -Include *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-Recurse | % { Select-String 9898 -List -Path $_.FullName } |
Select-Object path


A side note:
As you probably already know, Windows XP, Vista, and 2003 don't come with PowerShell and require a separate install, but for the love of Pete, install it. Version 2 has been available for quite a while and there are many enhancements over v1 (and even more when compared to cmd). Windows 2008 R2 and Windows 7 come with v2 by default.

PowerShell v1 in Windows 2008 (R1) is an optional feature that needs to be enabled. It can be installed using the Windows command line by running this command:

C:\> ServerManagerCmd.exe -install PowerShell


This is probably the most useful Windows command available (sorry Ed).

Seth Matheson (Our Brand-Spankin'-New Mac OS X Go-To Guy) Interjects:

While it's great to have 3 or 4 commands to string together to get the output you want (heck, this is part of the joy of the power of 'nix, the freedom to do almost anything because you're not restricted to a specific command), sometimes it's nice to have one command that will do it for you, out of the box.

Enter Spotlight on the Mac. Spotlight is Apple's built-in OS-level search that functions on meta tags, and even though everyone rags on Apple for the GUI, they usually put in a command-line function or two for our trouble.

$ mdfind evil

The above command will search file names and file contents for the string "evil". Simple enough right?

$ mdfind evil -onlyin /Users/ed

Still pretty simple, this will limit the search for "evil" in Ed's home directory. I'm sure we won't find anything, right Ed?

So back to the original challenge, what good would this be unless it can search for file type? Guess what, you can!

$ mdfind "evil kind:text" -onlyin /Users/ed

This will find all text files in Ed's home directory with the string "evil". Wonder what that'll come up with...

You can use kind for other things too, Applications, Contacts, Folders, etc.

This is only a very simple Spotlight search. You can go much further. Everything in OS X (10.4 and above) has meta tags that detail all sorts of interesting things about the files. Run the following to find out just what kind of data that Spotlight sees:

$ mdls "/Users/ed/Desktop/March CLK Fu.txt"

kMDItemContentCreationDate = 2010-02-09 17:54:00 -0500
kMDItemContentModificationDate = 2010-02-09 17:54:00 -0500
kMDItemContentType = "public.plain-text"
kMDItemContentTypeTree = (
"public.plain-text",
"public.text",
"public.data",
"public.item",
"public.content"
)
kMDItemDisplayName = "March CLK Fu.txt"
kMDItemFSContentChangeDate = 2010-02-09 17:54:00 -0500
kMDItemFSCreationDate = 2010-02-09 17:54:00 -0500
kMDItemFSCreatorCode = ""
kMDItemFSFinderFlags = 0
kMDItemFSHasCustomIcon = 0
kMDItemFSInvisible = 0
kMDItemFSIsExtensionHidden = 0
kMDItemFSIsStationery = 0
kMDItemFSLabel = 0
kMDItemFSName = "March CLK Fu.txt"
kMDItemFSNodeCount = 0
kMDItemFSOwnerGroupID = 501
kMDItemFSOwnerUserID = 501
kMDItemFSSize = 3
kMDItemFSTypeCode = ""
kMDItemKind = "Plain text"
kMDItemLastUsedDate = 2010-02-09 17:54:00 -0500
kMDItemUsedDates = (
2010-02-09 00:00:00 -0500
)

All fun things you can search by! For example:

$ mdfind "kMDItemFSOwnerGroupID == '501'"

Will get everything owned by UID 501 (usually the first created user on the system).

Note: It should be mentioned that most of the 'nix command line fu on this site will work on a Mac; it is, after all, BSD under the hood. That being said, sometimes we can save some time with the built-in utilities, like Spotlight.

Tuesday, February 2, 2010

Episode #80: Time Bandits

Tim stomps in:

I have always wanted to time travel. Since it isn't possible to go back and kill Hitler, I thought maybe we could go back in time and change some files. Obviously, we don't have the technology to actually go back in time and make changes. However, what if we could make it appear that we went back in time by altering timestamps?

First, let's create a few files for our time warp. By the way, these next commands are all functionally equivalent.

PS C:\> Write-Output aaaa | Out-File a.txt
PS C:\> Write bbbb | Out-File b.txt
PS C:\> echo cccc | Out-File c.txt
PS C:\> echo dddd > d.txt


Now to see the time related properties available to us for the file object.

PS C:\> Get-ChildItem | Get-Member -MemberType Property | Where-Object { $_.Name -like "*time*" }

TypeName: System.IO.FileInfo

Name MemberType Definition
---- ---------- ----------
CreationTime Property System.DateTime CreationTime {get;set;}
CreationTimeUtc Property System.DateTime CreationTimeUtc {get;set;}
LastAccessTime Property System.DateTime LastAccessTime {get;set;}
LastAccessTimeUtc Property System.DateTime LastAccessTimeUtc {get;set;}
LastWriteTime Property System.DateTime LastWriteTime {get;set;}
LastWriteTimeUtc Property System.DateTime LastWriteTimeUtc {get;set;}


The Get-Member cmdlet (alias gm) is used to get the properties and methods available for an object that has been sent down the pipeline. In our case, the object sent down the pipeline is the file object. We just want to look at the properties (not methods, scriptproperties, etc) of the object, so we use the MemberType parameter for filtering. Then the Where-Object cmdlet (alias ?) is used to filter for properties with "time" in the name. The properties above are read/write, as shown by {get;set;}. Well, lookey there, we can set the timestamps!

An important side note: The Get-Member cmdlet is an extremely useful command. I can't begin to say how often I've used this command to find properties and methods available for an object. There are other MemberTypes, but we will have to cover those at a later time. For a full description check out the MemberType parameter on this Microsoft help page.
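
As a quick teaser, swapping in a different MemberType is all it takes to start exploring (output omitted):

PS C:\> Get-ChildItem | Get-Member -MemberType ScriptProperty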

The xxxxxTime and xxxxxTimeUtc properties are actually the same property; the only difference is how they display the date relative to the UTC (Coordinated Universal Time) offset. In my case, the difference is 6 hours. For the sake of brevity the UTC times will be ignored, since they are essentially redundant.

Let's take a look at our files.

PS C:\> gci | select Name, LastWriteTime, CreationTime, LastAccessTime

Name LastWriteTime CreationTime LastAccessTime
---- ------------- ------------ --------------
a.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
b.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
c.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
d.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM


Now let's go back in time.

(gci a.txt).LastAccessTime = Get-Date ("1/1/2010")
(gci b.txt).CreationTime = Get-Date ("1/1/2010")
(gci c.txt).LastWriteTime = Get-Date ("1/1/2010")

Since there isn't a cmdlet for setting the time properties, we need to access properties in a different manner. To do this, we get the object and use the dot notation to access the property. The Get-Date cmdlet creates a valid date/time object for our new timestamp. Let's see how that worked.

PS C:\> gci | select Name, LastWriteTime, CreationTime, LastAccessTime

Name LastWriteTime CreationTime LastAccessTime
---- ------------- ------------ --------------
a.txt 1/23/2010 11:16:24 AM 1/23/2010 12:07:05 PM 1/1/2010 12:00:00 AM
b.txt 1/23/2010 11:16:24 AM 1/1/2010 12:00:00 AM 1/23/2010 12:07:05 PM
c.txt 1/1/2010 12:00:00 AM 1/23/2010 12:07:05 PM 1/23/2010 12:07:05 PM
d.txt 1/23/2010 11:16:24 AM 1/23/2010 12:07:05 PM 1/23/2010 12:07:05 PM


Interesting, the times have been changed, but can we forensically find a difference? After taking an image of the drive and using the istat tool from the SleuthKit.org guys, it is very obvious that something weird has happened. Let's take a look at an istat output snippet to see where the problem lies.


$STANDARD_INFORMATION Attribute Values:
Flags: Archive
Owner ID: 0
Security ID: 408 ()
Created: Fri Jan 01 00:00:00 2010
File Modified: Fri Jan 23 11:16:24 2010
MFT Modified: Fri Jan 23 11:20:43 2010
Accessed: Fri Jan 23 11:16:24 2010

$FILE_NAME Attribute Values:
Flags: Archive
Name: a.txt
Parent MFT Entry: 9946 Sequence: 10
Allocated Size: 0 Actual Size: 0
Created: Fri Jan 23 11:16:24 2010
File Modified: Fri Jan 23 11:16:24 2010
MFT Modified: Fri Jan 23 11:16:24 2010
Accessed: Fri Jan 23 11:16:24 2010


The PowerShell commands modify the STANDARD_INFO's created, file modified and accessed times. As you can see, there is a discrepancy in the Creation times between the FILE_NAME and STANDARD_INFO attribute values. Also, if you look at the STANDARD_INFO's MFT Modified date you can take a good guess as to when this change was made. The MFT Modified stamp is updated to the current system time whenever any of our PowerShell commands make a change to the file. The MFT Modified timestamp only marks the last change, so we could change all the dates to make it more confusing as to what change happened at that time.

While making the changes to the timestamps in PowerShell is effective when looking at the file system via the GUI or command line, not everything is hidden when looking at it with forensic tools.
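
If you wanted to stamp every file in a directory in one shot, a minimal sketch (reusing the same 1/1/2010 date from above) could look like this:

PS C:\> $d = Get-Date "1/1/2010"; gci *.txt |
% { $_.CreationTime = $d; $_.LastWriteTime = $d; $_.LastAccessTime = $d }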

While we can't go back in time and bump off Hitler, we did go back in time and take this functionality out of the standard Windows command line. OK, that's not really true. But what is true is that there is no way in cmd.exe to manipulate time stamps. So Ed won't be joining us with any cmd fu this week.

Hal has the touch:

Changing atimes and mtimes on Unix files is easy because we have the touch command. If you simply "touch somefile", then the atime and the mtime on that file will be updated to the current date and time (assuming you're the file owner or root).

But if you're root, you can also specify an arbitrary time stamp using the "-t" flag:

# touch -t 200901010000 /tmp/test
# alltimes /tmp/test
/tmp/test
atime: Thu Jan 1 00:00:00 2009
mtime: Thu Jan 1 00:00:00 2009
ctime: Sun Jan 24 05:33:56 2010

The ctime value is always set to the current date and time, because the touch command is updating the atime and mtime meta-data in the inode and ctime tracks the last meta-data update. By the way, don't bother looking for the alltimes command in your Unix OS. It's a little Perl script I wrote just for this Episode (download the script here).

A couple of our loyal readers wrote in to remind me of the stat command as an alternative to my alltimes script. stat has a different output format on different Unix-like OSes, but on Linux I could have done:

# stat /tmp/test | tail -3
Access: 2009-01-01 00:00:00.000000000 -0800
Modify: 2009-01-01 00:00:00.000000000 -0800
Change: 2010-02-03 15:26:33.279577981 -0800

Anyway, thanks everybody for the stat reminder!


The touch command also has "-a" and "-m" flags that allow you to selectively update only the atime or the mtime:

# touch -a -t 200909090909 /tmp/test
# touch -m -t 201010101010 /tmp/test
# alltimes /tmp/test
/tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Sun Jan 24 05:49:29 2010

As you can see from the above example, touch is perfectly willing to set timestamps into the future as well as the past.

OK, so what about tweaking the ctime value? In general, setting the ctime on a file to an arbitrary value requires specialized, file system dependent tools. The good news(?) is that for Linux EXT file systems, the debugfs command will let us muck with inode meta-data. If you're dealing with other file system types or other operating systems, however, all I can say is good luck with your Google searching.

debugfs has a huge number of options that we don't have time to get into here. I'm just going to show you how to use set_inode_field to update the ctime value:

# debugfs -w -R 'set_inode_field /tmp/test ctime 200901010101' /dev/mapper/elk-root
debugfs 1.41.9 (22-Aug-2009)


The "-w" option specifies that the file system should be opened read-write so that we can actually make changes-- by default debugfs will open the file system in read-only mode for safety. We also need to specify the file system we want to open as the last argument. Sometimes this will be a disk partition device name like "/dev/sda1", but in my case I'm using LVM, so my disk devices have the "/dev/mapper/" prefix. If you're not sure what device name to use you can always run a command like "df -h /tmp/test" and look for the device name in the first column.

The "-R" option can be used to specify a single debugfs command to run in non-interactive mode. Note that there's also a "-f" option that allows you to specify a file of commands you want to run. If you leave off both "-R" and "-f" you'll end up in an interactive mode where you can run different commands at will.

In this case, however, we're going to use "-R" and run set_inode_field to set the ctime on /tmp/test. As you can see, you use a time stamp specification that's very similar to the one used by the touch command. And speaking of touch, we could use debugfs to "set_inode_field ... atime ..." or "set_inode_field ... mtime ..." instead of touch if we wanted to. This would allow us to update the atime/mtime values for a file without updating the ctime like touch does.
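
Spelled out, the atime version would look something like this (same file system device as before):

# debugfs -w -R 'set_inode_field /tmp/test atime 200909090909' /dev/mapper/elk-root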

Anyway, now our ctime value should be updated, right? Let's check:

# alltimes /tmp/test
/tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Sun Jan 24 05:49:29 2010

That doesn't look right! What's going on?

What's happening is that we've run afoul of the Linux disk cache. We actually have updated the information in the inode, but we've done it in such a way as to do an end-run around the normal Linux disk access routines, so our changes are not reflected in the in-memory file system cache. The good news is that (at least as of Linux kernel 2.6.16) there's a simple way to flush the inode cache:

# echo 2 > /proc/sys/vm/drop_caches
# alltimes /tmp/test
/tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Thu Jan 1 01:01:00 2009

That looks better!

By the way, if you "echo 1 > /proc/sys/vm/drop_caches", that flushes the page cache. If you "echo 3 > /proc/sys/vm/drop_caches" it flushes both the page cache and the inode/dentry cache.

Tuesday, January 26, 2010

Episode #79: A Sort of List

Hal starts off:

Way back in Episode #11 I showed you a little trick for sorting directory listings by inode number. But it struck me recently that we hadn't talked about all of the other interesting ways you can sort directory listings.

For example, you can use "ls -S" to sort by file size:

$ ls -lS
total 6752
-rw-r----- 1 syslog adm 1271672 2010-01-18 05:36 kern.log.1
-rw-r----- 1 syslog adm 1016716 2010-01-18 05:39 messages.1
-rw-r----- 1 syslog adm 499580 2010-01-18 05:38 daemon.log.1
[...]

Add "-h" if you prefer to see those file sizes with human-readable units:

$ ls -lSh
total 6.6M
-rw-r----- 1 syslog adm 1.3M 2010-01-18 05:36 kern.log.1
-rw-r----- 1 syslog adm 993K 2010-01-18 05:39 messages.1
-rw-r----- 1 syslog adm 488K 2010-01-18 05:38 daemon.log.1
[...]

Also, adding "-r" (reverse sort) can be useful so that the largest files end up at the bottom of the directory listing, closer to your next command prompt:

$ ls -lShr
total 6.6M
[...]
-rw-r----- 1 syslog adm 488K 2010-01-18 05:38 daemon.log.1
-rw-r----- 1 syslog adm 993K 2010-01-18 05:39 messages.1
-rw-r----- 1 syslog adm 1.3M 2010-01-18 05:36 kern.log.1
$

You have to do much less scrolling around this way.

In addition to sorting by size, you can also sort by the so-called "MAC time" values: last modified (mtime), last access (atime), and last inode or meta-data update (ctime). By default, "ls -t" will sort by last modified time. This is another good one to use "-r" on so you can quickly find the most recently modified files in a directory:

$ ls -lrt
total 6752
[...]
-rw-r----- 1 syslog adm 86080 2010-01-18 08:10 kern.log
-rw-r----- 1 syslog adm 120492 2010-01-18 08:17 syslog
-rw-r----- 1 syslog adm 3310 2010-01-18 08:17 auth.log
$

If you want to sort by ctime you use "-c" in addition to "-t". However, to sort by atime you need to use "-u" ("-a" was reserved for something else, obviously):

$ ls -lrtu
total 6752
[...]
-rw-r--r-- 1 root root 219990 2010-01-18 08:00 udev
-rw-r--r-- 1 root root 120910 2010-01-18 08:00 Xorg.0.log
-rw-r----- 1 root adm 56275 2010-01-18 08:00 dmesg
$
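
The ctime version is analogous. When combined with "-l" and "-t", the "-c" flag makes ls both sort by and display ctime:

$ ls -lrtc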

Now let's see what my Windows brethren have up their sleeves, shall we?

Ed Responds:

Although not as full-featured as the Linux ls command, the humble dir command offers us a bunch of options, allowing us to mimic pretty much everything Hal has done above. The main options we'll use here are:
  • /o followed by a one-character option that lets us specify a sort order (we'll use /os to sort by size and /od by date... with a - sign in front of the one character to reverse order)
  • /t, also followed by one character which lets us specify a time field we're interested in (the field options we have and their definitions, according to the dir command's help, are /tc for Creation time, /ta for Last Access time, and /tw for Last Written time).

So, to get a directory listing sorted by size (smallest to largest), we'd run:

C:\> dir /os

Want them reversed? We would use:

C:\> dir /o-s

Want those sizes in human-readable form? Install Cygwin and use the ls command, for goodness sakes. This is the dir command we're talking about here. We don't need no stinkin' human-readable format. Actually, the default output for dir does show commas in its size numbers, making things a little more readable than the stock Linux output.

To see directory contents listed by Last Written (which is what dir calls them... roughly the same as last modified times in Linux parlance), in reverse order (with the most recently modified near the top), you could execute:

C:\> dir /o-d /tw

But, like we see with the ls command, Last Written is the default, so you can leave off the /tw to get the same results.
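In other words, this shorter command behaves identically:

C:\> dir /o-d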

Wanna sort by creation time, again in reverse? Use:

C:\> dir /o-d /tc

And, how about last access? You could go with:

C:\> dir /o-d /ta

It's a good thing that the /od and /o-d sort options pick up the proper timestamp specified by the /t option, or else we'd be forced to do some waaaaay ugly sort command nonsense. Whew!

Tim responds too:

To get a directory listing we use Get-ChildItem. The name is a bit odd, but it is a generic command and can be used to get the child items from any container such as the registry, file system, or the certificate store. Today we are just looking at the file system.
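Just to illustrate that point before we get back to files, the very same cmdlet can walk other providers (a quick sketch; the paths available will vary by system):

PS C:\> gci HKLM:\SOFTWARE
PS C:\> gci Cert:\LocalMachine\Root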

First, let's take a look at the aliases for this useful cmdlet.

PS C:\> Get-Alias -Definition Get-ChildItem

CommandType Name Definition
----------- ---- ----------
Alias dir Get-ChildItem
Alias gci Get-ChildItem
Alias ls Get-ChildItem


I typically use ls since it is 33% more efficient to type than dir. But I digress...

Let's sort by file size:

PS C:\> gci | sort length

Directory: C:\

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 6/10/2009 4:42 PM 10 file1.txt
-a--- 6/10/2009 4:42 PM 24 file2.txt
-a--- 11/24/2009 3:56 PM 1442522 file3.zip


The Get-ChildItem cmdlet does not have sorting capability built in; in fact, none of the cmdlets do. But that is what the pipeline and the Sort-Object cmdlet are for.

Want to sort by file size in reverse order? Use the Descending parameter.

PS C:\> gci | sort length -descending

Directory: C:\

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 11/24/2009 3:56 PM 1442522 file3.zip
-a--- 6/10/2009 4:42 PM 24 file2.txt
-a--- 6/10/2009 4:42 PM 10 file1.txt


We can sort by any property, including LastAccessTime, LastWriteTime, or CreationTime.

PS C:\> gci | sort LastWriteTime


We can even sort on two properties.

PS C:\> gci | sort LastWriteTime, Length

Directory: C:\

Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 6/10/2009 4:42 PM 24 file2.txt
-a--- 6/10/2009 4:42 PM 10 file1.txt
-a--- 11/24/2009 3:56 PM 1442522 file3.zip


The files will first be sorted by write time. If two files have the same write time, they will then be sorted by length.

Finally, we come to displaying the size in a human-readable format, and it isn't pretty. We have to write a custom expression to display the size in KB or MB.

PS C:\> gci | format-table -auto Mode, LastWriteTime, Length,
@{Name="KB"; Expression={"{0:N2}" -f ($_.Length/1KB) + "KB" }},
@{Name="MB"; Expression={"{0:N2}" -f ($_.Length/1MB) + "MB" }},
Name


Mode LastWriteTime Length KB MB Name
---- ------------- ------ -- -- ----
-a--- 6/10/2009 4:42 PM 10 0.01KB 0.00MB file1.txt
-a--- 6/10/2009 4:42 PM 24 0.02KB 0.00MB file2.txt
-a--- 11/24/2009 3:56 PM 1442522 1,408.71KB 1.38MB file3.zip


We can specify custom properties to display. This format works with any of the format cmdlets (Get-Command -Verb Format) or with Select-Object. The custom columns are created by using a hashtable. A hashtable is specified as @{ key1=value1; key2=value2 } (note that the entries are separated by semicolons). In our case we specify a Name and an Expression. Here is a simple example.

..., @{Name="Foo"; Expression={ $_.Length + 1 }}, ...


In this case we would add a column with the heading Foo and with a value of the Length plus 1. The expression can include all sorts of math or other crazy PowerShell fu.
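To tie this together, here's an illustrative sketch (not from the original episode): a single adaptive Size column that displays MB for big files and KB for everything else.

PS C:\> gci | format-table -auto Name,
@{Name="Size"; Expression={ if ($_.Length -ge 1MB) { "{0:N2}MB" -f ($_.Length/1MB) } else { "{0:N2}KB" -f ($_.Length/1KB) } }}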

Ironically, getting human-readable output requires a decidedly non-human-readable command.

Tuesday, January 19, 2010

Episode #78: Advanced Process Whack-a-Mole

Ed Prepares to Open Up a Can of Process Whoop-Ass:

I've never considered myself a particularly violent man. But, I have to admit it: Sometimes it just feels good to kill processes. I've even been heard to mutter a deadpan "Dodge This" in my lab late at night as I obliterate errant or evil processes just begging to meet their maker. Then, to make sure such a process doesn't pop back up to start bothering me again, I sow the ground with command-line kung fu salt to strangle any other similar process that might pop up in its place.

This technique, which we've taken to calling "Process Whack-a-Mole", can be helpful to people in all walks of life. I'm sure it's happened to pretty much everyone at some point. You find yourself playing defense in a Capture the Flag tournament against an elite team of ninjas from a three-letter government agency who want to completely control your boxen. They repeatedly gain access, and you have to shoo them out before they score and you lose points. To deal with such situations, we can run a command to continuously look for processes with certain characteristics of our adversaries, and then kill them when they appear. We touched upon the idea of killing a process that starts to listen on a given TCP port in Episode #76. But, let's go further this time, discussing how you can make much more flexible whack-a-mole commands to deal with various process characteristics.

These techniques are useful even outside of Capture the Flag games. I often use them in malware analysis and even system administration when I want to suppress some activity temporarily while I'm analyzing or configuring something else.

The basic structure I use for process whack-a-mole consists of the following three parts:

<Continuous Loopinator> <Process Selector> <Process Terminator>

Quite often the Process Selector and Process Terminator are combined together in a single command, because we can select or filter for the process we want in the same command we use to whack it. However, to filter for certain specific process characteristics, we'll have to split out these two entities. I'll show you what I mean in a bit.

We start out with our Continuous Loopinator:

C:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul

This is a simple FOR /L loop that starts counting at 1, goes up to 2, in steps of 0. In other words, it's the cmd.exe equivalent of while (true), used to keep something running continuously. At the start of the loop, we introduce a 1-second delay by pinging ourselves twice (-n 2) and throwing the standard output away so as not to clutter our output (>nul). That way, we'll run our Process Selector and Process Terminator approximately every 1 second, helping to minimize our impact on performance. If you want a faster Process Selector, simply omit that ping, and your system will run our whack-a-mole command as fast as it can, but performance may drag.

We then follow with our Process Selector. If you keep the ping delay in, add an & followed by the Process Selector; the & makes one command run after another. Otherwise, just put the Process Selector right after the @ (which turns off command echo, by the way... no sense having our output clogged up with commands).

The two most common Process Selectors I use are wmic and taskkill, which have the nice property of also including the ability to act as Process Terminators in the same command. Let's look at wmic first.

The wmic command can be used to select given processes based on our constructing a where clause, using the following syntax:

C:\> wmic process <where clause> <verb clause>

In the where clause, we can specify any attribute or group of attributes of processes that can be listed via wmic process. To get a list of these attributes, you could run:

C:\> wmic process get /?

So, for example, if you want to select a processID of 4242, you could write your wmic command as:

C:\> wmic process where processid=4242

Or, we could look for processes that have a given Parent Process ID:

C:\> wmic process where parentprocessid=3788

Or, we could look for processes with a given name:

C:\> wmic process where name="cmd.exe"

These where clauses also support AND and OR, but you've got to make sure you put the guts of your where clause inside of parentheses. The where clauses also support not equals (!=). Check this out:

C:\> wmic process where (name="cmd.exe" and processid != 676)

Now, we haven't supplied a verb clause here, so all our wmic commands are simply displaying raw, unformatted process information on Standard Output.

Let's start doing our whack-a-mole by specifying a Process Terminator by using the verb clause of wmic with the simple verb "delete". That'll kill a process.

Putting these pieces together, suppose you want to kill all cmd.exe processes other than the current cmd.exe you, the administrator, are running. Let's assume that your own cmd.exe has a process ID of 676. You could run:

C:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul & wmic process where (name=
"cmd.exe" and processid!=676) delete

Now, let's see 'em try to run a cmd.exe. Every time someone tries to launch one, your loop will kill it.

Next, suppose you want to prevent a given process with processid 4008 from spawning child processes. Maybe process ID 4008 is a cmd.exe shell, and you want to prevent the person who is using it from being able to run any commands that aren't built into the shell itself. Or, better yet, maybe process ID 4008 is Tim Medin's PowerShell process, and you wanted to pee in his Corn Flakes, depriving him of the ability to run any separate EXE commands, forcing him to rely solely on the built-in capabilities of PowerShell itself. We can do this one without our ping-induced delay to really confound him:

C:\> for /L %i in (1,0,2) do @wmic process where parentprocessid=4008 delete

These wmic where clauses also support substring matches, with the use of "like" and %. For example, suppose you want to continuously kill every process that is running from an EXE with a path in a given user's directory. You could run:

c:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul & wmic process where
(executablepath like "c:\\users\\tim\\%") delete

Note that in a "where" clause with the "like" syntax, you have to surround the elements with parens. Also, note that if you have a \ in your where clause, you have to specify it as \\, the first \ indicating an escape, and the second indicating your backslash.

You can combine these where clause elements (=, !=, AND, OR, LIKE, and %) in all kinds of ways to mix and match against various process attributes for whack-a-mole. I'm sure our readers can dream up all kinds of interesting and useful combinations.
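For example, a hypothetical combination like this one (the process name and path are purely illustrative) whacks anything named evil.exe, plus anything running out of c:\temp:

C:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul & wmic process where
(name="evil.exe" or executablepath like "c:\\temp\\%") delete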

But, there is one attribute missing from "wmic process get /?" -- the user name the process is running under. To play whack-a-mole based on user names, we can turn to another Process Selector and Terminator: taskkill. I wrote about taskkill filters back in Episode #22, showing how we can use it to kill a process based on its owner username. Here, we'll wrap that in our whack-a-mole construct:

C:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul & taskkill /F /FI
"username eq tim"

Sorry, Tim. That's what you get for using a shell on the same system I'm on. Hal's over on some lonely Linux that no one ever uses, so I leave him alone. :)

Anyway, where was I? Ah, yes, we were discussing attributes of processes that wmic doesn't include, but which may be handy in whack-a-mole games. How about DLLs? Suppose a bad guy is hacking your system and keeps trying to inject a DLL into some process, and you want to kill that process. Maybe Evil Badguy (yes, that's his full name) has injected metsrv.dll, the Metasploit Meterpreter, into a running process, and uses process migration to jump from process to process. Sorry, but getting the system to unload that DLL using only built-in command-line tools is very difficult; killing the process itself, though, is totally doable:

C:\> for /L %i in (1,0,2) do @ping -n 2 127.0.0.1 >nul & taskkill /F /FI
"modules eq metsrv.dll"

Now, I mentioned above that the Process Selector and Process Terminator components are typically combined in a single command, as we've seen so far with wmic and taskkill. When would you have two different commands for these pieces? Well, one example is with netstat, which can show TCP and UDP port usage and the processes associated with each port. That's exactly what we used in Episode #76, where our Process Selector was netstat (whose output I parsed with a FOR /F loop to pull out the ProcessID number), and the Process Terminator was taskkill:

C:\> for /L %i in (1,0,2) do @(ping -n 2 127.0.0.1>nul & for /f "tokens=5"
%j in ('netstat -nao ^| find ^":47145^"') do @taskkill /f /PID %j)

So, keeping in mind those three components of process whack-a-mole, you can use almost any command that lists processes in pretty much any arbitrary way to build a whack-a-mole command for fun and profit.

And now, for a routine disclaimer: Be careful with any of these commands. If you kill a vital system process, such as lsass.exe, you could bring your whole box down. You have been warned. So, now that you are armed and dangerous, go have fun!

Tim prepares for war:

It seems that Ed has a bit of shell envy. So let's kick that inferior shell off "our" machine and keep it (and him) off our machine.

As Ed described, the structure for whack-an-Ed whack-a-mole has three parts, and that basic structure will be very similar in PowerShell.

<Continuous Loopinator> { <Process Selector> | <Process Terminator> }

The Continuous Loopinator repeatedly calls the Process Selector whose results are piped into the Process Terminator. Let's see how each piece works.

Continuous Loopinator:

There are many ways to do a continuous loop, but the easiest and quickest method is to use the While loop.

 PS C:\> while (1) { <code to run ad infinitum>; Start-Sleep 1 }


This loop is pretty self-explanatory. It is a simple While loop that runs while 1 is true, which it always will be. The Start-Sleep cmdlet (alias sleep) will suspend activity for the specified amount of time. If we wanted a shorter nap we could use the -milliseconds parameter. Since Ed's command runs every second, we should run ours a bit faster just because we can. How about 5 times a second?

 PS C:\> while (1) { <code to run ad infinitum>; Start-Sleep -milliseconds 200 }


Process Terminator:

I'm covering this a bit out of order because the Terminator is so simple, so indulge me for a bit. The cmdlet used for killing is Stop-Process (alias spps or kill). It can even be used for some rudimentary process selection before the assassination. We can kill based on the Process Id:

PS C:\> Stop-Process 1337


...or the process name.

PS C:\> Stop-Process -Name cmd


In the second example every process with the name "cmd" would be stopped, but what if we wanted to be a little more high tech in making Ed's processes "sleep with the fishes?"

As described earlier, the results of the Process Selector can be piped into our Process Terminator. We can pick any method to retrieve the process(es) to be killed, but more on that later. Here is what it would look like:

<Get Processes> | Stop-Process


By default, Stop-Process will ask for confirmation prior to terminating any process not owned by the current user. To get around that safety mechanism we can use the Force parameter.

<Get Processes> | Stop-Process -Force


Short version:

<Get Processes> | kill -f


We could just kill the processes with Stop-Process by giving it a Process Id or process name, but we want more options. Now let's see how we can find more processes to kill.

Process Selector:

To get a process or a number of processes we use Get-Process (aliases ps and gps). This is our Process Selector. We have covered this before, but we have a number of ways to get a process or a list of processes. To get help on the command you can run:

PS C:\> Get-Help Get-Process


...or for those who have seen the light and come from the Linux side but have a bad memory, this works too:

PS C:\> man ps


To see the examples use the Examples parameter, or use the Full parameter to see everything. From the help we can see how to get a process with a given Process ID. We will be looking for PID 4242:

PS C:\> Get-Process -Id 4242
PS C:\> ps -Id 4242


To get all the cmd.exe processes:

PS C:\> Get-Process cmd
PS C:\> ps cmd


Note that the process name does NOT include .exe.

We can also use filters in order to get more granular. We already had our loop to kill all cmd processes, but what if Ed wants to use PowerShell? We need to make sure that we are King of the PowerShell Hill, and any other PowerShell usurper is destroyed. This will find any PowerShell processes that aren't ours.

PS C:\> Get-Process powershell | ? { $_.ID -ne 21876 }


The weaponized version of the command could look like this:

PS C:\> While (1) { ps powershell | ? { $_.ID -ne 21876 } | kill -f; sleep 1 }


The next thing Ed did, after tee-tee'ing in my Kelloggs, was to prevent me from kicking off any processes from a cmd or PowerShell process. So let's do the same to him. Unfortunately, the objects returned by Get-Process do not have a Parent Process Id property, so we will have to use WMI to find processes with a given parent.

PS C:\> Get-WmiObject win32_process -Filter "ParentProcessId=5552" |
% { Get-Process -Id $_.ProcessID }


Get-WmiObject (alias gwmi) is used to access WMI in order to get all processes with a Parent Process Id of 5552. The results are piped into a ForEach-Object (alias %) loop. In the loop we use Get-Process and the Process Id retrieved from WMI in order to get the process object. We can then pipe that into our kill(er). Similar to what Ed did, we want to run this continuously so he doesn't have a chance. Here is what our command looks like:

PS C:\> While (1) { gwmi win32_process -Filter "ParentProcessId=5552" |
% { ps -Id $_.ProcessID } | kill -f }


We also want to make sure that Ed isn't able to run anything from his user directory (which includes his desktop).

PS C:\> While (1) { ps | ? { $_.Path -like "c:\users\ed\*" } | kill -f }


We use the Where-Object (alias ?) to filter our list of processes based on the path. The Like operator is used with our wildcard search string in order to find any of Ed's processes. Again, we pipe the results into Stop-Process to kill them.

Just to make sure that Ed doesn't run anything, we will kill any process where he is the owner. Again, we will have to use WMI in order to find the owner of a process.

PS C:\> While (1) { Get-WmiObject Win32_Process |
? { $_.GetOwner().User -eq "ed" } | % { Get-Process -Id $_.ProcessId } |
Stop-Process -Force }


This command is a little complicated, so let's break it down piece by piece. The While loop portion should be obvious so we'll skip that bit of explanation. The first chunk...

Get-WmiObject Win32_Process | ? { $_.GetOwner().User -eq "ed" }


We start off by querying WMI and retrieving WMI objects representing each process running on the current machine. The results are then piped into our filter, Where-Object (alias ?). The "current pipeline object", represented by the variable $_, gives us access to the properties of each object passed down the pipeline; for all intents and purposes, $_ iterates through each object in turn. For each process we call the GetOwner() method and check whether the User property of the result is equal (-eq) to "ed". If it is, the WMI object passes our filter and is sent further down the pipeline. Remember, these are WMI process objects, not PowerShell process objects, so they need to be converted before we can deal with them natively in PowerShell. On to the conversion.

... | % { Get-Process -Id $_.ProcessId } ...


The objects that passed through the filter are now sent into our ForEach-Object (alias %) loop. This loop is used to iterate through each object and execute some fu on each of the WMI objects. Again, $_ represents the current object. To retrieve the PowerShell version of the process object we use Get-Process with the Id parameter, passing it the ProcessId property of the current object ($_.ProcessId). Now we have PowerShell process objects. YAY!

... | Stop-Process -Force


Finally, the processes are piped into Stop-Process to be destroyed. The Force option is used since we don't want a confirmation to kill each process.

Next, let's look for the processes with the injected Meterpreter dll. How do we find this dll? We need to look at the modules a process has loaded. Here is what the Modules property looks like for the PowerShell process.

PS C:\> Get-Process powershell | select modules

Modules : {System.Diagnostics.ProcessModule (powershell.exe), System.Diagnostic
s.ProcessModule (ntdll.dll), System.Diagnostics.ProcessModule (kernel
32.dll), System.Diagnostics.ProcessModule (KERNELBASE.dll)...}


As you can see, the dll name is wrapped in parentheses. So here is how we find it and kill it.

PS C:\> Get-Process | ? { $_.Modules -like "*(metsrv.dll)*" } | Stop-Process


EDIT: In Metasploit v2 and v3.0-3.2 this technique worked to find Meterpreter. In v3.3 (and presumably future versions) it does not, since Metasploit uses Reflective DLL Injection to load the dll. I wrote a separate post on my personal blog about finding the footprints of Meterpreter: Finding Meterpreter.

Actually, the Modules property is a collection of module objects, so we can use a nested Where-Object to filter.

PS C:\> ps | ? { $_.Modules | ? {$_.ModuleName -eq "metsrv.dll" }} | kill


In this command we retrieve all the processes. We then filter the Modules, where the ModuleName is metsrv.dll. The results are piped into Stop-Process.

We can also parse netstat output to select a process to kill, similar to Episode #76. Let's take that command and wrap it in our infinite loop, tucking the -match inside an if so its Boolean result doesn't clutter our output:

PS C:\> While (1) { netstat -ano | ? {$_ -like "*:47145 *"} |
% { if ($_ -match "\d+$") { Stop-Process $matches[0] } } }


And as Ed said, be careful not to kill the wrong process, or the whole box could go down. Of course, when it's down it's pretty dang hard to attack. Then again, it's pretty dang hard to use, too.

Now that Ed and I have spent all of our energy going after each other, Hal is going to show up and mop the floor with our tired carcasses.

Disclaimer: No Eds were harmed in the making of this episode.

Hal's Analysis:

Why are Ed and Tim so angry all the time? It couldn't have anything to do with the platform they've chosen to work on, could it? Hey guys, don't worry, be happy! You can always install Linux for free, or even just use Cygwin.

When Ed first proposed this topic, I was pretty stoked because I thought it was going to be a cake-walk for me with my little friend lsof. But not all of Ed's challenges could be answered purely with lsof. Some required a bit more shell fu.

Let's start with the simple stuff first. The "infinite loop with 1 second delay" idiom for bash is something we've seen before in previous Episodes:

# while :; do [...your commands here...]; sleep 1; done

In this case, the commands we put into the while loop are going to be a kill command and usually some variant of "`lsof -t ...`" we'll be using to select the PIDs we want to kill. Remember from previous Episodes that "lsof -t" causes lsof to print out just the PIDs of the matching processes, specifically so we can use the output as arguments to the kill command.

For example, let's suppose we want to kill all of Ed's processes. We can use lsof's "-u" option to select processes for a particular user:

# while :; do kill -9 `lsof -t -u skodo`; sleep 1; done


Or we could nuke all the bash shells on the machine, using "-c" to select commands by name:

# while :; do kill -9 `lsof -t -c bash`; sleep 1; done

Of course, this would hammer our own shell, so it pays to be more selective:

# while :; do kill -9 `lsof -t -a -c bash -u^root -u^hal`; sleep 1; done

Here I've added the "-a" flag which means do a logical "and" on my selection criteria. Those criteria are "all commands named bash" ("-c bash") and "not user root" ("-u^root") and "not user hal" ("-u^hal"). Note that lsof's negation operator ("^") only works when selecting user names, PIDs (with "-p"), process group IDs (with "-g"), command names (with "-c"), and protocol state info ("-s", as in "-s^TCP:LISTEN").
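Putting those pieces together, here's a sketch that continuously whacks any process of Ed's that's sitting on a listening TCP socket:

# while :; do kill -9 `lsof -t -a -u skodo -i TCP -sTCP:LISTEN`; sleep 1; done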

Another one of Ed's challenges was killing processes where the binary is in a particular directory. Again we can do this with lsof:

# while :; do kill -9 `lsof -t -a -d txt +d /home/skodo`; sleep 1; done

Here we're looking for process binaries using "-d txt". In the lingo, the binary is what's used to create the "text segment" of a process (where the executable code lives), hence "-d txt" for lsof. The "+d" tells lsof to look for open files under a particular directory. Yes, lsof has so many command line options that the author had to start doubling up on letters using "+" instead of "-" (there's a reason the lsof manual page is nearly 50 pages long when printed out).

Note that "+d" only searches "one level deep". So if Ed were running "/home/skodo/evil", then our loop above would whack that process. But if Ed were running "/home/skodo/bin/evil", then we wouldn't catch it. If you want to do full directory recursion, use "+D" instead of "+d". lsof distinguishes these with separate options because full directory searches are so time-consuming.

However, as I mentioned earlier, Ed had challenges that I wasn't able to come up with a "pure lsof" solution for. For example, while lsof has the "-R" option for displaying parent process ID (PPID) values, there aren't any switches in lsof to select particular processes by PPID. So we'll need to resort to some awk:

 # while :; do kill -9 `lsof -R -d cwd | awk '($3 == 8552) { print $2 }'`; sleep 1; done

Here the lsof command is outputting PPID values ("-R") in addition to the normal lsof output, and we're only outputting the lines showing the current working directory of each process ("-d cwd"). The "-d cwd" hack is a good way of ensuring that you only get one line of lsof output per process-- so we don't end up outputting the same PID multiple times and generating spurious error messages from kill. The awk code simply matches against a particular PPID value in column #3 and outputs the PID value in column #2.

Even though I had to resort to a bit of awk in the last example, you have to admit that having lsof makes this challenge unfairly easy for us Unix/Linux folks. Ahhh, lsof! How I love thee! Let me count the ways...