Tuesday, February 23, 2010

Episode #83: Faster. Higher. Stronger.

Tim goes for Gold:

The Olympics are in full swing. The world's finest athletes are doing everything they can to shave off fractions of a second, and not get caught doping. This episode we try to shave precious seconds off our tasks, and not get caught with our (Irish) coffee at work (again).

We all have those commands that we use regularly, and accessing those commands more quickly would save some time. With PowerShell we can do just that by using Set-Alias.

PS C:\> Set-Alias -Name ss -Value Select-String

The Name and Value parameters are positional, so we don't have to use the parameter names. This command does the same thing:

PS C:\> Set-Alias ss Select-String

We have created our alias; now how can we use it? If we wanted to search all the files in a directory for the word "test", we previously had to use this command:

PS C:\> gci | Select-String test

...but now we can use this shorter command:

PS C:\> gci | ss test

The second command is half the length of the first. That is a nice efficiency gain.

What if we regularly checked the event log to see the five latest items? Obviously a short command would save us some time, but see what happens when we try to create an alias.

PS C:\> Set-Alias g5 Get-WinEvent -MaxEvents 5
Set-Alias : A parameter cannot be found that matches parameter name 'MaxEvents'.

That didn't work, but we can still do it; we just need to use a function.

PS C:\> function Get-Last5Events { Get-WinEvent -MaxEvents 5 }
PS C:\> Get-Last5Events

TimeCreated ProviderName Id Message
----------- ------------ -- -------
2/23/2010 8:12:1... Service Control ... 7000 The Diagnostic S...
2/23/2010 8:12:1... Microsoft-Window... 135 The Diagnostic P...
2/23/2010 8:12:1... Service Control ... 7000 The Diagnostic S...
2/23/2010 8:12:1... Microsoft-Window... 135 The Diagnostic P...
2/23/2010 8:11:4... Service Control ... 7036 The Computer Bro...

The name we picked for our function is a bit long, so let's use Set-Alias to create an alias for the function.

PS C:\> Set-Alias g5 Get-Last5Events

So we've shaved a few seconds off of our commands, now on to the doping.

In PowerShell we can use snap-ins and modules to extend the shell. There are modules and snap-ins for managing Active Directory, Group Policy, Diagnostics, Exchange 2007 and 2010, SharePoint 2010, IIS 7, VMware, and many more servers and services.

Snap-ins load sets of cmdlets and providers. Modules, which are only available in v2, can include cmdlets, providers, functions, variables, aliases, and much more. Modules are easier to create than snap-ins, and they appear destined to replace snap-ins as the main way to extend PowerShell. You programmers can think of modules as a "class" while a snap-in is just a collection of functions.

Here is how a snap-in is loaded:

PS C:\> Add-PSSnapin VMware.VimAutomation.Core

Wildcard characters can be used in the snap-in name. I use them since I can never remember to type VMware.VimAutomation.Core, so I just type *vmware*. Let's see what cmdlets have been added.

PS C:\> Get-Command -Module *vmware*

CommandType Name Definition
----------- ---- ----------
Cmdlet Add-VMHost Add-VMHost [-Name] <String> ...
Cmdlet Add-VMHostNtpServer Add-VMHostNtpServer [-NtpSer...
Cmdlet Apply-VMHostProfile Apply-VMHostProfile [-Entity...
Cmdlet Connect-VIServer Connect-VIServer [-Server] <...

Now, let's load a module and then see what new cmdlets are available:

PS C:\> Import-Module GroupPolicy
PS C:\> Get-Command -Module GroupPolicy

CommandType Name Definition
----------- ---- ----------
Cmdlet Backup-GPO Backup-GPO -Guid <Guid> -Pat...
Cmdlet Copy-GPO Copy-GPO -SourceGuid <Guid> ...
Cmdlet Get-GPInheritance Get-GPInheritance [-Target] ...
Cmdlet Get-GPO Get-GPO [-Guid] <Guid> [[-Do...

We have these new aliases, functions and cmdlets, but we don't want to go through the same setup every time. We can automatically load these by editing our profile, but where is it?

There are four profiles, but we typically only work with the one stored in $profile. This file may not exist, so you may need to create it.

PS C:\> $profile

We can edit the file with good ol' notepad.

PS C:\> notepad $profile

Once we have it open we can add all of our aliases, functions, snap-ins and modules.

Set-Alias -Name ss -Value Select-String
function Get-Last5Events { Get-WinEvent -MaxEvents 5 }
Add-PSSnapin VMware.VimAutomation.Core
Import-Module GroupPolicy
Write-Host "Welcome to Tim's shell..."

This is a good spot to add those commands and functions that aren't built into PowerShell such as Test-Hash and Get-Netstat.

Let's see how the other shells compete.

Hal's Been There, Done That

"See how other shells compete"? Kid, Unix shells were medalling in the Olympic Aliasing event before PowerShell was even a gleam in some demented Microsoft developer's eye.

Setting up aliases in bash is straightforward. Here's how I make it easy to access the history command in my shell:

$ alias h=history
$ h
501 alias h=history
502 h
$ type h
h is aliased to `history'

Notice that the type command will tell you when a given command is an alias (or a shell built-in, or a regular command, or...), and thus is preferable to commands like which.
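A quick sketch of the difference (the `-t` flag gives the one-word answer):

```shell
# 'type' consults the shell's own alias/function/builtin tables; 'which' only searches PATH
alias h=history
type -t h     # alias
type -t cd    # builtin
type -t ls    # file (a regular command found on PATH)
```

Since `which` is an external program, it has no idea your shell defines `h` at all.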

Aside from using aliases to shorten commands that I use frequently, I'll often use aliases as a way of making sure I get the program I want, even when I type the wrong thing:

alias more=less
alias mail=mutt

And, yes, I still regularly use mutt to read my email. You got a problem with that?

Aliases can contain commands with multiple arguments; you just need to be careful with your quoting:

alias clean='rm -f *~ .*~'
alias l='ls -AF'
alias tless='less +G'

Aliases can do pretty much any shell code you can imagine, but they do have one shortcoming: any variables you put into an alias are expanded when the alias is read during the start-up of your shell. You can't generally have variables that get interpolated when the alias is executed. For example, I wanted to create an alias that did a "cd" to whatever directory the last file I accessed lives in. But that requires run-time interpolation in order to know the last file accessed, so I can't do it in an alias.
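A minimal sketch of the two expansion times (paths are hypothetical; as Seth shows later, single quotes defer expansion to run time):

```shell
# When is $dir expanded? Double quotes: at definition time. Single quotes: at run time.
dir=/tmp
alias show_d="echo $dir"   # body is frozen as: echo /tmp
alias show_s='echo $dir'   # body stays literal: echo $dir
dir=/var
# In an interactive shell:
#   show_d   -> /tmp   (value captured when the alias was defined)
#   show_s   -> /var   (value looked up when the alias runs)
```

This is why a double-quoted alias can't see values that change after start-up.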

However, you can use shell functions for this:

$ function f { cd `dirname $_`; }
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ f
$ pwd

I called the function "f" for "follow"-- as in, "copy that file to a directory and then follow it over there". It's hugely useful.

You'll notice that the function uses the magical bash builtin variable "$_", which is always set to the last argument of the previous command. So my function only works if the last argument of the last command is a file name, but that's good enough for my purposes. If I had specified a directory instead of a file name in the cp command, then I could have just used "cd !$" instead of "f":

$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases
$ cd !$
cd ~/stuff/testing/src/clkf/aliases

If you're wondering what the heck that "!$" is all about, you need to go back and check out Episode #14.
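The same follow-the-file trick, step by step, with hypothetical paths:

```shell
# $_ holds the last argument of the previous command
mkdir -p /tmp/demo/aliases
touch /tmp/notes.txt
cp /tmp/notes.txt /tmp/demo/aliases/notes.copy   # last argument: the copied file
cd "$(dirname "$_")"                             # "follow" it to its directory
pwd                                              # -> /tmp/demo/aliases
```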

The best place to put your aliases and shell functions is in your ~/.bashrc file. That way, they'll get read every time you fire off a shell.
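Pulling the episode's bash bits together, the relevant chunk of ~/.bashrc might look like this (a sketch using the aliases and the f function from above):

```shell
# ~/.bashrc -- read by every new interactive shell
alias h=history            # quick access to history
alias more=less            # get the pager I actually want
alias l='ls -AF'
alias clean='rm -f *~ .*~'
alias tless='less +G'
function f { cd "$(dirname "$_")"; }   # follow the last file referenced
```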

Hmmm, I hope this isn't another one of those "easy for Unix (and Powershell), hard for CMD.EXE" kinds of things. Ed always gets so cranky about those...

Ed's Not Overly Cranky Today
Cranky? Moi? Nevah. Well, mostly never. Ok... sometimes.

Despite my non-cranky demeanor, support for aliases in the cmd.exe shell isn't particularly strong. In this corner of the shell-o-sphere, we typically apply two common approaches to mimicking aliases: doskey macros and small batch files. Let's look at each.

Didja shudder just a little bit when I mentioned doskey? Yeah, it's very old skool, but it can help us get some work done. To define a macro, you could simply run:
C:\> doskey <MacroName>=<Macro>
For example, if you want to display your running processes by simply running "ps", you could run:
C:\> doskey ps=wmic process list brief
C:\> ps

HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
426 System 8 4 97 581632
That simple macro is nice, but it's got a big limitation we've got to overcome by expanding our macro knowledge. To see this limitation, let's try running our new macro and searching its output for cmd.exe:
C:\> ps | find "cmd.exe"
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
426 System 8 4 97 581632
Doh! We only got the full output of our ps macro, without our search taking effect. What happened? Well, the macro substitution by default just ignores anything that follows it on the command line, unless we define the macro to end with $*, which holds the remainder of the command line typed in after the macro. By putting it at the end of our macro, everything typed after the macro will be executed after macro expansion. So, a better ps would be:
C:\> doskey ps=wmic process list brief $*
C:\> ps | find "cmd.exe"
23 cmd.exe 8 676 1 1462272
28 cmd.exe 8 4008 1 2785280
That's what Momma likes.

Next, to more closely mimic what Hal does above with shell history by simply running the "history" command or the "h" command, you could use:
C:\> doskey history=doskey /history $*
C:\> doskey h=doskey /history $*

C:\> h
Note that I did use $* here, in case someone wants to pipe our output to another command or to redirect it into a file. If you just run the macro with nothing after it, $* contains nothing, so it executes as you'd expect... no harm, no foul. You can even split up the command line options following your macro, referring to each component as $1 up to $9, using each independently. And, if you want to have multiple commands in a macro, enter them as command1$Tcommand2, rather than using & between them. But, if you really want to use &, modern versions of Windows allow you to define the macro as command1 ^& command2. Likewise, if you want to do any piping in doskey macros, make sure you use a ^|.

If you ever want to list all of the macros you've created, run:
C:\> doskey /macros
It should be noted that these macros take precedence over any normal commands you type at cmd.exe, so you could really make your shell useless with them. For example, if you run:

c:\> doskey dir=echo No Way Dude!

c:\> dir
No Way Dude!
You've lost access to the dir command. To undefine a macro, simply use doskey to redefine a macro name with nothing after the equals sign:
C:\> doskey dir=
C:\> dir
Volume in drive C has no label.
Volume Serial Number is 442A-03DE
Now, these little toy macros we've defined above are cute, but let's get into some macros that really save us some serious time. Faithful readers of this blog (Hi Mom!) know that many of my episodes include certain constructs, like making a command run forever or invoking delayed variable expansion. Let's look at some macros for those:
C:\> doskey forever=for /L %z in (1,0,2) do @$*

C:\> forever ipconfig /displaydns & ping -n 2 127.0.0.1 > nul
Here, I've created a macro called "forever" that invokes a FOR /L loop, set to count from 1 to 2 in steps of zero. That way, it'll run forever. In the DO clause of my loop, I turn off the display of commands (@) and run whatever follows the invocation of forever. Note that I have to define a variable for my FOR loop, and I've chosen %z. That's because I want to minimize the chance I'll have a collision with any variables I might choose for my command to forever-ize. If I had used a %i in my macro, I'd really be thrown for a loop (no pun intended) if I used %i in my follow-up command. With FOR /F loops that have multiple tokens, variables are dynamically allocated alphabetically, so I hang out at the end of the alphabet here to lower the chance of collision.

You could put your ping delay inside the forever macro, but I find it best to keep it out, giving me more flexibility as I define my commands. Note that the command I'm running after forever here will run ipconfig to dump the DNS cache, followed by a one-second delay introduced by pinging myself twice (first ping happens immediately, followed by another ping one second later).

And, to perform delayed variable expansion (described in Episode #12), I could define a macro of:
C:\> doskey dve=cmd.exe /v:on /c "$*"
Note that you can put stuff in a macro _after_ the $* remainder of your command-line invocation, as I'm putting in a close quotation mark here to make sure that my full command gets executed in the context of delayed variable expansion. Now, I can do crazy stuff like this from Episode 58, filling the screen with random numbers to look a little like the Matrix:
C:\> dve for /L %i in (1,0,2) do @set /a !random!
It's important to note that any environment variables whose value changes in your command must be referred to using !variable_name!, instead of the more traditional %variable_name%.

Ahhh... "But wait," you say. After running dve in this example, I manually typed out a FOR /L loop that was simply my "forever" macro from before. Couldn't I just do a dve followed by a forever? Let's try it:
C:\> dve forever set /a !random!
'forever' is not recognized as an internal or external command, operable program
or batch file.
No sir. You can't nest these suckers. Also, they have a problem if they aren't the first command included on your command line. Consider:
C:\> echo hello & ps
'ps' is not recognized as an internal or external command,
operable program or batch file.
So, we've got some pretty serious limitations here*, but, as Tim points out, they can shave off a precious few seconds, especially for things we often start our command-lines with, such as the forever FOR loop or delayed variable expansion.

Wanna see a really whacked out macro? This one was provided by Brian Dyson, a cmd.exe warrior like no other (and a long-lost twin of my friend Mike Poor, but that is a story for another day). Brian showed me a macro in which he emulates the !command feature of bash, which Hal alludes to above. Check out this amazing action, which I quote from Brian:

For the hard-core DOSKEY macro lover, a '!' DOSKEY macro that acts (somewhat like) Bash, running the last command:
c:\> doskey !=doskey /history ^| findstr /r /c:"^$*" ^| findstr /rv /c:"^!" ^>
"%TEMP%\history.txt" ^&^& ((for /f "tokens=*" %a in (%TEMP%\history.txt) do
@set _=%a) ^&^& call echo.%_^% ^& call call ^%_^%) ^& if exist "%TEMP%\history.txt"
del "%TEMP%\history.txt"
Here we get the history and pipe it into a `findstr` command searching
for commands that begin with the arguments to `!`. We remove any
previous `!` command and redirect everything into a temporary file. (I
couldn't find a work-around for the temporary file). If the final
findstr was successful, parse through the temporary file and set '_' to
the last command (like Bash). If this was successful, then echo out the
command and call it via double `call`. Finally, clean up the temporary
history file.

Dude! Nice. Now, to invoke Brian's macro, you have to run ! followed by a space, followed by a command or letters and it will invoke the last instance of whatever previous command started with the letters you type. Check it out:
C:\> ! di
Volume in drive C has no label.
Volume Serial Number is 442A-03DE

Directory of c:\
That space between the ! macro and the first part of the command we want to run is very important. Without it, cmd.exe tries to find a command called !di and bombs. With it, Brian's macro kicks in and expands it to the most recent command that starts with those letters.

Note that a given set of macros only applies to the shell in which it is created. Your macros aren't carried to other currently running shells, nor do they even apply to child shells that you spawn. If you exit your shell, they are gone. If you want to make your macros permanent, first define them as we show above. Then, you can export them into a file by running:
C:\> doskey /macros > macros.txt
You can call the file anything you want, but I like calling it macros.txt because it's easy to remember. Then, at any time, you can import these macros by running:
C:\> doskey /macrofile=macros.txt
If you want to apply your macros to every cmd.exe you launch going forward, you can place your macros.txt in a convenient place on your system (such as your common user home directory or even in system32). Then, put a command like the following into any of your autostart locations:

%systemroot%\system32\doskey.exe /macrofile=%systemroot%\system32\macros.txt

For macros, I typically put them in the autostart entry associated with the command shell itself, namely the Autorun Registry key at HKCU\Software\Microsoft\Command Processor. You can define this key by running:
C:\> reg add "hkcu\software\microsoft\command processor" /v Autorun /t reg_sz /d
"%systemroot%\system32\doskey.exe /macrofile=%systemroot%\system32\macros.txt"
Be careful! If you already have a command set to autorun via this key, you may want to append the command to your already-existing one by simply inserting an & between the two commands. The reg command will prompt you to confirm or reject the overwrite if you already have something there.

*To get around some (but not all) of the limitations of doskey macros, you could alternatively use small bat files placed in %systemroot%\system32 to kick off familiar commands. For example, you could run:
C:\> echo @wmic process list brief > %systemroot%\system32\ps.bat
C:\> ps
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
404 System 8 4 97 774144
The advantage of doing your command-line shrinkage with bat files is that you can now run them as little commands themselves, anywhere in your command invocation:
C:\> echo hello & ps
HandleCount Name Priority ProcessId ThreadCount WorkingSetSize
0 System Idle Process 0 0 1 24576
404 System 8 4 97 745472
The downside of this approach is that your bat file must contain whole commands, not just the start of a command, like we can do with macros. That's why my forever and dve examples above work so well as macros and not as bat files. They are the starters of something else, not whole commands unto themselves, as are the ps and history examples we've touched on here.

So, back to Tim's juicing metaphor. You get to pick your poison with cmd.exe and alias-like behavior. While both methods have some limitations, shell jocks can use macros or bat files to shave off a few precious seconds and boost their performance.

Seth (our Go-To-Mac Guy) Shoots & Scores:
Saving time? Who wants to save time? I thought the whole point of CLI tricks was to make things as painful and as drawn out as possible? Oh wait, that's the job of some of the curlers in Vancouver.

When you start talking about an alias on a Mac, the first thing any Mac user is going to think of is a GUI pointer, otherwise known in Mac land as an Alias. These existed long before the days of OS X, and while not as powerful as, say, Unix symbolic links, they do have some nice features. For example, if you move the target file of an alias, the alias will still work as long as the target is on the same file system. Sadly, though, they don't work from the command line. Use good old ln for that.

But I get off topic. I'm disappointed in Hal; for all his talk about being quick on the draw and not letting the boss see you enjoying that wonderful concoction of roasted bean water and barley, he doesn't take his shell fu to the next level.

Hal left us with:
$ function f { cd `dirname $_`; }
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ f
$ pwd
Great! But who's to say we can't use aliases?
$ alias lastdir='function f { cd `dirname $_`; }; f; echo Your working directory is now $PWD'
$ cp ~/.bashrc ~/stuff/testing/src/clkf/aliases/bashrc.example
$ lastdir
Your working directory is now /home/hal/stuff/testing/src/clkf/aliases
Now you might ask: what's the point of wrapping a function in an alias? It's extra, unneeded text in an already beautifully simple command! True, but it shows the flexibility of one-line commands (I love the semicolon) and saves us an extra step when we load it into our Mac ~/.bash_profile file.

Aliases also allow us to use variables; you just have to remember to single-quote, not double-quote, the command. Otherwise the variable will be resolved when you set the alias.
$ pwd
$ alias shellfu="echo O Canada! Our $HOSTNAME and native $PWD; echo True patriot love in all they sons $SHELL"
$ shellfu
O Canada! Our HomeMac and native /Users/seth
True patriot love in all they sons /bin/bash
$ cd ~/Desktop
$ shellfu
O Canada! Our HomeMac and native /Users/seth
True patriot love in all they sons /bin/bash

So even though we changed the current working directory, our alias is reporting the path that was active when we set the alias. But when we single quote the command:
$ pwd
$ alias shellfu='echo O Canada! Our $HOSTNAME and native $PWD; echo True patriot love in all they sons $SHELL'
$ shellfu
O Canada! Our HomeMac and native /Users/seth
True patriot love in all they sons /bin/bash
$ cd ~/Desktop
$ shellfu
O Canada! Our HomeMac and native /Users/seth/Desktop
True patriot love in all they sons /bin/bash

Ok, so what have we learned? First, you can use variables in aliases. Secondly, you can use multiple arguments and commands in aliases. Let's make this even more interesting.

Suppose it's 2 am. You've just gotten word that a host is down on your network. What's the first thing you do? Personally I ping it.
$ ping -c 3 curlingrocks.ohcanada.com
PING curlingrocks.ohcanada.com ( 56 data bytes
64 bytes from icmp_seq=0 ttl=128 time=0.991 ms
64 bytes from icmp_seq=1 ttl=128 time=0.558 ms
64 bytes from icmp_seq=2 ttl=128 time=0.403 ms

Ok, so I know that is the correct IP address for that server, so DNS is working and the host is responding to pings. I'm not sure what this server does or where it lives on my company network, but it's really important and the Boss wants it back up ASAP. So what do I do? Run some more commands to see what services are running, etc. That's probably faster than trying to dig through crummy documentation.

So with my nice little scan alias:

$ alias scan='hping --count 3 --fast --rawip $_; nslookup $_; echo Results 
from $HOSTNAME to $_; traceroute $_; nmap -n -A -PN $_'

After I ping it, I can immediately find out lots of other information about the host, without having to wait, by just typing "scan". Now, the reason the above works, and we don't have to get into functions like Hal did, is that the last argument in each command is the IP that we want, and it doesn't change. $_ in the last nmap command is actually referring to the last argument of the traceroute command, and so on. If this wasn't the case, we'd have to use a function like Hal showed us, or assign our target IP its own variable.

$ alias scan='TARGET=($_); echo Results from $HOSTNAME to $TARGET; hping 
--count 3 --fast --rawip $TARGET; nslookup $TARGET; traceroute $TARGET;
nmap -n -A -PN $TARGET'

Also remember, if you alias your favorite command and happen to name it something else useful (top, perhaps), you can always bypass the alias with a backslash.

$ \top

Talk about an easy way to disguise the up'ers!

Tuesday, February 16, 2010

Episode #82: Hippy Barfday Spew Do You?

Ed's Got Sed (well, a little bit of it at least):

In a celebratory mood, I belt out:
C:\> cmd.exe /v:on /c "for /f "delims=" %i in ('echo Hippy barfday spew do you!')
do @set stuff=%i& echo !stuff! & set stuff=!stuff:i=a! & echo !stuff! & set
stuff=!stuff:arf=irth! & echo !stuff! & set stuff=!stuff:spew=to! & echo !stuff!
& set stuff=!stuff: do=! & echo !stuff! & set stuff=!stuff:yo=f! & echo. & echo
!stuff!"
Hippy barfday spew do you!
Happy barfday spew do you!
Happy birthday spew do you!
Happy birthday to do you!
Happy birthday to you!

Happy birthday to fu!

If you couldn't tell, we're celebrating the FIRST BIRTHDAY of this blog! Yes, we've already made it through one orbit of that big ball of gas at the center of our Solar System, and we're looking forward to even more. It was one year ago today that three merry command line bandits decided to take a fun little brawl of command-line one-upmanship from Twitter to blog format, so we could get into deeper fu. As a birthday present, my command above shows how we can perform string substitutions at the Windows cmd.exe command line.

I've never hidden the fact that I've often longed for the sed command, which allows for nifty stream editing. Although it's got a ton of flexible features, one of sed's most common uses is replacing a string with another string in a stream of data, such as Standard Output. What's not to like? Well, the fact that we don't have a built-in equivalent in cmd.exe is one thing that's a bummer.

So, I got to thinking about this problem the other day, when I realized that I could do the substitution and replacement thing using string altering options in cmd.exe. I could take the data I want to alter, put it in a variable, and then change all occurrences of given strings to other strings using the notation:
set string=%string:original=replacement%

Or, if we use delayed environment variable expansion, we rely on:
set string=!string:original=replacement!

I've done just that above, starting by turning on delayed environment variable expansion (cmd.exe /v:on) to execute the command (/c) of FOR /F. My FOR /F command is designed to take the output of my echo command and put it in the variable %i. Alternatively, if I wanted to change text inside of a file, I could have used the "type filename" command instead of echo, iterating through each line of the file making substitutions. I turn off default parsing on spaces and tabs using "delims=", so that my whole line of data gets shoved into %i. This is all very routine stuff for life inside of cmd.exe, even on your barf^h^h^h^hbirthday.

Then, I move my iterator value into a variable that I can take action on (set stuff=%i). I now can use my string replacement technique to start altering that variable, in a shallow and pale (but useful) mimicking of but one of the features of sed.

I can change individual characters into other characters, such as "i" to "a":
set stuff=!stuff:i=a!

I can change multi-character substrings like barf into birth:
set stuff=!stuff:arf=irth!

I can replace whole words, changing "spew" into "to":
set stuff=!stuff:spew=to!

I can delete whole words, like " do":
set stuff=!stuff: do=!

This one might be worth a note. Here, I'm replacing something with nothing by placing nothing after the equals sign. The item I'm replacing is " do", with a space in it. Otherwise, I'd have a double space left behind. And, yes, I could have alternatively replaced "do " (with a space after it) with nothing. Or, I could have replaced " do " with " " using !stuff: do = !. There are many options.

And I can even take bigger strings and replace them with smaller strings, changing "you" into "fu":
set stuff=!stuff:yo=f!
This is really cool and useful, but it does have some limitations. Note that every instance of the substring I specify is replaced, and there really is no means for just changing, for example, the first occurrence of the substring. Also, this doesn't work for non-printable ASCII characters. You have to be able to type it to get it into that syntax. I've also gotta shove everything into a string to make this work, but that's not so bad.

So, there you have it... a highly obfuscated command for wishing ourselves Happy Birthday.

Whatcha got Hal & Tim?

Tim blows out the candles:

Tim lets it rip:
PS C:\> "Hippy barfday spew do you!" | Tee-Object -Variable stuff; 
$stuff -replace "i","a" | Tee-Object -Variable stuff;
$stuff -replace "arf","irth" | tee -var stuff;
$stuff -replace "spew","to" | tee -va stuff;
$stuff -replace " do","" | tee -va stuff;
Write-Output; $stuff -replace "yo","f"

Hippy barfday spew do you!
Happy barfday spew do you!
Happy birthday spew do you!
Happy birthday to do you!
Happy birthday to you!

Happy birthday to fu!

One orbit around that big ball of gas, huh? I'm sure there is a joke related to Ed and his love of beans, but we are here to celebrate, not disgust.

For those of you who have followed the blog since the beginning, the original third bandit was Paul Asadoorian, not me. For The Three Stooges fans out there, I guess you could call me Curly, and Paul would be Shemp. Although, I can't decide whether that would make Hal Moe or Larry. Let's get back to business and this week's PowerShell nyuk, nyuk, nyuk.

It is rather fitting that PowerShell is second this week. Cmd's string replacement is pretty weak and the syntax is terrible, while Linux is the opposite. PowerShell is somewhere in between, but much closer to the Linux side of things.

Before we dig into the entire command above, we'll first do the string substitution without all the extra output.

PS C:\> "Hippy barfday spew do you!" -replace "i","a" 
-replace "arf","irth" -replace "spew","to" -replace " do",""
-replace "yo","f"

Happy birthday to fu!

The Replace operator is used to replace strings, duh! By default, the Replace operator is case insensitive; to be explicitly case insensitive, use the IReplace operator. For a case sensitive replace, use the CReplace operator.

Now, let's do all of Ed's tricks:

Change individual characters into other characters, such as "i" to "a":
... -replace "i","a"

Change multi-character substrings like barf into birth:
... -replace "arf","irth"

Replace whole words, changing "spew" into "to":
... -replace "spew","to"

Delete whole words, like " do":
... -replace " do",""

We can even do tricks that Ed can't, by using regular expressions:
Replace "y" with "f" but only if it is the first character in a word:
PS C:\> "Happy birthday to do yu!" -replace "\sy"," f"
Happy birthday to do fu!

Swap the first two words in a line:
PS C:\> "birthday Happy to fu!" -replace "^(\w+)\s(\w+)","`$2 `$1"
Happy birthday to fu!

The last command uses regular expression groups. We won't go into the depths of regex, but in short, "\w+" will grab a word and "\s" will grab a space. The caret (^) is used to anchor the search to the beginning of the string, and the parentheses are used to define the groups. In the replacement portion we use `$1 and `$2 to represent (respectively) the first and second groups (words) found. Since we want to output them in reverse order, we use "`$2 `$1" to put the second word before the first.

Back to the original command:

PS C:\> "Hippy barfday spew do you!" | Tee-Object -Variable stuff; 
$stuff -replace "i","a" | Tee-Object -Variable stuff;
$stuff -replace "arf","irth" | tee -var stuff;
$stuff -replace "spew","to" | tee -va stuff;
$stuff -replace " do","" | tee -va stuff;
Write-Output; $stuff -replace "yo","f"

We want to display each change as it happens. To pull this off we have to use the Tee-Object cmdlet. Similar to Linux's tee command, Tee-Object takes the command output and saves it in a file or variable, as well as sending it down the pipeline or to the console.

If we break it down, this command has three parts that are repeated.

<input object> | Tee-Object -Variable stuff; $stuff
-replace <original>,<replacement>

We start with the input object "Hippy barfday spew do you!" and pipe it into Tee-Object (alias tee). The only reason we use Tee-Object is so we can display the output and work with it further down the pipeline. After tee, we do the replace. The output of the previous portion becomes the input for the next. Rinse and repeat.

Towards the end of the command we throw in the Write-Output cmdlet (aliases write and echo) to add the extra line break.

One quick thing to note, when using the Tee-Object cmdlet's Variable parameter, do not use a $. The parameter accepts a string, which is the name of the variable.

So that's a more lucid version of Ed's highly obfuscated command, and now it is time for Hal to hand out the birthday spankings.

Hal Says

Huh, I was sure Ed was Curly. At Ed's current rate of hair loss, he's going to resemble Curly before too much longer.

Hmmm, my Windows colleagues are desperately trying to achieve some of the functionality of sed in their silly little command shells. Here's a hint guys: Cygwin is your friend. Then you could do things like this:

$ echo 'Hippy Barfday Spew Do You!' | 
sed 's/\(H\)i\(ppy B\)arf\(day \)Spew D\(o \)Yo\(u!\)/\1a\2irth\3T\4F\5/'

Happy Birthday To Fu!

The whole trick here is leveraging sub-expressions-- the text enclosed in "\(...\)"-- in the first part of the sed substitution expression. Essentially I'm using the sub-expressions to "save" the bits of the line that I want to keep. You'll notice the bits of our input string that I don't want are carefully kept outside the "\(...\)" boundaries.

You can refer to the contents of the sub-expressions on the righthand side of the substitution using the \1, \2, ... variables. Sub-expressions are numbered left to right by opening parenthesis-- this is an important distinction when you start doing crazy stuff like nested sub-expressions. In this case, however, all I have to do is output the contents of my sub-expressions in order with appropriate text in between them to form our final message.
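A tiny, contrived illustration of that left-to-right numbering with nested sub-expressions (not from the episode itself):

```shell
# \1 is the outer group "ab" (its "\(" opens first),
# \2 is the inner group "b":
echo 'abc' | sed 's/\(a\(b\)\)c/\2-\1/'
```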

So really I'm just using sed sub-expressions like a cookie cutter here to chop out the bits of the line I want. This functionality makes sed very useful as a surgical tool for reformatting text into a regular format. Another example of this comes from one of our earliest Episodes where I showed Paul how to bring the sed madness to parse the output of the host command.

Now the problem is that these sed expressions always end up looking awkward because of all of the backwhacks floating around. If you have GNU sed handy, you can use the "-r" (extended regex) option. This allows you to create sub-expressions with "(...)", saving yourself a lot of backwhack abuse:

$ echo 'Hippy Barfday Spew Do You!' | 
sed -r 's/(H)i(ppy B)arf(day )Spew D(o )Yo(u!)/\1a\2irth\3T\4F\5/'

Happy Birthday To Fu!

Still ugly, but definitely more readable.

Thanks everybody for taking time out of your busy lives to keep up with our humble little blog in the past year. We'll save you a bit of the birthday cake!

Tuesday, February 9, 2010

Episode #81: From the Mailbag

Hal checks out the mail

We love getting email from readers of the blog. And we love getting cool shell hacks from readers even more. Recently, loyal reader Rahul Sen sent along this tasty little bit of shell fu:

How to search for certain text string in a directory and all its subdirectories, but only in files of type text, ascii, script etc:

$ grep 9898 `find /usr/local/tripwire -type f -print | xargs file |
egrep -i 'script|ascii|text' | awk -F":" '{print $1}'`


That's totally cool, Rahul!

Honestly, when I first looked at this I thought, "There's got to be a shorter way to do this." But the tricky part is the "only in files of type text, ascii, script, etc" requirement. This basically forces you to do a pass through the entire directory first in order to locate the relevant file types. Thus the complicated pipeline to pass everything through the file command and egrep.

A few minor improvements I might suggest:

  1. I'm worried that for a large directory you'll end up returning enough file names that you exceed the built-in argument list limits in the shell. So it might be better to use xargs again rather than backticks.

  2. I probably would have chosen to use sed at the end of the pipeline rather than awk, just to be more terse.

  3. You don't actually need "-print" on modern versions of find-- it's now the default. Only old-timers like me and Rahul end up doing "-print" all the time because we were trained to do so by old versions of find.
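On point 1, the ceiling Hal is worried about is easy to check: on Linux, getconf reports the kernel's limit on the combined size of arguments and environment for a new process:

```shell
# Maximum bytes of arguments + environment; blow past this with
# backtick expansion and the shell fails with "Argument list too long":
getconf ARG_MAX
```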

So my revised version would look like:

$ find /usr/local/tripwire -type f | xargs file |
egrep -i 'script|ascii|text' | sed 's/:.*//' | xargs grep 9898
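One more hedge worth noting: the pipeline above splits on whitespace, so file names containing spaces will break it. Here's a space-safe sketch, assuming GNU find and xargs (for -print0/-0, -d, and -r); paths containing colons would still confuse the sed step, and grep -l is swapped in to print just the matching file names:

```shell
# NUL-separate file names so paths with spaces survive xargs;
# the directory and pattern are the same ones from Rahul's command.
find /usr/local/tripwire -type f -print0 | xargs -0 file |
  egrep -i 'script|ascii|text' | sed 's/:.*//' |
  xargs -r -d '\n' grep -l 9898
```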

Mmmm-hmmm! That's some tasty fu! Let's see what my Windows brethren have cooking...

Ed Unfurls:
This is a helpful little technique. Now, unfortunately at the Windows command line (man, if I only had a dime ever time I said that phrase), we do not have the "file" command to discern the type of file. But, fear not! We do have a couple of alternative methods.

For a first option, we could use a nifty feature of the findstr command to ignore files that contain non-printable characters. When run with the /p option, findstr will skip any files that contain high-end ASCII sequences, letting us pass over EXEs, DLLs, and other binary stuff. It's not as fine-grained a scalpel as scraping the output of the Linux file command for script, ascii, and text, but it'll serve us well as follows:

C:\> findstr /s /p 9898 *

Here, I'm using findstr to recurse the file system (/s) from wherever my current working directory is, skipping files with non-printable characters (/p), looking for the string 9898 in any file (*). If you want to get even closer to the original, we can specify a directory where we want to start the search using the /d: option as follows:

C:\> findstr /d:C:\windows /s /p 9898 *

Now, for our second option, there is another way to refine our search besides the /p option of findstr, getting us a little closer to the file types Rahul specified in Linux using the find command. It turns out that Microsoft actually put an indication of each file's type in the name of the file itself. You see, by convention, Windows file names have a dot followed by three letters that indicate the file type. Who knew?!?! :)

To map the desired functionality to Windows, we'll rely on file name suffixes to look inside of .bat, .cmd, .vbs, and .ps1 files (various scripts), .ini files (which often contain config info), and .txt files (which, uh... you know). What's more, many commands associated with searching files allow us to specify multiple file names, with wild cards, such as the dir command in this example:

C:\> dir *.bat *.cmd *.vbs *.ps1 *.ini *.txt

And, happy to say, dir isn't the only one that lets us look for multiple file names with wildcards. For my second solution to this challenge, I'm going to use a FOR /R loop. These loops recurse through a directory structure (/R, doncha know) setting an iterator variable to the name of each file that is encountered. Thus, we can use the following command as a rough equivalent to Rahul's Linux fu:

C:\> FOR /R C:\ %i in (*.bat *.cmd *.vbs *.ps1 *.ini *.txt) do @findstr 9898 "%i" && echo %i

Here, I'm running through all files found under C:\ and its subdirectories, looking inside of any file that has a suffix of .bat, .cmd, etc, running findstr on each file (whose name is stored in %i, which has to be surrounded in quotes for those cases when the value has one or more spaces in the file name) looking for 9898. And, if I successfully find a match, I echo out the file's name. Now, this output looks a little weird, because the file's name comes after each line that contains the string. But, that is a more efficient way to do the search. Otherwise, I'd have to introduce unnecessary complexity by using a variable and parsing to store the line of the file and print its name first, then print the contents. I'd certainly do that for prettiness in a script. But, at the command line by itself, I'd eschew the complexity and just go with what I've shown above to get the job done.

Now, there's a gazillion other ways to do this as well. For a third possibility, we could take the first option above (findstr) and use the multiple file suffix specification of option 2 (*.bat *.cmd *.vbs *.ps1 *.ini *.txt) to come up with:

C:\> findstr /d:C:\windows /s 9898 *.bat *.cmd *.vbs *.ps1 *.ini *.txt

I actually like this third approach best, because it's relatively easy to type, makes a bunch of sense, has nicer-looking output than the FOR /R option, and has better performance.

Fun, fun, fun! Thanks for the great suggestion, Rahul.

Whatcha got for us, Tim?

Tim delivers:

Sadly, PowerShell lacks the "file" command, just like the standard Windows command line. There isn't a PowerShell cmdlet similar to "findstr /p" either, but of course we could use findstr, since all the standard Windows commands are available in PowerShell. Ed already covered findstr, so we will use just PowerShell cmdlets.

If we know that the files in question are in a specific directory, not subdirectories, there is a pretty simple command to find the files using the Select-String cmdlet.

PS C:\> Select-String 9898 -Path *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-List | Select Path


According to the documentation "the Select-String cmdlet searches for text and text patterns in input strings and files. You can use it like Grep in UNIX and Findstr in Windows." Whoah, big difference there! Grep has much more robust regular expression support when compared to FindStr, and yes, PowerShell does give us rich regular expressions.

Back to the task at hand. We only care if the file contains the text in question, not how many times the text is found in the file. The List parameter is used as a time saver, since it will stop searching after it finds the first match in a file.

For each match, the default console output displays the file name, line number, and all text in the line containing the match. Of course the output is an object and we just want the file's path, so the results are piped into Select-Object (alias select) in order to get the full file path.

But we want to search through subdirectories too. To do that we have to use Get-ChildItem (alias dir, gci, ls).

PS C:\> Get-ChildItem -Include *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-Recurse | Select-String 9898 -List | Select-Object path


The Recurse parameter specifies that the search should recursively search through subdirectories. The Include parameter retrieves only the files that match our filter. We could use the Exclude parameter if we wanted to search all files that aren't exe's or dll's.

PS C:\> Get-ChildItem -Exclude *.exe,*.dll -Recurse |
Select-String 9898 -List | Select-Object path
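For Unix readers keeping score, GNU grep can do this recursive include/exclude filtering in a single command; a rough shell counterpart to the Get-ChildItem pipelines, not a PowerShell feature:

```shell
# -r recurses, -l lists only matching file names, and --include
# repeats once for each extension we care about:
grep -rl --include='*.bat' --include='*.cmd' --include='*.vbs' \
     --include='*.ps1' --include='*.ini' --include='*.txt' 9898 .
```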

The command can be shortened even further, since the full parameter name doesn't have to be used. As long as the shortened parameter name isn't ambiguous, the short version works. Since no other parameters start with R or I, this command will work as well:

PS C:\> ls -i *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt -r |
Select-String 9898 -l | select path

There is a catch if you are using version 1 of PowerShell: the Select-String cmdlet doesn't natively take the input from Get-ChildItem and use it to specify the path. We have to use a ForEach-Object loop (alias %) in order to accomplish the same task.

PS C:\> Get-ChildItem -Include *.bat,*.cmd,*.vbs,*.ps1,*.ini,*.txt
-Recurse | % { Select-String 9898 -List -Path $_.FullName } |
Select-Object path

A side note:
As you probably already know, Windows XP, Vista, and 2003 don't come with PowerShell and require a separate install, but for the love of Pete, install it. Version 2 has been available for quite a while and there are many enhancements over v1 (and even more when compared to cmd). Windows 2008 R2 and Windows 7 come with v2 by default.

PowerShell v1 in Windows 2008 (R1) is an optional feature that needs to be enabled. It can be installed using the Windows command line by running this command:

C:\> ServerManagerCmd.exe -install PowerShell

This is probably the most useful Windows command available (sorry Ed).

Seth Matheson (Our Brand-Spankin'-New Mac OS X Go-To Guy) Interjects:

While it's great to have 3 or 4 commands to string together to get the output you want (heck, this is part of the joy of the power of 'nix, the freedom to do almost anything because you're not restricted to a specific command), sometimes it's nice to have one command that will do it for you, out of the box.

Enter Spotlight on the Mac. Spotlight is Apple's built-in, OS-level search that works from file metadata, and even though everyone rags on Apple for the GUI, they usually put in a command-line function or two for our trouble.

$ mdfind evil

The above command will search file names and file contents for the string "evil". Simple enough right?

$ mdfind evil -onlyin /Users/ed

Still pretty simple, this will limit the search for "evil" in Ed's home directory. I'm sure we won't find anything, right Ed?

So back to the original challenge: what good would this be unless it could search by file type? Guess what, it can!

$ mdfind "evil kind:text" -onlyin /Users/ed

This will find all text files in Ed's home directory with the string "evil". Wonder what that'll come up with...

You can use kind for other things too, Applications, Contacts, Folders, etc.

This is only a very simple Spotlight search. You can go much further. Everything in OS X (10.4 and above) has meta tags that detail all sorts of interesting things about the files. Run the following to find out just what kind of data that Spotlight sees:

$ mdls "/Users/ed/Desktop/March CLK Fu.txt"

kMDItemContentCreationDate = 2010-02-09 17:54:00 -0500
kMDItemContentModificationDate = 2010-02-09 17:54:00 -0500
kMDItemContentType = "public.plain-text"
kMDItemContentTypeTree = (
kMDItemDisplayName = "March CLK Fu.txt"
kMDItemFSContentChangeDate = 2010-02-09 17:54:00 -0500
kMDItemFSCreationDate = 2010-02-09 17:54:00 -0500
kMDItemFSCreatorCode = ""
kMDItemFSFinderFlags = 0
kMDItemFSHasCustomIcon = 0
kMDItemFSInvisible = 0
kMDItemFSIsExtensionHidden = 0
kMDItemFSIsStationery = 0
kMDItemFSLabel = 0
kMDItemFSName = "March CLK Fu.txt"
kMDItemFSNodeCount = 0
kMDItemFSOwnerGroupID = 501
kMDItemFSOwnerUserID = 501
kMDItemFSSize = 3
kMDItemFSTypeCode = ""
kMDItemKind = "Plain text"
kMDItemLastUsedDate = 2010-02-09 17:54:00 -0500
kMDItemUsedDates = (
2010-02-09 00:00:00 -0500

All fun things you can search by! For example:

$ mdfind "kMDItemFSOwnerGroupID == '501'"

Will get everything owned by UID 501 (usually the first created user on the system).

Note: It should be mentioned that most of the 'nix command line fu on this site will work on a Mac, it is after all BSD under the hood. That being said, sometimes we can save some time with the built in utilities, like Spotlight.

Tuesday, February 2, 2010

Episode #80: Time Bandits

Tim stomps in:

I have always wanted to time travel. Since it isn't possible to go back and kill Hitler, I thought, maybe we can go back in time and change some files. Obviously, we don't have the technology to actually go back in time and make changes. However, what if we could make it appear that we went back in time by altering timestamps?

First, let's create a few files for our time warp. By the way, these next commands are all functionally equivalent.

PS C:\> Write-Output aaaa | Out-File a.txt
PS C:\> Write bbbb | Out-File b.txt
PS C:\> echo cccc | Out-File c.txt
PS C:\> echo dddd > d.txt

Now to see the time related properties available to us for the file object.

PS C:\> Get-ChildItem | Get-Member -MemberType Property | Where-Object { $_.Name -like "*time*" }

TypeName: System.IO.FileInfo

Name MemberType Definition
---- ---------- ----------
CreationTime Property System.DateTime CreationTime {get;set;}
CreationTimeUtc Property System.DateTime CreationTimeUtc {get;set;}
LastAccessTime Property System.DateTime LastAccessTime {get;set;}
LastAccessTimeUtc Property System.DateTime LastAccessTimeUtc {get;set;}
LastWriteTime Property System.DateTime LastWriteTime {get;set;}
LastWriteTimeUtc Property System.DateTime LastWriteTimeUtc {get;set;}

The Get-Member cmdlet (alias gm) is used to get the properties and methods available for an object that has been sent down the pipeline. In our case, the object sent down the pipeline is the file object. We just want to look at the properties (not methods, scriptproperties, etc) of the object, so we use the MemberType parameter for filtering. Then the Where-Object cmdlet (alias ?) is used to filter for properties with "time" in the name. The properties above are read/write, as shown by {get;set;}. Well, lookey there, we can set the timestamps!

An important side note: The Get-Member cmdlet is an extremely useful command. I can't begin to say how often I've used this command to find properties and methods available for an object. There are other MemberTypes, but we will have to cover those at a later time. For a full description check out the MemberType parameter on this Microsoft help page.

The xxxxxTime and xxxxxTimeUtc properties represent the same underlying timestamp; the only difference is whether it is displayed in local time or UTC. In my case, the difference is 6 hours. For the sake of brevity the UTC times will be ignored since they are essentially redundant.

Let's take a look at our files.

PS C:\> gci | select Name, LastWriteTime, CreationTime, LastAccessTime

Name LastWriteTime CreationTime LastAccessTime
---- ------------- ------------ --------------
a.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
b.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
c.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM
d.txt 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM 1/23/2010 11:16:24 AM

Now let's go back in time.

(gci a.txt).LastAccessTime = Get-Date ("1/1/2010")
(gci b.txt).CreationTime = Get-Date ("1/1/2010")
(gci c.txt).LastWriteTime = Get-Date ("1/1/2010")

Since there isn't a cmdlet for setting the time properties, we need to access properties in a different manner. To do this, we get the object and use the dot notation to access the property. The Get-Date cmdlet creates a valid date/time object for our new timestamp. Let's see how that worked.

PS C:\> gci | select Name, LastWriteTime, CreationTime, LastAccessTime

Name LastWriteTime CreationTime LastAccessTime
---- ------------- ------------ --------------
a.txt 1/23/2010 11:16:24 AM 1/23/2010 12:07:05 PM 1/1/2010 12:00:00 AM
b.txt 1/23/2010 11:16:24 AM 1/1/2010 12:00:00 AM 1/23/2010 12:07:05 PM
c.txt 1/1/2010 12:00:00 AM 1/23/2010 12:07:05 PM 1/23/2010 12:07:05 PM
d.txt 1/23/2010 11:16:24 AM 1/23/2010 12:07:05 PM 1/23/2010 12:07:05 PM

Interesting, the times have been changed, but can we forensically find a difference? After taking an image of the drive and using the istat tool from the SleuthKit.org guys, it is very obvious that something weird has happened. Let's take a look at an istat output snippet to see where the problem lies.

Flags: Archive
Owner ID: 0
Security ID: 408 ()
Created: Fri Jan 01 00:00:00 2010
File Modified: Fri Jan 23 11:16:24 2010
MFT Modified: Fri Jan 23 11:20:43 2010
Accessed: Fri Jan 23 11:16:24 2010

$FILE_NAME Attribute Values:
Flags: Archive
Name: a.txt
Parent MFT Entry: 9946 Sequence: 10
Allocated Size: 0 Actual Size: 0
Created: Fri Jan 23 11:16:24 2010
File Modified: Fri Jan 23 11:16:24 2010
MFT Modified: Fri Jan 23 11:16:24 2010
Accessed: Fri Jan 23 11:16:24 2010

The PowerShell commands modify the STANDARD_INFO's created, file modified and accessed times. As you can see, there is a discrepancy in the Creation times between the FILE_NAME and STANDARD_INFO attribute values. Also, if you look at the STANDARD_INFO's MFT Modified date you can take a good guess as to when this change was made. The MFT Modified stamp is updated to the current system time whenever any of our PowerShell commands make a change to the file. The MFT Modified timestamp only marks the last change, so we could change all the dates to make it more confusing as to what change happened at that time.

While making the changes to the timestamps in PowerShell is effective when looking at the file system via the GUI or command line, not everything is hidden when looking at it with forensic tools.

While we can't go back in time and bump off Hitler, we did go back in time and take this functionality out of the standard Windows command line. OK, that's not really true. But what is true is that there is no way in cmd.exe to manipulate time stamps. So Ed won't be joining us with any cmd fu this week.

Hal has the touch:

Changing atimes and mtimes on Unix files is easy because we have the touch command. If you simply "touch somefile", then the atime and the mtime on that file will be updated to the current date and time (assuming you're the file owner or root).

But if you're root, you can also specify an arbitrary time stamp using the "-t" flag:

# touch -t 200901010000 /tmp/test
# alltimes /tmp/test
atime: Thu Jan 1 00:00:00 2009
mtime: Thu Jan 1 00:00:00 2009
ctime: Sun Jan 24 05:33:56 2010

The ctime value is always set to the current date and time, because the touch command is updating the atime and mtime meta-data in the inode and ctime tracks the last meta-data update. By the way, don't bother looking for the alltimes command in your Unix OS. It's a little Perl script I wrote just for this Episode (download the script here).

A couple of our loyal readers wrote in to remind me of the stat command as an alternative to my alltimes script. stat has a different output format on different Unix-like OSes, but on Linux I could have done:

# stat /tmp/test | tail -3
Access: 2009-01-01 00:00:00.000000000 -0800
Modify: 2009-01-01 00:00:00.000000000 -0800
Change: 2010-02-03 15:26:33.279577981 -0800

Anyway, thanks everybody for the stat reminder!

The touch command also has "-a" and "-m" flags that allow you to selectively update only the atime or the mtime:

# touch -a -t 200909090909 /tmp/test
# touch -m -t 201010101010 /tmp/test
# alltimes /tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Sun Jan 24 05:49:29 2010

As you can see from the above example, touch is perfectly willing to set timestamps into the future as well as the past.
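touch also has an "-r" flag that copies the timestamps from a reference file rather than taking an explicit stamp, which is handy for making a file blend in with its neighbors (the paths here are just illustrative):

```shell
# Give /tmp/test the same atime and mtime as /etc/hosts
# (ctime still jumps to "now", for the same reason as above):
touch -r /etc/hosts /tmp/test
```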

OK, so what about tweaking the ctime value? In general, setting the ctime on a file to an arbitrary value requires specialized, file system dependent tools. The good news(?) is that for Linux EXT file systems, the debugfs command will let us muck with inode meta-data. If you're dealing with other file system types or other operating systems, however, all I can say is good luck with your Google searching.

debugfs has a huge number of options that we don't have time to get into here. I'm just going to show you how to use set_inode_field to update the ctime value:

# debugfs -w -R 'set_inode_field /tmp/test ctime 200901010101' /dev/mapper/elk-root
debugfs 1.41.9 (22-Aug-2009)

The "-w" option specifies that the file system should be opened read-write so that we can actually make changes-- by default debugfs will open the file system in read-only mode for safety. We also need to specify the file system we want to open as the last argument. Sometimes this will be a disk partition device name like "/dev/sda1", but in my case I'm using LVM, so my disk devices have the "/dev/mapper/" prefix. If you're not sure what device name to use you can always run a command like "df -h /tmp/test" and look for the device name in the first column.

The "-R" option can be used to specify a single debugfs command to run in non-interactive mode. Note that there's also a "-f" option that allows you to specify a file of commands you want to run. If you leave off both "-R" and "-f" you'll end up in an interactive mode where you can run different commands at will.

In this case, however, we're going to use "-R" and run set_inode_field to set the ctime on /tmp/test. As you can see, you use a time stamp specification that's very similar to the one used by the touch command. And speaking of touch, we could use debugfs to "set_inode_field ... atime ..." or "set_inode_field ... mtime ..." instead of touch if we wanted to. This would allow us to update the atime/mtime values for a file without updating the ctime like touch does.

Anyway, now our ctime value should be updated, right? Let's check:

# alltimes /tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Sun Jan 24 05:49:29 2010

That doesn't look right! What's going on?

What's happening is that we've run afoul of the Linux disk cache. We actually have updated the information in the inode, but we've done it in such a way as to do an end-run around the normal Linux disk access routines, so our changes are not reflected in the in-memory file system cache. The good news is that (at least as of Linux kernel 2.6.16) there's a simple way to flush the inode cache:

# echo 2 > /proc/sys/vm/drop_caches
# alltimes /tmp/test
atime: Wed Sep 9 09:09:00 2009
mtime: Sun Oct 10 10:10:00 2010
ctime: Thu Jan 1 01:01:00 2009

That looks better!

By the way, if you "echo 1 > /proc/sys/vm/drop_caches", that flushes the page cache. If you "echo 3 > /proc/sys/vm/drop_caches" it flushes both the page cache and the inode/dentry cache.