Tuesday, June 30, 2009

Episode #49: There's No Place Like %HOMEPATH%

Ed.bat >> \\commandlinekungfu.com\2009\06\episode-49-theres-noplace-like-homepath.html

It was so long ago, yet I remember it like it was yesterday. We were just a bumptious young group of shell jocks, plying our trade, trying to make the world a better place. Life was simple then, and fun. Faithful readers may recall, with a certain nostalgia, Episode #28, a friendly little ditty about environment variables. We focused on methods for setting them, displaying them, and interacting with a couple of them, namely those associated with the PATH, the current username, and the prompt. Ahhhhh... the good old days.

Well, although times have changed and we're all a little worse for the wear, I'd like to build on and extend the ideas from that episode. This time around, we'll list our favorite variables that we didn't cover before and show some useful applications of them.

As I mentioned in Episode #28, the "set" command lists environment variables, we can expand a variable name into its value by surrounding it with percent signs (%var%), and the variable names themselves are case insensitive. Just to make it easier to type, I almost always refer to my variables in all lower case, even though many are specifically defined with upper case or mixed case.

One of the more useful variables we have is homepath, which is a reference to our current user's home directory. We don't have the nifty ~ to refer to our home directory as our bashketeer friends do, but we can get similar functionality with:

C:\> cd %homepath%
C:\Users\Ed>
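
By the way, %homepath% holds only the path portion of the home directory, with no drive letter (that lives in %homedrive%). A quick hedged example pairing the two, so the reference works no matter which drive you happen to be on:

C:\> dir /b %homedrive%%homepath%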

Another useful environment variable is computername, which stores the name of the local machine. In some environments, the local computer's hostname is a byzantine mess of obtuse nonsense that is hard to type. Yet, we can always refer to it simply using %computername% in our commands. For example, here's how we can make a null SMB connection with ourselves, referring to our computer name:

C:\> net use \\%computername% "" /u:""
The command completed successfully.

C:\> net use \\%computername% /del
\\DOOFENSHMIRTZ was deleted successfully.
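
Since the value is always at hand, %computername% is also handy for tagging output files with the machine they came from when you're collecting data across a bunch of boxes. A hedged example (the file name here is just made up):

C:\> systeminfo > %computername%-sysinfo.txt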

While the output of the set command shows us all of our statically set variables, there are some other very useful variables that aren't static, such as %cd% (which is the current working directory), %date% (which is the current date, used in Episode #48), and %errorlevel% (which is an indication of whether the previously executed command had an error, as I mentioned in Episode #47). I'd like to discuss two more dynamic variables in more detail, because they can be pretty darned useful: %time% and %random%.
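
Here's a quick hedged peek at all three in one line (your values will differ, of course):

C:\> echo %cd% & echo %date% & echo %errorlevel%
C:\Users\Ed
Tue 06/30/2009
0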

The %time% variable displays the current time down to one-hundredths of a second. We can use that to get a feel for how long it takes a given command or series of commands to run, sort of emulating the Linux time command. Back in Episode #21, I actually resorted to using the Cygwin time command to measure how long it took to run the command "dir /s /b C:\ | findstr /i vmware > nul". Instead of relying on Cygwin, I could have used the %time% variable as follows:

C:\> cmd.exe /v:on /c "echo !time! & (dir /s /b C:\ | findstr /i vmware > nul)
& echo !time!"

9:07:34.54
9:07:40.45

In this command, I'm invoking a cmd.exe with /v:on to turn on delayed environment variable expansion so that my time will expand to its current value when I call it (as described in Episode #48). Otherwise, it'll always be the same value it contains when I hit Enter. I tell my cmd.exe to run the command (/c) that will echo !time! (remember, delayed variable expansion requires us to refer to !var! and not %var%). It then runs whatever we have inside the parens (), which is the command whose running time we want to determine. You don't really need the parentheses, but I included them to help offset the command we are timing. Then, after our command is finished running, I display the current time again. While this will not do the math for us of showing the delta in times like the Linux time command (that would require a little script to parse the output), it does give us a decent feel for how long it took to run a command, without having to install Cygwin.

And, that gets us to %random%, a variable that expands to a random number between 0 and 32767. We can turn that into a random number between 0 and whatever we'd like (less than 32767) by making our shell do math with the set /a command (described in Episode #25), applying modulo arithmetic using the % operator:

To create a random number between 0 and 99:

C:\> set /a %random% % 100
42

To create a random number between 1 and 10:

C:\> set /a %random% % 10 + 1
7

And, finally, to flip a coin:

C:\> set /a %random% % 2
1

That last one should save you some change.
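
And should your kung fu battles ever call for a six-sided die, the same trick scales. One caveat worth noting: at the interactive prompt the modulo operator is a single %, but inside a batch script you'd have to write it as %%:

C:\> set /a %random% % 6 + 1
4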

Now, let's see what Hal.sh has up his sleeve.

Hal won't get sucked in:

Hal.sh? Ed, I refuse to participate in these twisted little scenarios with you anymore. I would have thought you'd be tired after last night! And by the way, that French Maid outfit does nothing for your legs, big guy.

So, hmmm, let's see. Oh yeah! Shell fu!

Just to quickly give you the shell equivalents of Ed's rogue's gallery of Windows variables, I present:

$ echo $HOME
/home/hal
$ echo $HOSTNAME
elk
$ echo $PWD
/tmp
$ echo $RANDOM
29597
$ echo $(( $RANDOM % 10 + 1 ))
3

There's no variable in bash to tell you the time because we have the "date" command. With the right formatting options, we can get date to give us the time to nanosecond precision:

$ date +%T.%N
10:41:29.451948717
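
And since date can also emit seconds since the epoch (%s), you can even compute the actual delta that Ed's cmd.exe fu couldn't give him. A minimal hedged sketch, assuming your date supports %N and you have bc installed (the find command is just a stand-in workload):

$ start=$(date +%s.%N); find / -iname '*vmware*' >/dev/null 2>&1; echo "$(date +%s.%N) - $start" | bc
5.912873019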

We can even incorporate the time into our shell prompt and/or history trail:

$ export HISTTIMEFORMAT='%T  '
$ export PS1='(\t)$ '
(10:49:53)$ history
1 10:49:34 export HISTTIMEFORMAT='%T '
2 10:49:53 export PS1='(\t)$ '
3 10:49:56 history
(10:49:56)$

HISTTIMEFORMAT supports the standard escape sequences used by the strftime(3) library call. Usually, this means your choices are a little more limited compared to all of the various options available to the date command on most Linux systems (no %N, for example) but are typically more than sufficient. Notice that the bash shell also has built-in date/time escape sequences that can be used in prompts-- like the "\t" in our example above.
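
If you want the full date in your history trail too, strftime makes that easy. A small hedged example (the history number and timestamp shown are made up):

$ export HISTTIMEFORMAT='%F %T  '
$ history | tail -1
  103  2009-06-30 10:52:03  history | tail -1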

But there are a bunch of other shell variables that I find useful. CDPATH, for example, is a list of directories the shell will search through when you use a non-absolute path with the cd command:

$ export CDPATH=.:$HOME
$ ls
$ cd stuff
/home/hal/stuff
$

".:$HOME" is the most common setting for CDPATH, but if you have a directory that you commonly access-- your music or DVD collection, a programming projects directory, etc-- adding that directory to CDPATH can save you a lot of typing.

Then there are all the common shell variables that are used by various programs that you might execute:

$ export EDITOR=emacs
$ export VISUAL=emacs
$ export PAGER=less
$ export TMPDIR=$HOME/.tmp

EDITOR and VISUAL are the programs that should be invoked when a program (like "crontab -e" for example) wants to allow you to edit text, and PAGER is the program you prefer for paging through long output. Many (but not all) programs will also use the value of TMPDIR as a place to put temporary files. This helps you avoid the risks of world-writable shared directories like /tmp and /var/tmp.
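
Of course, programs expect $TMPDIR to actually exist, so create it first-- and since the whole point is dodging world-writable directories, lock the permissions down to owner-only while you're at it. A quick hedged sketch:

$ mkdir -p -m 700 $HOME/.tmp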

And, of course, many programs in the Unix environment have their own special shell variables. For example, $DISPLAY for X applications, $SSH_AUTH_SOCK when you're using ssh-agent, and so on. Also there are variables that allow you to configure your preferred program defaults-- like the "export LESS='-eMqw'" setting I have in my .bashrc file.

Tuesday, June 23, 2009

Episode #48: Parse-a-Palooza

Hal takes a step back:

Over the last several weeks we've been parsing a lot of different kinds of data using all sorts of different shell idioms. Ed pointed out that it might not be obvious to you, our readers, why we might pick one tool for a certain job and then use a completely different tool for another task. So he suggested a "meta-post" where we talk about the various parsing tools available in our shells and why we might pick one over the other for certain kinds of problems. Great idea Ed!

Then he suggested that I take a whack at starting the article. Bad idea Ed! Though I'm sure Ed did, and still does, think he was brilliant twice with this. Honestly, I've been solving these problems in the shell for so long that my parsing tool choices have become unconscious (some scurrilous wags would suggest that it's always been that way), and it was actually a little difficult to come up with a coherent set of rules for choosing between cut, awk, and sed. But here goes:


  1. cut is definitely the tool of choice if you need to extract a specific range of characters from your lines of input. An example of this would be extracting the queue IDs from file names in the Sendmail message queue, as I did in Episode 12:

    grep -l spammer@example.com qf* | cut -c3- | xargs -I'{}' rm qf{} df{}

    cut is the only tool that allows you to easily pull out ranges of characters like this.


  2. cut is also useful when your input contains strongly delimited data, like when I was parsing the /etc/group file in Episode 43:

    cut -f1,4 -d: /etc/group


  3. awk, of course, is the best tool to use when you're dealing with whitespace delimited data. The canonical example of this is using awk to pull out process ID info from the output of ps:

    ps -ef | awk '{print $2}'


  4. awk also has a variety of built-in matching operators, which makes it ideal when you only want a subset of your input lines. In fact, because awk has the "-F" option to specify a different delimiter than just whitespace, there are times when I prefer to use awk instead of cut on strongly delimited input. This is typically when I only want the data from some of the input lines and not others. Paul learned this in Episode 13:

    awk -F'|' '/CVE-2008-4250/ {print $1}' | sort -u

    Remember, if you find yourself piping grep into cut, you probably should be using awk instead (there's a concrete side-by-side sketch just after this list).


  5. I use sed when I need to parse and modify data on the fly with the flexible matching power of regular expressions. For example, there's this bit of sed fu to extract browser names from Apache access_log files that I concocted for Episode 38:

    sed -r 's/.*(MSIE [0-9]\.[0-9]|Firefox\/[0-9]+|Safari|-).*/\1/' access_log*


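To make rule #4 concrete, here's a hedged side-by-side on /etc/passwd. Both one-liners try to print the login names of accounts whose GID field is 100, but the awk version does it in a single process-- and unlike the grep version, it can't false-positive on a UID that happens to be 100:

    grep ':100:' /etc/passwd | cut -f1 -d:

    awk -F: '$4 == 100 {print $1}' /etc/passwd
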
Rules, of course, are made to be broken. YMMV, and so on. But I hope these will help all of you when you're trying to figure out the best way to solve a parsing problem in the Unix shell.

Ed jumps in:
You know, Hal, I have a confession to make, just between the two of us and the growing multitude of faithful readers of this blog. I envy you. There I said it. And, my envy is not just because of your thick, luxurious hair (which is a certain sign of a deal with the devil, if you ask me... but I digress). No, my envy is based primarily on all of the great options you and your beloved bash have for parsing text. The cmd.exe parsing options pale in comparison to the splendors of cut, awk, and sed. But, as they say, you gotta dance with the one that brung ya, so here is how I approach parsing my output in cmd.exe.

Following your lead, Hal, here are the rules, followed in order, that I apply when parsing output in cmd.exe:

  1. Starting from first principles, I see if there is a line of output of a command that already has what I want in it, and whether that line is acceptable for me to use by itself. If so, I just compose the appropriate syntax for the find or findstr commands to locate the line(s) that I'm interested in. For example, I did this in Episode #6 to create a command-line ping sweeper with this command:

    C:\> FOR /L %i in (1,1,255) do @ping -n 1 10.10.10.%i | find "Reply"

    Because the only output lines I'm looking for have the text "Reply" in them, no parsing is necessary. This is a lucky break, and in cmd.exe, we take all the lucky breaks we can. If I need regex, I use findstr instead of the find command.

  2. When the output I'm looking for won't work "as-is", and I need to change the order, extract, or otherwise alter output fields, I've gotta do some more intricate parsing. Unlike the various options bash users have, we don't really have a lot to brainstorm through with cmd.exe. Most of our parsing heavy lifting is done with the FOR /F command. This is the most flexible of all FOR commands in Windows, allowing us to parse files, strings, and the output of a given command. In fact, if you ever want to assign a shell variable with the value of all or part of the output of a command in cmd.exe, you are going to likely turn to FOR /F loops to do it, whether you want to parse the output or not.

    The syntax of a FOR /F loop that iterates over the output of [command1], running [command2] on whatever you've parsed out looks like this:

    C:\> FOR /F ["options"] %i in ('[command1]') do @[command2]

    FOR /F takes each line of output from [command1], and breaks it down, assigning portions of its output to the iterator variable(s) we specify, such as %i in this example. The "options", which must be surrounded in double quotes, are where we get to specify our parsing. One of our options is how many lines we want to skip in the output of [command1] before we want to start our parsing operation. By indicating "skip=2", we'd skip the first two lines of output, often column titles and a blank line or ------ separators.

    Once we skip those lines, FOR /F will parse each line based on additional options we specify. By default, FOR /F breaks lines down using delimiters of spaces and tabs. To use something else, such as commas and periods, you could specify "delims=,.". All such characters will be sucked out of our output, and the remaining results will be assigned to the variables we specify. If I have output text that includes one or more annoying characters that I want to go away, I'll make it a delimiter. I did this in Episode #43 to get rid of a stray * that the "net localgroup" command put in its output.

    Once we specify our skip and delims (with syntax like "skip=2 delims=,."), we then have the option of associating variables with output elements using the "tokens=" syntax. If we just want the first item that is not part of our delimiters to be assigned to our %i variable, we don't have to specify "tokens=[whatever]" at all. But, suppose we wanted the first and third elements of the output of command1 to be assigned to our iterator variables. We'd then specify "tokens=1,3". Note that we don't have to actually create any more variables beyond the initial one we specify (%i in a lot of my examples), because the FOR /F command will automatically make the additional variables associated with our tokens declaration by simply going up the alphabet. That is, if we specify FOR /F "tokens=1,3" %i ... the first component of our output other than our delimiters will be assigned to variable %i, and the third component will be assigned to %j. The FOR /F loop auto creates %j for us. We can also specify ranges of tokens with "tokens=1-5", which will automatically assign our first component of output to our first variable (such as %i) and auto create %j, %k, and so on for us, up to five variables in this example. And, finally, if you want the first element of your output dumped into your first variable, and everything else, including delimiters, dumped into a second variable, you could specify "tokens=1,*" as I did in Episode #26.
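
    A quick way to watch the tokenizer work without parsing a real command is to hand FOR /F a literal string instead-- just a hedged toy example:

    C:\> FOR /F "tokens=1,3" %i in ("alpha beta gamma") do @echo first=%i third=%j
    first=alpha third=gamma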

    Whew! That's an ugly form of parsing, but it usually does what we need.

  3. But not always... sometimes, we need one extra kicker -- the ability to do substring operations. Our cmd.exe shell supports creating and displaying substrings from variables, using the syntax %var:~N,M%. This causes cmd.exe to start displaying var at an offset of N characters, printing out M characters on the screen. We start counting offsets at zero, as any rational person would do. They are offsets, after all, so the first character is at offset zero. Let's look at some examples using a handy predefined environment variable: %date%:

    C:\> echo %date%
    Mon 06/22/2009

    If we want only the day of the week, we could start at the zero position and print out three characters like this:
    C:\> echo %date:~0,3%
    Mon

    If you want the year, you could start at the end and go back four characters with a -4 as our offset into the variable:
    C:\> echo %date:~-4,4%
    2009

    If you only put in the start character, it'll display your variable from that offset all the way through the end:
    C:\> echo %date:~4%
    06/22/2009
    C:\> echo %date:~-4%
    2009
    Now, there's one more little twist here. When doing heavy parsing, we often want to perform substring operations on a variable that we've parsed out of a command using a FOR /F loop. For example, suppose that I want to run the command "dir c:\temp" in a FOR /F loop so I can parse each line of its output, and I want to select the time field. But let's also assume that I only want the single minutes digit of the time field. In other words, when dir c:\temp shows this:

    C:\> dir c:\temp
    Volume in drive C has no label.
    Volume Serial Number is 442A-03DE

    Directory of C:\temp

    05/19/2009  10:16 AM    <DIR>          .
    05/19/2009  10:16 AM    <DIR>          ..
    05/19/2009  10:02 AM                22 stuff1.txt
    05/19/2009  10:02 AM                22 stuff1.txt.bak
    05/19/2009  10:03 AM                43 stuff2.txt
    05/19/2009  10:03 AM                43 stuff2.txt.bak
                   4 File(s)            130 bytes
                   2 Dir(s)  17,475,887,104 bytes free

    What I'd really like to see is:
    6
    6
    2
    2
    3
    3

    Why would I want this? I have no idea. Perhaps I just like to parse for parsing's sake. OK?

    Anyway, you might try the following:

    C:\> FOR /F "skip=4 tokens=2" %i in ('dir c:\temp') do @echo %i:~-1,1%

    That doesn't work, because we can only do substring operations on environment variables, not the iterator variables of a FOR loop. You'll just get the ugly ":~-1,1%" displayed after each timestamp, because %i is expanded to its value without any substring operation taking effect. OK, you might then reason... I'll just save away %i in an environment variable called a, and then perform substring operations on that, as follows:
    C:\> FOR /F "skip=4 tokens=2" %i in ('dir c:\temp') do @set a=%i & echo %a:~-1,1%

    No love there either. You'll just see either a hideous "%a:~-1,1%" displayed on your output if the environment variable a isn't already set. Or, if a was already set before this command ran, you'd see the last character of what it was already set to, a constant value on your screen.

    The culprit here is that cmd.exe by default does immediate environment variable expansion, so that your echo command is immediately expanding %a to its value right when you hit ENTER. It never changes while the command is running, because its value is fixed at the time the command was started. We want %a's value to change at each iteration through the loop, so we need delayed environment variable expansion for our command. To achieve this, we can launch a new cmd.exe, with the /v:on option to perform delayed environment variable expansion, and the /c option to make it run our FOR command. Oh, and when we do this, we have to refer to all variables whose expansion we want delayed as !var! instead of %var%. The result is:

    C:\> cmd.exe /v:on /c "FOR /F "skip=4 tokens=2" %i in ('dir c:\temp') do
    @set a=%i & echo !a:~4,1!"

    (Sharp-eyed readers may wonder why I switched from the ~-1,1 substring to ~4,1 in that last command. The space between %i and the & in "set a=%i " gets stored as part of a's value, so the last character of a is actually that trailing space-- counting from the front sidesteps the problem.)

    That delayed environment variable expansion is annoying, but I periodically resort to it, as you've seen in Episodes #12 and #46.

Using these different options of FOR /F loops and substring operations, we can have fairly flexible parsing. It's not as easy as the parsing options offered in bash, but we can usually make do.

Ed finishes by musing about PowerShell a bit:
So, we've seen Hal's parsing tips in bash, along with my approach to parsing in cmd.exe. In my adventures with PowerShell, I've noticed something quite interesting (at least for an old shell guy like me). I almost never need to parse!

In most shells, such as bash and cmd.exe, we have streams of output and/or error from one command that we have to format properly so another command can handle it as input. In PowerShell, the pipeline between cmdlets carries objects, often with several dozen properties and methods. We shuffle these objects and their associated stuff from cmdlet to cmdlet via pipes, selecting or refining whole objects along the way. We usually don't need to parse before we pipe, because we want the entire objects to move through the pipeline, with all their glorious properties and methods. If we want output, we can almost always display exactly the properties we want, in the order we want, and typically in a format that we want. It's very cool and efficient. Dare I say it? Could it be I'm falling in love with PowerShell?

Anyway, it occurred to me that when it comes to the need for parsing, cmd.exe and bash are far more similar to each other than either is to PowerShell.

Just sayin'.

Tuesday, June 16, 2009

Episode #47: Fun with Output Redirection and Errorlevels

Ed begins thusly:

We really love it when readers write in with suggestions and questions for our hearty band of shell aficionados. Diligent reader Curt Shaffer writes in with a suggestion:

I have been following this blog for some time. I learn more and more every day and not just for pen testing but system administration as well. For that I thank everyone involved! It has come to a point where I just want to script and do everything from the command line on Windows. I have always opted for command line on Linux. My question/suggestion is to do an episode detailing how we might get output into a log file on commands we run. I know now that 2>c:\log.txt or something similar will provide standard error but what I would like to see is where it possibly errored out. Here is an example:

I want to script adding a group to local administrators for all servers on the network. I get them all into a text file called Servers.txt. I then do the following:

C:\> for /f %a in (C:\Servers.txt) do psexec \\%a net localgroup
"administrators" "domain\server-admin-gg" /add

Now that should work just fine I believe. The problem is because I need to be sure this happened, I would like to capture the data of failures. So I know I can add a 2>c:\error.txt. What I would like though is a central log file that would give me something along the lines of %a and the error. (i.e. Server 1 - Access Denied, Server 2 - OK, Server 3 - OK, Server 4 - Access Denied, etc.) or something to that effect so I can know which were not successful so I can go back and make sure they get it.

I have to say, Curt, I like the cut of your jib. This is a good question, and its answer will open up areas we haven't yet talked much about in this blog -- output redirection, file descriptor duplication, and ways to deal with and record errorlevels. Thank you for the great suggestion!

First off, it should be noted that you don't have to wrap your psexec in a FOR /F loop to run it against multiple machines. The psexec command (not built-into Windows, but downloadable from Microsoft Sysinternals) has an @filename option, which causes it to run the given command on each remote system listed in filename. I'm going to work with your provided command with the FOR /F loop as is, though, because I think it'll keep our answer here more generally applicable to uses other than just psexec.

Now, as you point out, we can capture standard error information via the file descriptor 2, and send it to a file with a simple redirection (2>c:\error.txt). To capture both standard error and standard output we could append the following to our command:

[command] > results.txt 2>&1

The first part of this is taking the standard output of our command and dropping it into the results.txt file (> results.txt). This is actually a shortened form of explicitly referring to our standard out using the file descriptor 1. We could write it as 1>results.txt if we wanted to be a little more explicit. Now, we want to dump standard error into the same place that standard output is going. If we simply did "2>1", we'd have a problem, because standard output (1) is already busy dumping stuff into results.txt. The syntax "2>&1" tells the shell that we want standard error to be connected to a duplicate of standard output, which itself is already connected to the results.txt file. So, we get both standard output and standard error in the results.txt file.
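
To see both streams land in one file, try a command that generates both kinds of output (the bogus directory name is just a made-up example):

C:\> dir c:\windows c:\nosuchdir > results.txt 2>&1

The listing of c:\windows and the "File Not Found" complaint about c:\nosuchdir both end up in results.txt.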

We've got to get the order right here. If you transpose these redirects with "[command] 2>&1 > results.txt", you wouldn't capture standard error, because your standard error will be dumped to standard output (the screen by default) before you redirect it to the results.txt file. In other words, you will see standard error on the screen, and your results.txt will only get standard output.

Now, I'm sure my shell sparring buddies will point out that their blessed little bash shell supports a nice shortened construct to make this a little easier to type. In bash, you can replace all of this with a simple &>results.txt. Let me preemptively point out that cmd.exe doesn't support that. Sorry, but cmd.exe, by its very nature, makes us work harder.

OK, so with that in hand, and realizing that we can append with >>, we can tweak Curt's command a bit to achieve his desired results:

C:\> for /f %a in (C:\Servers.txt) do @psexec \\%a net localgroup "administrators"
"domain\server-admin-gg" /add >> c:\results.txt 2>&1


Curt also asked how he could get a copy of %a in his output. Well, we could do that using an "echo %a", adding a few dash marks to separate our output:

C:\> for /f %a in (C:\Servers.txt) do @echo %a---------- >> c:\results.txt & psexec
\\%a net localgroup "administrators" "domain\server-admin-gg" /add >>
c:\results.txt 2>&1

Now, that'll work, Curt, but your output will contain everything that psexec displays. That's a bag chock full of ugly. Can we make it prettier, and explore a little more flexibility of cmd.exe and error messages? You bet we can, using our good friends && and ||.

If a command succeeds, it should set the %errorlevel% environment variable to 0. Otherwise, it sets it to some other value. Please note that not all Windows commands properly set this errorlevel! You should experiment with a command first to make sure the errorlevel is set as you wish by running your command and then echoing %errorlevel% on the next line. (Because of cmd.exe's immediate variable expansion, tacking "& echo %errorlevel%" onto the same line would show you the errorlevel from before your command ran.) Run your command in a successful fashion and also in a way that makes it fail, and verify that %errorlevel% functions as you desire -- with 0 for success and a non-zero value for failure.
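
For instance, a quick hedged check (nosuchuser is, naturally, a made-up account, and the particular non-zero value varies from command to command):

C:\> net user nosuchuser > nul
C:\> echo %errorlevel%
2

C:\> net user Administrator > nul
C:\> echo %errorlevel%
0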

The psexec command does set the errorlevel to 0 if the command it runs on the target machine succeeds, provided that we don't use the -d option. Invoked with -d, psexec runs a command on the target machine in detached mode (running it without access to its standard input, output, and error). Since 2005, psexec with the -d switch will return the processid that it created on the target. Anyway, here, we're not using -d, so we should be cool with the %errorlevel% value. But, consider yourself warned about psexec, -d, %errorlevel%, and processids.

We could use an "IF %errorlevel% equ 0" statement to put in some logic about what to store for a result based on this %errorlevel% value, but that's a lot to type into a shell. I'd do it in a heartbeat for a script, but let's keep this focused on more human-typable commands. Instead of IF errorlevel stuff, we can use the shorthand [command1] && [command2] || [command3]. If command1 succeeds, command2 will run. If command1 has an error, command3 will execute.
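
Here's a minimal hedged illustration of the short-circuit logic, using one directory that exists and one that doesn't (the second is made up):

C:\> dir c:\windows > nul && echo Success || echo Failure
Success

C:\> dir c:\nosuchdir > nul 2>nul && echo Success || echo Failure
Failure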

The result gives us pretty much complete control of the output of our command:

C:\> for /f %a in (C:\Servers.txt) do @echo %a---------- >> c:\results.txt & psexec
\\%a net localgroup "administrators" "domain\server-admin-gg" /add >nul &&
echo Success >> c:\results.txt || echo Failure >> c:\results.txt

C:\> type c:\results.txt
10.1.1.1----------
Failure
10.1.1.4----------
Success
10.1.1.12----------
Success
10.1.1.100----------
Failure

But, you know, somehow it just feels wrong to throw away all of those details in nul. The psexec command goes to all the trouble of creating them... we can at least store them somewhere for later inspection. How about we create two files, one with a summary of Success or Failure in results.txt and the other with all of our standard output and standard error in details.txt? We can do that by simply combining the above commands to create:

C:\> for /f %a in (C:\Servers.txt) do @echo %a---------- >> c:\results.txt & psexec
\\%a net localgroup "administrators" "domain\server-admin-gg" /add >>
c:\details.txt 2>&1 && echo Success >> c:\results.txt || echo Failure >>
c:\results.txt

The results.txt has a nice summary of what happened, and all of the details are in details.txt.

Fun, fun, fun!

Verily Hal doth proclaim:

It's fairly obvious that the Windows command shell has... "borrowed" a certain number of ideas from the Unix command shells. Nowhere is it more obvious than with output redirection. The syntax that Ed is demonstrating above for Windows is 100% the same for Unix. For example, you see this construction all the time in cron jobs and scripts where you don't care about the output of a command:

/path/to/some/command >/dev/null 2>&1

You know, Ed. I wasn't going to rag you at all about Windows lacking the "&>" syntax-- I personally prefer ">... 2>&1" (even though it's longer to type) because I think it documents what you're doing more clearly for people who have to maintain your code. Still, if I had to spend my life working with that sorry excuse for a shell, I guess I'd be a little defensive too.

Anyway, we can also emulate Ed's final example as well. In fact, it's a little disturbing how similar the syntax ends up looking:

for a in $(< servers.txt); do
echo -n "$a " >>results.txt
ssh $a usermod -a -G adm,root,wheel hal >>details.txt 2>&1 \
&& echo Success >>results.txt || echo Failure >>results.txt
done

Of course the command I'm running remotely is different, because there's no real Unix equivalent of what Curt is trying to do, but you can see how closely the output redirections match Ed's Windows command fu. To make the output nicer, Unix has "echo -n", so I can make my "Success/Failure" output end up on the same line as the host name or IP address. Neener, neener, Skodo.

There are all kinds of crazy things you can do with output redirection in the Unix shell. Here's a cute little idiom for swapping standard output and standard error:

/path/to/some/command 3>&1 1>&2 2>&3

Here we're creating a new file descriptor numbered 3 to duplicate the standard output-- normally file descriptor 1. Then we duplicate the standard error (descriptor 2) on file descriptor 1. Finally we duplicate the original standard output handle that we "stored" in our new file descriptor 3 and associate that with file descriptor 2 to complete the swap. This idiom can be useful if there's something in the error stream of the command you want to filter on. By the way, just to give credit where it's due, I was reminded of this hack from a posting on the CommandLineFu blog by user "svg" that I browsed recently.
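
For example, here's a hedged sketch of that filtering use case-- grep the command's error stream while its normal output carries on (now via standard error):

$ /path/to/some/command 3>&1 1>&2 2>&3 | grep -i error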

But here's perhaps the coolest output redirection implemented in bash:

ps -ef >/dev/tcp/host.example.com/9000

Yes, that's right: you can redirect command output over the network to another machine at a specific port number. And, yes, you can use the same syntax with /dev/udp as well. It's like "netcat without netcat"... hmmm, where have I heard that before? Oh, that's right! That was the title of Ed's talk where he showed me this little tidbit of awesomeness.

Some notes about the "... >/dev/tcp/..." redirection are in order. This syntax is a property of the bash shell, not the /dev/tcp device. So you can't use this syntax in ksh, zsh, and so on. Furthermore, some OS distributions have elected to disable this functionality in the bash shell-- notably the Debian Linux maintainers (which means it also doesn't work under Ubuntu and other Debian-derived distributions). Still, the syntax is portable across a large number of Unix and Linux variants-- I use it on Solaris from time to time, for example.
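
The redirection works for reading, too, if you open the socket on its own file descriptor. Here's the classic hedged parlor trick of fetching a web page with no client installed (the host name is just an example):

$ exec 3<>/dev/tcp/www.example.com/80
$ echo -e "GET / HTTP/1.0\r\n\r\n" >&3
$ cat <&3

The "exec 3<>" opens a bidirectional descriptor on the TCP connection, the echo sends the request down it, and cat reads back the response.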

Paul Chimes In:

I have to say, I'm a bit dizzy after trying to figure out a way to add users and groups via the command line in OS X Leopard. In fact, this is where I got stuck trying to adapt the Windows and UNIX/Linux concepts above to OS X. User and group creation in OS X is very different from Windows and UNIX/Linux, and even varies depending on which version of OS X you are running. So far I've got this just to create a user in the latest version of Leopard:

dscl . -create /Users/testuser
dscl . -create /Users/testuser UserShell /bin/bash
dscl . -create /Users/testuser RealName "Test User"
dscl . -create /Users/testuser UniqueID 505
dscl . -create /Users/testuser PrimaryGroupID 80
dscl . -create /Users/testuser NFSHomeDirectory /Users/testuser
dscl . -passwd /Users/testuser supersecretpassword
dscl . -append /Groups/admin GroupMembership testuser


The article "How to: Add a user from the OS X command line, works with Leopard!" was extremely useful and documented several methods. Also, there is a really neat shell script published called "Create & delete user accounts from the command line on Mac OS X". One important item to note: most of the documentation says to either reboot or log out and back in again in order to see the new user (yuk). So, I hope the above examples and resources provide you with enough information to adapt the techniques covered in this post for OS X administration, and I plan to cover more on this in upcoming episodes.

Monday, June 8, 2009

Episode #46: Counting Matching Lines in Files

IMPORTANT ANNOUNCEMENTS!
As much as we enjoy beating each other silly with our command line kung fu, we are going to tweak things around here on this blog a bit. Instead of our relentless 3-times per week posting, we're going to move to a once per week posting rate here. We'll have new fu for you each Tuesday, 5 AM Eastern. That way, you'll be able to schedule a weekly date with our hearty band of shell warriors. You could set aside lunch every Tuesday with us... or Wednesday... or an hour each weekend. Or, some evening during the week, you could cook a nice meal, light up some soft candles, put on some romantic music, and then enjoy your meal spending time reading our weekly missive. Yeah! That's the ticket!

Oh, and one more little announcement. I'll be teaching my Windows Command-Line Kung Fu course via the SANS@Home system in a couple of weeks (June 22 and 23). The class runs on-line from 7 until 10 PM EDT, and I'll be your live instructor. It's a fun class that is designed to give security and sys admin people practical advice for using the Windows command-line to do their jobs better. It's not a regurgitation of what's on this blog, but instead gets really deep into useful commands such as WMIC, sc, tasklist, and much more. If you like this blog, the course is ideal for you.

The course normally costs about a thousand dollars live and $825 via @Home. But, SANS is offering a big discount for friends and readers of this blog. The course is $412.50 if you use the discount code of "Foo". Sign up at https://www.sans.org/athome/details.php?nid=19514

Thanks!
--Ed Skoudis.

And now... back to your regularly scheduled fu... Here's a fun episode for you!

Hal's back at it again:

I had another one of those counting problems come up recently, similar to our earlier Browser Count Torture Test challenge. This time my customer needed me to count the number of instances of a particular string in each of several dozen files in a directory. In my case I was looking for particular types of test cases in a software regression test suite, but this is also useful for looking for things like IP addresses in log files, vulnerabilities in assessment tool reports, etc.

For a single file, it would be easy enough to just:

$ grep TEST file1 | wc -l
11

But we want to operate over a large number of files, which means we somehow need to associate the name of the file with the output of "wc -l".

So I created a loop that does the main part of the work, and then piped the output of the loop into awk for some pretty-printing:

$ for f in *; do echo -n "$f  "; grep TEST $f | wc -l; done | \
awk '{t = t + $2; print $2 "\t" $1} END {print t "\tTOTAL"}'

11 file1
8 file2
14 file3
31 file4
12 file5
7 file6
3 file7
25 file8
19 file9
22 file10
19 file11
22 file12
10 file13
203 TOTAL

Inside the loop we're first spitting out the filename and a couple of spaces, but no newline. This means that the output of our "grep ... | wc -l" will appear on the same line, immediately following the filename and the spaces.

The only problem I had with the basic loop output was that the file names had very irregular lengths (unlike the sample output above) and it was difficult to read the "wc -l" data because it wasn't lined up neatly in a column. So I decided to do some post-processing with awk. The main part of the awk code keeps a running total of the values we've read in so far (you saw me using this idiom previously in Browser Count Torture Test). But you'll also notice that it reverses the order of the two columns and also inserts a tab to make things line up nicely ('print $2 "\t" $1'). In the "END" block we output the "TOTAL" once the entire output from the loop has been processed.

I love the fact that the shell lets me pipe the output of a loop into another tool like awk for further processing. This lets me grind up a bunch of data from many different sources into a single stream and then operate on this stream. It's an idiom I use a lot.

Paul Chimes In:


That's some pretty sweet command kung fu! When I first read this I immediately put it to good use, with some modifications of course. I frequently find myself needing to search through 28,000+ files and look for certain strings. My modifications are as follows:

$ for f in *; do echo -n "$f "; grep -i xss $f | wc -l; done | awk '{t = t + $2; print $2 "\t" $1} END {print t "\tTOTAL"}' | egrep -v '^0' | sort -n

I really didn't care about files that did not contain at least one occurrence of my search string, so I sent it to egrep with "-v", which shows me only results that do NOT contain the search term. My regular expression "^0" reads as, "only show me lines that begin with 0", which, when combined with the "-v", removes all lines that begin with 0. Now, I could have used a filter with awk, but the syntax was not cooperating (i.e. awk /[regex]/ {[code]}). Then I wanted to see a sorted list so I ran it through "sort -n".
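
For what it's worth, the awk-only filter Paul was reaching for can be done with a comparison pattern instead of a regex. A hedged sketch of the same pipeline, suppressing the zero-count lines inside awk itself (the total is unaffected, since zeroes add nothing):

$ for f in *; do echo -n "$f "; grep -i xss $f | wc -l; done | awk '$2 != 0 {t = t + $2; print $2 "\t" $1} END {print t "\tTOTAL"}' | sort -n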

Ed retorts:
Gee, 28,000 files, Paul? Where did ya get that number? Sounds suspiciously like... I dunno... Nessus plug-ins. But, I digress.

OK, Sports Fans... Hang on to your hats, because I'm gonna match Hal's functionality here in cmd.exe, and it's gonna get ugly. Real ugly. But, when we're done, our command will do what Hal wants. And, in the process, it'll take us on an adventure through some interesting and useful features of good ol' cmd.exe, tying together a lot of fu that we've used in piece-parts in previous episodes. It's gonna all come together here and now. Let's dive in!

We start out simple enough:

C:\> find /c "TEST" * 2>nul | find /v ": 0"
---------- FILE1: 11
---------- FILE2: 8
---------- FILE3: 14

Here, I've used the /c option of the find command to count the number of lines inside of each file in my current directory that have the string "TEST". I throw away error messages (2>nul) to avoid cruft about directories in my output. I do a little more post processing by piping my output into find again, to search for lines that do not have (/v) the string ": 0" in them, because we don't want to display files that have our string in them zero times.

That's pretty close to what we want right there. So, we could call it a day and just walk away.

But, no.... we're kinda nuts around here, if you haven't noticed. We must press on to get closer to Hal's insanity.

The --------- stuff that find /c puts in our output is kinda ugly. Let's get rid of that with a little parsing courtesy of FOR /F:

C:\> for /f "delims=-" %i in ('"find /c "TEST" * 2>nul | find /v ": 0""') do @echo %i
FILE1: 11
FILE2: 8
FILE3: 14

Here, I'm using a FOR /F loop to parse the output of my previous command. I'm defining custom-parsing with a delimiter of "-" to get rid of those characters in my output.

Again, we could stop here, and be happy with ourselves. We've got most of what Hal wants, and our output is kinda pretty. Heck, our command is almost typable.

But we must press on. Hal's got totals, and we want them too. We could do this in a script, but that's kinda against our way here, as we strive to do all of our kung fu fighting in single commands. We'll need to add a little totaller routine to our above command, and that's where things are going to get a little messy.

The plan will be to run the component we have above, followed by another command that counts the total number of lines that have TEST in them and displays that total on the screen. We'll have to create a variable called total that we'll track at each iteration through our new counting loop. The result is:

C:\> (for /f "delims=-" %i in ('"find /c "TEST" * 2>nul | find /v ": 0""') do
@echo %i) & set total=0 & (for /f "tokens=3" %a in ('"find /c "TEST" * 2>nul"')
do @set /a total+=%a > nul) & echo. & cmd.exe /v:on /c echo TOTAL: !total!

FILE1: 11
FILE2: 8
FILE3: 14

TOTAL: 33

Although what I'm doing here is probably obvious to everyone except Hal and Paul (yeah, right!), please bear with me for a little explanation. You know, just for Hal and Paul.

I've taken my original command from above and surrounded it in parens (), so that it doesn't interfere with the new totaller component I'm adding. My totaller starts by setting an environment variable called total to zero (set total=0). I then add another component in parens (). These parens are very important, lest the shell get confused and blend my commands together, which would kinda stink as my FOR loops would bleed into each other and havoc would ensue.

Next, I want to get access to the line count output of my find /c command to assign it to a variable I can add to my total. In cmd.exe, if you want to take the output of a command and assign its value to a variable, you can use a FOR /F loop to iterate on the output of the command. I do that here by running FOR /F to iterate over "find /c "TEST" * 2>nul". To tell FOR /F that my command is really a command, I have to wrap it in single quotes (' '). But, because my command has special characters in it (the > in particular), I have to wrap the command in double quotes too (" "). The result is wrapped in single and double quotes (' " " '), a technique I use a lot such as in Episodes #34 and #45. My FOR /F loop is set to tokenize around the third element of output of this command, which will be the line count I'm looking for (default FOR /F parsing occurs on spaces as delimiters, and the output of ----- [filename]: [count] has the count as the third item).

Thus, %a now holds my interim line count of the occurrences of TEST for a given file. I then bump my total variable by that amount (set /a total+=%a) using the set /a command we discussed in "My Shell Does Math", Episode #25. I don't want to display the results of this addition on the output yet, so I throw them away (> nul). When my adding loop is done (note the all-important close paren), I then echo a blank line (echo.).

Now for the ugly part. I want to display the value of my total variable. But, as we've discussed in previous episodes, cmd.exe does immediate variable expansion. When you run a command, your environment variables are expanded to their values right away. Thus, if I were to simply use "echo %total%" at the end here, it would display the total value that existed when I started the command, if such a value was even defined. But, we want to see the total value after our loop finishes running. For this, we need to activate delayed environment variable expansion, a trick I used in Episode #12 in a slightly different way.

So, with my total variable set by my loop, followed by an extra carriage return from echo. to make things look pretty, I then invoke another cmd.exe with /v:on, which enables delayed variable expansion. I ask that cmd.exe to run a command for me (/c), which is simply displaying the word TOTAL followed by the value !total!. But, what's with the bangs? Normal variables are expanded using %var%, not !var!. Well, when you use delayed variable expansion, you get access to the variable's value using !var!. The bangs are an artifact of delayed variable expansion.

And, for the most part, we've matched Hal's functionality. Our command reverses the file name and counts from Hal's fu, although we could go the other way if we want with some additional logic. I prefer filename first myself, so that's what we'll go with here.

And, our descent into insanity is pretty much done for now. :)

Friday, June 5, 2009

Episode #45: Removing Empty Directories

Hal answers the mail:

Loyal reader Bruce Diamond sent some email to the "suggestions" box with the following interesting problem:

I have a directory structure several layers deep not too dissimilar to:

Top (/)
 |----/foo
 |      |----/bar1
 |      |----/bar2
 |      |----/bar3
 |      |       |----<files>
 |----/sna
 |      |----/fu1
 |      |----/fu2
 |      |----/fu3
 |
 |----/kil
 |      |----/roy1
 |      |       |----<files>
 |      |
 |      |----/roy2
 |      |       |----<files>
 |      |
 |      |----/roy3

My problem is, I wish to identify (and then delete) the directories AND directory trees that are really, truly empty. So, in the above example, /foo/bar1, /foo/bar2, /kil/roy3, and ALL of /sna, being empty of files, would be deleted.


The Unix answer to this challenge turns out to be very straightforward if you happen to know about the "-depth" option to the "find" command. Good old "-depth" means do "depth-first traversal": in other words, dive down to the lowest level of each directory you're searching and then work your way back up. To show you how this works, here's the "find -depth" output from Bruce's sample directory structure:

$ find . -depth -type d
./kil/roy1
./kil/roy3
./kil/roy2
./kil
./sna/fu1
./sna/fu3
./sna/fu2
./sna
./foo/bar3
./foo/bar1
./foo/bar2
./foo
.

To remove the empty directories, all we have to do is add "-exec rmdir {} \;" to the end of the above command:

$ find . -depth -type d -exec rmdir {} \;
rmdir: ./kil/roy1: Directory not empty
rmdir: ./kil/roy2: Directory not empty
rmdir: ./kil: Directory not empty
rmdir: ./foo/bar3: Directory not empty
rmdir: ./foo: Directory not empty
rmdir: .: Invalid argument

"find" is calling "rmdir" on each directory in turn, from the bottom up. Directories that are completely empty are removed silently. Non-empty directories generate an error message from "rmdir" and are not removed. Since the sub-directories containing files will not be removed, their parent directories can't be removed either. On the other hand, directories like "sna" that contain only empty subdirectories will be completely cleaned up. This is exactly the behavior we want.

By the way, if you don't want to see the error messages you could always redirect the standard error to /dev/null like so:

$ find . -depth -type d -exec rmdir {} \; 2>/dev/null

Mr. Bucket points out that GNU find has a couple of extra options that make this challenge even easier:

$ find . -type d -empty -delete

The "-empty" option matches either empty files or directories, but since we're specifying "-type d" as well, we'll only match empty directories (though you could leave off the "-type d" and remove zero length files as well, and possibly clean up even more directories as a result). The "-delete" option removes any matching directories. What's cool about "-delete" is that it automatically enables the "-depth" option so that we don't have to specify it on the command line.

Why do I have the feeling that this is another one of those "easy for Unix, hard for Windows" challenges?

Ed responds:
Awesome question, Bruce! Thanks for writing in.

And, it turns out that this one isn't too bad in Windows after all. When I first saw it, I thought it might get kinda ugly, especially after reading Hal's comments above. But, then, I pulled a little trick using the sort command, and it all worked out ok. But let's not get ahead of ourselves.

You see, we can get a directory listing using our trusty old friend, the dir command, as follows:

C:\> dir /aD /s /b .
[dir]\foo
[dir]\kil
[dir]\sna
[dir]\foo\bar1
[dir]\foo\bar2
[dir]\foo\bar3
[dir]\kil\roy1
[dir]\kil\roy2
[dir]\kil\roy3
[dir]\sna\fu1
[dir]\sna\fu2
[dir]\sna\fu3

This command tells dir to list all entities in the file system under our current directory (.) with the attribute of directory (/aD), recursing subdirectories (/s), with the bare form of output (/b) -- which we use to make the dir command show the full path of each directory. You could leave off the . in the command, but I put it in there as a place holder showing where you'd add any other directory you'd like to recurse this command through.

Nice! But, we can't just delete these directories listed this way. We need some method of doing a depth-first search, simulating the behavior of the Linux find command with its -depth option. Well, dir doesn't do that. I pondered this for about 15 seconds, when it hit me. We can just pipe our output through "sort /r" to reverse it. Because sort does its work alphabetically, a reverse sort puts the longer (i.e., deeper) paths before the shorter ones, so the output is effectively a depth-first listing! Nice!

C:\> dir /aD /s /b . | sort /r
[dir]\sna\fu3
[dir]\sna\fu2
[dir]\sna\fu1
[dir]\sna
[dir]\kil\roy3
[dir]\kil\roy2
[dir]\kil\roy1
[dir]\kil
[dir]\foo\bar3
[dir]\foo\bar2
[dir]\foo\bar1
[dir]\foo

Now that we have that workable component, let's make it delete the directories that are empty. We'll wrap the above dir & sort combo in a FOR /F loop to iterate over each line of its output, feeding it into the rmdir command to remove directories. If you ever want to run a command to process each line of output of another command, FOR /F is the way to do it, specifying your original command inside of single quotes in the in () component of the FOR /F loop. Like Hal, we'll rely on the fact that rmdir will not remove directories that have files in them, but will instead write a message to standard error. Truly empty directories, however, will be silently deleted. The result is:

C:\> for /F "delims=?" %i in ('dir /aD /s /b . ^| sort /r') do @rmdir "%i"
I put the "delims=?" in the FOR /F loop to dodge a bit of ugliness with the default parsing of FOR /F. You see, if any of the directory names in the output of the dir command has a space in them, the FOR /F loop will parse the directory name and assign the %i variable the value of the text before the space. We'd only have part of the directory name, which, as Steve Jobs would say, is a bag of hurt. We need a way to turn off the default space-based parsing of FOR /F. We can do that by specifying a custom delimiter of a character that can't be used in a file's name. In Windows, we could use any of the following / \ : * ? " < > |. I chose to use a ? here, because no directory name should have that. Thus, %i will get the full directory name, spaces and all.

The ^ before the | is also worthy of a bit of discussion. FOR /F loops can iterate over the output of command by specifying a command inside of single quotes in the "in ()" part of the FOR loop declaration. But, if the command has any funky characters, including commas, quotation marks, or pipe symbols, we have to put a ^ in front of the funky symbol as an escape so FOR handles it properly. The other option we have is to put the whole command inside of single quote double quote combinations, as in:

... in ('"dir /aD /s /b . | sort /r"')... 

That's a single quote followed by a double quote up front, and a double quote single quote at the end.

If I have only one funky character in my FOR /F command, I usually just pop in a ^ in front of it. If I have several of them, rather than escaping each one with a ^, I use the single-quote double-quote trick.

Going back to our original command, we'll see an error message of "The directory is not empty." any time we try to rmdir a directory with files in it. We can get rid of that message by simply taking standard error and throwing it into nul by appending 2>nul to the overall command above.
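
Putting that together, the quieted-down final command is just our fu from above with the redirect tacked onto rmdir:

C:\> for /F "delims=?" %i in ('dir /aD /s /b . ^| sort /r') do @rmdir "%i" 2>nul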

Tim Medin (aka, our PowerShell "Go-To" Guy) adds:
The PowerShell version of the command is very similar to Ed's command, with one notable exception, length.

As Hal explained, we need a list of the directories sorted in depth-first order. Unfortunately, there isn't an option like "-depth" to do it for us, so we have to do it the long way. This command will retrieve a depth-first list of directories:

Short Version:
PS C:\> gci -r | ? {$_.PSIsContainer} | sort -desc fullName

Long Version:
PS C:\> Get-ChildItem -Recurse | Where-Object {$_.PSIsContainer} |
Sort-Object -Descending FullName

The first portion of the command retrieves a recursive directory listing. The second portion filters for containers (directories) only. The directories are then sorted in reverse order so we end up with a listing similar to that retrieved by Hal.

For those of you not familiar with PowerShell, the names of these commands might seem a little odd. The reason for the odd name is that these commands are very generic. The Get-ChildItem command works like the dir command, but it can do much more. It can be used to iterate through anything with a hierarchical structure such as the registry. The PSIsContainer applies to these generic objects such as directories or registry keys. The $_ variable refers to the "current pipeline object." Back to our regularly scheduled programming...

So we have a depth-first directory listing similar to this:
C:\sna\fu3
C:\sna\fu2
C:\sna\fu1
C:\sna
C:\kil\roy3
C:\kil\roy2
C:\kil\roy1
C:\kil
C:\foo1\bar3
C:\foo1\bar2
C:\foo1\bar1
C:\foo1


Now we need to check if our current directory is blank, so we can later delete it.

!(gci)


This command will return True if there are no items in the current directory. We can use it in a "where-object" command to filter our results.

Finally, we pipe the results into rm (Remove-Item). Our final command looks like this:

Short Version:
PS C:\> gci -r | ? {$_.PSIsContainer} | sort -desc FullName |
? {!(gci $_.FullName)} | rm

Long Version:
PS C:\> Get-ChildItem -Recurse | Where-Object {$_.PSIsContainer} |
Sort-Object -Descending FullName | Where-Object {!(Get-ChildItem $_.FullName)} |
Remove-Item

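Incidentally, if you'd like a dry run before committing, Remove-Item honors PowerShell's common -WhatIf parameter, which reports what would be deleted without actually deleting anything:

PS C:\> gci -r | ? {$_.PSIsContainer} | sort -desc FullName |
? {!(gci $_.FullName)} | rm -WhatIf
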
Looks like Hal's and Ed's Kung Fu are much shorter, and as they say, "size does matter."

-Tim Medin

Wednesday, June 3, 2009

Episode #44: Users & Groups (Part II)

Hal goes first this time:

Last time in Episode #43, Ed presented a challenge to list all groups and the user names that were in each group. But as I was working on Ed's challenge, I realized that there was another way to look at this data. What if instead of a list of "users per group", you wanted to get a list of "groups per user"?

This is actually straightforward in Unix:

$ for u in `cut -f1 -d: /etc/passwd`; do echo -n $u:; groups $u; done | sort
avahi-autoipd:avahi-autoipd
avahi:avahi
backup:backup
bin:bin
bind:bind
daemon:daemon
games:games
gdm:gdm
gnats:gnats
haldaemon:haldaemon
hal:hal adm dialout cdrom plugdev lpadmin admin sambashare
[...]

Here we're using "cut" to pull all the user names from /etc/passwd and then running a loop over them. Inside we output the user name and a trailing colon, but use the "-n" option on the "echo" statement so the we don't output a newline. This means that the output of the "groups $u" command will appear on the same line as the username, immediately after the colon. Finally we're piping the output of the entire loop into "sort", so we get the information sorted by the user names.

I wonder if Ed's going to have to go to as much trouble answering my challenge as I went through answering his...

Ed responds:
When I first read your challenge, Hal, I was thinking, "Uh-oh... Windows is gonna make this hard." My gut told me that while mapping groups to users was easy ("net localgroup [groupname]"), going the other way was gonna be tough.

But, whenever I need to find something out about users on a Windows box, I almost always start out by running "net user [username]". (Yes, yes, there is "wmic useraccount list full" but I usually go there second). So, I ran that command and smiled with glee when I saw a list of all of the associated groups for that user in the output. I started to formulate a FOR /F loop that would run "net user" to get a list of users and then inside the body of the loop would run "net user [username]" on each... when...

I stopped in my tracks. The "net user" command does this really annoying thing where it puts the user names in columns, as follows:

C:\> net user

User accounts for \\MYCOMPUTER

-------------------------------------------------------------------------
Administrator cheetah Guest
jane tarzan

Sure, I could parse those columns with Yet Another FOR Loop (YAFL), but that would be annoying and tiresome. I decided to go for a cleaner way to get a list of users than "net user", solving the challenge as follows:

C:\> for /F "skip=1" %i in ('wmic useraccount get name') do @echo. & echo %i &
net user %i | find "*"

Administrator
Local Group Memberships *Administrators
Global Group memberships *None

cheetah
Local Group Memberships *Users
Global Group memberships *None

Guest
Local Group Memberships *Guests
Global Group memberships *None

jane
Local Group Memberships *Administrators *Backup Operators
Global Group memberships *None

tarzan
Local Group Memberships *Users *HelpServicesGroup
Global Group memberships *None

My command here is simply running "wmic useraccount get name", which is a clean way to get a list of account names from the local box. I use a FOR /F loop to iterate over the output of this command, skipping the first line (which contains the "Name" column header from the wmic output). At each iteration through the loop, I skip a line (echo.) to make our output prettier. Then, I display the username (echo %i) and run the "net user [username]" command to get all of the details associated with that account. Finally, I pipe the output of the "net user [username]" command through the find command to locate lines that have a * in them. Yes, that annoying little * that I was complaining about in Episode #43. But, here, I'm using it as a selector to grab the group names. If Windows annoyingly puts *'s in front of group names, darnit, I'm gonna use them to my advantage. No sense trying to pee against the wind.... er... Windows, that is.

Sure, sure, we could parse this output further to strip out the text that says "Local Group Memberships" and "Global Group memberships" (btw, didja note the inconsistency in the capitalization of Membership and membership? Gee, thanks, Microsoft). If I really needed to, I'd parse that stuff out using another FOR /F loop with a delimiter of *, as sketched below. But that would make the main command unnecessarily complicated and ugly, and the output already has the information we want in a relatively useful form.
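For the curious, here's a minimal sketch of that extra parsing pass, run against jane from the sample output above. The "tokens=1*" directive splits each line at the first *, so %a catches the "Local Group Memberships" label and %b holds everything after it:

C:\> for /F "tokens=1* delims=*" %a in ('net user jane ^| find "*"') do @echo %b

This drops the labels, although multiple groups on one line are still separated by asterisks and padding, which is exactly why I didn't bother in the main command.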

I also like this solution, because it shows a useful mixture of "wmic useraccount" with "net user". It's not every day that you get to use the two of them together in a convenient fashion like this. So, I'm happy, and that's really what command line kung fu is all about... making people happy.

Monday, June 1, 2009

Episode #43: Users & Groups

Ed rushes in:

Here's an easy one that I use all the time when analyzing a system. When auditing a box or investigating a compromised system, I often want to double check which groups are defined locally, along with the membership of each group. I especially focus on who is in the admin group. We can dump a list of groups as follows:

C:\> net localgroup

Then, we can check the accounts associated with each group using:

C:\> net localgroup [groupname]
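For example, to see who holds the keys to the kingdom on the local box:

C:\> net localgroup Administrators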

That's all well and good, but how can we get all of this information at one time? We could use a FOR /F loop to iterate on the output of our first command (group names) showing the membership (user names). Put it all together, and you have:

C:\> for /F "skip=4 delims=*" %i in ('net localgroup ^| find /v
"The command completed successfully"') do @net localgroup "%i"

Here, I'm using a for /F loop to parse through the output of "net localgroup". I filter (find) through the output of that command to choose lines that do not have (/v) the annoying "The command completed successfully" output. Otherwise, the loop would try to treat that status line as a group name and spew errors. I define some custom parsing in my FOR /F loop to skip the first 4 lines of cruft, and set a delimiter of * to remove that garbage that Microsoft prepends to each group name. BTW, what's with Microsoft making the output of their commands so ugly? Why do we have to parse all this garbage? How about they make the output useful as is? Oh well... Anyway, once I've parsed the output of "net localgroup" to get a group list, I push the output through "net localgroup" again to get a list of members.
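Incidentally, each inner "net localgroup" invocation prints its own "The command completed successfully" line as well. If that clutter bothers you, the same find /v trick can be applied inside the loop body too (the same filter as above, just applied a second time):

C:\> for /F "skip=4 delims=*" %i in ('net localgroup ^| find /v
"The command completed successfully"') do @net localgroup "%i" | find /v
"completed successfully"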

Hal's going off the deep end:

Ed's challenge seemed deceptively simple when I first read it. Then my brain kicked in, and as usual that made things infinitely more difficult. At first I thought this was going to be as simple as using "cut" to pull the appropriate fields out of /etc/group:

$ cut -f1,4 -d: /etc/group
root:
daemon:
bin:
sys:
adm:hal
[...]

Alternatively, if you only cared about groups that actually had users listed in the last field, you could do:

$ cut -f1,4 -d: /etc/group | grep -v ':$'
adm:hal
dialout:hal
cdrom:hal
audio:pulse
plugdev:hal
lpadmin:hal
admin:hal
sambashare:hal

But now my uppity brain intruded with an ugly fact: by only looking at /etc/group, we're ignoring the users' default group assignments in /etc/passwd. What we really need to do is merge the information in /etc/passwd with the group assignments in /etc/group. I'll warn you up front that my final solution skirts perilously close to the edge of our "no scripting" rule, but here goes.

We're going to be using the "join" command to stitch together the /etc/passwd and /etc/group files on the group ID column. However, "join" requires both of its input files to be sorted on the join field. So before we do anything we need to accomplish this:

$ sort -n -t: -k4 /etc/passwd >passwd.sorted
$ sort -n -t: -k3 /etc/group >group.sorted

In the "sort" commands, "-n" means do a numeric sort, "-t" specifies the field delimiter, and "-k" is used to specify the field(s) to sort on.

Once we have the sorted files, producing the output we want is trivial:

$ join -a 1 -t: -1 3 -2 4 group.sorted passwd.sorted | \
awk -F: '{ grps[$2] = grps[$2] "," $4 "," $5 }
END { for ( g in grps ) print g ":" grps[g] }' | \
sed -r 's/(:|,),*/\1/g; s/,$//' | sort

[...]
list:list
lpadmin:hal
lp:hplip,lp
mail:mail
man:man
messagebus:messagebus
mlocate:
netdev:
news:news
nogroup:nobody,sshd,sync
[...]

While I hate to belabor the obvious, let me go over the above example line-by-line for the two or three folks reading this blog who might be confused:


  • "join" is a bit funky. The "-t" option specifies the column delimiter, just like "sort", and you can probably guess that "-1 3" and "-2 4" are how we're specifying the join column in file 1 ("-1") and file 2 ("-2"). Normally "join" will only output lines when it can find lines in both files that it can merge together. However, the "-a 1" option tells "join" to output all lines from file 1, even if there's no corresponding line in file 2.

    So that you can understand the rest of the command-line above, let me show you some of the output from the "join" command by itself:

    $ join -a 1 -t: -1 3 -2 4 group.sorted passwd.sorted
    [...]
    124:sambashare:x:hal
    125:ntp:x::ntp:x:112::/home/ntp:/bin/false
    126:bind:x::bind:x:113::/var/cache/bind:/bin/false
    1000:hal:x::hal:x:1000:Hal Pomeranz,,,:/home/hal:/bin/bash
    65534:nogroup:x::nobody:x:65534:nobody:/nonexistent:/bin/sh
    65534:nogroup:x::sshd:x:114::/var/run/sshd:/usr/sbin/nologin
    65534:nogroup:x::sync:x:4:sync:/bin:/bin/sync

    When doing its output, "join" puts the merge column value (the GID in our case) up at the front of each line of output. Then you see the remaining fields of the first input file (group name, group password, user list), followed by the remaining fields of the second input file (user name, password field, UID, and so on). The "sambashare" line at the top of our sample output is an example of a group that had no corresponding users in /etc/passwd. The "nogroup" lines toward the bottom of the output are an example of a single group that actually has several users associated with it in /etc/passwd.

    Somehow we've got to pull the output from the "join" command into a consolidated output format. That's going to require some pretty flexible text processing, plus the ability to merge user names from multiple lines of output, like the "nogroup" lines in our sample output. Sounds like a job for awk.

  • In the awk expression I'm using "-F:" to tell awk to split the input lines on colons, rather than whitespace, which is the default. Now the group name is always in field 2, the list of users from /etc/group is in field 4, and the user name from /etc/passwd is in field 5. As I read each line of input, I'm building up an array indexed by group name that contains a list of all the values in fields 4 and 5, separated by commas. In the "END" block, which gets processed when the input is exhausted, I'm outputting the group name, a colon, and the list of users.

    The only problem is that sometimes field 4 and field 5 are null, so you get some extra commas in the output:

    $ join -a 1 -t: -1 3 -2 4 group.sorted passwd.sorted | \
    awk -F: '{ grps[$2] = grps[$2] "," $4 "," $5 }
    END { for ( g in grps ) print g ":" grps[g] }'

    [...]
    sambashare:,hal,
    nogroup:,,nobody,,sshd,,sync
    [...]

    A little "sed" will clean that right up.

  • Our "sed" expression actually contains two substitution operations separated by a semicolon: "s/(:|,),*/\1/g" and "s/,$//". Both substitutions will be applied to all input lines.

    The first substitution is the most complex. We're matching either a colon or a comma followed by some number of extra commas, and replacing that with the initial colon or comma. This allows us to remove all of the extra commas in the middle of the output lines.

    The second substitution matches commas at the end of the line and removes them (replaces them with nothing). You can see both substitutions demonstrated just after this list.


We throw a final "sort" command at the end of the pipeline so we get the output sorted by group name, but the hard part is basically over.

Clever readers will note that there's a potential problem with my solution. What if the "nogroup" entry in /etc/group had a user list like "nogroup:x:65534:foo,bar"? Because there were multiple /etc/passwd lines associated with "nogroup", I'd end up repeating the users from the list in /etc/group multiple times:

$ join -a 1 -t: -1 3 -2 4 group.sorted passwd.sorted | ... | grep nogroup
nogroup:foo,bar,nobody,foo,bar,sshd,foo,bar,sync

The real solution requires introducing some conditional logic into the middle of the awk expression in order to avoid this duplication:

$ join -a 1 -t: -1 3 -2 4 group.sorted passwd.sorted | \
awk -F: '{ if (grps[$2]) { grps[$2] = grps[$2] "," $5 }
else { grps[$2] = $4 "," $5 } }
END { for ( g in grps ) print g ":" grps[g] }' | \
sed -r 's/(:|,),*/\1/g; s/,$//' | sort

[...]
nogroup:foo,bar,nobody,sshd,sync
[...]

The "if" statement in the middle of the awk code is checking to see whether we've seen this group before or not. The first time we see a group (the "else" clause), we make a new entry in the "grps" array with both the user list from /etc/group ($4) and the user name from the /etc/passwd entry ($5). Otherwise, we just append the user name info from the /etc/passwd entry and don't bother re-appending the group list from /etc/group.

I was able to successfully type the above code into a single command-line, but it's clearly a small script at this point. So I'd say that it at least goes against the spirit of the rules of this blog.