Tuesday, September 29, 2009

Episode #62: Anybody Using This?

Ed is currently auditioning for the coveted title role in "Where in the World is Carmen Sandiego?" So in the meantime we've invited Sifu Tim Medin to bring the Windows madness to the Command Line Kung Fu dojo. Tim's also got his own blog, where he's lately been throwing down some PowerShell solutions to our earlier Episodes.

Tim wanders in:

So it has happened to all of us: a server needs to be taken down for one reason or another, but you can't just yank it out of production. Users have to be notified before taking it offline. The problem is, we don't know who is logged in. And there may be quite a few users, especially in the Windows world of Terminal Services. Windows has two commands to help us out: quser and qwinsta (short for Query WINdows STAtion). Both commands can be used to find out who is logged in locally, and both accept the /server option to query another server.

Quser in action:

C:\> quser /server:Alpha
 USERNAME              SESSIONNAME        ID  STATE   IDLE TIME  LOGON TIME
 larry                 rdp-tcp#5           5  Active          .  9/29/2009 5:43 AM
 moe                                       1  Disc         none  9/29/2009 9:32 AM

The problem is, quser is NOT included in Windows XP so we will use qwinsta for compatibility. Too bad, since quser would have been a better fit for two reasons. First, it only displays active and disconnected sessions instead of listeners. Second, the username is the first item, and we all know that parsing text from the Windows command line is a pain.

Qwinsta in action:

C:\> qwinsta /server:Alpha
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
 console                                     0  Conn    wdcon
 rdp-tcp                                 65536  Listen  rdpwd
 rdp-tcp#5         larry                     5  Active  rdpwd
                   moe                       1  Disc    wdica

C:\> qwinsta /server:Omega
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
 console           curly                     0  Active  wdcon
 rdp-tcp                                 65536  Listen  rdpwd

Shown above are two servers, Alpha and Omega. Server Alpha has two connections: one from Larry (active) and one from Moe (disconnected). Curly is the only user logged into Omega, via the console.

We don't care about the listening sessions so we can filter the results for active and disconnected sessions. By using the findstr command we can search for an active or disconnected session. A space between search terms is treated as a logical OR.

C:\> qwinsta /server:Alpha | findstr "Active Disc"
 rdp-tcp#5         larry                     5  Active  rdpwd
                   moe                       1  Disc    wdica

C:\> qwinsta /server:Omega | findstr "Active Disc"
 console           curly                     0  Active  wdcon

We only want the username, so we will have to use our handy dandy FOR loop to parse it (Episode 48). This is made more difficult because a disconnected session doesn't have a session name, which throws off the parsing. Here is what I mean:

C:\> for /F %i in ('qwinsta /server:Alpha ^| findstr "Active Disc"') do @echo %i

What we get is the first string on each line of the output, which is not quite what we want. The for loop divides the string into tokens using white space as a delimiter, and leading spaces are discarded. If there is a session name, we need to display the second token; otherwise, we need to display the first token. Notice that session names either contain '#' or are 'console', and we can use this nugget to ensure the right bit of information is displayed.

C:\> for /F "tokens=1,2" %i in ('qwinsta /server:Alpha ^| findstr "Active Disc"')
do @echo %i | find /v "#" | find /v "console" || echo %j


The username will be located in either the first or second token, represented by %i and %j respectively. Remember, any tokens after the first use subsequent letters of the alphabet, so if we used the third and fourth tokens they would be represented by %k and %l. OK, so now we have the username, but we don't know whether it is in %i or %j. How do we sort that out?

If you remember from previous posts, 'find /v' only returns lines NOT containing the specified string. In our example we use it twice to filter out %i if it contains '#' or 'console'. The "||" operator only executes the next command if the previous command fails (see Episode 47), and find reports failure (a non-zero exit code) when nothing makes it through its filter.

We can combine these pieces to display the username. We attempt to echo %i, but if it contains '#' or 'console' the find filters let nothing through; that counts as a failure, so the next command (echo %j) is executed. And there we (finally) have the username.

At least we don't have to use wmic, because that post would have required a dissertation.

C:\> cmd.exe /v:on /c "for /F "tokens=2 DELIMS=," %i in
('wmic /node:SERVER path win32_loggedonuser get Antecedent /value ^| find /v "SERVICE"')
do @set var=%i & @echo !var:~6,-4!"

Somehow I think Hal will have an easier way of doing this in Linux...

Hal takes over:

"Easier way of doing this in Linux"? Holy cow, Tim! I'm having trouble thinking of how there could be anything harder than the solution that Windows forces on you. Wowzers...

There are at least three different ways to get a list of users currently logged into the local system. First there's the w command:

$ w
 14:12:19 up 5 days, 4:36, 10 users, load average: 1.73, 2.04, 1.88
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
hal      tty7     :0                Tue13   5days   1:21  0.40s x-session-manag
hal      pts/0    :0.0              Tue13  27:39m  5.20s  5.20s ssh deer
hal      pts/1    :0.0              Tue13    6:12  1.92s  1:55m gnome-terminal
hal      pts/2    :0.0              Tue13   0.00s  0.02s  0.02s w
hal      pts/3                      14:11  28.00s  0.00s  0.00s -bash

The w command gives us lots of information, including the command that each user is currently running on their tty. Things that are labeled with ":0*" in the FROM column are local X windows-related processes on the system console. Remote logins are labeled with the IP address of the remote host in the FROM column. The IDLE time column can help decide how actively a given user is using the machine, though be careful about long running jobs that a user may have started some time ago but which shouldn't be shut down. The only problem with w is that the output can be more difficult to parse in shell pipelines because of the extra uptime information and header line at the beginning of the output.
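
If you do want to script around w's output, one approach (a sketch; the exact header layout varies a bit between w versions) is to have awk skip those first two lines and pull out just the usernames:

```shell
# List unique logged-in users from w output; NR > 2 skips the
# uptime line and the column header line.
w | awk 'NR > 2 {print $1}' | sort -u
```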

The who command produces output that's easier to parse, but which contains much less detail:

$ who
hal tty7 2009-09-22 13:11 (:0)
hal pts/0 2009-09-22 13:11 (:0.0)
hal pts/1 2009-09-22 13:11 (:0.0)
hal pts/2 2009-09-22 13:11 (:0.0)
hal pts/3 2009-09-27 14:11 (

There's also the finger command, which works on the local system even if you currently don't have the finger daemon enabled.

$ finger
Login Name Tty Idle Login Time Office Office Phone
hal Hal Pomeranz tty7 5d Sep 22 13:11 (:0)
hal Hal Pomeranz pts/0 28 Sep 22 13:11 (:0.0)
hal Hal Pomeranz pts/1 6:13 Sep 22 13:11 (:0.0)
hal Hal Pomeranz pts/2 Sep 22 13:11 (:0.0)
hal Hal Pomeranz pts/3 1 Sep 27 14:11 (

Frankly, finger has the same parsing difficulties as w, but provides less information overall, so I don't find it that useful.

But all of these commands only work on the local system. So how would you get information on who's logged into a remote machine? Why, with ssh of course:

$ ssh remotehost who | awk '{print $1}' | sort -u

Here I'm SSHing into the machine remotehost and running the who command. The output of that command gets piped into awk on the local machine where I pull the usernames out of the first column of output. The sort command puts the usernames into alphabetical order and the "-u" (unique) option removes duplicate lines. And that's my final answer, Regis.
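
If you'd also like to know how many sessions each user has open before deciding whom to notify first, a small variation on the same pipeline does the trick (again using the hypothetical remotehost):

```shell
# Count login sessions per user: uniq -c prefixes each username with
# the number of times it appeared, and sort -rn puts the busiest first.
ssh remotehost who | awk '{print $1}' | sort | uniq -c | sort -rn
```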

However, since Tim started out with the scenario of having a server that needs to get shut down, I just wanted to mention a couple of other items. First, if you use the Unix shutdown command (which we talked about way back in Episode 7), all of the currently logged in users will get a message (actually lots of messages, which is why shutdown is so darned annoying) sent to their tty letting them know that the system is being shut down. If you include your contact info in the message, the users can get ahold of you and request that you abort the shutdown.

The other item worth mentioning here is that if you create the file /etc/nologin, then the system will not allow new user logins. The contents of the /etc/nologin file will be displayed to users who try to log into the system:

$ ssh remotehost
hal@remotehost's password:
Logins are currently disabled because the system will be shut down shortly.
System will resume normal operations at noon PDT.

Connection closed by

Typically the shutdown command will create /etc/nologin automatically as the shutdown time gets close. But you can also create this file yourself to customize the message your users see.

Tuesday, September 22, 2009

Episode #61: Just Sit Right Back & You'll Hear a Tale... or a Tail...

Ed muses whimsically:

I'm sure every self-respecting geek has contemplated the scenario. I know I think about it all the time. You're trapped on a desert island, surrounded by nothing but coconut trees, sand, water, and 50 beautiful babes... all living in rustic harmony in your lavish hut. Oh, and you also have your laptop computer, with a power supply and Internet connection. Now, the question before the house, of course, is as follows:

When stranded on a desert island, if you could only have a single command in your operating system, what would it be and how would you use it?

Yes, it's a dilemma that has no doubt puzzled philosophers for ages. I'd like to weigh in on it, and see Hal's thoughts on the matter as well.

For my Windows usage, I'd have to go with WMIC, the Windows Management Instrumentation Command-Line tool. While it's just a command, it opens up whole worlds to us for interacting with our Windows boxen. Built into Windows XP Pro and later, WMIC can be used to query information from machines and update it using a syntax known as the WMIC Query Language (WQL), which I described in an article a while back.

WMIC can be used as a replacement for numerous other commands, and in many instances it provides even more information than the commands it subsumes. For instance, you could supplant the Service Controller command (sc) that we discussed in Episode #57 with:

C:\> wmic service list full

Or, you can replace the tasklist command with:

C:\> wmic process list full

The taskkill command functionality can be mimicked with:

C:\> wmic process where processid="[pid]" delete

You can get lists of users, including many settings for their account and their SIDs with:

C:\> wmic useraccount list full

Those are the things I most often use WMIC for: interacting with services, processes, and user accounts based on variations of those commands. I've written a lot about WMIC in the past, but I've come up with some new uses for it that I'd like to talk about here. And, getting back to our little desert island fantasy... I mean... scenario, let's talk about some additional WMIC use cases.

Suppose, on this desert island, you wanted to see if a given Windows machine was real or virtual. Perhaps you had hacked into another box on the island or you had this question about your own system. WMIC can provide insight into the answer, especially if VMware is in use:

C:\> wmic bios list full | find /i "vmware"
SerialNumber=VMware-00 aa bb cc dd ee ff aa-bb cc dd ee ff 00 aa bb

VMware detection, in a single command! I'm sure the babes will like that one. Here, I'm querying the bios of my Windows machine, looking for the string "VMware" in a case-insensitive fashion (/i). If you see output, you are running inside a VMware guest machine. Also, you'll get the serial number of that VMware install, which might be useful to you.

Perhaps, with all that spare time on your island paradise, you will start to contemplate the fine-grained inner workings of your Windows box, thinking about the order that various drivers are loaded. Wanna see that info? Use this:

C:\> wmic loadorder list full

On a desert island, I'm sure you'll need to know a lot of details about your hard drive, including the number of heads, cylinders, and sectors (so you can make a new hard drive from coconut shells when your existing one fails, of course). To get that information, run:

C:\> wmic diskdrive list full

At some point, you may need to write up a little command-line script that checks the current screen resolution on your default monitor. There must be a distinct need on desert islands for pulling this information (perhaps just to impress the babes), which can be obtained with:

C:\> wmic desktopmonitor where name="Default Monitor" get screenheight,screenwidth
ScreenHeight ScreenWidth
665 1077

Now, suppose you are conducting a detailed forensics investigation to determine who among your cadre of babes stole the coconut cream pie. The answer might lie in the creation, modification, or last accessed time of a given directory on your hard drive. You can get that information by running:

C:\> wmic fsdir where (name="c:\\tmp") get installdate,lastaccessed,lastmodified
InstallDate LastAccessed LastModified

20090913044801.904300-420 20090914051243.852518-420 20090913073338.075232-420

Note that the path to the directory in this one must use \\ in place of each backslash. The first backslash is an escape, and the second is the real backslash. You have to do this for any where clauses of wmic that have a backslash in them. Also, note that fsdir works only for directories, not files. Still, that should help you crack the case of the missing coconut cream pie!

There are thousands of other uses for WMIC, which can be explored by simply running "wmic /?". As you can see, it is an ideal tool for an intrepid geek in a tropic island nest.

No phone! No lights! No motor cars!
Not a single luxury...
Like Robinson Crusoe...
Except for WMIC. :)

Hal's been on the island far too long:

When Ed proposed this question I thought it was kind of unfair for me to get to choose a command plus have all the functionality of the Unix shell. And that got me thinking, just how much could I accomplish using the shell itself with no other external commands? This is not as idle a question as it might first appear: there have been times when I've had to recover hosed systems without much more than the built-in functionality in my shell.

First let's inventory our resources. Built into the bash shell we have cd for navigating the directory tree, echo and printf for outputting data, read for reading in data, and a few miscellaneous commands like kill, umask, and ulimit. We also have several different kinds of loops and conditional statements, plus the test operator for doing different kinds of comparisons. This actually turns out to be a lot of functionality.

Starting off simply, we can create a simple ls command with the echo built-in:

$ cd /usr/local
$ echo *
bin cronjobs depot etc games include lib lib64 libexec lost+found man sbin share src

But that output is kind of ugly, and would be hard to read if the directory contained more items. So we could make pretty output with an ugly loop:

$ i=0; for f in *; do printf '%-20s' $f; (( ++i % 4 )) || echo; done; \
(( $i % 4 )) && echo

bin                 cronjobs            depot               etc
games               include             lib                 lib64
libexec             lost+found          man                 sbin
share               src

I'm using printf to output the data in columns (though you'll note that my columns sort left to right rather than up and down like the normal ls command), and a counter variable $i to output a newline after the fourth column.

Emulating the cat command is straightforward too:

$ while read l; do echo $l; done </etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

We can also use this idiom as a simple version of the cp command by just redirecting the output into a new file. Unfortunately, there's no unlink operator built into the shell, so I can't do either rm or mv (though you can use ">file" to zero-out a file). There's also no way to do the ln command in the shell, nor to emulate commands like chown, chmod, and touch that update the meta-data associated with a file.
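
Here's the cp idiom spelled out as a sketch. I've added IFS= and read -r, which keep the shell from eating leading whitespace and backslashes; without them the "copy" can subtly mangle its input:

```shell
# Poor man's cp using only shell builtins: read each line from the
# source and echo it into the destination file.
while IFS= read -r l; do echo "$l"; done </etc/hosts >/tmp/hosts.copy
```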

However, since bash has a pattern matching operator, we can emulate grep very easily:

$ while read l; do [[ $l =~ ^127 ]] && echo $l; done </etc/hosts
127.0.0.1 localhost.localdomain localhost

In a similar vein, we can also count lines ala "wc -l":

$ i=0; while read l; do ((i++)); done </etc/hosts; echo $i

While our cat emulator works fine for small files, what if we had a longer file and wanted something like more or less that would show us one screenful at a time:

$ i=0; \
while read -u 3 l; do
echo $l;
((++i % 23)) || read -p 'More: ';
done 3</etc/passwd

[... 21 lines not shown ...]

After every 23 lines, I use the read command to display the "More: " prompt and wait for the user to hit newline. Since I'm going to be reading the user's input on the standard input, I have to read the file the user wants to view on a different file descriptor. At the end of the loop I'm associating the /etc/passwd file with file descriptor 3, and at the top of the loop I use "read -u 3" to read my input from this file descriptor. Thank you bash, and your amazingly flexible output redirection routines.
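
Here's the file descriptor trick in isolation, in case it's new to you (a minimal sketch):

```shell
# Attach a here-document to file descriptor 3, then read from it
# with "read -u 3", leaving stdin free for interactive input.
exec 3<<'EOF'
first line
second line
EOF
read -u 3 a rest    # reads "first line": a="first", rest="line"
read -u 3 b rest
echo "$a $b"        # prints: first second
exec 3<&-           # close fd 3 when finished
```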

Since we have for loops, creating our own version of the head command is also easy:

$ for ((i=0; $i<10; i++)); do read l; echo $l; done </etc/passwd

If I have the head command, I suppose it's a moral imperative that I also produce something like tail:

$ i=0; while read l; do a[$i]=$l; i=$(( ($i+1)%10 )); done </etc/passwd; \
for ((j=0; $j<10; j++)); do echo ${a[$(( ($j+$i)%10 ))]}; done

xfs:x:43:43:X Font Server:/etc/X11/fs:/sbin/nologin
sabayon:x:86:86:Sabayon user:/home/sabayon:/sbin/nologin
radiusd:x:95:95:radiusd user:/:/bin/false
mailman:x:41:41:GNU Mailing List Manager:/usr/lib/mailman:/sbin/nologin

In the first loop I'm using an array as a circular buffer to hold the last 10 lines read. After the first loop exhausts the file, I use a second loop to output the lines stored in the array.

The idea of reading the contents of a file into an array suggested this nasty hack to emulate the sort command:

$ n=0; while read l; do a[$n]=$l; ((n++)); done <myfile; \
n=$(($n-1)); \
for ((i=0; $i<$n; i++)); do
    s=$i;
    for ((j=$((i+1)); $j<=$n; j++)); do
        [[ ${a[$s]} < ${a[$j]} ]] || s=$j;
    done;
    t=${a[$i]}; a[$i]=${a[$s]}; a[$s]=$t;
done; \
for ((i=0; $i<=$n; i++)); do echo ${a[$i]}; done


Yep, the middle, nested loops are actually a selection sort implemented in the shell. You'll notice that the sort here is an alphabetic sort. We could produce a numeric sort using "-lt" instead of "<" inside the "[[ ... ]]" clause in the innermost loop.
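
To see why the operator matters, consider how the two flavors compare "10" and "9" (a quick demo):

```shell
# Inside [[ ... ]], < compares strings character by character,
# so "10" sorts before "9" (because '1' collates before '9').
[[ "10" < "9" ]] && echo "string-wise, 10 comes first"

# -lt compares integer values, so 10 is NOT less than 9.
[[ "10" -lt "9" ]] || echo "numerically, 10 does not come first"
```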

You'll also notice that I put some duplicate values in my test input file. Hey, if you're going to do "sort" you've got to also do "uniq". Here's a numeric sort plus some mods to the final loop to emulate uniq:

$ n=0; while read l; do a[$n]=$l; ((n++)); done <myfile; \
n=$(($n-1)); \
for ((i=0; $i<$n; i++)); do
    s=$i;
    for ((j=$((i+1)); $j<=$n; j++)); do
        [[ ${a[$s]} -lt ${a[$j]} ]] || s=$j;
    done;
    t=${a[$i]}; a[$i]=${a[$s]}; a[$s]=$t;
done; \
for ((i=0; $i<=$n; i++)); do
    [[ "X$l" == "X${a[$i]}" ]] || echo ${a[$i]}; l=${a[$i]};
done

With the help of the IFS variable, we can do something similar to the cut command:

$ IFS=":"; \
while read uname x uid gid gecos home shell; do
echo $uname $uid;
done </etc/passwd

root 0
bin 1
daemon 2
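
One caveat: the IFS=":" assignment above sticks around for the rest of your shell session, which can cause surprising word-splitting behavior later. Putting the assignment on the read command itself keeps the change scoped to that one command:

```shell
# IFS is modified only for the duration of the read builtin here,
# so word splitting elsewhere in the shell is unaffected.
while IFS=: read uname x uid gid gecos home shell; do
    echo $uname $uid;
done </etc/passwd
```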

And since bash has a substitution operator, I can even emulate "sed s/.../.../":

$ while read l; do echo ${l//root/toor}; done </etc/passwd
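
The double slash means "replace every occurrence"; a single slash replaces only the first match, and the related % operator trims from the end (a quick tour, using a sample passwd line for illustration):

```shell
l="root:x:0:0:root:/root:/bin/bash"
echo "${l/root/toor}"     # first match only: toor:x:0:0:root:/root:/bin/bash
echo "${l//root/toor}"    # every match:      toor:x:0:0:toor:/toor:/bin/bash
echo "${l%%:*}"           # trim longest trailing :* pattern, leaving: root
```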

I couldn't resist exploiting /proc on my Linux box to generate a simple ps listing:

$ printf "%-10s %5s %5s   %s\n" UID PID PPID CMD; \
for d in /proc/[0-9]*; do
    cmd=$(cat $d/cmdline | tr \\000 ' ');
    while read label value rest; do
        case $label in
            Name:) name=$value;;
            Pid:) pid=$value;;
            PPid:) ppid=$value;;
            Uid:) uid=$value;;
        esac;
    done <$d/status;
    [[ -z "$cmd" ]] && cmd=$name;
    printf "%-10s %5s %5s   %s\n" $uid $pid $ppid "$cmd";
done

UID          PID  PPID   CMD
0              1     0   init [3]
0             10     1   watchdog/2
0          10994    87   kjournald
0             11     1   migration/3
0          11058     1   /usr/lib/vmware/bin/vmware-vmx -...

This is obviously skirting pretty close to our "no scripting" rule, but I actually was able to type this in on the command line. I suspect that there may be information available under /proc that would also enable me to emulate some functionality of other commands like netstat and ifconfig, and possibly even df, but this episode is already getting too long.

Before I finish, however, I wanted to show one more example of how we could create our own simple find command. This one definitely wanders far into scripting territory, since it involves creating a small recursive function to traverse directories:

$ function traverse {
    cd $1;
    for i in .[^.]* *; do
        $(filetest $i) && echo "$1/$i";
        [[ -d $i && -r $i && ! -h $i ]] && (traverse "$1/$i");
    done;
}
$ function filetest { [[ -d $1 ]]; }
$ traverse /etc
[... output not shown ...]

Specify a directory and the traverse function will walk the entire directory tree, calling the filetest function you define on each object it finds. If the filetest function resolves to true, then traverse will echo the pathname of the object it called filetest on. In the example above, filetest is true if the object is a directory, so our example is similar to "find /etc -type d".
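
Swap in a different filetest and you change the "find expression". For example, something in the spirit of "find /etc -name '*.conf' -type f" (a sketch; adjust the pattern to taste):

```shell
# Match regular files whose names end in .conf; traverse is the
# recursive function defined above.
function filetest { [[ -f $1 && $1 == *.conf ]]; }
traverse /etc
```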

Tuesday, September 15, 2009

Episode #60: Proper Attribution

Hal starts off:

Back in Episode #54 we got a chance to look at the normal permissions and ownerships in the Unix file system. But we didn't have room to talk about extended file attributes, and that's a shame. So I thought I'd jot down a few quick pointers on this subject.

The Linux ext file systems support a variety of additional file attributes over and above the standard read/write/execute permissions on the file. Probably the most well-known attribute is the "immutable" bit that makes a file impossible to delete or modify:

# touch myfile
# chattr +i myfile
# lsattr myfile
----i-------------- myfile
# touch myfile
touch: cannot touch `myfile': Permission denied
# rm myfile
rm: cannot remove `myfile': Operation not permitted
# ln myfile foo
ln: creating hard link `foo' => `myfile': Operation not permitted
# chattr -i myfile
# rm myfile

As you can see in the above example, you use the chattr command to set and unset extended attributes on a file, and lsattr to list the attributes that are currently set. Once you set immutable on a file (and you must be root to do this), you cannot modify, remove, or even make a hard link to the immutable file. Once root unsets the immutable bit, the file can be modified or removed as normal. It's not uncommon to see rootkit installation scripts setting the immutable bit on the trojan binaries they install, just to make them more difficult for novice system administrators to remove.

But there are many other extended attributes that you can set on a file. For example, the append-only ("a") attribute means that you can add data to a file but not remove data that's already been written to the file. The synchronous updates attribute ("S") means that data that's written to the file should be flushed to disk immediately rather than being buffered for efficiency-- it's like mounting a file system with the "sync" option, but you can apply it to individual files. There's also a dir sync attribute ("D") that does the same thing for directories, though it's really unclear to me why this is a separate attribute from "S". The data journalling attribute ("j") is equivalent to the behavior of mounting your ext3 file systems with the "data=journal" option, but can be applied to individual files.

As you can see in the output of lsattr in the example, however, there are lots of other possible extended attribute fields. Many of these apply to functionality that's not currently implemented in the mainstream Linux kernels, like "c" for compressed files, "u" for "undeletable" files (meaning the file contents are saved when the file is deleted so you can "undo" deletes), "s" for secure delete (overwrite the data blocks with zero before deallocating them), and "t" for tail-merging the final fragments of files to save disk space. Then there are attributes like the "no dump" attribute ("d") which means the file shouldn't be backed up when you use the dump command to back up your file systems: "d" isn't that useful because hardly anybody uses the dump command anymore. There are also a bunch of attributes (E, H, I, X, Z) which can be seen with lsattr but not set with chattr.

So in general, "i" is useful for files you want to be careful not to delete, and "a" and possibly "S" are useful for important log files, but a lot of the other extended attributes are currently waiting for further developments in the Linux kernel. Now let's see what Ed's got going for himself in Windows land (and look for an update from a loyal reader after Ed's Windows madness).

Ed finishes it up:

In Windows, the file and directory attributes we can play with include Hidden (H), System (S), Read-only (R), and Archive (A). H, S, and R are pretty straightforward, and function as their name implies. The Archive attribute is used to mark files that have changed since the last backup (the xcopy and robocopy commands both have a /a option to make them copy only files with the Archive attribute set).

You can see which of these attributes are set for files within a given directory using the well-named attrib command. The closest thing we have to the Linux immutable attribute used by Hal above is the Read-Only attribute, so let's start by focusing on that one, mimicking what Hal does (always a dangerous plan).

C:\> type nul >> myfile

Note that we don't have a "touch" command on Windows, so I'm simply appending the contents of the nul file handle (which contains nothing) into myfile. That'll create the file if it doesn't exist, kinda like touch. However, it will not alter the last modified or accessed time, unlike touch. Still, it'll work for what we want to do here.

C:\> attrib +r myfile
C:\> attrib myfile
A R C:\tmp\myfile

Here, we've set the read-only attribute on myfile using the +r option, and then listed its attributes. Note that it had the Archive attribute set by default. We could specify a whole list of attributes to add or subtract in a single attrib command, such as +s -a +h and so on. Note that you cannot add and remove the same attribute (e.g., +r -r is forbidden).

Now, let's try to see how this attribute is similar to what Hal showed earlier for the immutable stuff:

C:\> type nul >> myfile
Access is denied.

C:\> del myfile
Access is denied.

C:\> attrib -r myfile

C:\> del myfile

To remove all attributes, you could run:

C:\> attrib -h -s -r -a [filename]

Beyond attrib, we can also use the dir command to list files with certain attributes, using the /a option. For example, to list all hidden files, we could run:

C:\> dir /b /ah
System Volume Information

I used the /b option here to display the bare form of output so that I omit some clutter.

If you want to see non-hidden files, you could run:

C:\> dir /a-h

You can bundle multiple attributes together as well in dir using a slightly different syntax from the attrib command. With dir, you just smush together all the attributes you want to see, and indicate the ones you don't want with a minus sign in front of them. For example, if you want to see read-only files that are not hidden but that are also system files, you could run:

C:\> dir /ar-hs

A lot of people get thrown off by the fact that, by default, the dir command omits hidden and system files from its output. Consider:

C:\> type nul > myfile
C:\> type nul > myfile2
C:\> type nul > myfile3
C:\> attrib +h myfile
C:\> attrib +s myfile2
C:\> attrib +r myfile3

C:\> attrib
A H C:\tmp\myfile
A S C:\tmp\myfile2
A R C:\tmp\myfile3

C:\> dir /b

This issue comes up rather often when playing the Capture the Flag game in my SANS 560 class on network penetration testing. Attendees need to grab GnuPG keys from target accounts, with the keys acting as the flags in the game. Rather often, folks get command shell on a target box, change into the appropriate directory for the GnuPG keys, and run the dir command. They see... NOTHING. Inevitably, a hand goes up and I hear "Someone deleted the keys!". I respond, "You know, GnuPG keys have the hidden attribute set...." The hand goes down, and the happy attendee snags the keys.

You see, the dir command with the /a option lets us specify a set of attributes we want or don't want. If you want to see all files regardless of their attributes, use dir with the /a option, but don't include any specific attributes in your list after the /a. That way, you'll see everything:

C:\> dir /b /a

With this functionality, some people think of the /a option of dir as meaning "all" or "anything", but it really is more accurate to think of it as "attributes" followed by a blank list of attributes. In my own head, when I type "dir /a", I think of it as "Show me a directory listing with attributes of anything."

There's one last thing about attributes I'd like to cover. It's really annoying that the Windows file explorer doesn't show system files by default. This is a travesty. Darnit, I need to see those files, and hiding them from me is just plain annoying. I can accept hiding hidden files, because, well, they are hidden. But system files? Who thought of that? It makes Windows into its own rootkit as far as I'm concerned. Because of this, one of the first things I do with any Windows box I plan on using for a while is to tweak this setting. You can change it in the Explorer GUI itself by going to Tools-->Folder Options-->View. Then, deselect "Hide protected operating system files (Recommended)". Recommended? I guess Microsoft is trying to protect its operating system files from clueless users. Still, this is just plain annoying. Oh, and while you are there, you may as well deselect "Hide extensions for known file types", which is yet another rootkit-like feature that confounds some users when they are looking for files with specific extensions. And, finally, you may want to just go whole hog (is that kosher?) and select "Show hidden files and folders." There, now you have a Windows machine that is almost usable!

Hal's got an update from a loyal reader:

The fact that anybody with root privileges can undo your file attribute settings makes "chattr +i" somewhat less useful from an absolute security perspective. Loyal reader Joshua Gimer took me to school regarding the Linux kernel capability bounding feature:

If you wanted to make a file immutable so that not even root could modify it you could perform the following after making your attribute changes:


This setting could be changed back by anyone that has super-user privileges running the same command again. You could make it so that the kernel capability bounding set cannot be modified (not even by root) by issuing the following:


This would require a system reboot to reintroduce these capabilities. You could then add these commands to /etc/rc.local, if you wanted them to persist through a reboot.

If you're interested in messing around with the lcap program, you can find the source code at

I had actually not been aware of this functionality before Joshua's message, so I went off and did a little research on my own. Turns out there's been a major change to this functionality as of Linux kernel 2.6.25. The upshot is that the global /proc/sys/kernel/cap-bound interface that the lcap program uses has been removed and kernel capabilities are now set on a per-thread basis. However, you can set capability bounding sets on executables in the file system. So if you want to globally remove a capability, the suggested method appears to be to set a capability bound on the init executable (using the setcap command) and have that be inherited by every other process on the system when you next reboot the machine. I have personally not tried this myself.

More info in the capabilities(7) manual page:

Tuesday, September 8, 2009

Episode #59: Lions and Tigers and Shares... Oh Mount!

Ed fires the initial salvo:

I'm sure it happens all the time. A perfectly rational command shell warrior sitting in front of a cmd.exe prompt needs to get a list of mounted file systems to see what each drive letter maps to. Our hero starts to type a given command, and then backs off, firing up the explorer GUI (by running explorer.exe, naturally) and checking out the alphabet soup mapping c: to the hard drive, d: to the DVD, f: to his favorite thumb drive, and z: to his Aunt Zelda's file share (mapped across the VPN, of course). While opting to look at this information in the GUI might be tempting, I think we can all agree that it is tawdry.

So, how can you get this information at the command line? There are a multitude of options for doing so, but I do have my favorite. Before letting the cat out of the bag (ignore the scratching and muffled meows in the background) with my favorite answer, let's visit some of the possibilities.

To get a list of available local shares, you could run:

c:\> net share

Share name Resource Remark

C$ C:\ Default share
IPC$ Remote IPC
ADMIN$ C:\Windows Remote Admin
The command completed successfully.

That's a fine start, but it won't show things like your DVD or thumb drives unless you share them. Also, it leaves out any shares you've mounted across the network.

Let's see... we could get some more local stuff, including DVDs and thumb drives, by running wmic:

c:\> wmic volume list brief
Capacity DriveType FileSystem FreeSpace Label Name
32210153472 3 NTFS 14548586496 * C:\
2893314048 5 UDF 0 SANS560V0809 D:\
16015360000 2 FAT32 8098086912 E:\

That's pretty cool, and even shows us full capacity and free space. But, it does have that annoying "DriveType" column with only an integer to tell us the kind of drive it is. You can look at a variety of sites for the mapping of these numbers to drive types. However... be warned! There are a couple of different mappings depending on the version of Windows you use. On my Vista box, the mapping is:

0 = Unknown
1 = No Root Directory
2 = Removable
3 = Fixed
4 = Network
5 = CD-ROM
6 = RAM Disk

Other versions of Windows lack the "No Root Directory" item, and all the numbers after it shift down by one.
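If you end up scripting around this output a lot, the cross-reference is easy to stash in a lookup table. Just as a sketch of the idea (in bash, since cmd.exe has no associative arrays; the codes are the Vista-era mapping above):

```shell
#!/bin/bash
# Vista-era WMI DriveType codes, per the table above
declare -A drivetype=(
  [0]="Unknown"  [1]="No Root Directory" [2]="Removable"
  [3]="Fixed"    [4]="Network"           [5]="CD-ROM"
  [6]="RAM Disk"
)
echo "${drivetype[5]}"    # prints CD-ROM
```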

Uh... thanks, Windows, but it would be nice to get that info without having to do the cross reference. Plus, we're still missing mounted network shares from this list. Hmmm....

Well, as we discussed in Episode #42 on listing and dropping SMB sessions, to get the list of mounted shares across the network, you could run:

c:\> net use
New connections will be remembered.

Status Local Remote Network

OK Z: \\\c$ Microsoft Windows Network
The command completed successfully.

Gee, that's nice. It shows you the drive letter and what it's connected to. But, you know, it's missing the local stuff.

How can we get it all, in an easy-to-type command and a nice output format? Well, we could rely on the fsutil command:

c:\> fsutil fsinfo drives

Drives: A:\ C:\ D:\ E:\ Z:\

Ahhh... nicer. At least we've got them all now. But, you know, having just the letters kinda stinks. What the heck do they actually map to? We could check individual letter mappings by running:

c:\> fsutil fsinfo drivetype z:
z: - Remote/Network Drive

But, you know, this is kind of an ugly dead end. I mean, we could write a loop around this to pull out the info we want, but it's going to be a command that no reasonable person would just type on a whim, plus it's not going to have enough detail for us.

To get what we really want, let's go back to our good friend wmic, the Windows Management Instrumentation Command line tool. Instead of the "wmic volume" alias we checked out above, we'll focus on the very useful "wmic logicaldisk" alias:

c:\> wmic logicaldisk list brief
DeviceID DriveType FreeSpace ProviderName Size VolumeName
A: 2
C: 3 14548656128 32210153472 *
D: 5 0 2893314048 SANS560V0809
E: 2 8098086912 16015360000
Z: 4 3144540160 \\\c$ 4285337600

Ahh... almost there. The DriveType crap still lingers, but this one is promising. We can check out the available attributes for logicaldisk by running:

c:\> wmic logicaldisk get /?

Digging around there, we can see that name, description, and providername (which shows mounted network shares) could be useful. Let's make a custom query for them:

c:\> wmic logicaldisk get name,description,providername
Description Name ProviderName
3 1/2 Inch Floppy Drive A:
Local Fixed Disk C:
CD-ROM Disc D:
Removable Disk E:
Network Connection Z: \\\c$

Soooo close. It kinda stinks having the drive letter in the middle of our output, doncha think? It should start with that. But, a frustrating fact about wmic is that its output columns show up in alphabetical order by attribute name. The "D" in description comes before the "N" in name, so we see the description first. Try reversing the order in which you request the attributes in the get clause, and you will see that they always come out the same way. Bummer.... We could switch those columns around with a FOR loop and some hideous parsing, but no one would ever want to type that command.

But, there is a solution. It turns out that the drive letter is not just stored in the "name" attribute, but is also located in the "caption" attribute. And, my friends, I don't have to remind you that "C" comes before "D" in the alphabet, do I? So, yes, we can trick Windows into giving us exactly what we want by running:

c:\> wmic logicaldisk get caption,description,providername
Caption Description ProviderName
A: 3 1/2 Inch Floppy Drive
C: Local Fixed Disk
D: CD-ROM Disc
E: Removable Disk
Z: Network Connection \\\c$

So, there you have it. Reasonable, typable, beautiful. Life is good.

Hal responds:

When Unix folks want to answer the "What's mounted?" question, most of them reach for the df command first:

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/elk-root 1008M 656M 302M 69% /
tmpfs 1.9G 0 1.9G 0% /lib/init/rw
varrun 1.9G 156K 1.9G 1% /var/run
varlock 1.9G 0 1.9G 0% /var/lock
udev 1.9G 3.0M 1.9G 1% /dev
tmpfs 1.9G 324K 1.9G 1% /dev/shm
lrm 1.9G 2.4M 1.9G 1% /lib/modules/2.6.27-11-generic/volatile
/dev/sda1 236M 60M 165M 27% /boot
/dev/mapper/elk-home 130G 99G 25G 81% /home
/dev/mapper/elk-usr 7.9G 3.3G 4.2G 44% /usr
/dev/mapper/elk-var 4.0G 743M 3.1G 20% /var
/dev/scd0 43M 43M 0 100% /media/cdrom0
/dev/sdd1 150G 38G 112G 26% /media/LACIE
//server/hal 599G 148G 421G 26% /home/hal/data

Frankly, though, I find that the mount command actually provides a lot more useful data about the mounted file systems than just the amount of available space that df shows:

# mount
/dev/mapper/elk-root on / type ext3 (rw,relatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,nosuid,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
lrm on /lib/modules/2.6.27-11-generic/volatile type tmpfs (rw,mode=755)
none on /proc/bus/usb type usbfs (rw,devgid=46,devmode=664)
/dev/sda1 on /boot type ext3 (rw,relatime)
/dev/mapper/elk-home on /home type ext3 (rw,relatime)
/dev/mapper/elk-usr on /usr type ext3 (rw,relatime)
/dev/mapper/elk-var on /var type ext3 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw)
none on /proc/fs/vmblock/mountPoint type vmblock (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
gvfs-fuse-daemon on /home/hal/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=hal)
/dev/scd0 on /media/cdrom0 type iso9660 (ro,nosuid,nodev,utf8,user=hal)
/dev/sdd1 on /media/LACIE type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
//server/hal on /home/hal/data type cifs (rw,mand)

Both commands show you the various physical and logical file systems on the machine, plus information on shares (like //server/hal) and removable media devices (like /dev/scd0). But the extra file system type information (ext3, iso9660, cifs, etc) and mount options data that the mount command provides is typically more useful to auditors and forensic examiners because it provides a better picture of how the devices are actually being used.
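There's also a middle ground on Linux: GNU df's -T option adds the file system type column to the space report, though you still need mount to see the mount options:

```shell
# df with -T shows the file system type (ext3, tmpfs, cifs, ...)
# alongside the usual space numbers
df -hT
```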

The one thing that's missing from both the df and mount output is information about your swap areas. You need to use the swapon command to get at this information:

# swapon -s
Filename Type Size Used Priority
/dev/mapper/elk-swap partition 4194296 5488 -1
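On Linux, swapon -s is essentially a formatted view of /proc/swaps, so the same data is available even if you can't run the swapon binary for some reason:

```shell
# The kernel's own view of active swap areas, same fields as swapon -s
cat /proc/swaps
```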

If you're running on hardware with a PC BIOS, then your OS probably also includes the fdisk command:

# fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xed1f86f7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 31 248976 83 Linux
/dev/sda2 32 19457 156039345 83 Linux

Disk /dev/sdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xb90dbb65

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 19457 156288321 7 HPFS/NTFS

Aside from giving you physical geometry information about how your disks are laid out, fdisk might also show you file systems that are not currently mounted.

Astute readers might have noticed a discrepancy between the output of the mount and fdisk commands. Let me add some command line options to each command to help highlight the difference:

# fdisk -l /dev/sda

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xed1f86f7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 31 248976 83 Linux
/dev/sda2 32 19457 156039345 83 Linux
# mount -t ext3
/dev/mapper/elk-root on / type ext3 (rw,relatime,errors=remount-ro)
/dev/sda1 on /boot type ext3 (rw,relatime)
/dev/mapper/elk-home on /home type ext3 (rw,relatime)
/dev/mapper/elk-usr on /usr type ext3 (rw,relatime)
/dev/mapper/elk-var on /var type ext3 (rw,relatime)

We see in the fdisk output that /dev/sda is split into two partitions, and we can see in the mount output that /dev/sda1 is mounted on /boot. But what are all those /dev/mapper/elk-* devices and how do they map into the apparently unused /dev/sda2 partition?

What you're seeing here is typical of a system that's using the Linux Logical Volume Manager (LVM). LVM is a mechanism for creating "soft partitions" that can be resized at will, and it also ties in with a bunch of other functionality, some of which we'll encounter shortly. Other flavors of Unix will typically have something similar, though the exact implementation may vary. The high-level concept for Linux LVM is that your file systems each live inside of a "logical volume" (LV). A set of logical volumes makes up a "volume group" (VG), and each VG is carved out of one or more "physical volumes" (PVs). You can think of a PV as an actual physical partition on disk.
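Incidentally, you can often read the VG/LV pairing straight out of the device name: device-mapper names are built as "VG-LV" (with any real hyphen inside a name doubled to "--"). A quick sketch of splitting the simple case:

```shell
# Split a simple VG-LV device-mapper name into its parts
# (ignores the doubled-hyphen escaping used for names containing "-")
name=elk-home                 # as seen under /dev/mapper
vg=${name%%-*}                # everything before the first hyphen
lv=${name#*-}                 # everything after it
echo "VG=$vg LV=$lv"          # prints VG=elk LV=home
```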

To take an example from the output above, the /home file system lives inside the LV /dev/mapper/elk-home. You can use the lvdisplay and vgdisplay commands to get information about the LV and VG, and these commands would show you that "elk-home" and all the other LVs on the system are part of the VG "elk". But in order to figure out the mapping between the VG and the PV on disk, you need to use the pvdisplay command:

# pvdisplay
--- Physical volume ---
PV Name /dev/mapper/sda2_crypt
VG Name elk
PV Size 148.81 GB / not usable 1.17 MB
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 38095
Free PE 0
Allocated PE 38095

You'll note that the "PV Name" lists a device name that doesn't look like a physical partition like /dev/sda2. That's because in this case my PV is actually an encrypted volume that was created using the Linux disk encryption utilities. That means we have to go through one more level of indirection to get back to the actual physical disk partition info:

# cryptsetup status sda2_crypt
/dev/mapper/sda2_crypt is active:
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/sda2
offset: 2056 sectors
size: 312076634 sectors
mode: read/write

Whew! So let's recap. /home is an ext3 file system inside of /dev/mapper/elk-home, which we learn from the mount command. lvdisplay would tell us which VG this volume was part of, and vgdisplay would give us more details about the "elk" VG itself. pvdisplay normally gives us the mappings between the VGs and the physical partitions, but in this case our PV is actually the decrypted view of an encrypted partition. So we need to use cryptsetup to dump the information about the encrypted volume, including the actual physical device name. That's a lot of layers, but it's really not that awful to deal with in practice.

Tuesday, September 1, 2009

Episode #58: That's Pretty Random

Hal has too much time on his hands:

So there I was trolling the CommandLineFu blog looking for an idea for this week's Episode, and I came across this posting by user "curiousstranger" with an idea for generating a random 8-digit number:

jot -s '' -r -n 8 0 9

As useful as the jot command is in this situation, it's not a common command on most Unix systems I encounter. So that got me thinking how I'd answer this question within the rules of our blog.

My first notion was to just use the special $RANDOM variable to spit out an 8-digit number. But $RANDOM only produces values up to 32K. I could use $RANDOM in a loop to produce the individual digits I need, just like curiousstranger does with the jot command:

for ((i=0; $i<8; i++)); do echo -n $(( $RANDOM % 10 )); done

Actually the fact that the leading digit might be zero and the lack of a trailing newline are bugging me, so let's change that to:

echo -n $(( $RANDOM % 9 + 1)); \
for ((i=0; $i<7; i++)); do echo -n $(( $RANDOM % 10 )); done; \
echo

Here's a different loop idea that uses repeated multiplication so that we do fewer iterations:

for (( i=1; i<10000000; i=$(($i * $RANDOM)) )); do :; done; \
echo $(( $i % 100000000 ))

Notice that we're multiplying a series of $RANDOM values together until we get at least an 8-digit value. Then we take the result modulo 100000000 to keep just the last 8 digits. (One caveat: if $RANDOM ever comes up zero, $i gets stuck at zero and the loop never terminates, so consider this one more of a curiosity.)

But these loops are all pretty ugly, and anyway I was feeling bad about hitting Ed with the "no loops" requirement back in Episode #56. So I started thinking about ways that I could accomplish this without a loop, and also about ways that I could generate sequences of any arbitrary length.

Now most modern Unix systems have a /dev/urandom device for generating pseudo-random data, but I needed to figure out a way to convert the binary data coming out of /dev/urandom into decimal numbers. And then it hit me: the "od" command. "od" stands for "octal dump", but it's really a very flexible binary data dumper that can output a wide variety of formats:

$ od -A n -N 16 -t u8 /dev/urandom
5073535022611155147 14542989994974172695

Here I'm reading the first 16 bytes ("-N 16") of data from /dev/urandom and formatting the output as 8-byte unsigned integers ("-t u8"). The "-A n" flag suppresses the byte offset markers that would normally appear in the first column.
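The -t option takes other widths too. For instance, formatting the same 16 bytes as 4-byte unsigned ints yields four smaller numbers instead of two big ones:

```shell
# Same 16 bytes of random data, dumped as four-byte unsigned ints
od -A n -N 16 -t u4 /dev/urandom
```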

Neat! All I need to do is remove the whitespace and chop off the first 8 digits, right?

$ od -A n -N 16 -t u8 /dev/urandom | sed 's/ //g' | cut -c1-8

Experimenting with this command a little bit, it looked to me like the distribution of random numbers wasn't really even: there seemed to be an awful lot of numbers starting with 1 coming up. Since the maximum 64-bit unsigned value is 18,446,744,073,709,551,615 this isn't too surprising-- roughly half the random numbers we generate are going to start with 1. You can confirm this with a little loop:

$ for ((i=0; $i<1000; i++)); do \
od -A n -N 16 -t u8 /dev/urandom | sed 's/ //g' | cut -c1; \
done | sort | uniq -c

512 1
65 2
61 3
53 4
63 5
62 6
64 7
61 8
59 9

Here I'm using od to generate the random digit strings but only pulling off the first digit. I do that 1000 times in a loop and then use "sort | uniq -c" to count the number of times each digit appears in the output. As you can see, the number 1 does indeed account for roughly half the output.

To fix this problem, I decided to simply throw away the first digit of each number. Since I still don't want any leading zeroes in my output, I'm also going to throw away any zeroes after the first digit. This just means a slightly more complicated sed expression:

$ od -A n -N 16 -t u8 /dev/urandom | sed -r 's/ +[0-9]0*//g' | cut -c1-8

If you run a test on this data, you'll find it yields a nice, even distribution of leading digits.

Now this method will give us a fairly long string of digits-- up to at least 30 digits or so. But what if we wanted a really long string of numbers? The answer is just to suck more data out of /dev/urandom:

$ od -A n -N 64 -t u8 /dev/urandom | sed -r 's/ +[0-9]0*//g'

As you can see, od puts each 16 bytes of data on its own line. So to make this one continuous stream of digits we need to remove the newlines:

$ od -A n -N 64 -t u8 /dev/urandom | tr \\n ' ' | sed -r 's/ +[0-9]0*//g'

I have to admit that I was feeling pretty good about myself at this point. By changing the "-N" parameter we can suck any amount of data we need out of /dev/urandom and produce an arbitrarily long string of random digits.

Then I went back to the CommandLineFu blog and noticed the follow-up comment by user "bubo", who shows us a much simpler method:

head /dev/urandom | tr -dc 0-9 | cut -c1-8

In the tr command, "-c 0-9" means "take the complement of the set 0-9"-- in other words, the set of everything that is not a digit. The "-d" option means delete all characters in this set from the output. So basically we're sucking data from /dev/urandom and throwing out everything that's not a digit. Much tighter than my od solution. Good call, bubo!
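To see the delete-complement behavior in isolation:

```shell
# Everything outside the set 0-9 gets deleted, including letters,
# punctuation, and even the newline
printf 'a1b2c3!\n' | tr -dc 0-9    # prints 123
```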

Now what's that I hear? Ah yes, that's Ed's soul making the sound of ultimate suffering. Don't worry, big guy, I'll let you use loops for this one. I'm pretty sure Windows doesn't offer anything close to our /dev/urandom solution.

Update from a Loyal Reader: Jeff Haemer, who apparently has even more time on his hands than I do, came up with yet another solution for this problem that uses only built-in shell math operators. You can read about his solution in this blog post. Mmmm, tasty!

Ed responds:
Wow, Hal... you really do have too much time on your hands, don't ya?

And, yes, the sound of ultimate suffering started about the time you dumped your loops and went, well, loopy with /dev/urandom. Still, it was cool stuff, and I'll take you up on your reprieve from the loop embargo.

I frequently worry about writing shell fu that uses random numbers for fear that some of my shell gyrations will lower the entropy of an already-questionable entropy source. If the distribution of pseudo-random digits isn't quite random enough, and I compound that by using it multiple times, my result will be less random. Thus, I wouldn't use these solutions for industrial-grade pseudo-randomness, but they'll pass for casual coin flippers.

As I discussed in Episode #49, you can generate a random number between 0 and 32767 using the %random% variable, and then apply modulo arithmetic (via the % operator) to get it between 0 and the number you want. Thus, we can create a single random digit between 0 and 9 using:

C:\> set /a %random% % 10

Now, Hal wants 8 of these bad boys, which we can do with a FOR loop thusly:

C:\> cmd.exe /v:on /c "for /L %i in (1,1,8) do @set /a !random! % 10"

At first, I thought that this would be harder, because echo'ing output always adds an annoying extra Carriage Return / Linefeed (CRLF) at the end. To see what I mean, try running:

C:\> echo hello & echo hello

But, in my random digit generator, I don't actually echo the output here. I just use "set /a" to do the math, which does display its result *without the extra CRLF*. That's nice. Makes my command easier. If you need more insight into the cmd.exe /v:on stuff up front, check out Episode #46, which describes delayed variable expansion.

While my solution above is simple and readily extensible to more digits, it does have some problems. The biggest problem is that it just displays the number, but doesn't set it in a variable. In fact, my loop never knows what the full random number it generates is, as it just steps through each unrelated digit.

A solution that actually puts the number in a variable is more useful for us, because then we can do further math with our random number, strip off leading zero digits, and use it in a script.

I can think of a whole bunch of ways to put our results in a variable for further processing. One that maintains our existing logic is:

C:\> for /f %j in ('cmd.exe /v:on /c "for /L %i in (1,1,8) do 
@set /a !random! % 10"') do @echo %j

Here, I'm just using a for /f loop to extract the output of our previous command into the iterator variable %j.

Another rather different approach that puts our value into a variable involves generating each digit and appending it to a string:

C:\> cmd.exe /v:on /c "set value= & (for /L %i in (1,1,8) do 
@set /a digit = !random! % 10 > nul & set value=!digit!!value! & echo !value!)"

Here, after turning on delayed variable expansion, I clear the variable called "value" by simply setting it to nothing with "set value=". Then, I generate each digit using %random% % 10. Finally, I build my string digit by digit, setting value to the newly generated digit followed by the old value. For fun, I echo it out at each iteration so we can see the number being built. I put an extra set of parens () around my FOR loop to indicate where it ends, because after that close paren, you can put in more logic.

The nice thing about this approach is that you can now use !value! in follow-up logic and commands to do stuff with it. Stuff like what? Well, how about this?

C:\> cmd.exe /v:on /c "set value= & (for /L %i in (1,1,8) do 
@set /a digit = !random! % 10 > nul & set value=!digit!!value!
& echo !value!) & echo. & echo. & set /a !value! + 10"


Good! So, we can gen numbers and then do math with them. Uh... but we have a problem. Let's try running this again to see what might happen:

C:\> cmd.exe /v:on /c "set value= & (for /L %i in (1,1,8) do 
@set /a digit = !random! % 10 > nul & set value=!digit!!value!
& echo !value!) & echo. & echo. & set /a !value! + 10"


What? My sum at the end is waaaaay wrong. What happened? Well, as we discussed in Episode #25 on shell math, if you use "set /a" to do math on a number, and your number has a leading zero, the shell interprets it as frickin' octal. Not just octal, my friends, but frickin' octal. What's the difference? Octal is a fun and nice way to refer to numbers. Frickin' octal, on the other hand, happens when your shell starts using octal and you don't want it to.

So, what can we do here? Well, we can use a variant of the kludge I devised in Episode #56 for dealing with leading zeros. We can put a digit in front of them, and then simply subtract that digit at the end. Here goes:

C:\> cmd.exe /v:on /c "set value= & (for /L %i in (1,1,8) do
@set /a digit = !random! % 10 > nul & set value=!digit!!value! & echo !value!)
& echo. & echo. & echo !value! & set interim=1!value! & echo !interim!
& set /a result=!interim! - 100000000"


I build my random number as before. Then, I build an interim result using string operations to prepend a 1 in front of it. Then, I subtract 100000000 at the end. Now, the variable called result has what I'm looking for. This approach also has the interesting side effect of removing leading zeros from my random number because of the math that I do. Also, thankfully, we can add and remove 100000000 without tripping past our signed 32-bit integer limit here of around 2 billion. If you need more than 8 digits in your random number, I suggest you start appending them together.
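For the Unix folks following along at home, bash arithmetic has exactly the same octal trap, and the same prepend-and-subtract kludge works there too:

```shell
# A leading zero makes shell arithmetic read the number as octal...
echo $(( 010 ))                    # prints 8, not 10
# ...so prefix a 1 and subtract it back out, just like the cmd.exe trick
value=00123456
echo $(( 1$value - 100000000 ))    # prints 123456
```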

By the way, I included many echo's of my output to help make this more understandable. I've cut those down in the following command.

C:\> cmd.exe /v:on /c "set value= & (for /L %i in (1,1,8) do 
@set /a digit = !random! % 10 > nul & set value=!digit!!value!)
& set interim=1!value! >nul & set /a result=!interim! - 100000000"

And that one is my final answer, Regis.

Addendum from Ed: In my SANS class yesterday, an attendee asked how he could fill the screen with Matrix-like spew. Based on this episode, I came up with:

C:\> cmd.exe /v:on /c "for /L %i in (1,0,2) do @set /a !random!"