Tuesday, July 28, 2009

Episode #53: The Final Countdown

Ed starts:

My 40th birthday is coming up this Tuesday (the day that this episode will be posted), so I've been thinking about the passage of time a lot lately, with countdowns, hour glasses, and ticking clocks swirling through my thoughts. This all came to a head a couple of days ago, when I received an e-mail from a buddy of mine who runs capture the flag games. He wanted a countdown timer for his games, something to spice up players' experience in the game. But, instead of using some lame clock or stopwatch app, he asked me to provide him some command-line kung fu to do the trick. I came up with a solution that he liked, and it includes some fun twists and turns that I thought our readers here may find useful or at least enjoyable.

I relied heavily on FOR loops in the following command:

C:\> for /l %m in (1,-1,0) do @for /l %s in (59,-1,0) do @echo %m minutes %s seconds
LEFT & ping -n 2 localhost > nul

1 minutes 59 seconds LEFT
1 minutes 58 seconds LEFT
1 minutes 57 seconds LEFT
1 minutes 56 seconds LEFT
1 minutes 55 seconds LEFT

Here, I've made a countdown timer that will run for two minutes (starting at minute 1, and counting back 60 seconds from 59 to 0, then going to minute zero and counting back the same way). I start off my minute variable (%m), running it from 1 down to 0 in steps of -1. If you want a 10 minute counter, replace that 1 with a 9. My second variable (%s) runs from 59 down to 0, again in steps of -1. At each iteration through the loop, I print out how much time is left. Finally, I ping myself twice, which takes about* a second (the first ping happens immediately, the second happens about* 1 second later).

That was my starting point. But, I wanted to add some flair, adding a little audio and popping up a message on the screen at the end of each minute. So, I added:

C:\> (for /l %m in (1,-1,0) do @(for /l %s in (59,-1,0) do @(echo ONLY %m minutes %s seconds
LEFT & ping -n 2 localhost > nul)) & start /max cmd.exe /c "echo ^G %m MINUTE^(s^)
LEFT! & ping -n 6 localhost > nul") & echo ^G^G^G^G^G^G^G^G^G

Here, as each minute expires, I'm using the start command to run a program in a separate window, maximized on the screen with the /max option. (Running in a separate window is start's default behavior; the /b option makes start run a program in the background of the same window, as we discussed in Episode #23 on job control.) The start command allows me to kick off something else while my main loop keeps running, ticking off seconds into the next minute.

The program that the start command will launch is cmd.exe, which I'm asking to execute a command with the /c flag. The command run by cmd.exe will echo a CTRL-G, which makes the system beep, and then echoes the number of minutes left. We then wait for 5 seconds (by pinging localhost 6 times). Because I started the cmd.exe with a /c option, after its command finishes (5 seconds later), the window will disappear. The location of all those parens is vitally important here, so that we're properly grouping our commands together to notch off time.

To add a little more flair, I appended several CTRL-Gs after the timer is done to make the expiration of the full time more audible.

* OK.... yes, you are right. The countdown timer I've described here isn't super accurate, because the 1-second rule on pings is an approximation. Also, some time will be consumed by the commands gluing this all together, making this stopwatch slower than it should be. And if the system is heavily loaded, that'll slow things down even more. But, to a first approximation, we've got a stopwatch here. For more accuracy, you could write a script that relies on the %time% variable we discussed in Episode #49.

Time for Hal:

Emulating Ed's loop is straightforward. I'll even add a command at the end to pop up a window when time runs out:

$ for ((m=1; $m >= 0; m--)); do for ((s=59; $s >= 0; s--)); do \
echo $m minutes $s seconds to go; sleep 1; done; done; \
xterm -e 'echo TIME IS UP!; bash'

Notice that I'm executing bash in the xterm after the echo command. Spawning an interactive shell here keeps the xterm window from closing as soon as it echoes "TIME IS UP!".

But Ed's example got me thinking about ways to have a more accurate clock. One idea that occurred to me was to just use watch on the date command:

$ watch -dt -n 1 date

This will give you a clock that updates every second, with some extra highlighting to show the positions in the time string updating (that's the "-d" option). But this isn't a count down timer, it's a "count up" timer.
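Hal's sleep-based loop (like Ed's ping trick) drifts a little on every pass. One drift-free alternative, sketched below in bash, is to compute the time remaining from an absolute end timestamp on each iteration, so per-loop overhead never accumulates. (The countdown helper name is my own invention, not something from the episode.)

```shell
# Drift-free countdown: recompute the remainder from the clock each pass,
# instead of trusting that each iteration took exactly one second.
countdown() {
    local end=$(( $(date +%s) + $1 ))   # absolute end time, epoch seconds
    local rem
    while rem=$(( end - $(date +%s) )); [ "$rem" -gt 0 ]; do
        printf '%d minutes %d seconds LEFT\n' $(( rem / 60 )) $(( rem % 60 ))
        sleep 1
    done
    echo 'TIME IS UP!'
}
```

Running "countdown 120" reproduces Ed's two-minute timer, but stays honest even on a loaded system.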

Another idea I had was to use at or cron to pop up the warnings (see Episode #50 for more detail on at and cron). The only problem is that both at and cron are limited to 1 minute granularity, so they don't work really well as a count down timer. But if you know your capture the flag session is supposed to end at noon, you could always do something like:

$ echo xterm -display $DISPLAY -e \'echo TIME IS UP\!\; bash\' | at noon

Notice I had to use a bunch of backwhacks so that the echo command passes a syntactically correct command into at. More interestingly, I have to explicitly declare my local $DISPLAY in the xterm command. While at is normally careful to preserve your environment variable settings, it apparently skips $DISPLAY intentionally-- probably because there's no guarantee you'll actually be on that screen when your at job finally executes.

Tuesday, July 21, 2009

Episode #52: Prompts & Pushing It

Ed goes:

I was reading through our recent episodes the other day (a wonderful pastime that I encourage everyone to take up). In Episode #49, Hal mentioned a nice option for incorporating the current time into the command prompt, a useful feature for forensics folks who want to record the time they executed each command, as well as for general sysadmins and testers who want to see how long given activities take. In Episode #38, I mentioned how we could change our prompt by setting the environment variable called "prompt", but I didn't give many options there. Let's explore these options in more detail, starting with including the time in our prompt a la Hal.

First off, instead of using the "set prompt=[whatever]" command to set our prompt to various values, we can alternatively use the prompt command to do so. The advantage of the latter approach is that it allows us to see all the glorious options we can use in setting our prompt:

C:\> prompt /?

By default, we've got $P for the path and $G for the greater than sign. To mimic Hal's prompt from Episode #49, we could run:

C:\> prompt $C$T$F$G
( 8:02:19.57)> dir
That gives us a prompt of open paren ($C) followed by the time ($T) followed by close paren ($F) followed by a greater than sign.

For forensics guys, we may want to include the date as well, and remove the parens:

C:\> prompt $D$S$T$G
Mon 07/20/2009 8:04:36.85>
Note that the $S is a space.

Now, these changes are just temporary, applying only to the current cmd.exe and any new cmd.exe processes it starts. To make the changes permanent, we need to alter the PROMPT environment variable in the registry:

Mon 07/20/2009  8:14:05.60>reg add "hklm\system\currentcontrolset\control\session manager\environment"
/v prompt /t reg_expand_sz /d $D$S$T$G

The operation completed successfully.
After a reboot, your prompt will be changed for all users.

There is another interesting option we can add to our prompt via the $+ notation. This one will prepend our command prompt with a single plus sign for each directory we've pushed onto our directory stack via the pushd command. Wait a sec... before we get ahead of ourselves, let's look at the pushd and popd commands a bit.

When working in a given directory, sometimes you need to temporarily change into another directory or two, do a little work there, and then pop back into the original directory you were working in. The pushd and popd commands were created to help make such transitions smoother. When changing directories, instead of using the familiar cd command, you could run "pushd [dir]" as in:

Mon 07/20/2009  8:18:02.21> prompt $P$G
C:\Users\Ed> pushd c:\windows\system32
This command will store your current directory in a stack in memory, and then change you to the other directory (in the case above, c:\Users\Ed will be stored on a stack, and you will change to c:\windows\system32). You can do stuff in that other directory, and even change to other directories beyond that. But, when your work is done there, you can go back to your original directory by simply running:

C:\Windows\System32> popd
While you can push a bunch of directories on this directory stack and then pop them off in a LIFO operation, I find the pushd/popd commands most useful for just storing a single directory I know I'll have to pop back into in the near future, so I find myself often running:

C:\[wherever_I_am_working]> pushd .
Then, I do a little more work, and eventually change directories to where I need to work temporarily. When I'm ready to go back to where I was, I can simply popd. This technique is very helpful given that cmd.exe doesn't have the "cd -" option found in bash so that I can simply change into a previous directory I was in earlier.
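For comparison, here's the bash "cd -" behavior being emulated: it swaps you back to $OLDPWD and prints the directory it lands in. (A contrived sketch; the directories are arbitrary.)

```shell
# "cd -" returns to the previous directory ($OLDPWD) and prints it.
cd /tmp
cd /etc
cd -        # prints the directory we returned to: /tmp
pwd         # confirms we're back in /tmp
```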

With that background on pushd and popd under our belts, we can now see what the $+ does in our prompt -- it prepends one plus sign for each directory we've pushed.

C:\> prompt $+$P$G
C:\> pushd c:\windows\system32
+C:\Windows\System32> pushd c:\Users\
++C:\Users> pushd c:\temp
+++C:\temp> popd
++C:\Users> popd
+C:\Windows\System32> popd
The little pluses can help you remember how many things you have pushed on your directory stack.

And, if you wanted to get really elaborate and simulate the behavior of "cd -" by implementing some simple scripts to use in place of cd, you could do the following:

C:\> echo @pushd %1 > c:\windows\system32\cdd.bat
C:\> echo @popd > c:\windows\system32\cd-.bat

Now, instead of using "cd" to change directories, you could always run "cdd" to do so (I use that last d to remind me that it's pushing the directory onto the directory stack). Your new cdd command will work just like the old one, but it will remember the directories you've changed from, pushing them on the directory stack. Then, to go back to where you were before, you could run "cd-" (no space). It looks like this in action:

C:\> cdd c:\users\ed
+C:\Users\Ed> cd-

Whither Ed goest, Hal will go!

Ed, you say you want the date in the prompt in addition to the time? So be it:

$ export PS1='\D{%D %T}> '
07/20/09 10:02:50>

Unlike "\t" for the time code, there's no built-in escape sequence for including the date in the prompt. But bash does recognize "\D{...}" to insert an arbitrary set of strftime(3) escape sequences. This is very flexible, though not quite as terse. I could even add the day name to fully emulate Ed's Windows madness:
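Since \D{...} accepts arbitrary strftime(3) sequences, one handy trick (my suggestion, not from the post) is to preview a candidate format with date(1), which understands the same sequences, before committing it to $PS1:

```shell
# date(1) and bash's \D{...} prompt escape share strftime(3) sequences,
# so you can preview a prompt format before installing it.
fmt='%a %D %T'
date +"$fmt"               # e.g. Mon 07/20/09 10:08:16
export PS1="\\D{$fmt}> "   # same format, now in the prompt
```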

07/20/09 10:02:50> export PS1='\D{%a %D %T}> '
Mon 07/20/09 10:08:16>

You'll also notice that Unix allows us to specify spaces as actual spaces rather than "$S". What will those wacky Unix folks think of next?

Windows totally stole the pushd/popd idea from the Unix shell, and it works pretty much the same on both platforms:

$ export PS1='[\w]$ '
[~]$ pushd /tmp
/tmp ~
[/tmp]$ pushd /usr/local/bin
/usr/local/bin /tmp ~
[/usr/local/bin]$ popd
/tmp ~
[/tmp]$ popd

You'll notice that bash prints the directory stack after each pushd/popd operation, sort of as a reminder of where you are. You can also use the dirs command to dump the directory stack (in several different formats) or even clear the stack entirely:

[~]$ pushd /tmp
/tmp ~
[/tmp]$ pushd /usr/local/bin
/usr/local/bin /tmp ~
[/usr/local/bin]$ dirs # default
/usr/local/bin /tmp ~
[/usr/local/bin]$ dirs -l # expand ~ to full pathname
/usr/local/bin /tmp /home/hal
[/usr/local/bin]$ dirs -p # one dir per line
/usr/local/bin
/tmp
~
[/usr/local/bin]$ dirs -c # clear the stack
[/usr/local/bin]$ dirs
/usr/local/bin

You can even use popd to selectively remove elements from the directory stack:

[/opt]$ dirs
/opt /dev /usr/local/bin /tmp /usr /var /etc ~
[/opt]$ popd +2
/opt /dev /tmp /usr /var /etc ~

Notice that the elements of the directory list are numbered starting with zero (in C, arrays are numbered starting from zero and this convention was carried through to most Unix utilities), so "popd +2" removes the third element counting from the left. "popd -2" would remove the third element counting in from the right.
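A quick sketch to verify the counting (directory names chosen arbitrarily); "popd -0" removes the rightmost, i.e. oldest, stack element:

```shell
# Build a three-deep stack, then prune from the right-hand end.
cd /tmp
pushd /usr > /dev/null
pushd /etc > /dev/null
dirs                 # /etc /usr /tmp
popd -0 > /dev/null  # drop the rightmost element (/tmp)
dirs                 # /etc /usr -- the cwd is untouched
```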

Unlike Windows, we don't have to create a script file to make a cdd command on our Unix systems. We can just make an alias:

[~]$ alias cdd=pushd
[~]$ cdd /tmp
/tmp ~

What is interesting is that there's no built-in Unix equivalent for the Windows "$+" prompting functionality. But since the bash shell does command substitution on $PS1 (if necessary) each time it emits a shell prompt, we can hack our own solution together:

[~]$ export PS1='`for ((i=1; $i < ${#DIRSTACK[*]}; i++)); do echo -n +; done`[\w]$ '
[~]$ pushd /etc
/etc ~
+[/etc]$ pushd /tmp
/tmp /etc ~
++[/tmp]$ pushd /usr
/usr /tmp /etc ~
+++[/usr]$ popd
/tmp /etc ~
++[/tmp]$ popd
/etc ~
+[/etc]$ popd

As you can see, I've added an expression in backticks at the front of the declaration for $PS1, so whatever text this sequence of commands results in will be incorporated into the shell prompt. Inside the backticks, I'm using a for loop to produce the necessary plus signs. To terminate the loop, I'm comparing against "${#DIRSTACK[*]}" which translates to "the number of elements in the $DIRSTACK array variable". $DIRSTACK is a highly magical environment variable that stores the elements of the directory stack that pushd and popd use.

You'll also notice that I'm starting the loop at offset 1 rather than offset 0. As you can see in Ed's Windows example, the "$+" element only counts the number of directories that have been explicitly pushed onto the stack with pushd. However, the bash $DIRSTACK variable always includes the current working directory. So when you pushd for the first time, the number of elements in $DIRSTACK is actually two: your original directory that you started in, plus the new directory you pushd-ed onto the stack. Anyway, I started my loop counter variable at 1 to more closely emulate the Windows behavior.
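You can watch $DIRSTACK doing this directly in a fresh shell:

```shell
# $DIRSTACK always holds the current directory in element 0, so its
# size is (number of pushed directories) + 1.
cd /tmp
echo "${#DIRSTACK[@]}"      # 1 -- nothing pushed yet, just the cwd
pushd /etc > /dev/null
echo "${#DIRSTACK[@]}"      # 2 -- original cwd plus the new directory
echo "${DIRSTACK[0]}"       # /etc -- element 0 tracks the cwd
```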

Tuesday, July 14, 2009

Episode #51: Leapin' Lizards! ARP Stuff!

It's Hal's turn in the Bucket:

Mr. Bucket was pointing out the other day that we haven't had much to say about viewing and manipulating the ARP cache from the command line, and that's a darn shame. So let's remedy that little oversight right now.

The ARP cache is the system's local mapping between the IP addresses and ethernet (MAC) addresses of machines on the same LAN. On Unix systems, you can use "arp -a" to dump the cache or just dump information for a particular host:

$ arp -a
( at 00:04:23:5f:20:98 [ether] on eth0
( at 00:30:48:7b:22:2e [ether] on eth0
$ arp -a gw
( at 00:04:23:5f:20:98 [ether] on eth0

As with most network administration commands on Unix, the "-n" flag suppresses the hostname information, in case you're having name resolution problems that are causing the command to hang:

$ arp -an
? ( at 00:04:23:5f:20:98 [ether] on eth0
? ( at 00:30:48:7b:22:2e [ether] on eth0

Occasionally monitoring your ARP cache can be interesting, because it can alert you when system hardware is changing on your network and also to ARP spoofing attacks. Here's a little bit of shell fu to watch for changes to a particular ARP entry:

$ while :; do c=`arp -a | awk '{print $4}'`; \
[ "X$c" == "X$p" ] && echo "$c (OK)" || echo "$p to $c (CHANGED)"; \
p=$c; sleep 5; done

to 00:04:23:5f:20:98 (CHANGED)
00:04:23:5f:20:98 (OK)
00:04:23:5f:20:98 (OK)

Notice the equality test where I'm putting an "X" in front of the value of each variable ("X$c" == "X$p")? That's a common trick for doing comparisons where one of the values might be null-- as in our case when we first enter the loop and $p is not set yet. If we just wrote, "[ $c == $p ]", the shell would generate a syntax error message during the first pass through the loop because it would interpret our fu as "[ 00:04:23:5f:20:98 == ]" which is clearly bogus. The extra "X" ensures that neither side of the comparison is ever empty.
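Here's the guard demonstrated in isolation (a contrived sketch with a made-up MAC value), producing the same "(CHANGED)" line you see on the first pass of Hal's loop:

```shell
# With p unset or empty, the leading X keeps both sides of the test
# non-empty, so the comparison is always syntactically valid.
p=""
c="00:04:23:5f:20:98"
if [ "X$c" == "X$p" ]; then
    echo "$c (OK)"
else
    echo "$p to $c (CHANGED)"
fi
```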

Actually, the above example is a little bogus (though it includes plenty of tasty shell fu), because there's already a tool called Arpwatch that will monitor your ARP cache for changes-- expected or otherwise. I'll often run this tool on a couple of machines on critical networks, just to look for signs of hanky panky.

If you're running with root privileges, you can also use the arp command to manually manipulate your ARP cache. For example, "arp -d" deletes ARP entries. However, since ARP dynamically relearns the deleted entries, they don't tend to stay deleted for long:

# arp -d deer
# arp -a
( at 00:04:23:5f:20:98 [ether] on eth0
( at <incomplete> on eth0
# ping -c 1 deer >/dev/null
# arp -a
( at 00:04:23:5f:20:98 [ether] on eth0
( at 00:30:48:7b:22:2e [ether] on eth0

The root account can also use "arp -s" to add static ARP assignments to the cache. For example, I was recently debugging a VPN problem for one of my customers and we suspected the Open Source VPN software we were running on our Unix system wasn't properly doing Proxy ARP for the remote clients. So we manually added an ARP entry for the client we were testing with that pointed to the MAC address of our gateway (note that you must do this on the gateway machine that is the owner of the specified MAC address):

# arp -s 00:04:23:5f:20:98 pub
# arp -an
? ( at 00:30:48:7b:22:2e [ether] on eth0
? ( at PERM PUB on eth0

The extra "pub" argument means that the host should "publish" this static ARP entry-- that is, respond with our local MAC address when other hosts on the LAN make ARP requests for that address. In other words, our gateway should perform Proxy ARP for this address.

Since we did not specify the "temp" option in the above command, this static ARP entry will persist until we reboot the machine. If you specify "temp", then the entry will time out as normal when your system flushes its ARP cache. Generally, though, if you're specifying static ARP entries, you're doing it for a reason and you want them to stick around. For example, some highly secure sites will populate static ARP entries for all authorized hosts on their local LANs in order to thwart ARP spoofing attacks (static ARP entries will never be overwritten by dynamic ARP information learned from the network). Of course, the static ARP entries will need to be reloaded every time the system reboots, so you'll need to create a boot script to automatically reload the entries from a configuration file. And you'd need to update the configuration file every time a host was added or a NIC was changed on the LAN. It's a huge hassle and not normally done in practice.
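Such a boot script might look like the sketch below. The config path and the one-"ipaddr macaddr"-pair-per-line file format are my assumptions, not something from the post:

```shell
# Hypothetical boot-time loader for static ARP entries.
# Config file format (assumed): one "ipaddr macaddr" pair per line.
load_static_arp() {
    while read -r ip mac; do
        # Skip blank lines and comments
        case "$ip" in ''|\#*) continue ;; esac
        arp -s "$ip" "$mac"
    done < "$1"
}

# At boot (e.g. from rc.local): load_static_arp /etc/static-arp.conf
```

You'd then re-run the loader from your boot scripts, and edit only the config file when a NIC changes.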

That's about it for ARP on Unix. If memory serves, things aren't too different on Windows, but I bet Ed has some tasty fu for us anyway...

Ed responds to Hal's Bucket Screed:

As Hal anticipated, the arp command on Windows is very similar to the Linux version, but there are a few annoying deltas. To dump the ARP cache, we can run:

C:\> arp -a

Interface: --- 0x10003
  Internet Address      Physical Address      Type
                        00-0c-29-c2-9f-1b     dynamic
                        de-ad-be-ef-00-00     static
                        00-0c-29-a3-2c-8b     dynamic

There are a couple of noteworthy things here: First off, we don't see host names. The Windows arp command works exclusively with IP addresses, not names. Thus, we don't have a -n option, and we cannot enter the names of machines on the command line. If we want to zoom in on an individual machine, we need to enter an IP address, not a name:

C:\> arp -a

Interface: --- 0x10003
  Internet Address      Physical Address      Type
                        de-ad-be-ef-00-00     static

Next, note that we have that "Type" column, which tells us whether an entry is dynamic or static. This is handy, because we can list all static ARP entries using:

C:\> arp -a | find "static"

Next, note that the Windows arp command uses dashes between octets in the MAC address, and not colons. This is annoying, and I often type it wrong before I type it right.

Hal's ARP-monitoring command is fun and useful, so I tried to mimic its functionality almost identically on Windows, coming up with the following command (brace yourselves):

C:\> cmd.exe /v:on /c "for /l %a in (1,0,2) do @for /f "skip=3 tokens=2" %i in
('arp -a') do @(set current=%i & (if X!current!==X!previous!
(echo %i ^(OK^)) else echo !previous! to %i ^(CHANGED^)) & set previous=%i
& ping -n 6 localhost > nul)"

!previous! to 00-0c-29-c2-9f-1b (CHANGED)
00-0c-29-c2-9f-1b (OK)
00-0c-29-c2-9f-1b (OK)
00-0c-29-c2-9f-1b (OK)
00-0c-29-c2-9f-1b (OK)
00-0c-29-c2-9f-1b to 01-02-03-04-05-06 (CHANGED)
01-02-03-04-05-06 (OK)
01-02-03-04-05-06 (OK)
01-02-03-04-05-06 (OK)
To analyze this command, let's work our way from the outside in. I'm invoking a cmd.exe with delayed environment variable expansion so that I can store and compare some variables inside my command. I then have a FOR /L loop that counts from 1 to 2 in steps of zero to keep this puppy running forever. I add a 5-second delay at the end by pinging six times.

Now, we get to the core of the command. I run a FOR /F loop to parse the output of my arp command, skipping 3 lines of output to zoom in on what I want, grabbing the second item (the MAC address), which I place into iterator variable %i. At each step through the loop, I set the variable "current" to my iterator variable so I can compare it against my previous MAC address for that ARP table entry with an IF statement. IF statements don't like to deal with FOR loop iterator variables, so I have to move my %i value into "current" for comparison.

I then compare it with the previous value, using Hal's little X prepend trick so they are never empty strings. If they match, I print out the MAC address (%i) and the "(OK)" just like Hal does. I have to put ^'s in front of my parens so that echo knows to display them. If the current and previous MAC address from that arp entry do not match, I print out that the previous changed to %i followed by "(CHANGED)". Finally, I store the current value of %i as my "previous" variable and then wrap around. It's not elegant, but it gets the job done, and looks very similar to the output and functionality of Hal's command.

Lessee... what other stuff does Hal have for us?

Yup... we have "arp -d" to delete an ARP entry:

C:\> arp -d

And, we also have arp -s to plant a static ARP entry:

C:\> arp -s 01-02-03-04-05-06

Unfortunately, the Windows arp command does not have a "pub" option like the Linux command to implement proxy ARP functionality.

By the way, this whole discussion of the arp command was triggered by Byte Bucket's comment to Hal and me in our discussion of scheduling tasks. Byte (if I can be so informal) mentioned that on Windows, to define a static ARP entry, Microsoft recommends that you schedule a startup task, by saying, "To create permanent static ARP cache entries, place the appropriate arp commands in a batch file and use Scheduled Tasks to run the batch file at startup." This is a good idea if you ever want to hard code your arp tables to avoid ARP cache-poisoning attacks. Of course, in their recommendation, Microsoft never tells you how to actually do this. Well, never fear... Command Line Kung Fu is here!

We start by creating our batch file for the static ARP entry:

C:\> echo arp -s aa-bb-cc-dd-ee-ff > C:\static_arp.bat

You can add more static arp entries into this file if you'd like, echoing in one arp command per line, appending to the file with >>. I'm storing it right in c:\ itself for easy spotting, although you may find that tucking it away somewhere you keep your admin scripts is a less clutterful approach. Then, we make that a startup task using the schtasks command, which we described in our last episode (#50):

C:\> schtasks /create /sc onstart /tr c:\static_arp.bat /tn static_arp

And there you have it! We've seen how to make a little arp-cache watcher in a single command, as well as how to implement static arp entries at system startup. Good topic, Hal!

UPDATE: Diligent reader Curt points out that in Windows Vista and 7, you can add a static ARP entry using the netsh command, with the following syntax:

C:\> netsh interface ipv4 add neighbors "Local Area Connection" 12-34-56-78-9a-bc

And, I'm delighted to report that this will survive a reboot, so no scheduled task startup script is required. However, this neighbors context for netsh only appears in Vista, 2008, and Win7. In XP or 2003, you can go with the schtasks method I outline above. Thanks for the input, Curt!

Tuesday, July 7, 2009

Episode #50: Scheduling Stuff

Ed begins:

The other day, I was going through the 49 episodes of this blog we've developed so far when it occurred to me -- besides an occasional minor mention of the cron daemon by Hal, we haven't done much at all with scheduled tasks! Heaven forbid! And, there are some pretty important gotchas to avoid when scheduling stuff in Windows.

Back in the Mesozoic Era, dinosaurs like me scheduled our jobs on Windows using the "at" command. It was nice and simple, with no bells or whistles. And it's still there, offering a quick and easy way to schedule a job:

C:\> at [\\machine] HH:MM[A|P] [/every:day] "command"

If you don't specify a \\machine, the command is run locally. You need to have admin credentials on the local or remote machine where you're scheduling the job. The job itself will run with system privileges. Some versions of Windows support 24-hour military time, and some do not. But, all versions of Windows support HH:MM followed by a cap-A or cap-P. If you omit the /every: option, the command runs just once at the time you specify. If you do provide a /every:day, you can specify the day as a day of the month (1-31) or a day of the week (monday, tuesday, etc.) Remember, if you schedule something for the 31st day of the month, it won't run in months with fewer days: February, April, June... well, you remember the mnemonics, I'm sure.

The at command by itself will show which jobs were scheduled using the at command:
C:\> at \\PaulsComputer
Status ID   Day          Time        Command Line
-------------------------------------------------------------------
        1   Today        7:00 PM     WriteTenableBlog.bat
        1   Each Th      7:00 PM     RecordPodcast.bat
        2   Each F       3:00 PM     WashHalsCar.bat
        3   Each F       3:00 PM     PickupHalsDrycleaning.bat
        4   Each F       4:00 PM     GiveHalBackrub.bat
        5   Each F       6:00 PM     GoToTherapistSession.bat
        6   Each 28      12:00 PM    PayPsychBill.bat
Note that there is no need to list who the job will run as, because, when scheduled using the at command, it runs with SYSTEM privileges.

See those little ID numbers at the beginning of each task? We can use them to refer to tasks, especially to delete them. A simple "at 1 /del" will delete task 1. To kill all the tasks, and let God sort them out, we could run:

C:\> FOR /F "skip=2" %i in ('at') do @at %i /del

Regular readers of this blog should instantly know what I'm doing here: just running the at command, parsing its output by skipping column headers and ----- to zoom in on the ID number, which I then delete.

I still use the at command for quick and dirty scheduling where I want simple syntax to make something run with system privs. But, a far more flexible way to schedule jobs is to use the more recent schtasks command. This is the way Microsoft wants us to go for modern task scheduling. The syntax includes a myriad of options for creating, deleting, querying, changing, and invoking tasks. The syntax is so complicated, by the way, I haven't been able to actually memorize it all, and I've tried. I won't reproduce all the usage options here (run "schtasks /?" for details), but will instead focus on creating and querying tasks.

To create a task, you could use the following syntax:

C:\> schtasks /create [/s system_name] [/u username] [/p password] [/ru runuser]
[/rp runpassword] /sc schedule /mo modifier /tn taskname /tr taskrun
/st starttime /sd startdate

As with the at command, schtasks schedules locally unless you specify a remote machine with the /s option (oh, and no \\ is used before the system name here). The /u and /ru give some people a bit of trouble. The /u and /p refer to the credentials you want to use for a remote machine to schedule the task. The at command always used your current credentials, while schtasks gives you an option to use other credentials to do the scheduling. When it actually starts, the task itself will run with the /ru and /rp credentials. If you want system credentials, just use "/ru SYSTEM" and leave off the /rp.

The /sc schedule can be minute, hourly, daily, weekly, once, onlogon (when you specify a given user) and much more. The /mo modifier specifies how often you want it to run within that schedule. So, for example, to run every 2 hours, you could use "/sc hourly /mo 2".

The /tn taskname is a name of your choosing. The /tr is the actual command you want to run.

The /st starttime is specified in 24-hour military time. Some versions of Windows will accept it in HH:MM format, but many versions of Windows require HH:MM:SS. I always use the latter to make sure my command works across Windows versions. And, finally, /sd has a date in the form of MM/DD/YYYY.

When run by itself (with no options), schtasks shows scheduled tasks (the same output we get from "schtasks /query").

AND HERE IS A REALLY IMPORTANT POINT! The at command shows only the jobs scheduled via the at command itself. The schtasks command shows all jobs scheduled via schtasks as well as the at command. If you are relying only on the at command to display jobs, you are missing all those tasks scheduled via schtasks. Also, wmic has a job alias. You might think that it would show all jobs, right? Wrong! Like the at command, "wmic job show full" displays only those jobs created using the at command, and does not display jobs created via schtasks. You've been warned!

Both the schtasks and at commands, when they schedule a job, create a file summarizing the job in C:\windows\tasks on Win XP Pro and earlier, and in c:\windows\system32\tasks in Vista, 2008 Server, and Windows 7. The XP-style file is an ugly blob of non-ASCII-printable data. The Vista and later files are considerably less ugly XML. I know you may be tempted to delete tasks by removing these files. I caution you against it. I've found that deleting at-style tasks by removing their files in XP works just fine, but it doesn't remove all traces of at-style tasks in Vista and later. Your best bet is to use "at [n] /del" and "schtasks /delete [options]".

Even though it's got a bazillion-plus options, the schtasks command does not have an option for displaying only those tasks associated with a given user or scheduled to run at a certain frequency or date. But, we can use a little command-line kung fu to tease that information out. The technique is all based on a useful option in "schtasks /query" which allows us to specify the output format with /fo, with options including table, list, or csv. The table and list formats are nice, but csv is especially useful. The /v option gives us verbose output, which holds all of the attributes of our tasks.

With that info, you can create a nice CSV file with all of your tasks to open in a spreadsheet by running:

C:\> schtasks /query /v /fo csv > tasks.csv

C:\> tasks.csv

In your spreadsheet program, you can see the various column names for all the fields. We could then search just for some specific output in the tasks.csv file using findstr for strings or even regex. For example, if you want a list of jobs scheduled to run weekly, you could use:

C:\> schtasks /query /v /fo csv | findstr /i weekly

Or, if you want a list of jobs associated with the SYSTEM user, you could run:

C:\> schtasks /query /v /fo csv | findstr /i system

Using this as your base command, there are a myriad of other options you can search for, including dates and times.

Hal chimes in:

Ed, I'm so stoked you brought up this topic! For the Unix people reading this blog, I want to see a show of hands from people who knew that Unix had its own "at" command. OK, for the three of you who have your hands raised, put your hands down unless you've actually used an "at" job within the last year. Yeah, I thought so.

It's a real shame that the "at" command has dropped so far out of the "common knowledge" for Unix folks because at jobs are really useful. At their most basic level, you can think of an at job as a one-shot cron job:

# echo find /tmp -type f -mtime +7 -exec rm {} \\\; | at 00:00
warning: commands will be executed using /bin/sh
job 1 at Tue Jul 7 00:00:00 2009

You feed in the commands you want executed on the standard input, or you can use "at -f" to read in commands from a file.
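By the way, you don't have to wait for atd to fire to see what that clean-up command will do. Here's a sketch that exercises the same find invocation in a scratch directory, using GNU touch's -d option to fake an old file (that flag is a GNU-ism -- BSD touch spells it differently):

```shell
# Try the at job's payload directly in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/fresh.txt"                    # modified now: should survive
touch -d '10 days ago' "$dir/stale.txt"   # older than 7 days: should go
find "$dir" -type f -mtime +7 -exec rm {} \;
ls "$dir"                                 # prints: fresh.txt
rm -rf "$dir"
```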

You can use "atq" to get a list of all the pending jobs along with their job numbers:

# atq
1 Tue Jul 7 00:00:00 2009 a root

But if you want to view the details of a pending job, you need to use "at -c jobnum" ("-c" for "confirm" is how I always remember it):

# at -c 1
# atrun uid=0 gid=0
# mail root 0
umask 22
USER=root; export USER
PWD=/home/hal; export PWD
HOME=/root; export HOME
LOGNAME=root; export LOGNAME
cd /home/hal || {
echo 'Execution directory inaccessible' >&2
exit 1
}
find /tmp -type f -mtime +7 -exec rm {} \;

You'll notice that when at sets up the job, it's careful to preserve the environment variable settings in the current shell, including the umask value. It even makes sure to cd into the directory from which you scheduled the at job, just in case your at job uses relative path names. Very smart little program.

Finally, "atrm" allows you to cancel (remove) jobs from the queue:

# atq
1 Tue Jul 7 00:00:00 2009 a root
# atrm 1
# atq

What's cool about the Unix at command compared to the Windows version is that you have much more flexibility as far as time scheduling formats. All of the following are valid:

# at -f mycommands 00:00 7/10       # run mycommands at midnight on Jul 10
# at -f mycommands midnight 7/10 # "noon" and "teatime" (4pm) are also valid
# at -f mycommands 00:00 + 4 days # expressed as relative date offset
# echo init 0 | at now + 2 hours # power off system in two hours

Many more time specifications are allowed-- please consult the at manual page.
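If you're ever unsure when a relative spec will actually fire, GNU date understands similar offsets (another GNU-ism -- BSD date uses -v instead), so you can preview the arithmetic before committing to the at job:

```shell
# Preview how far out "at now + 2 hours" would schedule a job.
t1=$(date +%s)                  # now, in epoch seconds
t2=$(date -d '+2 hours' +%s)    # the at-style offset, resolved by GNU date
echo "job would run in $(( (t2 - t1) / 3600 )) hour(s)"
# prints: job would run in 2 hour(s)
```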

For jobs that need to run more than once, you're supposed to use cron. Typically cron jobs are created using an interactive editor via "crontab -e". But sometimes you want to set up a cron job directly from the command line without using an editor-- for example when I'm using a tool like fanout to set up the same cron job on multiple systems in parallel. In these cases, my usual tactic is to do a sequence of commands like:

# crontab -l > /root/crontab                  # dumps current cron jobs to file
# echo '12 4 * * * /usr/local/bin/nightly-cleanup' >> /root/crontab
# crontab /root/crontab # replaces current jobs with modified file
# rm /root/crontab # cleaning up

It's kind of a hassle, actually, but it works. By the way, if you're root but want to operate on another user's crontab file, you can just add the "-u" flag, e.g. "crontab -u hal -l".
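You can squeeze that whole dance into one pipeline, too. The sketch below writes the assembled crontab to a scratch file so you can eyeball it first; pipe the same output into "crontab -" (or "crontab -u hal -" as root) to actually install it:

```shell
# List the current jobs (2>/dev/null hides the "no crontab" complaint
# on a fresh account), append the new entry, and stage the result.
{ crontab -l 2>/dev/null; echo '12 4 * * * /usr/local/bin/nightly-cleanup'; } > newtab
cat newtab    # inspect, then: crontab newtab  (or pipe into crontab -)
```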

The first five columns in the cron entry are the time and day spec for when you want the cron job to operate. The first column is "minutes" (0-59), the second "hours" (0-23), then "day of month" (1-31), "month" (1-12), and "day of week" (0-7, Sunday is 0 or 7). Here are some examples:

12 4 * * * /usr/local/bin/nightly-cleanup             # every morning at 4:12am
0,15,30,45 * * * * /usr/sbin/sendmail -Ac -q # every 15 minutes
15 0 * * 0 /usr/local/sbin/rotate-tape # Sundays at 15 past midnight
0 0 1 * * find /var/log -mtime +30 -exec gzip {} \; # the first of every month
0 0 22 8 * /usr/local/bin/anniversary-reminder # every year on 8/22
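Before installing a new entry, it's worth a quick sanity check that the line really has five time/date columns plus a command, since cron is unforgiving about malformed entries. Here's a tiny helper sketch (the function name is made up for illustration, and this only counts fields -- it doesn't validate the ranges):

```shell
# Crude cron-line check: five time fields plus at least one command word.
check_cron_line() {
    [ "$(echo "$1" | awk '{ print NF }')" -ge 6 ] \
        && echo "looks ok: $1" \
        || echo "malformed: $1"
}
check_cron_line '12 4 * * * /usr/local/bin/nightly-cleanup'   # looks ok
check_cron_line '12 4 * * *'                                  # malformed
```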

Frankly, I don't find myself using the 3rd and 4th columns all that frequently, or really even the 5th column all that much. Most of my jobs tend to run nightly at some regular time.

By the way, you'll discover that old farts like me have a thing about not scheduling cron jobs between 2am and 3am. That's because older cron daemons had trouble dealing with Daylight Saving Time shifts, and would either skip jobs entirely or run them twice depending on which way the clock was shifting. This was fixed by Paul Vixie in his "Vixie cron" package, which is the standard cron daemon on Linux these days, but it may still be an issue if you have older, proprietary Unix systems still hanging around. Check the man page for the cron daemon on your system if you're not sure.