Monday, March 30, 2009

Episode #17: DNS Cache Snooping in a Single Command

Ed muses:

DNS cache snooping is a fun technique that involves querying DNS servers to see if they have specific records cached. Using this technique, we can harvest a bunch of information from DNS servers to see which domain names users have recently accessed, possibly revealing some interesting and maybe even embarrassing information. In his nifty paper on the subject, Luis Grangeia explains, "The most effective way to snoop a DNS cache is using iterative queries. One asks the cache for a given resource record of any type (A, MX, CNAME, PTR, etc.) setting the RD (recursion desired) bit in the query to zero. If the response is cached the response will be valid, else the cache will reply with information of another server that can better answer our query, or most commonly, send back the root.hints file contents."

I've always liked that paper, and use the technique all the time in pen tests when my clients have a badly configured DNS server. I've always implemented the technique using a Perl script on Linux. But, last Friday, I was on a hot date with my wife at Barnes & Noble, thinking about that paper in the back of my mind, when it struck me. Heck, I can do that in a single* Windows command. Here it is, using the default of A records:

C:\> for /F %i in (names.txt) do @echo %i & nslookup -norecurse %i [DNSserver] | find "answer" & echo.

This command is built from a FOR /F loop, which iterates over the contents of the file names.txt. In that file, just put all of the names you want to check in the target DNS server's cache, with one name per line. At each iteration through the loop, we turn off command echo (@), display the current name we are checking (echo %i), and then run nslookup with the -norecurse option. This option will emit queries with the Recursion Desired bit set to zero. Most DNS servers will honor this request, dutifully returning our answer from their cache if they have it, and sending us a list of DNS servers they'd forward to (such as the root name servers) if they don't have it cached. We look up the name we've iterated to (%i) against the chosen DNS server ([DNSserver]). I scrape through the results looking for the string "answer", because if we get back a non-authoritative answer, nslookup's output will say "answer". I added an extra new line (echo.) for readability.

So, if the entry is in the target DNS server's cache, my output will display the given name followed by a line that says "Non-authoritative answer:".

Note that if the target DNS server is authoritative for the given domain name, our output will display only the name itself, without the "Non-authoritative answer:" text, because the record likely wasn't pulled from the cache but instead was retrieved from the target DNS server's own zone files.

A lot of people discount nslookup in Windows, thinking that it's not very powerful. However, it's got a lot of interesting options for us to play with. And, when incorporated into a FOR /F loop as I've done above, all kinds of useful iterations are possible.

*Well, I use the term "single" here loosely. It's several commands smushed together on a single command line. But, it's not too complex, and it isn't actually that hard to type.

Hal Responds:

Ed, didn't anybody teach you that you should configure your DNS servers to refuse cache queries? Otherwise you could well end up being used as an amplifier in a DNS denial of service attack.

Putting aside Ed's poor configuration choices for the moment, I can replicate Ed's loop in the shell using the "host" command:

$ for i in `cat names.txt`; do host -r $i [nameserver]; done

"-r" is how you turn off recursion with the host command. The "host" command is silent if it doesn't retrieve any information, so I don't need to do any extra parsing to separate out the negative responses like Ed does. In general, "host" is much more useful for scripting kinds of applications than the "dig" command is ("nslookup" has been deprecated on Unix, by the way, so I'm
not going to bother with it here).
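
Incidentally, if you do prefer dig for one-off checks, the equivalent non-recursive query looks something like the following sketch (same names.txt and [nameserver] placeholders as above; +noall +answer suppresses everything except actual answers, so only cached names produce output):

$ for i in `cat names.txt`; do dig @[nameserver] $i +norecurse +noall +answer; done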

If you happen to be on the system where the name server process is running, you can actually get it to dump its cache to a local file:

# rndc dumpdb -cache

The cache is dumped to a file called "named_dump.db" in the current working directory of the name server process. It's just a text file, so you can grep it to your heart's content.
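
For example, to check whether anybody on your network has been resolving a particular domain (using the default dump file name, and substituting whatever domain interests you):

# grep -i example.com named_dump.db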

But where is the current working directory of the name server process? Well you could check the value of the "directory" option in named.conf, or just use lsof:

# lsof -a -c named -d cwd
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
named 26808 named cwd DIR 253,4 4096 917516 /var/named/chroot/var/run/named

The command line options here mean show the current working directory ("-d cwd") of the named process ("-c named"). The "-a" means to "and" these two conditions together rather than "or"-ing them, which would be the normal default for lsof.

Man, I wish we could do a whole bunch of episodes just on lsof because it's such an amazing and useful command. But, of course, Ed doesn't have lsof on Windows and he'd get all pouty.

Friday, March 27, 2009

Episode #16: Got That Patch?

Ed kicks off the sparring with:

One of the nice features within WMIC is its ability to check whether a given patch is installed on a box using its Quick Fix Engineering (QFE) alias. (I know... QFE in my old life as a Solaris admin stood for Quad Fast Ethernet. Not anymore. Oh, and don't tell anyone that I used to be a Solaris admin. That's just our little secret.)

Anyway, we can query by KB article number for a given patch, using the following syntax:

C:\> wmic qfe where hotfixid="KB958644" list full

This little gem will show whether the patch for MS08-067 is installed-- MS08-067 being the vulnerability the Conficker worm initially used to spread. This command, when used with the appropriate KB number, can really help you to determine whether a given patch is installed, and whether it was installed in a timely fashion. Yes, the output does include an "InstalledOn" date! That's especially helpful if you need to verify that your admins have moved quickly for an out-of-cycle patch, like the one associated with MS08-067.

You can make it run remotely, provided you have admin credentials on a target machine, by adding the following notation before the qfe:

/node:[IPaddr] /user:[admin] /password:[password]

Leave off the password, and it'll prompt you. Change the [IPaddr] to "@[filename]", and it'll check all machines whose names or IP addresses you have listed in that file.

I was once teaching a class full of a bunch of auditors where this command came up, in a pretty comical fashion. There was a delightful little old lady in the class, just a grey-haired sweetheart with a nice grandmotherly voice. I was explaining wmic qfe to the class. She raised her hand, and meekly asked very slowly, "So, does this output include the date the patch was installed?" I responded, "Why, yes, it does." She shot back with an evil cackle, "Yesssss! I will be able to DESTROY people's lives with this!" Ahhh... auditors. You gotta love 'em. :)

Hal Says:

In this case, Ed has the advantage because he only has to deal with a single operating system. It seems like package and patch management is one of the things that no two Unix or Linux providers do the same way. So for purposes of this example, let me just pick two of the more popular Linux package management schemes: the Debian apt system and the Red Hat yum and rpm utilities.

On Debian-derived systems like my Ubuntu laptop, figuring out which packages need to be upgraded is straightforward:

# apt-show-versions -u
libjasper1/intrepid-security upgradeable from 1.900.1-5 to 1.900.1-5ubuntu0.1
libnss3-1d/intrepid-security upgradeable from 3.12.0.3-0ubuntu5 to 3.12.0.3-0ubuntu5.8.10.1

It's also pretty clear from the output that these are both security-related updates.

What's interesting is that Debian doesn't appear to track the install time for the various packages on the system, much to the disappointment of Ed's little old lady auditor I'm sure. However, when system updates are done the new packages are downloaded to /var/cache/apt/archives. If the system administrator doesn't clean up their package cache on a regular basis ("apt-get clean"), you could look at the timestamps on the various package files in this directory to find out when the last update occurred.
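
For instance, something like this (assuming the cache hasn't been cleaned) shows the most recently downloaded package files first:

$ ls -lt /var/cache/apt/archives/*.deb | head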

On Red Hat systems, "yum list updates" will show packages that need to be updated. However, unlike Debian systems, there's no clue about whether the given patch is security-related or just a functionality update. The good news is that as of RHEL 5.x, there is now a "yum-security" plug-in that you can install ("yum install yum-security") which will allow you to get information about and install only security-related patches:

# yum list-security     # lists available security updates
# yum update --security # installs security-related updates

Red Hat systems do track package installation times. Here's a command to dump out the package name, version, and installation date of a single package:

# rpm -q --qf "%{NAME}\t%{VERSION}\t%{INSTALLTIME:date}\n" tzdata
tzdata 2008i Sun 23 Nov 2008 10:06:32 AM PST

You can also generate a report for all packages on the system:

# rpm -qa --qf "%-30{NAME} %-15{VERSION} %{INSTALLTIME:date}\n"

That will give you a nice, pretty-printed list with neat column boundaries.
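
And if you want that report ordered by installation date rather than package name, rpm will even do the sorting for you, with the most recently installed packages at the top:

# rpm -qa --last | head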

Paul Chimes In:


Luckily, since we are running OS X, we don't have to worry about vulnerabilities, because the "fanboys'" operating system doesn't concern itself with such things. Ha! Kidding of course (however, most of our vulnerabilities and exploits are of the 0day flavor, so patches don't help us, *cough* Safari). OS X comes with a command line utility called softwareupdate. You can list packages that need to be installed using the following command:

# softwareupdate -l
Software Update Tool
Copyright 2002-2007 Apple

No new software available.


In the above output you can see that the system is up-to-date (I tend to apply software updates from Apple as soon as they come out, as usually the vulnerability and exploit have been out for some time). Having a command line version makes it easy to update systems remotely, provided you have root privileges on the remote system. If there were packages available to install, I could use the following command to install them:

# softwareupdate -i -a


Which instructs OS X to install (-i) all available updates (-a). OS X is a different beast from Linux/UNIX (borrowing more from *BSD, which we haven't covered in any detail yet), so getting more information about installed software is different. One command I came across is "pkgutil", which can be used to see which updates have been installed. You can run the following command (without root or administrator privileges):

$ pkgutil --packages | grep 'com.apple.pkg.update'
com.apple.pkg.update.os.10.5.3
com.apple.pkg.update.os.10.5.4
com.apple.pkg.update.security.2008.005
com.apple.pkg.update.os.10.5.5
com.apple.pkg.update.security.2008.007
com.apple.pkg.update.os.10.5.6
com.apple.pkg.update.security.2009.001


And it will list the currently installed software update packages on the system. Apple typically releases updates in the format of <year>.<number>, so 2009.001 is the first security update package for 2009. If you want to get a bit more specific on the date the package was installed, you can look at the timestamps of the files in the /Library/Receipts/boms/ directory:

# ls -l  com.apple.pkg.update.*
-rw-r--r-- 1 _installer wheel 3420277 May 23 2008 com.apple.pkg.update.os.10.5.3.bom
-rw-r--r-- 1 _installer wheel 481598 Jun 20 2008 com.apple.pkg.update.os.10.5.4.bom
-rw-r--r-- 1 _installer wheel 876891 Sep 6 2008 com.apple.pkg.update.os.10.5.5.bom
-rw-r--r-- 1 _installer wheel 3111901 Dec 8 22:15 com.apple.pkg.update.os.10.5.6.bom
-rw-r--r-- 1 _installer wheel 122728 Jul 31 2008 com.apple.pkg.update.security.2008.005.bom
-rw-r--r-- 1 _installer wheel 460162 Sep 29 23:53 com.apple.pkg.update.security.2008.007.bom
-rw-r--r-- 1 _installer wheel 469254 Jan 30 21:51 com.apple.pkg.update.security.2009.001.bom


A "bom" file in OS X is a "Bill Of Materials" which in the case of an update, is a list of files that were installed as part of that package. Not as pretty as Hal's output, but then again, who's as pretty as Hal?

Wednesday, March 25, 2009

Episode #15: New User Created When?

Last week, Mr. Byte Bucket (middle name "Mercy") posed a kung fu challenge to me based on a discussion in the pauldotcom IRC channel. He asked:

"How can I determine the account creation time for a local account on a Windows host from the command-line?"

Folks in the channel had suggested:

C:\> wmic useraccount get name,localaccount,installdate

Unfortunately, the installdate is never populated, so we can't get any love there.

Another suggested:

C:\> net user [UserName]

That's a little better, in that it has some actual data. But, the dates it contains are when the password was last set. If the password was set at account creation, it'll be the creation date. But, that inference might be a bit much.

An item I've found very helpful, and a somewhat close proxy for account creation time, is the time the account is first used to log on to the box. We can see that by looking at the creation date of the home directory of the account:

C:\> dir /tc "C:\Documents and Settings\"

Or, on Vista, where user home directories moved by default to C:\Users:

C:\> dir /tc C:\Users\

As we've seen in past episodes, /t means time, and the c option means creation time. Look at the creation time of the directory of the user you're interested in, and that's often even more useful than the original creation time of the account itself.

But, it's kind of side-skirting the issue, no? How can you find the actual time of account creation, independent of its use to log on? For that, we can turn to the event logs, provided the system is configured to "Audit account management", which, sadly, is turned off by default.

If you have it turned on, though, you can query it on XP Pro using the great built-in VBS script called eventquery.vbs, used thusly:

C:\> cscript c:\windows\system32\eventquery.vbs /L security /FI "id eq 642"

That shows us what we want on Windows XP and 2003 Server. Frustratingly, our great buddies at Microsoft removed eventquery.vbs from Vista. Thanks for nuthin' guys.

But, what Microsoft takes, they often give back, in a completely different and far more complex and bewildering form. In place of eventquery.vbs, we now get wevtutil, a command-line tool for interacting with event logs. We can query logs using:

C:\> wevtutil qe security /f:text "/q:*[System[(EventID=4720)]]" | more

The wevtutil query syntax is impossibly complex, and something I frankly loathe. Note that you have to get the case right on EventID or else it won't work. But, this command will show you a huge amount of information about any accounts created locally on the system, including the date, time, SID creating the account, SID of the created account, UAC settings for the account, and so on.

Fun, fun, fun!

Hal Says:

I really wish I could claim that Linux and Unix were somehow superior to Windows in terms of solving this problem, but like Windows we don't track account creation events as a general rule. Obviously there are exceptions because of additional levels of logging, such as when you use sudo to create accounts or enable kernel-level auditing. However, even these approaches could be subverted by a clever attacker.

So we're left using the same sorts of proxies that Ed uses in his Windows example. You can normally find the date of last password change in the third field of a user's /etc/shadow entry:

hal:<hash censored>:14303:0:99999:7:::

The value is in days since the beginning of the Unix "epoch" (Jan 1, 1970). If you happen to have the "convdate" program installed (it's part of the "inn" package on my Red Hat systems), you can use it to convert these dates:

# /usr/lib/news/bin/convdate -c \
`awk -F: '/^hal:/ {print $3 * 86400}' /etc/shadow`

Fri Feb 27 16:00:00 2009

"convdate -c" converts the number of seconds since the Unix epoch to a human-readable date. So we use awk to extract the third field of my /etc/shadow entry and multiply this value by 86400 (24hrs * 60min * 60sec).

There are some problems with this approach, however. First, as Ed points out, this only gets you the time of the last password change for the user. If you force users to change passwords regularly, this will only get you the creation time of recently created accounts. Second, this approach only works if you keep user account information in the local passwd and shadow files-- if you use directory services like Kerberos and LDAP, then all bets are off. Third, it's certainly possible for an attacker who's broken root to create a local account and modify the third field of /etc/shadow, or simply not populate this field to begin with.

Ed's next suggestion is to look at the creation date on the user's home directory. As I mentioned in Episode #11, Unix doesn't track creation times on files. So you're left with using inode numbers as suggested in Episode #11 in order to make a guess at relative creation dates of different user home directories. It might actually be more fruitful to look at the last modified times on various "dot files" in the user's home directory, since users don't tend to mess around with these much once they get their environment customized the way they want it:

# ls -ltd /home/hal/.[^.]* | tail -1
-rw-r--r-- 1 hal users 1254 Aug 20 2007 /home/hal/.bashrc

There's a couple of interesting things going on in the command above. First we're telling "ls" to give us a detailed listing so we see the timestamps ("-l"), not to list the contents of matching directory names but just list the directories themselves ("-d"), and to sort the listing by last modified time with the most recent entries first ("-t"). Notice that we're also using the syntax ".[^.]*" just to match the dot files and directories without matching the ".." link that refers to the parent directory. "tail -1" just pulls off the oldest entry.

Still, there are problems with this approach as well. The biggest problem: why would an attacker necessarily create a home directory for the back-door accounts they've created on your system? Even if they did do this for some reason, you can't trust the timestamps on the files because the attacker may have modified them-- even setting them backwards if they've successfully broken root.

How about Ed's idea of checking the first login date for the account? There are a couple of different places we could go to look for this. First we could use the "last" command to dump the wtmp log entries for a particular user in reverse chronological order (most recent to oldest):

# last hal | tail -3
hal pts/0 elk.deer-run.com Mon Mar 9 07:02 - 10:02 (02:59)

wtmp begins Sun Mar 8 04:04:53 2009

As you can see from the output above, however, the wtmp logs normally get turned over every so often. It's possible that the first login for this user actually happened prior to March 8th, but we can't see it because it's no longer in the log.

There's also the Syslog stream for the system. Login events usually end up in a file like /var/log/auth.log or /var/log/secure (see /etc/syslog.conf for where "auth" logs end up on your particular flavor of Unix). Again, however, these logs normally get "rotated" on a regular basis, so you'll want to make sure you search the entire set of logs for the oldest entry:

# grep hal /var/log/secure* | tail
[...]
/var/log/secure.4:Mar 1 21:37:49 deer sshd[23446]: Accepted password for hal from 192.168.100.1 port 12501 ssh2
/var/log/secure.4:Mar 1 21:37:49 deer sshd[23446]: pam_unix(sshd:session): session opened for user hal by (uid=0)
/var/log/secure.4:Mar 1 21:45:42 deer sshd[23446]: pam_unix(sshd:session): session closed for user hal

Notice that in fact these login events do pre-date the wtmp entries we saw from the "last" command above. But even these logs don't go much further back than our wtmp data and we're left to wonder if there were earlier login events that we're not seeing. By the way, since attackers who gain access to the system can modify the logs and remove traces of their logins, you're better off having a copy of these logs on a central, secure log server for your enterprise.

One thing you must be aware of, however, is that neither wtmp-style nor Syslog logging of user access is mandatory on Unix systems. In other words, you get these logs because the application developers of SSH, /bin/login, etc. have all decided to add the code to their applications to update the logs as appropriate. However, an attacker who plants a back door on your system is unlikely to elect to log the activity of that back door. So again you need to enable extra levels of logging like kernel-level auditing in order to have an application-independent log of events on the system.

Bottom line is that I suggest establishing an external control of some sort that monitors your user database (in whatever form it happens to live in) and alerts you to not only new account creations, but also account deletions, and account lock/unlock activity. You should probably also monitor for unauthorized changes to the passwords for "system" type accounts like root, oracle, and the database access accounts for web applications and the like.
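
For the simple case where accounts live in local files, even a cron-driven diff against a stashed baseline will catch additions, deletions, and lock/unlock changes. A minimal sketch (the baseline path here is arbitrary):

# cp -p /etc/passwd /root/.passwd.baseline
# diff /root/.passwd.baseline /etc/passwd || logger -t acctmon "passwd file changed"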

Monday, March 23, 2009

Episode #14 - Command Line Shortcuts

Paul Writes In:

Since the gloves are coming off, I would like to highlight a couple of features in the UNIX/Linux command line that save me some time and help foster my loving relationship with Bash (just don't tell my wife). As I've stated before, I really hate to type, so I tend to use as many shortcuts as I can, for example:

If I run the following command:

$ ls -l /home/windows


It will list the contents of the /home/windows directory (don't worry it gets better). If I've copied files into that directory and want to run that command again I can type:

$ !ls


Which will execute the last command that I ran containing the string "ls". Let's keep the fun going and run the following command:

$ ^windows^linux


This command will run my previous command and replace the string "windows" with "linux" (some would say "upgrade" my command). If I want to run my previous command I can use the following:

$ !!


I also use the following technique quite often:

$ ls !$


The !$ is the last parameter of your previous command. Now, you may feel like a hotshot using this Kung Fu; however, try not to use these shortcuts in conjunction with the "rm" command (like !rm), because you may be in the "/" directory instead of "/tmp" :)

Hal Says:

Strictly speaking, "!ls" means run the last command line that starts with "ls", not the last command that contains "ls":

$ ls -l /usr/bin/file
-rwxr-xr-x 1 root root 16992 May 24 2008 /usr/bin/file
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, ...
$ !ls
ls -l /usr/bin/file
-rwxr-xr-x 1 root root 16992 May 24 2008 /usr/bin/file

As Paul points out, however, it can be dangerous to just pop off with "!somecommand" when you're not sure what's in your history. Luckily, there's the ":p" option which prints the command that would be executed without actually running the command:

# !/etc:p
/etc/init.d/named restart
# !!
/etc/init.d/named restart
Stopping named: ......... [ OK ]
Starting named: [ OK ]

Notice that ":p" puts the command into the "last command" buffer so that you can immediately execute it with "!!" if it turns out to be the command you want.

But what if it isn't the command you want? Did you know you can search your history interactively? Just hit <Ctrl>-R and start typing characters that match some string in your command history-- unlike "!command", the matching happens anywhere in the string, not just at the front of the command line. Additional <Ctrl>-R's will search backwards from the current match using the given search string. Once you find the command you want, just hit <Enter> to execute it, or use the normal command-line editing tools to change it. Use <Ctrl>-C to abort the command without executing anything.

By the way, while "!!" gives you the previous command, "!-2" gives you the command before the previous command, "!-3" goes three commands back, and so on. For example, suppose I was watching the size of a log file and making sure the partition it was in was not running out of space, I might alternate two commands:

# ls -l mail.20090312
-rw------- 1 root root 392041 Mar 12 14:14 mail.20090312
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/Root-Var 20G 3.7G 15G 20% /var
# !-2
ls -l mail.20090312
-rw------- 1 root root 392534 Mar 12 14:16 mail.20090312
# !-2
df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/Root-Var 20G 3.7G 15G 20% /var
# !-2
ls -l mail.20090312
-rw------- 1 root root 393068 Mar 12 14:20 mail.20090312
# !-2
df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/Root-Var 20G 3.7G 15G 20% /var

By the way, the "!<n>" thing also works with argument substitutions like "!$":

# cp ~hal/.profile ~hal/.bashrc ~paul
# chown paul ~paul/.profile ~paul/.bashrc
# cd !-2$
cd ~paul

And speaking of "!$", did you know that "!*" means "use all previous arguments"? For example:

# touch file1 file2 file3
# chmod 600 !*
chmod 600 file1 file2 file3
# chown hal !-2*
chown hal file1 file2 file3

The replacement operator ("^foo^bar") is awesomely useful. Here's a cool trick that I use a lot:

# make -n install
...
# ^-n
make install
...

In other words, I used a null substitution to remove the "-n" operator from the previous command line once I had assured myself that "make" was going to do the right thing.

The only problem with the replacement operator is that it only works on the first instance of the string in the previous command:

# cp passwd passwd.new
# ^passwd^shadow
cp shadow passwd.new

Obviously, this is not what we want. The following syntax works:

# cp passwd passwd.new
# !:gs/passwd/shadow/
cp shadow shadow.new

I'm just not sure that's any quicker than manually editing the previous command.
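
When the substitution syntax gets too fiddly, remember that bash's "fc" builtin will drop the previous command into your editor, and whatever you save gets executed:

$ fc        # edit the last command in $FCEDIT/$EDITOR, run it on exit
$ fc -l     # list recent commands with their history numbers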

Ed comments:

Holy frijoles! That's a lot of fu, Hal. Lot of fu.

On Windows, accessing shell history isn't quite as fine-grained, but we've got some useful options in cmd.exe.

With a cmd.exe on the screen, we can hit the F7 key to see our command history.

Scroll up or down, and you can then hit the Enter key to re-run the command. Hit the right or left arrow, and it'll retype the command, letting you edit it and then re-run it.

See those numbers next to the commands when you hit F7? If you hit the F9 key, it prompts you for a command number to re-execute. Type in the appropriate number, hit Enter, and your command is re-typed, letting you edit it.

To clear this history, you can hit ALT-F7.

If you want to see your recent history (up to 50 commands stored by default) in standard output, run:

C:\> doskey /history

So, it's not as scrumdiliously awesome as what you guys have over on Linux, but it's very workable. Heck, some might even say that it's usable... barely. ;)

Friday, March 20, 2009

Episode #13 - Find Vulnerable Systems In A Nessus Export

Paul Says:

I use this command all the time to get a list of IP addresses that are vulnerable to a specified vulnerability in a Nessus .nsr output file:

Linux/OS X Command:
$ grep -h "CVE-2008-4250" *.nsr | cut -d"|" -f 1 | sort -u 

For bonus points, funnel those IP addresses through to Metasploit's msfcli and get shell on all of them in one command :)

Hal's Comments:

It's always a danger sign when you end up piping grep into cut-- usually this means you can create a single awk expression instead (and don't even get me started on people who pipe grep into awk):

$ awk -F'|' '/CVE-2008-4250/ {print $1}' *.nsr | sort -u

Paul's Comments:

That is slick! I've never been truly happy with cut and will spend some more time with the search feature in awk; it looks MUCH cleaner.

Ed Responds:

I'm glad you two have made up and are friends again. As for my answer... first off, aren't nsr reports deprecated? Isn't nbe the way to go these days?

Anyway, to do this at a shell that actually makes you work for a living, you could run:

C:\> for /F "delims=:| tokens=2" %i in ('findstr CVE-2008-4250 *.nsr') do @echo %i

Starting from the center command and working outward, I'm invoking findstr to look for the string CVE-2008-4250 in all .nsr files. That command will execute in my FOR /F loop because it's in single quotes (' '). I'll have one line of output for each line that contains that string, of the form filename:line. I take those lines of output and iterate over them in my FOR /F loop, with delimiters of : and |. That way, it'll split up my file name (before the colon) and IP address in the NSR file (before the |). I set my iterator variable to token 2, so that it will take on the IP address from the file. I simply then echo out the contents of that variable.

All in all, a pretty standard use of FOR /F loops to parse the output of a command, in this case, the findstr command. You could sort it alphanumerically (sigh... not numerically) by putting parens around the whole shebang and piping it through sort, if you really want to. There you have it.


Paul Responds:

Ed and I had a discussion about Nessus file formats, and I will spare everyone any confusion and provide the following link:

https://discussions.nessus.org/thread/1124?tstart=0

At one time, .nsr was the way to go, however I recommend that people start looking into the .nessus (XML) format. We'll save that for a future episode :)

Wednesday, March 18, 2009

Episode #12 - Deleting Related Files

Hal Says:

I had been deliberately holding back on this problem because I didn't want to make things too tough on Ed and that poor excuse for a command shell he's been saddled with. But since he had the temerity to suggest that Unix wasn't a "real operating system" back in Episode #11 (who needs to track file creation times anyway?), the gloves have come off.

So today's problem is as follows: Delete all files whose contents match a given string AND ALSO delete a related, similarly named file in the same directory. For example, you've got a lot of spam in your Sendmail /var/spool/mqueue directory and you need to match the spammer's email address in the qf<queueID> file and then delete both the qf<queueID> file (header and delivery info) and the df<queueID> file (message contents).

Getting the matching file names is just a matter of using "grep -l", and obtaining the queue ID values from the file names is just a matter of using "cut":

# grep -l spammer@example.com qf* | cut -c3-

Add a tight loop and you're done:

# for i in `grep -l spammer@example.com qf* | cut -c3-`; do rm qf$i df$i; done

And, finally, I'll administer the coup de grace by using xargs instead of a loop:

# grep -l spammer@example.com qf* | cut -c3- | xargs -I'{}' rm qf{} df{}

So, Skodo, think you're ready to play with the big-time shells?

Ed (aka Skodo) responds:

Hal says he "Didn't want to make things too tough on Ed..." Well, thank you for your niceties, but easy-to-use and sensical command shells are for wimps. "Big-time shells..." I wonder if we count the number of copies of cmd.exe in the universe and compare it to the number of bash shells, which would come out "big-time"? Still, I do have to confess, cmd.exe is about the most uglified and frustrating shell ever devised by man. But, I can take care of your so-called challenge with the following trivial-to-understand command:

C:\> cmd.exe /v:on /c "for /f %i in ('findstr /m spammer@example.com qf*') do @set stuff=%i & del qf!stuff:~2! & del df!stuff:~2!"

Although an explanation of this really straightforward command probably isn't necessary (it's pretty obvious, no?), I'll go ahead and insert one just for completeness. I'll start in the middle, work my way through the end, and wrap around to the beginning.

Putting all sarcasm aside, I'm doing a bunch of gyrations in this command to get really flexible string parsing beyond what I can get with normal Windows FOR loops. I start out in the middle by running the findstr command, with the /m option, which makes it find the names of files that contain the string "spammer@example.com" at least one time. I'm looking only through files called qf*. The output of the findstr command will be one qf file name per line. The findstr command will run inside the FOR /F loop because I put it inside of forward single quotes (' '), with the iterator variable %i taking on the value of each of the lines of the output of findstr.

So far, so good. But, now we get to the fu part here, and I really mean FU. Originally, I considered parsing %i using another FOR /F loop to rip it apart as a string, so I could peel off the qf in front to get the unique part of the file name. However, that won't work nicely, because FOR /F parsing cannot do substrings. So, I briefly thought about defining the letters q and f as delimiters in my FOR /F so I could parse them off, but the remainder of the file name may have those letters in it as well, which means I would miss some files with my over-exuberant FOR /F q and f delimiters. There must be another way, one that lets us get substrings.

Clearly, we need better parsing of the %i variable. What to do? Well, we can't apply substring parsing directly to iterator variables of FOR loops, because substring parsing is only available for environment variables. I wish we could just sub-stringify %i, but it doesn't work. Instead, we can assign its value to an environment variable, which I've called "stuff". Then, we can parse stuff to snip off the first two characters (the q and the f) using !stuff:~2!. I then delete the files referred to with qf!stuff:~2! and df!stuff:~2!.

But, what's that monstrosity up front with the cmd.exe /v:on /c? Well, cmd.exe does immediate environment variable expansion by default, expanding our stuff variable immediately as the command is invoked. We want delayed expansion, so that stuff can take on different values as our loop iterates. We do that by first invoking a cmd.exe with /v:on to tell it to do delayed environment variable expansion, to execute a command for us (/c), with that command being our FOR loop. All of that nonsense, just to get flexible variable parsing. But, this parsing is pretty useful, especially when combined with FOR /F string parsing. But don't get me started on that.

So, there you have it. Lots of fun little gems in this one. Thanks for the challenge, Hal. Inspired by your post, I'm now going to install sendmail on a Windows box and write an anti-spam tool using the above command.... NOT!

Special Guest Fu from @jaykul:

@jaykul, a PowerShell master, provided this useful PowerShell command to implement a solution to Hal's challenge:

#PowerShell> sls spammer@example.com -list -path qf* | rm -path {$_.Path -replace "\\qf","\[qd]f"}

@jaykul helpfully notes that sls is an alias for Select-String.

Ed comments: It's amazing how much simpler and more elegant PowerShell is compared to cmd.exe. I only wish we had it 10 years ago, and could rely on it being widely deployed now! Faster, please!

Monday, March 16, 2009

Episode #11 - Listing Files by Inode as a Proxy for Create Time

Hal Says:

One of the problems with classic Unix file systems (FFS, UFS, ext[23], etc) is that they don't track the creation time of files ("ctime" in Unix is the inode change time, not the creation time). However, forensically it's often very useful to know when a given file was created.

While there's no way to know the exact creation date of a file from file system metadata, you can use the assigned inode number of the file-- because inodes tend to be assigned sequentially-- as a proxy to figure out the relative creation dates of files in a directory:
$ ls -li /etc | sort -n
total 4468
1835010 drwxr-xr-x 5 root root 4096 Nov 23 10:04 lvm
1835011 drwxr-xr-x 10 root root 4096 Nov 23 10:04 sysconfig
1835013 drwxr-xr-x 8 root root 4096 Nov 23 10:01 X11
1835014 drwxr-xr-x 2 root root 4096 May 24 2008 rpm
1835018 -rw-r--r-- 1 root root 435 Jul 14 2007 reader.conf
1835019 -rw-r--r-- 1 root root 105 Jul 14 2007 modprobe.conf
...
1837339 -rw-r--r-- 1 root root 2200 Jul 22 2008 passwd
1837348 -rw-r--r-- 1 root root 814 Jul 22 2008 group
1867786 drwxr-xr-x 4 root root 4096 May 24 2008 gimp
1867804 drwxr-xr-x 2 root root 4096 Jul 14 2007 sane.d
1867868 drwxr-xr-x 7 root root 4096 Jul 22 2008 gdm
1867890 drwxr-xr-x 2 root root 4096 Jul 22 2008 setroubleshoot
1867906 drwxr-xr-x 3 root root 4096 Aug 8 2007 apt
1867925 drwxr-xr-x 3 root root 4096 Aug 8 2007 smart
1867929 drwxr-xr-x 5 root root 4096 Dec 11 14:24 raddb
1867954 drwxr-xr-x 10 root root 4096 Dec 15 09:03 vmware
1867972 drwxr-xr-x 2 root root 4096 Aug 8 2007 syslog-ng
1868042 drwxrwsr-x 2 root mailman 4096 Jul 22 2008 mailman
1868075 drwxr-x--- 3 root root 4096 Jul 22 2008 audisp
1900546 drwxr-xr-x 2 root root 4096 Jul 22 2008 purple
1933364 drwxr-xr-x 2 root root 4096 Nov 23 14:08 vmware-vix
2293777 -rw-r--r-- 1 root root 362031 Nov 23 14:04 services

At the top of the output you can see that the inodes are clustered tightly together, indicating these files were probably all created about the same time-- typically when the system was first installed. Towards the end of the output, however, you can see other "clusters" of inode numbers corresponding to groups of files that were created around the same time. In this case, these are mostly the configuration directories for software packages I added after the initial OS install.
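
You can also run this trick in reverse during an investigation: given one suspicious file, look at its inode "neighbors" to see what else landed on disk around the same time. With GNU find, something like the following works (-xdev keeps us on a single file system, which matters because inode numbers are only meaningful per file system):

$ find /etc -xdev -printf '%i %p\n' | sort -n | grep -C 5 ' /etc/passwd$'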

Ed Responds:

"...A proxy to figure the relative creation dates of files"? Oh my... If I may indulge in a little trash talk, you'd think that a real operating system would have some better way of tracking file creation times than resorting to inode numbers.

Just to pick an alternative operating system at random off the top of my head, let's consider... um... Windows. Yeah, Windows.

Oh yes, we have file creation time, which can be displayed using the really obscure dir command.

In all seriousness, by default, the dir command displays file modification date and time. If you want it to display creation time, simply run it with the /tc option. The /t indicates you want to twiddle with the time field (yeah, it stands for "twiddle" ;). The options after it are c for creation date/time, a for last access, and w for last written. For example:

C:\> dir /tc

Lot simpler than Hal's fu above, and it gets the job done.

Oh, and Hal wanted them sorted. Sadly, we don't have a numeric sort in Windows, just an alphanumeric one. But, that lament is for another day, because we can sort based on time stamp right within dir, as follows:

C:\> dir /tc /od

The /o indicates we want to provide a sort order, and we're sorting by date, oldest first. To reverse the order (newest first), use /o-d, with the minus reversing the date sort.

Friday, March 13, 2009

Episode #10 - Finding Names of Files Matching a String

Hal Says:

This is one of my favorite questions to ask job interviewees, so pay attention!

Question: How would you list the names of all files under a given directory that match a particular string? For example, list all files under /usr/include that reference the sockaddr_in structure.

Most interviewees' first approximations look like this:
$ find /usr/include -type f -exec grep sockaddr_in {} \;

The only problem is that this gives you the matching lines, but not the file names. So part of the trick is either (a) asking me if it's OK to look at the grep manual page or help text (which is really the response I'm looking for), or (b) just happening to know that "grep -l" lists the file names and not the matching lines:
$ find /usr/include -type f -exec grep -l sockaddr_in {} \;

The folks who really interest me, however, are the ones who also strike up a conversation about using xargs to be more efficient:
$ find /usr/include -type f | xargs grep -l sockaddr_in

How much faster is the xargs approach? Let's use the shell's built-in benchmarker and see:
$ time find /usr/include -type f -exec grep -l sockaddr_in {} \; >/dev/null

real 0m12.734s
user 0m2.097s
sys 0m10.713s
$ time find /usr/include -type f | xargs grep -l sockaddr_in >/dev/null

real 0m0.410s
user 0m0.108s
sys 0m0.344s

You really, really want to use "find ... | xargs ..." instead of "find ... -exec ..."

Paul Says:

That's an awesome tip! I immediately put this to good use with Metasploit. One of the requests we most often get from students using Metasploit is a way to find the exploit for a particular vulnerability. Metasploit has a built-in search feature, but grep is far more powerful and comprehensive. Since all of the modules and exploits within Metasploit are just Ruby files, you can use the method above to seek out functionality in Metasploit:

$ find ./modules/ -type f | xargs grep -li 'ms08\_*' | grep -v ".svn"

The above command will find all modules that contain references to "ms08", indicating an exploit for a vulnerability released by Microsoft in 2008.
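
As an aside, if your grep supports recursion, you can skip find and xargs entirely for this sort of search:

$ grep -rli 'ms08' ./modules | grep -v ".svn"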

Ed throws in his two cents:

On Windows, we have two string search tools: find and findstr. The latter has many more options (including the ability to do regex). We can use it to answer Hal's interview question with the /m option to print only the file name. Why /m? I guess because "Name" has an "m" in it, and /n was already taken to tell findstr to print line numbers.

So, the result is:
C:\> findstr /d:[directory] /m [string] [files]

The [files] lets you specify what kind of files you want to look in, such as *.ini or *.txt. To look in any kind of file, just specify a *. Also, to make it recurse the directory you specify, add the /s option.

How about an example? Suppose you want to look in C:\windows and its subdirectories for all files that contain the string "mp3". You could run:

C:\> findstr /s /d:c:\windows /m mp3 *

Another useful feature of findstr is its ability to find files that contain only printable ASCII characters using the /p flag. That is, any file with unprintable, high-end ASCII sequences will be omitted from the output, letting you focus on things like txt, inf, and related simple text files often associated with configuration:

C:\> findstr /s /p /d:c:\windows /m mp3 *

Be careful with the /p, however. You may be telling findstr to leave out a file that is important to you simply because it has one high-end ASCII sequence somewhere in the file.

Also, thanks, Hal: as if making me lust after xargs, -exec, ``, head, tail, awk, sed, and watch weren't enough, now I really want a real "time" command in Windows. And, no, I'm not talking about the goofy built-in Windows time command that shows you the time of day. I'm talking about seeing how long it took another command to run. Thank goodness for Cygwin!

Wednesday, March 11, 2009

Episode #9 - Stupid Shell Tricks: Display the Nth Line

Hal Says:

Here's a stupid little shell idiom. How do you print only the nth line of a file? There are only about a dozen ways to do this in the Unix shell, but the one programmed into my wetware is:
 $ head -<n> <file> | tail -1


Paul Responds:

I'm kind of partial to awk; awk is my friend. It's quick, dirty, and powerful (and I seem to learn about new techniques all the time, which makes it fun!):

 $ awk 'FNR == 42' file


Also, I like this command because it's shorter, and despite popular belief UNIX people don't really like to type :)
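
And for the sed fans, here are two more ways (the second one quits reading as soon as it prints the line, which is nice for huge files):

$ sed -n '42p' file
$ sed '42q;d' file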

Ed adds a little Windows perspective:

Ahhh... what I wouldn't give for head or tail in Windows. That sounds like a new motto for a Pauldotcom T-shirt or bumper sticker.

You can get most of the functionality you are describing here in Windows using the following construct:

C:\> find /v /n "" <file> | findstr /b /L [<n>] 


This may look crazy, but without head or tail, we need to trick Windows into doing what we want, as usual. What I'm doing here is using the find command to prepend line numbers (/n) to lines in the file that do not (/v) contain the string "". As we saw in Episode #3, searching for lines that do not have nothing shows all lines. Thus, the first portion of this command is actually prepending line numbers (in the form of [N], with the brackets) to each line in the file and sending them to standard out. I then pipe the result to the findstr command. The /b option tells findstr to display lines that have the string we are searching for at the beginning of a line. That way we won't get accidental collisions if [<n>] shows up inside of the file anywhere. We'll just be looking for the [<n>] that the find command prepended. I use a /L to indicate a literal string match. Otherwise, the double bracket around the n will confuse findstr.

The output is almost just what we want. There is one downside, though. There will be a [n] prepended to our line. But, that's pretty close, no?

Well, if you insist, you can remove that [n] with a FOR /F loop to do some parsing, but that starts to get really ugly if you just want to see the contents of the line. Anyway, because I luv ya, here goes:

C:\> find /v /n "" <file> | findstr /b /L [<n>] > temp.txt &
for /F "delims=[] tokens=2" %i in (temp.txt) do @echo %i & del temp.txt


Told you it was ugly. But, when you only have FOR /F loops to parse, you sometimes have to do this kind of thing.

Monday, March 9, 2009

Episode #8: Netstat Protocol Stats

Ed Says:

On Windows, the netstat command has really got a lot of features that are useful for analyzing network behavior. Even without installing a sniffer, you can learn a good deal about network traffic with a stock Windows machine by running netstat with the -s flag to see statistics for all supported protocols. You can select an individual protocol's stats with -p [proto], with TCP, UDP, ICMP, and IP supported. On Vista, they also added IPv6, ICMPv6, TCPv6, and UDPv6.
C:\> netstat -s -p ip

That'll show you IPv4 stats including packets received and fragments created.
C:\> netstat -s -p tcp

This one shows the number of active opens and reset connections, among other things. Those stats are useful if you suspect some kinds of denial of service attacks.

Hal Comments:

The Linux "netstat -s" command will also dump statistics:
$ netstat -s
Ip:
115851638 total packets received
237 with invalid headers
0 forwarded
0 incoming packets discarded
115825742 incoming packets delivered
72675668 requests sent out
2914 reassemblies required
1457 packets reassembled ok
Icmp:
34672 ICMP messages received
18 input ICMP message failed.
...

Unfortunately, while there are command line options to dump just the TCP or just the UDP statistics, they don't work consistently across different Linux distributions. In some cases they even include other protocol statistics, like IP and ICMP stats, along with the TCP or UDP stats.

I did come up with a fairly gross hack for pulling out sections of the report for a specific protocol:
$ netstat -s | awk '/:/ { p = $1 }; (p ~ /^Tcp/) { print }'
Tcp:
64684 active connections openings
25587 passive connection openings
1043 failed connection attempts
236 connection resets received
15 connections established
114808177 segments received
71655514 segments send out
24271 segments retransmited
11 bad segments received.
2906 resets sent
TcpExt:
1640 invalid SYN cookies received
15 ICMP packets dropped because they were out-of-window
57520 TCP sockets finished time wait in fast timer
...

The first part of the awk expression matches on the ":" character in the "header" line of each protocol section and sets our magic "p" variable to the current protocol name. That value remains in "p" until we reach the next header, and so on. The second part of the awk expression does a regular expression match against the current value of "p" and prints the current line as long as "p" matches the protocol we're looking for. That gets us the header line itself, plus all of the following lines of output up until the next header.

Why is this so clunky? Basically, Unix commands are generally poor at "remembering context" across multiple lines, so you often end up with these sorts of hacked solutions.
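
One consolation is that the awk version is easy to parameterize, so you can wrap it in a shell function or alias and pass the protocol name in as a variable:

$ netstat -s | awk -v proto=Udp '/:/ { p = $1 }; (p ~ "^"proto) { print }'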

Paul Says:

As byte_bucket mentioned, things work a bit differently on OS X. Hal's command above needs to have a lower case "tcp" in order to work in OS X:

$ netstat -s | awk '/:/ { p = $1 }; (p ~ /^tcp/) { print }'


Also the following command:

$ netstat -s -p tcp


Works great on both Linux and OS X. I find these commands very useful for network troubleshooting, especially given slow performance or high error counts on the network switch.

Friday, March 6, 2009

Episode #7 - Aborting a System Shutdown

Ed says:

Sometimes, when using a Windows box, Really Bad Things (TM) happen, forcing the system to shut down. For example, if someone exploits the system, and their exploit accidentally kills lsass.exe or services.exe, Windows is very unhappy. It pops up a dialog box expressing its discontent, telling you that it will reboot in 60 seconds.

But, suppose you don't want it to reboot that quickly? Maybe you need just a little more time to save a file, close something out, launch your retaliatory missiles, or whatever. Most of the time, you can abort a shutdown by running:
C:\> shutdown /a

Of course, without an lsass.exe or services.exe process, the box is pretty well hosed. But, this command can give you a little bit of extra time in event of dire emergencies, limping along with a machine that is only partly dead. You can then make the box reboot on your own time frame with the following command:
C:\> shutdown /r /t [N_seconds]

If you omit the /t, it'll reboot in 30 seconds. Use /t 0 to make it reboot now.

Hal Comments:

I've always hated the Unix shutdown command. I find the "write all" behavior more annoying than useful. I normally use "reboot", "halt", or "init 0" (stop and power down). That being said:
# shutdown -c           # cancels scheduled shutdown
# shutdown -r +1        # shut down and reboot in 1 minute
# shutdown -r 14:30     # shut down and reboot at 2:30pm

Interestingly, you can't schedule shutdowns with finer than one-minute granularity, though I suppose you could do something like:
# sleep 30; shutdown -r now


Paul Comments:

Interesting to note that the OS X shutdown command does not have the "-c" option allowing you to halt the shutdown.
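
Since a scheduled shutdown on OS X is just a shutdown process sleeping until its deadline, one possible workaround (a hedged suggestion, so test it before you rely on it) is to kill that waiting process to cancel the countdown:

# kill $(pgrep -x shutdown)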

Wednesday, March 4, 2009

Episode #6 -- Command-Line Ping Sweeper

Ed Says:

Here's a Windows command to do ping sweeps at the command line:

C:\> FOR /L %i in (1,1,255) do @ping -n 1 10.10.10.%i | find "Reply"

Here, I've got a FOR /L loop, which is a counter. My iterator variable is %i. It starts at 1, steps up by 1 through each iteration through the loop, going up to 255. I want to ping through a /24-sized subnet. I then turn off command echo (@), and ping each IP address once (-n 1). I scrape through the output using the find command, looking for "Reply". The find command is case sensitive, so I put in the cap-R in "Reply". Or, you could use /i to make the find case insensitive.

By the way, you can speed it up by adding "-w 100" to have ping wait only 100 milliseconds for each reply before giving up, rather than the default timeout.

(Note... I had "-t 100" here earlier, but fixed it for "-w 100". Thanks to @bolbroe for the catch. The fact is, I so often use -t with Windows ping to make it keep pinging a la Linux, it feels very natural to put -t in. But, the issue here is to make it wait, with -w, for 100 milliseconds.)

Hal Comments:

I have to admit that my first impulse here was to respond with "sudo apt-get install nmap". But Ed's going to be a stickler for our "built-in shell commands only" rule, so I guess I have to come up with something else.

Here's a fun approach that's very different from Ed's loop:
# ping -b -c 3 255.255.255.255 >/dev/null 2>&1; arp -an | awk '{print $2}'

Ping the broadcast address a few times and then scrape out your ARP table to get the IP addresses of the responding hosts (the old "ARP shotgun" approach). The only problem is that this only works for hosts on your local LAN.

So I guess my final solution is a lot like Ed's:
$ for i in `seq 1 255`; do ping -c 1 10.10.10.$i | tr \\n ' ' | awk '/1 received/ {print $2}'; done

By the way, notice the "tr \\n ' '" hiding in the middle of that shell pipeline? The problem is that the ping command generally produces multi-line output and I need to confirm that the packet was received (last line of output) before printing the IP address I pinged (first line of output). So I'm using tr to convert the multi-line output into a single line that's easier to tokenize with awk. This is a useful little shell programming idiom for your toolkit.

Ed stirs the pot a little bit more:

I like your broadcast ping approach. Nifty! Unfortunately, modern Windows boxen don't respond to broadcast pings. Thus, your command will find Linux and other machines on your same subnet, but not the Windows boxes. I tested it in my lab, and found all my Linux machines happily telling me about their existence, but my super stealthified (NOT!) Windows boxes were silent. Thus, while the broadcast ping is a nifty alternative for some special edge cases (targets on same subnet, don't care to find Windows boxes), I think the sweeper is the better way to go.

Monday, March 2, 2009

Episode #5 - Simple Text Manipulation - Reverse DNS Records

Paul Says:

There are many times when I run commands to collect information, such as hostnames and IP addresses, and the output is, well, less than desirable. For example, let's say that you have a file called "lookups.txt" that contains the following:

207.251.16.10.in-addr.arpa domain name pointer server1.srv.mydomain.net.
208.251.16.10.in-addr.arpa domain name pointer server2.srv.mydomain.net.

The output is not easy to read, so I like to manipulate it such that I get a list of IPs and hostnames:

$ awk -F . '{print $4 "." $3 "." $2 "." $1 " " $6 "."$7"."$8"."$9}' lookups.txt | cut -d" " -f1,6
10.16.251.207 server1.srv.mydomain.net
10.16.251.208 server2.srv.mydomain.net

Hal Comments:

The problem with your awk expression, Paul, is that you're assuming that all of the fully-qualified hostnames are four levels deep. What if your file also contains lines like:

16.254.16.10.in-addr.arpa domain name pointer www.mydomain.net.
17.254.16.10.in-addr.arpa domain name pointer mydomain.com.

The awk doesn't choke and die, but you do end up with weird output:

$ awk -F . '{print $4 "." $3 "." $2 "." $1 " " $6 "."$7"."$8"."$9}' lookups.txt | cut -d" " -f1,6
10.16.251.207 server1.srv.mydomain.net
10.16.251.208 server2.srv.mydomain.net
10.16.254.16 www.mydomain.net.
10.16.254.17 mydomain.com..


Yuck! Frankly, this looks like a job for sed to me:

$ sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).in-addr.arpa domain name pointer\(.*\)\./\4.\3.\2.\1\5/' \
lookups.txt

10.16.251.207 server1.srv.mydomain.net
10.16.251.208 server2.srv.mydomain.net
10.16.254.16 www.mydomain.net
10.16.254.17 mydomain.com

sed expressions like this end up looking like nasty thickets of backwhacks, because of all the "\( ... \)" expressions, but this approach allows us to re-order the octets of the IP address and remove all of the extra text in one fell swoop.

And, yes, a lot of people (including me) would probably use Perl instead of sed for this, because Perl's regular expression syntax allows for a much more compact command line. But Paul, Ed, and I have agreed to avoid diving into pure scripting languages like Perl.

Paul (aka Grasshopper) Says:

Yes, I was assuming a static hostname, and wrote it as a one-off to quickly parse my particular output. I now see that sed is even more powerful than I thought! This will certainly be a nice addition to some of the command line one-liners I use on a regular basis. Many times when doing a penetration test you have to move information, such as IP addresses, between tools, and this will make the job much easier.

Ed (aka Ed) Says:

I really do wish we had awk or sed on Windows. I know, I know... we can get them with Cygwin or other shells that we could add in. But, our ground rules here force us to rely on built-in commands. That means, to parse in Windows, we rely on FOR /F loops, which can parse files, strings, or the output of commands.

When I first saw Paul's post above, I came up with this pretty straightforward approach:

C:\> FOR /F "tokens=1-4,10-14 delims=. " %a in (lookups.txt) do @echo %d.%c.%b.%a %e.%f.%g.%h


Here, I'm parsing the file, using iterator variables starting with %a (FOR /F will automatically allocate more vars while it parses) and delimiters of . and spaces (gotta have that space there, because the dot overrides default parsing on spaces). I tokenize my variables around the first four and tenth through fourteenth places in the line, the IP address and domain name. Then, I dump everything out in our desired order. Simple and effective.

But, Hal brings up an interesting twist. Like Paul's approach, mine also has those ugly variable number of periods at the end, because we can't always assume that the domain name has four elements. I thought about it for a while, trying to push my first FOR /F loop to deal with this, and it got real ugly, real fast. Lots of IF statements made it impractical. So, I came up with a simpler approach: embedded FOR /F loops, the outer one to parse the file, and the inner loop to parse a string from the outer loop's results. Here it is:

C:\> FOR /F "tokens=1-5" %a in (lookups.txt) do @(@FOR /F "tokens=1-4 delims=." %i in ("%a") do @echo %l.%k.%j.%i %e)


What's this mess? Well, I use my outer FOR loop to parse lookups.txt into five components, using the default delims of spaces. %a will contain the IP address, with dots and all. The fifth item (%e) is the domain name. Then, in my inner FOR loop, I parse the string %a, using delims of periods and a variable of %i. That'll drop each octet of our IP address into a variable, which we can echo out. Furthermore, it preserves our domain name as one chunk in %e, regardless of the number of entities it has in it. I then just echo the IP address (reversing the octets, of course) followed by the domain name. There's one small drawback here: I leave the trailing period at the end of every domain name. There's only one there, and it's there for all of them, unlike the earlier approach. Still, this is very workable, and keeps the command syntax almost typable. :)