Friday, May 29, 2009

Episode #42: Listing and Dropping SMB Sessions

Ed jumps:

On a Windows machine, sometimes an admin or user needs to see currently open SMB sessions going from or to their box, perhaps being used to mount a remote file share, get remote registry access, or otherwise plunder... ahem... I mean access the system.

To get a list of SMB sessions a Windows machine has opened to destination servers (which could be other Windows boxen or smbd's on Linux or Unix), you could run:

C:\> net use
New connections will be remembered.

Status       Local     Remote                    Network

-------------------------------------------------------------------------------
OK           Z:        \\10.1.1.105\c$           Microsoft Windows Network
The command completed successfully.
That shows you outbound SMB connections, those that your machine is acting as a client on. To flip things around and see who has opened an SMB session with your machine (i.e., to display who your box is acting as an SMB server to right now), you could run:
C:\> net session

Computer               User name            Client Type       Opens Idle time

-------------------------------------------------------------------------------
\\FRED                 ED                   Windows 2002 Serv 0     00:00:40

The command completed successfully.
Note that it shows me the client computer name and the user who has made the connection. The client type refers to the operating system that initiated the inbound session (Windows 2002 is how XP is depicted here). We also see idle time.

That's all well and good, but what if you run those commands and notice some evil SMB session either to or from your box? Perhaps there is an SMB session set up by a bad guy or unauthorized user, and you want to kick them out.

If you want to drop sessions from the client-side, you could run:
C:\> net use \\[ServerMachineName] /del
You'll be prompted about whether you really want to drop that connection. When prompted, hit Y and Enter. If you don't want to be prompted, just add a "/y" to the command above.
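Putting those together, the no-questions-asked version looks like this:

C:\> net use \\[ServerMachineName] /del /y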

Or, if you want to blow away all SMB sessions that your client has initiated with server machines out there, you could run:
C:\> net use * /del /y
Now let's move to the server side. This one is important if you are responding to incidents in which a bad guy has opened an SMB session with one of your Windows servers, perhaps across your intranet. Maybe the server is vitally important, and you aren't allowed to pull the plug. Yet, you need to act fast to bump the bad guy off. Many Windows admins know how to do this at the GUI (launch compmgmt.msc, go to System Tools-->Shared Folders-->Sessions. Right click on evil session and select "Close session"). But, I find disconnecting SMB connections from the server-side much easier to do on the command line with:
C:\> net session \\[ClientMachineName] /del
That'll drop that pesky user and session, and keep your box running. You may want to disable the user account the bad guy relied on via the "net user [AccountName] /active:no" command, as mentioned in Episode #34: Suspicious Password Entries.

It's interesting to notice the lack of symmetry between disconnecting client-side and server-side SMB sessions. Dropping connections to servers with "net use" supports the * wildcard shown above, as well as the /y option to suppress the "Do you want to continue..." prompt. Dropping connections from clients with "net session" supports neither the * wildcard nor a confirmation prompt.

Hal retorts:

Assuming your Unix/Linux distro has the smbfs tools from the Samba project installed, mounting and unmounting Windows shares from a client is pretty straightforward. You can either use the "smbmount" command or just "mount -t cifs ..." as root:

# mount -t cifs //server/hal /mnt -o user=hal,uid=hal,gid=hal   # mount and map ownerships
# umount /mnt                                                  # unmount file system

The "mount" command will prompt you to enter the password for the specified "user=" and then map all the owner/group owner settings on files based on the specified "uid="/"gid=" options.

Figuring out what Windows shares your client has mounted is straightforward too. You can use either "mount" or "df" (and you don't need to be root here):

$ mount -t cifs
//server/hal on /mnt type cifs (rw,mand)
$ df -t cifs
Filesystem           1K-blocks      Used Available Use% Mounted on
//server/hal         627661376 146659564 448604092  25% /mnt

The only caveat here is that the user GUI may provide an alternate method for mounting Windows shares that may make it more difficult to figure out all of the file systems a given system has mounted. For example, when I mount Windows shares via the Gnome-based GUI on my Ubuntu system, it uses GVFS to handle the mount. There's really very little helpful information you can get out of GVFS on the command-line:

$ mount -t fuse.gvfs-fuse-daemon
gvfs-fuse-daemon on /home/hal/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=hal)
$ df -t fuse.gvfs-fuse-daemon
df: no file systems processed

The "mount" command tells me where the share is mounted, but not where it's mounted from. "df" has no clue at all. I hate GVFS.

So it may be more productive to interrogate your Samba server about what clients are currently accessing shares. You can use the "smbstatus" command on your Samba server host for this. What's interesting is that you don't have to be root to use "smbstatus". I'm not entirely certain that's a good thing, since it gives you information about other users' shares in addition to your own:

$ smbstatus
Samba version 3.0.33-3.7.el5
PID     Username      Group         Machine
-------------------------------------------------------------------
32752   hal           hal           elk          (192.168.4.1)
32733   hal           hal           elk          (192.168.4.1)
 5320   laura         laura         wapiti       (192.168.4.2)

Service      pid     machine       Connected at
-------------------------------------------------------
hal          32733   elk           Tue May 26 14:57:15 2009
laura        5320    wapiti        Tue May 12 11:33:32 2009
iTunes       5320    wapiti        Tue May 12 11:33:29 2009
hal          32752   elk           Tue May 26 15:02:29 2009

No locked files

You can see I'm mounting my "hal" share twice (once from the command line with "mount -t cifs" and once via GVFS, though you can't tell that from the above output). My wife Laura has got her homedir mounted on her desktop machine, along with her iTunes music folder.

If you have root access, you can use the "smbcontrol" command to forcibly disable currently active shares. You can either disable particular shares by PID (see the "smbstatus" output above) or ruthlessly crush all systems mounting a particular share:

# smbcontrol 32733 close-share hal       # close a single share instance, PID 32733
# smbcontrol smbd close-share hal # nuke all clients mounting "hal"

It should be noted, however, that the disconnected user can simply re-mount the given share at will. So if you really want to keep them off the server you'll need to remove their account (or disable the password) before knocking them off with "smbcontrol".
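For example, assuming the user authenticates against Samba's own password database, something like this would disable naughty Paul before bouncing him ("smbpasswd -e" re-enables the account later):

# smbpasswd -d paul                  # disable Paul's Samba account
# smbcontrol smbd close-share hal    # then knock him off the share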

One other item worth mentioning before I sign off this Episode is that the Samba tools also include a minimal version of the "net" command for your Unix/Linux systems. But many features are missing-- like "net use" for example. So I haven't found the Samba "net" command all that useful in general.

Wednesday, May 27, 2009

Episode #41: DoS or No DoS, That Is the Question

Ed muses:

I was talking with a sysadmin buddy a couple of months ago, who told me he thought his system was under a SYN flood Denial of Service attack, but he wasn't sure. I asked, "Why aren't you sure?" He told me that he couldn't get ahold of his network guys to look at the router and IDS. I said, "You don't need them... just measure it on your end system." "How?" he asked. "Count the number of half-open connections... Oh, and you should count the number of full-open connections too, in case you have a connection flood," I answered. "How?" he repeated.

I told him to use our good friend, netstat. Half-open TCP connections pile up during a SYN flood because the attacker uses spoofed source addresses, which never respond with the RESETs that would tear the half-open connections down. Netstat shows such items in its output as "SYN_RECEIVED". We can count the number of half-open connections using:
C:\> netstat -na | find /c "SYN_RECEIVED"
I'm simply using the /c option of the find command to count the lines showing connections in that state. Note that find is case sensitive, so I put SYN_RECEIVED in all caps. The find command with /i is case insensitive.

Please note that the number of normal half-open connections for most systems is relatively small, typically under a hundred. If you see several hundred, you may have a SYN flood.
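If you'd rather watch the count over time than eyeball a single sample, a quick and dirty loop does the job, with our old friend ping acting as a poor man's five-second sleep (a sketch-- adjust the iteration count to taste):

C:\> for /l %i in (1,1,10) do @(netstat -na | find /c "SYN_RECEIVED" & ping -n 6 127.0.0.1 > nul)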

Another possibility involves the attacker launching a connection flood, not just a SYN flood. Here, the bad guy won't spoof anything, but will actually complete the three-way handshake with your system again and again. Some bot-net attacks do this by sending HTTP requests to a flood target because it blends in with normal web surfing. We can count those with netstat too, using:
C:\> netstat -na | find /c "ESTABLISHED"
Now, the number of established connections is heavily dependent on the nature and use of your given machine. A busy mail server or web server may have several hundred, or it might not. It all depends. What we need to look for here is a deviation from normal behavior for the system, with a lot more connections than we normally expect.

But, the beauty here is that we are using built-in tools to determine whether we've got a SYN or connection flood, without having to bother the network or IDS guys.

Hal comments:

This is, of course, a lot easier in the Unix shell than in Windows. In fact, I can actually give you counts for all current socket states with a single command line:

$ netstat -an | awk '/^tcp/ {print $6}' | sort | uniq -c     
13 ESTABLISHED
29 LISTEN
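And if your distro includes the "watch" command, you can re-run that pipeline every few seconds and watch a flood develop in real time (note the backslash, which keeps the shell from eating awk's $6 inside the double quotes):

$ watch -n 5 "netstat -an | awk '/^tcp/ {print \$6}' | sort | uniq -c"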

Thanks for giving me an easy one, Ed. Maybe I'll do the same for you sometime. Maybe.

Monday, May 25, 2009

Episode #40: Ed's Heresy

Ed opens up by speaking a bit of heresy:

Let me start by saying that I love the command line. I have stated publicly that I find it an incredibly powerful paradigm for interacting with computers. When I really get on a roll, I've even been known to say that the GUI was a mistake that humanity should have avoided.

That said, I'd like to utter a bit of heresy for this here command-line blog. Sometimes... just occasionally... every once in a while... I do something on my Windows box using a GUI. Some of the GUIs just make it easier to get things done. Others are useful so I can check to make sure a change I made at the command line had my desired effect.

So, what does this have to do with command-line kung fu? Well, I launch pretty much every GUI-based tool on my Windows box using the command line. The cmd.exe window is my main user interface, which I periodically use to launch ancillary GUI tools that act as helpers.

You see, launching Windows GUI tools from the command line helps to avoid the constant churn of Microsoft moving things from version to version. Rather than digging through Start-->Programs-->Accessories... or whatever, I just kick off the GUI from my command line.

Truth be told, my workflow is a synthesis of command-line and GUI, with cmd.exe doing about 70% of the work, assorted GUIs doing another 20%, and 10% for VBS or (increasingly) Powershell. I've memorized many of the most useful GUIs that can be launched from the command line. They essentially come in three forms: MSCs (Microsoft Saved Console files, which hold MMC snap-ins), EXEs, and CPLs (Control Panel tools). Here are my faves, each of which can be launched at the Windows command-line, so you won't have to dig through the Windows GUI ever again:

lusrmgr.msc = Local User Manager
eventvwr.msc = Event Viewer
services.msc = Services Controller
secpol.msc = Security Policy Editor - This one is really useful because it allows you to alter hundreds of registry key equivalents and other settings that would be a pain to do at the command-line.

taskmgr.exe = Task Manager
explorer.exe = Windows Explorer
regedit.exe = Registry Editor
mmc.exe = Generic "empty" Microsoft Management Console, into which I can Add/Remove Snap-ins to manage all kinds of other stuff
msconfig.exe = Microsoft Configuration, including autostart entries and services - Note that this one is not included in cmd.exe's PATH on Windows XP (it is in the PATH on Vista). You can invoke it on XP by running C:\windows\pchealth\helpctr\binaries\msconfig.exe
control.exe = Bring Up the overall Control Panel

wscui.cpl = Windows Security Center control
firewall.cpl = Windows Firewall Config GUI
wuaucpl.cpl = Windows Automatic Update Configuration

If you'd like to see the other control panel piece parts, you can run:

C:\> dir c:\windows\system32\*.cpl
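
The same trick works for the MSC snap-ins, by the way:

C:\> dir /b c:\windows\system32\*.msc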


There are others beyond this list, but these are my trusty aids, extending my GUI. Of all of these, the ones I use most are secpol.msc (because of its access to hundreds of settings), msconfig.exe (as a quick configuration checker), eventvwr.msc (because Windows command-line tools for viewing events are kind of a pain), and good old regedit.exe (makes random adventures in the registry easier than with the reg command).

So, Hal and Paul... are there any GUI-based tools you find yourself launching from the command-line a lot? For Linux, Hal, is there a GUI tool you launch from the command-line because it's just easier to get a given task done in the GUI? And, surely Paul must launch GUIs from the Mac OS X command-line, given the platypus of an operating system he's saddled with. What say you, gentlemen?

Hal confesses:

Wow, I feel like this is an impromptu meeting of "GUI Users Anonymous" or something. As long as we're all testifying, I have to admit that I've always found both printer configuration and Samba configuration to be a huge hassle, and I will often end up using whatever GUIs happen to be available for configuring them.

Often I'll use the GUI to figure out a basic template for the configuration changes I need and then use command-line tools to replicate those configuration templates on other systems. While it's not always clear what configuration files the GUI might be tweaking, remember that you can use the trick from Episode #29 to find them: "touch /tmp/timestamp", make changes via the GUI, and then "find /etc -newer /tmp/timestamp" to find the changed files.
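The whole sequence, spelled out:

# touch /tmp/timestamp
  ... now make your changes in the GUI ...
# find /etc -newer /tmp/timestamp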

Similarly, there are a few GUIs in the Unix universe that actually try to teach you the command-line equivalents of tasks you're currently doing with the GUI. AIX enthusiasts will be familiar with SMIT-- a menu-driven tool that also lets you see what it's actually doing under the covers. Another example is the highly useful NmapFE (now Zenmap) front-end, which lets you drive Nmap from a GUI while simultaneously learning Nmap's command-line flags.

These days a lot of new Linux users are experiencing Linux almost entirely via the GUI. While I think this is excellent from the perspective of driving new adoption, at some point it's helpful to start digging around "under the covers" and figure out what's happening in terms of the actual commands being executed. This turns out to be straightforward because typically the GUI configuration information is just stored in text files. If somebody shows you the basics of find and grep, you can actually do a huge amount of self-discovery.

For example, let's suppose you're a Ubuntu user like me and you're curious about what exactly is happening when you select the menu choice for "Home Folder" and the graphical file browser pops up. Way back in Episode #10 I showed you how to find the names of files that contain a particular string:

$ sudo find / -type f | sudo xargs grep -l 'Home Folder'
[...]
grep: /usr/share/acpi-support/NEC: No such file or directory
grep: Computers: No such file or directory
grep: International.config: No such file or directory
grep: /usr/share/acpi-support/Dell: No such file or directory
grep: Inc..config: No such file or directory
[...]

Huh? What's with all the error messages?

What's going on here is that the find command is emitting file names containing spaces-- "/usr/share/acpi-support/NEC Computers International.config" and ".../Dell Inc..config"-- which are being misinterpreted by xargs. The normal fix for this problem is to slightly adjust both commands:

$ sudo find / -type f -print0 | sudo xargs -0 grep -l 'Home Folder'

"find ... -print0" tells find to terminate its output with nulls (ASCII zero) instead of whitespace. Similarly, "xargs -0 ..." tells xargs to look for null-terminated input and don't treat white space in the incoming file names as special.

The above command is going to generate a ton of output and it may take a while to sort through everything and find the file that's actually relevant. On my Ubuntu system, the menu configuration files live in the /usr/share/applications directory:

$ less /usr/share/applications/nautilus-home.desktop
[Desktop Entry]
Encoding=UTF-8
Name=Home Folder
[...]
Exec=nautilus --no-desktop
Icon=user-home
Terminal=false
StartupNotify=true
Type=Application
Categories=GNOME;GTK;Core;
OnlyShowIn=GNOME;
[...]

The "Name=" parameter is the name that appears for the particular menu choice and the "Exec=" parameter shows you the command that's being invoked.

You could even put together a quick little bit of shell fu to output the value of "Exec=" for a given menu item:

$ awk -F= '/^Exec=/ {print $2}' \
`grep -l 'Name=Home Folder' /usr/share/applications/*`

nautilus --no-desktop

Here we're using "grep -l ..." to output the file name that matches the "Name=" parameter we're searching for. We then use backticks to make the file name output of the grep command be the argument that our awk statement works on. The awk specifies "-F=" to split lines on "=" instead of whitespace, then looks for the line that starts with "Exec=" and prints the stuff after the "=". You could easily turn this into a shell script or alias if you find yourself doing it frequently.

Friday, May 22, 2009

Episode #39: Replacing Strings in Multiple Files

Hal Starts Off:

Wow, our last several Episodes have been really long! So I thought I'd give everybody a break and just show you a cool little sed idiom that I use all the time:

# sed -i.bak 's/foo/bar/g' *

Here we're telling sed to replace all instances of the string "foo" with the string "bar" in all files in the current directory. The useful trick is the "-i.bak" option, which causes sed to make an automatic backup copy of each file as <filename>.bak before doing the global search and replace.
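And if the substitution turns out to be a mistake, those backups make rollback a one-line loop (assuming nothing else in the directory ends in .bak):

# for f in *.bak; do mv "$f" "${f%.bak}"; done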

By the way, you can even do this across an entire directory structure, with a little help from the find and xargs commands:

# find . -type f | xargs sed -i.bak 's/foo/bar/g'

Of course, you could use search criteria other than just "-type f" if you wanted to be more selective about which files you ran sed against.
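For instance, to hit only the .conf files in the tree, something like:

# find . -type f -name '*.conf' | xargs sed -i.bak 's/foo/bar/g'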

Oh dear, I hope this isn't one of those "easy for Unix, hard for Windows" things again. Ed gets so grumpy when I do that.

Ed jumps in:
You nailed it, Hal, with that characterization. Unfortunately, cmd.exe has no built-in command for finding and replacing strings within the lines of a file. We can search for strings using the find command, and even process regex with findstr. But, the replacement part just doesn't exist there.

Thus, most reasonable people will either rely on a separately installed tool to do this, or use Powershell.

For a separately installed tool, my first approach would be use Cygwin, the free Linux-like environment for Windows, and then just run the sed command Hal uses above. Nice, easy, and sensical.

Alternatively, you could download and install a tool called replace.exe.

Or, there's another one called Find And Replace Text, which, as you might guess, is called FART for short.

To do this in Powershell efficiently, I asked Tim Medin, our go-to guy for Powershell, to comment.

Tim (our Powershell Go-To Guy) says:
This morning when Ed asked me to do a "quick" write up for Powershell, I thought to myself, "This won't be too bad..." I was wrong.

By default there are aliases for many of the commands in Powershell, so I'll show both the long and short version of the commands (yes, even the short command is long relative to sed).

The Long Way
PS C:\> Get-ChildItem -exclude *.bak | Where-Object {$_.Attributes -ne "Directory"} |
ForEach-Object { Copy-Item $_ "$($_).bak"; (Get-Content $_) -replace
"foo","bar" | Set-Content -path $_ }

The Short Way (using built in aliases)
PS C:\> gci -ex *.bak | ? {$_.Attributes -ne "Directory"} | % { cp $_ "$($_).bak";
(gc $_) -replace "foo","bar" | sc -path $_ }

This command is rather long, so let's go through it piece by piece.
gci -ex *.bak | ? {$_.Attributes -ne "Directory"}

The first portion gets all files that don't end in .bak. Without this exclusion, it will process file1.txt and the new file1.txt.bak. Processing file1.txt.bak results in file1.txt.bak.bak, but it doesn't do this endlessly, just twice.

The Where-Object (with an alias of ?) ensures that we only work with files and not directories because Get-Content on a directory throws an error.

ForEach-Object { Copy-Item $_ "$($_).bak"; (Get-Content $_) -replace "foo","bar" |
Set-Content -path $_ }
Once we get the files, not directories, we want, we then act on each file with the ForEach-Object (alias %). For those of you haven't yet fallen asleep, I'll further break down the inner portion of the ForEach-Object:

Copy-Item $_ "$($_).bak"
First, we copy the file to our backup .bak file. We have to use the $() in order to use our variable in a string so we can append .bak.

Finally, we get to the search and replace (and it's about time, too!).
(Get-Content $_) -replace "foo","bar" | Set-Content -path $_

Get-Content (gc) gets the contents of the file. We wrap it in parentheses so we can act on its output in order to do our replace. The output is then piped to Set-Content (sc) and written back to our file.
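If all the surrounding machinery obscures the core idea, here is that same replace against a single, hypothetical file:

PS C:\> (Get-Content file1.txt) -replace "foo","bar" | Set-Content -path file1.txt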

We could make this work a little better if we used variables, but then we're more in script-land than shell-land, which probably violates the almighty laws of this blog (OK, we may already be there). For kicks, I'll show you how to use variables so you can add this to your big bloated belt of Windows-fu.
$a = (gci | ? {$_.Attributes -ne "Directory"}); $a | % { cp $_ "$($_).bak";
(gc $_) -replace "foo","bar" | sc -path $_ }

The difference between our original command and this command is that the $a variable grabs a snapshot of the directory before we copy files, so we won't operate on the new .bak files.

After all this work we have done the same thing as the mighty sed. Sadly, even the power of Powershell is no match for the efficiency of sed.

Ed closes it out:
Thanks for that, Tim. Nice stuff!

Wednesday, May 20, 2009

Episode #38: The Browser Count Torture Test

Hal just can't resist:

One of my customers was interested in some stats on which browsers were hitting their web site most often. My first thought was to use a Perl script to parse the user agent strings out of the Apache access_log files. But, hey, I write for Command Line Kung Fu, so how far can I get just using standard Unix tools? Besides, trying to do the same thing in the Windows shell will be a pain in the... humiliating defeat... interesting learning experience for Ed, and I'm committed to the growth of my friends.

First let me show you what I came up with, then I'll explain it in detail:

# grep 'POST /login/form' ssl_access_log* | \
sed -r 's/.*(MSIE [0-9]\.[0-9]|Firefox\/[0-9]+|Safari|-).*/\1/' | \
sort | uniq -c | \
awk '{ t = t + $1; print} END { print t " TOTAL" }' | \
sort -nr | \
awk '/TOTAL/ { t = $1; next }; { $1 = sprintf("%.1f%%\t", $1*100/t); print}'

46.4% Firefox/3
27.0% MSIE 7.0
14.3% Safari
5.3% MSIE 6.0
3.0% Firefox/2
2.4% -
1.2% MSIE 8.0
0.3% Firefox/1

Here's the line-by-line interpretation:


  1. I didn't want to count every single page access, but was instead more interested in counting browsers by "user session". Since the site requires a user login for access, I used posting the secure login form as a proxy for recognizing individual sessions. Close enough for jazz.

  2. Now, to pull out the browser name/version from the user agent string. The data here is annoyingly irregular, so it looks like a good task for sed. Notice that I'm using the "-r" option in GNU sed to use extended regular expression syntax: not only does this allow me to use "|" in the regexp, but it also means I don't need to backwhack my parens to create sub-expressions.

    The regular expression itself is interesting. I'm creating a sub-expression match on either "MSIE <vers>.<sub>", "Firefox/<vers>", or "Safari" (I don't find tracking Firefox sub-versions or Safari version numbers that interesting, but as always "your mileage may vary"). Anything that doesn't match one of these browser patterns ends up matching a hyphen ("-") character, which are plentiful in Apache access_log entries.

    I place ".*" before and after the sub-expression, which matches the rest of the line before and after the browser string. However, since that text is not included in the sub-expression, when I replace the matching line with the sub-expression then the rest of the text is dropped. That leaves us with an output stream of just the browser info, or "-" for lines that don't match one of the major browsers we're tracking.

  3. Now that we've got a data stream with the browser info, it's time to count it. "... | sort | uniq -c" is the common idiom for this, and we end up with output like:

        290 -
         34 Firefox/1
        363 Firefox/2
       5534 Firefox/3
        632 MSIE 6.0
       3207 MSIE 7.0
        139 MSIE 8.0
       1708 Safari

  4. The next line is a common awk idiom for totalling a column of numbers. We print out each line as it's processed, but also keep a running total in the variable "t". After all the input has been processed, we use an "END" block to output the total. Now our output looks like:

        290 -
         34 Firefox/1
        363 Firefox/2
       5534 Firefox/3
        632 MSIE 6.0
       3207 MSIE 7.0
        139 MSIE 8.0
       1708 Safari
      11907 TOTAL

  5. The next "sort -nr" not only puts our data into numerically sorted order, but also has the side-effect of moving the "TOTAL" column up to the first line of output. We're going to make use of this in the awk expression on the next line.

  6. The last awk expression is a little psychotic, so let's take it piece by piece. The first section, "/TOTAL/ { t = $1; next }", matches our initial "TOTAL" line and puts the total number of entries into the variable "t". The "next" causes awk to skip on to the next line without printing the current line ("TOTAL").

    The other portion of the awk code will handle all of the other lines in the output. What we're doing here is replacing the raw count number in the first column with a percentage. The "sprintf(...)" format string looks a little weird, but it means a floating point value with one decimal place ("%.1f"), followed by a literal percent character ("%%"), followed by a tab ("\t"). The numeric value we plug in is the raw count from column 1 of the output, times 100, divided by the "TOTAL" value we extracted from the first line of output.


And there you have it. The agonized squealing you're hearing is Ed wondering how he's going to even get close to this in the Windows shell. I can't wait to see what he comes up with.

Ed responds:
Wow! That's some serious fu there, Hal. And, I mean both serious and fu.

Thanks for the interesting learning opportunity, kind sir. How delightful!

As you know, we're kinda hampered with cmd.exe in that we get regex support from findstr, which cannot do extended regular expressions like sed -r. Therefore, we cannot do the funky "|" in the regex. Our resulting command will have to include more piece-parts for each browser.

And, as we discussed in Episode #25: My Shell Does Math, we have access to simple integer math in cmd.exe via "set /a", but floating point and fractional division results cause problems.

Still, we can get some useful output that tells us the number of each kind of browser and a handy total like this:

C:\> echo MSIE > browser.txt & echo Firefox >> browser.txt & echo Safari
>> browser.txt & echo - >> browser.txt & (for /f %i in (browser.txt) do
@echo %i & type ssl_access_log | find "POST /login/form" | find /c "%i" & echo.)
& del browser.txt
MSIE
873

Firefox
1103

Safari
342

-
2327

In this command, I'm first building a little file called browser.txt containing the different browser strings that I'd like to count. I'll then iterate over that file using a FOR /F loop. I'd much rather do this by iterating over a string containing "MSIE FIREFOX SAFARI -", but unfortunately, FOR /F parses strings into a series of variables all in one FOR /F iteration, making it useful for parsing a string into different variables (like %i %j %k, etc.). But, FOR /F used with a string does not pull apart a string into pieces that vary at each iteration through the loop. Boo, FOR /F! So, we compensate by building a little file with one browser per line, and then we iterate over that.

For each browser in browser.txt, we display the browser name (echo %i), and scrape through our ssl_access_log using the plain old find command to look for lines with "POST /login/form". I then take the output of that, pipe it through find with a /c option to count the number of occurrences of the %i iterator, which is the name of each browser. Note that the - will total all browsers, since their log entries have a dash in them. After my little looping escapade, I delete the temporary browser.txt file that I created at the beginning.

The output, while not as beautiful as Hal's, still is useful -- you see the number of POST login actions per browser, and the total. Why, you could even add a little "& calc.exe" at the end to pop up a calculator to do your percentages. :)
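Or, if integer percentages are close enough for you, "set /a" can manage the division after all-- for example, using the MSIE count above:

C:\> set /a 873 * 100 / 2327
37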

Monday, May 18, 2009

Episode #37: Show Account Security Settings

Ed engages:

Yesterday, I was doing a presentation for a bunch of auditors, and a nice question came up from the attendees: "How can I quickly see the local account security settings on a Windows box from the command line?" When I gave the answer, I saw a lot of people's eyes light up. Of course, whenever an auditor's eyes start to flicker, we should all watch out. :)

Seriously, though... the vast majority of the people in the room quickly wrote down a note with my answer, so I figured it would make a good episode here.

On Windows, you can see overall security settings for all accounts on the box using the command:

C:\> net accounts
Force user logoff how long after time expires?: Never
Minimum password age (days): 0
Maximum password age (days): 42
Minimum password length: 0
Length of password history maintained: None
Lockout threshold: Never
Lockout duration (minutes): 30
Lockout observation window (minutes): 30
Computer role: WORKSTATION
The command completed successfully.

A simple little command like that shows really useful information, for auditors, pen testers, general security personnel... great stuff. We've got password aging information, minimum password length, password history (so users can't just reset their password to an older one they used to have), the threshold of bad logon attempts for account lockout, the time duration of account lockout, and the amount of time before a locked out account is re-activated.

The output I show above is the default settings for most versions of Windows, including Win2K, WinXP, and Vista (Yup... minimum password length of 0 by default!). On Win2k3, the only difference is that the "Computer role:" says SERVER.

Another nifty related command is:

C:\> net accounts /domain

You can run this on any system that is a member of the domain, and it'll show you the domain-wide settings for accounts.

Pretty cool, and all in one place.

So, what've you got for us on Linux, big guy?

Hal reports in:

I'm sure you all are getting fairly tired of this, but I have to give my usual disclaimers:

1) Different Unix systems handle password security settings in different ways, so we're just going to focus on Linux

2) The answer is different if you're working with a network-based authentication database like LDAP or Kerberos, but for purposes of this article we're just going to stick to local password files

With those disclaimers in mind, the basic answer is simple:

# chage -l hal
Last password change : Jul 14, 2007
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

The "chage" command can be used to get (and set) basic password security parameters for accounts on your Linux system (other Unix variants often use the "passwd" command for this). This is actual output from one of my test systems and shows you the standard Linux defaults for these parameters, which are obviously not terribly secure. You may change the defaults by modifying the /etc/login.defs file, but be aware that the defaults you set in login.defs will only apply to new accounts that you create with the built-in "useradd" program that comes with Linux. If you use some other scheme for creating accounts, then you'll have to use the "chage" command to manually set these values after you create each account.

If you compare the "chage" output with the output of Ed's "net accounts" command, you'll notice that "chage" doesn't have anything to say about password history settings or "lockout on failure" parameters. That's because this level of password security is a property of the lower-level PAM configuration on most Unix systems. On Linux, the pam_cracklib and pam_unix modules take care of password history and strong password enforcement, while pam_tally is responsible for "lockout on failure". Unfortunately there's no way to audit the settings for these modules other than to look at the actual PAM configuration files, usually found in /etc/pam.d.

Friday, May 15, 2009

Episode #36: File Linking

Paul pops off:

Creating links between files is a handy feature in UNIX/Linux systems. There are many instances where you need to have a copy of the file (or directory) in a particular location, but only want to maintain one original. For example, I was running a program to check the security of my Apache configuration file. It expected the file to exist in "/usr/local/apache2/conf/httpd.conf", but the original file was located at "/etc/httpd/conf/httpd.conf". To solve this problem I created a "soft" link as follows:

$ ln -s /etc/httpd/conf/httpd.conf /usr/local/apache2/conf/httpd.conf


The above "ln" command takes the "-s" flag to indicate a soft link, which creates a pointer to the original file. Next you specify the original file, followed by the file that will point to the original. Many will forget which one comes first (the original or the pointer), so don't forget that the original file always comes first :) Oh, and you can view the links by using the ls -l command:

$ ls -l /usr/local/apache2/conf/httpd.conf
lrwxrwxrwx 1 root root 26 Apr 21 13:57 /usr/local/apache2/conf/httpd.conf -> /etc/httpd/conf/httpd.conf


Hal chimes in:

Let me show you one more useful trick with the "ln" command. You can actually create symlinks to an entire directory of files with a single "ln" command:

# cd /usr/local/bin
# ln -s ../depot/clamav/current/bin/* .

First we "cd" to /usr/local/bin. The "ln" command creates a link to every object under /usr/local/depot/clamav/current/bin. The names of the links in /usr/local/bin will have the same name as the files under .../clamav/current/bin.

This is how I manage software that I've built from source on my systems. In fact, .../clamav/current is itself a symlink to a directory like .../clamav/0.95.1. Whenever I build the latest version of ClamAV, I install it in its own .../clamav/<vers> directory and just change the .../clamav/current symlink to point to the latest and greatest version. Since all the symlinks under /usr/local/{bin,sbin,etc,lib,include} are expressed using the .../clamav/current link, every other link in the hierarchy automatically starts pointing at the right version as soon as I change the .../clamav/current link. And it's easy to revert too, just in case the new version isn't working for some reason. Slick.
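Flipping the "current" link when a new build goes in is itself a one-liner-- "-f" replaces the old link and "-n" keeps ln from dereferencing it into the old directory (the version number here is hypothetical, of course):

# ln -sfn 0.95.2 /usr/local/depot/clamav/current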

Ed responds:

Sadly, Microsoft never got around to implementing a pure-play shortcut-creating feature inside of cmd.exe. Because of that, several folks have released third-party tools that do so. Some nice ones include the NT resource kit tool simply called shortcut.exe, Pixelab's xxcopy, and NirSoft's NirCmd.

But, downloading a third-party tool isn't our way at this here blog. So, we must explore other options.

While cmd.exe itself doesn't have a feature for creating shortcuts, wscript, which is built in, does. There are many examples out on the Internet for creating shortcuts with wscript, but I've boiled them down to their bare minimum:

set WshShell = WScript.CreateObject("WScript.Shell" )
set oShellLink = WshShell.CreateShortcut( Wscript.Arguments.Named("shortcut") & ".lnk" )
oShellLink.TargetPath = Wscript.Arguments.Named("target")
oShellLink.Save

The above script takes two arguments: the name of the target you want to create a shortcut to (/target:) and the shortcut name itself (/shortcut:). Note that the target could be a file or a directory. To create a shortcut using this script, we could dump all of that stuff above into a file called shortcutter.vbs, and then run it with the wscript interpreter.

"Ah... but that would be a scripting solution and not a command line," you might say. "You need to create a single command line that addresses the challenge."

Thanks for the delightful reminder. What, are you on Hal's payroll? Don't you have anything better to do with your time than taunt me? ;)

OK... I'll take your input and respond with this for a command line:

C:\> echo set WshShell = WScript.CreateObject("WScript.Shell" ) > shortcutter.vbs &
echo set oShellLink = WshShell.CreateShortcut( Wscript.Arguments.Named("shortcut") ^& ".lnk" )
>> shortcutter.vbs & echo oShellLink.TargetPath = Wscript.Arguments.Named("target")
>> shortcutter.vbs & echo oShellLink.Save >> shortcutter.vbs &
wscript shortcutter.vbs /target:[source] /shortcut:[shortcut]


It pretty much types itself, doesn't it? Easy!

Uh.... or not.

I'm simply creating the vbs script, which I'm naming shortcutter.vbs, and then invoking it to create the shortcut. I don't delete it at the end, because I want to keep it around for future uses. These things come in handy, you know.

Wednesday, May 13, 2009

Episode #35: Remotely Locking Out User While Preserving Session

Ed kicks it off:

We received a request the other day from Mr. Fordm via the Pauldotcom IRC channel. He was wondering if there was a way to lock out a user engaged in an active session on a machine. This kind of thing comes up from time to time, often during abrupt employee termination. Here's the scenario: User John Doe gets canned. He's sitting at his computer logged on in his cubicle and the IT or security staff is instructed to just get him off the machine immediately. Any delay, and there is a chance he'd launch the missiles against friendly targets or something.

The security guy suggests just remotely shutting the system down. But, no... management wants more. They want to preserve the currently logged on session so they can see if John had started to launch the missiles by typing:

C:\> wmic missiles call launch target=...*

So, how can we lock the user out while preserving the GUI session which might hold some juicy info?

First off, we want to change the user's password. Otherwise, he or she would log right back in once we lock the session. Let's assume the user is logged in via a local account, and change the password by using remote command execution via WMIC. We covered remote command execution in Episode #31, which we'll use to invoke the "net user" command to change the password:

C:\> wmic /node:[IPaddr] /user:[Admin] /password:[password] process call
create "net user [user] [NewPassword]"

You can go further, disabling the account so that no one can login with it until you re-enable it, by running:

C:\> wmic /node:[IPaddr] /user:[Admin] /password:[password] process call
create "net user [user] /active:no"

Remember, if you want to get back into this user's session later, you'll have to re-enable that user by running:

C:\> wmic /node:[IPaddr] /user:[Admin] /password:[password] process call
create "net user [user] /active:yes"

Next, we've got to lock the session. On first blush, you might think to use the following command, wrapped up inside of WMIC for remote execution:

C:\> rundll32.exe user32.dll,LockWorkStation


When executed by a local user currently logged on to a Windows box, this will lock the workstation. Nice... but... executed remotely, using WMIC as shown above, won't do the trick on most versions of Windows. You see, this command against a remote target won't be able to get access to the user's currently logged on console GUI session, so nothing happens.

You might think that we can get a little more intricate by running the logoff command against the user, again wrapped up inside of WMIC:

C:\> logoff


Nope... same problem. Works great locally, but remotely, it can't interact with that console session. And, worse... if it did work, it would eliminate the session with the juicy information we want to preserve when it logs off the user.

So, what to do? There's a great command for doing just this kind of thing: tsdiscon.

You can run it as follows:

C:\> wmic /node:[IPaddr] /user:[Admin] /password:[password] process call
create "tsdiscon"
Alternatively, the tsdiscon command has an option to run remotely:

C:\> tsdiscon console /server:[IPaddr] /v

This works like a champ on XP, locking the user at the console out, while preserving the session.

Note that tsdiscon, when run remotely, will pass through your current user's authentication credentials to the target IPaddr machine. Thus, make sure you are logged in with a user and password combination that are also in the admin group of the target machine, or that have domain admin privileges.

Unfortunately, while this works great on XP, the tsdiscon command doesn't allow you to disconnect the console session for Windows Vista or 2008 Server. I've confirmed this in my lab, and have found references to that limitation in Microsoft documentation. On Vista and 2008, you can use tsdiscon to disconnect RDP/Terminal Services sessions other than the console session (you can get a list of sessions on Vista and 2008 by running "query session" or by running "qwinsta" on XP). Sadly, I haven't found a remote command-line method for closing the console session on Vista or 2008 server while preserving that session. The rwinsta command in XP and Vista resets a session on a Vista or XP box, when used as follows:

C:\> wmic /node:[IPaddr] /user:[Admin] /password:[password] process call
create "rwinsta console"

...but you'll lose all of the current session information and running programs when rwinsta kills the session. Still, that'll let you lock out the user so he can't launch the missiles... but at the cost of losing the cmd.exe session history showing that he tried to launch them. For most purposes, that'll suffice. And, I guess it provides yet another reason to stay on XP (as if you needed any more of them).
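By the way, qwinsta can point at a remote machine too, which is handy for eyeballing the session list before you reset anything (the same pass-through credential caveat as tsdiscon applies):

C:\> qwinsta /server:[IPaddr]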

If you know of a way to remotely disconnect a console user session on Vista using built-in command-line tools, please do send in a suggestion to suggestions@commandlinekungfu.com, and I'll gladly add it to this article.

*Current versions of Windows do not expose the missiles alias within wmic. In Windows 9 (code name: "We miss Bill"), though, it will be built-in, along with the callable method "launch". Just wait.

Hal reports from the bunker:

As Ed points out, the first trick is to lock the user's account. I'm going to assume that the system is using local password files, rather than a networked authentication database such as LDAP or Kerberos. These latter systems have their own command-line interfaces which allow you to lock user accounts, but they're outside of the scope of this blog.

So we need to SSH into the user's workstation and gain root privileges via su or sudo. Note that this assumes you have an SSH server running for remote maintenance tasks. A lot of Linux workstation builds don't automatically configure an SSH server by default. You're "Seriously Out of Luck" in these cases, and the best that you can do is try to seize the workstation before the user has a chance to launch their missiles. If you have an intelligent switch fabric, you might want to move the user's workstation onto an isolated VLAN before seizing the workstation. That way, the user might have a chance to trash their own system, but less opportunity to launch missiles at other targets.

Once you're into the system, use "passwd -l" to lock the user's account ("passwd -u" will unlock the account again, btw). Let's use Paul as our example fall guy again:


# passwd -l paul

"passwd -l" can have different effects, depending on what flavor of Unix you're using. On Linux systems, the usual practice is to introduce a "!" character at the front of the user's password hash. This renders the hash invalid so users can't log in, but it's easy to undo the change if you decide you later want to let the user into the system. Some Linux systems go further and set the "account disabled as of ..." field in the /etc/shadow file (it's the second-to-last field for each entry) to a date in the past so that even just resetting the password hash is insufficient to unlock the account.

On older, proprietary Unix systems like Solaris, "passwd -l" usually changes the user's password hash to an invalid string like "*LK*", which unfortunately loses the user's original password hash. However, at least on Solaris systems, the cron daemon will actually refuse to execute jobs for users whose password entry is "*LK*". This means clever users can't set up automated tasks to re-open access to their systems (or launch missiles). When locking accounts on Linux systems, you should also make sure to disable any cron jobs that user may have set up:

# crontab -l -u paul > /root/paul.crontab
# crontab -r -u paul

Here we're making a backup copy of Paul's crontab under /root and then removing all cron jobs. You could later restore Paul's crontab with "crontab -u paul /root/paul.crontab".

If you're worried about the user logging into the workstation remotely after you've turned on the screen locker, then you also need to be careful that the user has no "authorized_keys" files, or even ".[sr]hosts" and hosts.equiv files if you're allowing "HostBasedAuthentication":

# mkdir /root/paul-trustfiles
# mv ~paul/.ssh/authorized_keys ~paul/.[rs]hosts /etc/*hosts.equiv /root/paul-trustfiles

OK, that should be sufficient for keeping that naughty Paul out of the machine. As far as turning on the screen locker, there are a lot of different options on different Unix systems, but let's just stick with the popular (and widely available) "xlock" program. Whatever program you choose to use, the biggest trick to remotely enabling the screen locker is to first acquire the necessary credentials to access the user's X display:

# export DISPLAY=:0.0
# cp ~paul/.Xauthority /root
# su paul -c 'xlock -mode blank -info "This workstation administratively locked"'

On the first line, we set our "DISPLAY" environment variable to match the user's display-- normally ":0.0"; you can validate this with the "who" command if you're not sure. On the second line, we grab the user's "magic cookie" file, which allows us access to the X server on the specified "DISPLAY". Finally, we turn on the xlock program with just a blank, black screen. The above example also demonstrates that you can specify an informational message that the user sees when they try to unlock their workstation.

Note that our example starts the xlock program as user "paul", which means the password for the "paul" account-- rendered invalid with the "passwd -l" command earlier-- will be required to unlock the screen. You could actually dispense with the "su paul -c" and start the xlock program as root, thus forcing somebody to enter the root password to unlock the screen. Of course, if Paul happens to know the root password for his workstation, this is not a good idea (you certainly don't want to lock the root account on the system)! However, another possibility would be to actually "su" to some other user account when starting up the screen locker, just to make things more difficult for Paul. But I think you're probably better off using Paul's account, since we know that user has an invalid password. To unlock the screen again, once Paul has been safely escorted out of the building, just kill the xlock process.
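Something like this should do it, assuming your system has pkill:

# pkill -u paul xlock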

Monday, May 11, 2009

Episode #34: Suspicious Password Entries

Hal Says:

Older, proprietary Unix systems tend to include the "logins" command, which is a handy little command for searching your passwd and shadow files for different sorts of information. In particular, "logins -p" (find accounts with null passwords) and "logins -d" (find accounts with duplicate UIDs) are useful when auditing a large user database. Unfortunately, the "logins" command doesn't exist in Linux. But of course you can emulate some of its functionality using other command-line primitives.

Finding accounts with null passwords is just a simple awk expression:

# awk -F: '($2 == "") {print $1}' /etc/shadow

We use "-F:" to tell awk to split on the colon delimiters in the file and then look for entries where the password hash in the second field is empty. When we get a match, we print the username in the first field.

Finding duplicate UIDs is a little more complicated:

# cut -f3 -d: /etc/passwd | sort -n | uniq -c | awk '!/ 1 / {print $2}'

Here we're using "cut" to pull the UIDs out of the third field of /etc/passwd, then passing them into "sort -n" to put them in numeric order. "uniq -c" counts the number of occurrences of each UID, creating one line of output for each UID with the count in the first column. Our awk expression looks for lines where this count is not 1 and prints the UID from each matching line.

Another useful password auditing command is to look for all accounts with UID 0. Normally there should only be a single UID 0 account in your password file ("root"), but sometimes you'll see attackers hiding UID 0 accounts in the middle of the password file. The following awk snippet will display the usernames of all UID 0 accounts:

# awk -F: '($3 == 0) {print $1}' /etc/passwd

More generally, you can use the "sort" command to sort the entire password file numerically by UID:

# sort -t: -k3 -n /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
...

"-t" is used to specify the column delimiter, "-k" specifies which column(s) to sort on, and "-n" means do a numeric (as opposed to alphabetic) sort. The advantage to viewing the password file this way is that all the UID 0 accounts bubble right up to the top of the output, plus it's easier to spot accounts with duplicate UIDs this way.

Ed responds:

OK, sports fans... Hal really threw down the gauntlet here. Brace yourselves, because this is gonna get ugly. You've been warned.

Unlike Linux with its /etc/passwd and /etc/shadow files, Windows doesn't make it easy to access user account password hashes. Thus, it's much harder for us to determine if a password is blank... but we do have some options.

For starters, we could blatantly violate our ground rules here and use third-party tools. One option that pops into mind is fgdump, my favorite tool for dumping Windows hashes. We could run:

C:\> fgdump -c & find "NO PASSWORD" 127.0.0.1.pwdump & del 127.0.0.1.pwdump


This command invokes fgdump, which runs against localhost by default, with the -c option to turn off the dumping of cached credentials, making it give us only the local SAM database. Unfortunately, fgdump doesn't have the option of displaying the hashes on standard output, but instead stores its results in a file called [IPaddr].pwdump. So, we then run the find command to look for output that contains the words "NO PASSWORD" in this file, and then delete the file.

Now, keep in mind that if a given user has a password that is 15 or more characters in length, that user will authenticate using only the NT hash, and Windows will set the LANMAN hash to a fixed padding value, that old AAD3B4... stuff. In its output, fgdump will display "NO PASSWORD" for the LANMAN hash of such accounts, even though they do have an NT hash with an NT password. Thus, to avoid false positives with accounts that have passwords greater than 14 characters, we should tweak our command to:

C:\> fgdump -c & find "NO PASSWORD*********************:NO PASSWORD" 127.0.0.1.pwdump
& del 127.0.0.1.pwdump

Easy.

Yeah, it's easy, if you throw away our treasured rule of using only built-in tools.

But, there's another way, which violates a completely different ground rule we've got around here. Instead of using a third-party tool, we could rely on built-in functionality via a Visual Basic Script. There's a great script from the
awesome Scripting Guys, available here, which attempts to change each user's password from blank to blank. If it is successful, you've got a user with a blank password. Nice and easy!

But, this one also throws another precious ground rule under the bus. That is, we aren't supposed to be using scripts, but instead we rely on single (albeit at times complex) commands.

For a third option, why don't we try to mimic the operation of this VB script at the cmd.exe command line? We could change a user password to blank by running:

C:\> net user [user] ""


This command tells the system to change the password of [user] to blank. If the password policy allows such passwords, it will succeed. Ummm... that's no good for us, because it succeeds regardless of the current user's password. So, this command violates a third coveted rule around here: Commands have to actually work.

Oooookay then. Is there a fourth option? Turns out there is, but it gets pretty ugly. The basis of this command is to rely on "net use" to make an SMB connection locally, thusly:

C:\> net use \\[hostname] "" /u:[user]


If the guest account is disabled, and you otherwise have a default security policy, the system displays the following text if [user] has a blank password:

System error 1327 has occurred.

Logon failure: user account restriction.
Possible reasons are blank passwords not allowed, logon hour restrictions,
or a policy restriction has been enforced.

Note that first possible reason -- blank passwords.

Also, note that this same message comes up if there are policies defined that restrict the account from logging on during certain times of day or other policy restrictions. But, still, in most environments, this is an indication that the password is blank. Not perfect, but good enough for most cases.

Building on this, here ya go, a "single" command that checks to see if local accounts have blank passwords, without using any third-party tools or scripts:


C:\> FOR /F "tokens=2 skip=1" %i in ('wmic useraccount list brief') do @echo.
& echo Checking %i & net use \\[hostname] "" /u:%i 2>&1 | find /i "blank" >nul
&& echo %i MAY HAVE A BLANK PASSWORD & net use * /del /y > nul

Wow! There's a mess, huh? Here's what I'm doing...

I'm setting up a FOR /F loop to iterate on the output of the command 'wmic useraccount list brief', which will show all of the locally defined accounts on the box. I'm parsing the output of that command by skipping the first line (which is column headers) and setting the value of my iterator variable to the second item in each line (the first is the Account Type, the second is the SYSTEM\username).

I'm then echoing a blank line to our output (echo.) to make things prettier followed by displaying a message that I'm checking a given account (echo Checking %i). Then, I try to make an SMB connection to our local hostname (you really should put in your own system's hostname... using \\127.0.0.1 isn't reliable on every version of Windows). The attempted SMB connection has a password of blank ("") and a user name of our current iterator variable (/u:%i).

Now, if the account has a blank password, I'll get an error message that says: "Logon failure: user account restriction. Possible reasons are blank passwords...". Remember that if you have any of those other restrictions defined on the box, our little one-liner will give you false positives.

Then, I redirect Standard Error into Standard Output (2>&1) so that I can scrape through the error message with the find command, looking for the word "blank". I dump the output of that to nul so we don't see its ugliness. Then, if the find command succeeds (&&), I print a message saying that %i MAY HAVE A BLANK PASSWORD. Note the weasel word "MAY". That's because there may be other account restrictions applied to the account or system.

Finally, I drop any residual SMB connections we've made (net use * /del /y), dumping its output to nul.
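Here's roughly what a run looks like on a hypothetical box named MYBOX, where only the account fred trips the check:

Checking MYBOX\Administrator

Checking MYBOX\fred
MYBOX\fred MAY HAVE A BLANK PASSWORD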

Whew! I tried several other methods for doing this at the command line, but they got even uglier, believe it or not.

Also, note that the above command depends on the Guest account being disabled. If you have that account enabled, it'll show that no accounts have blank passwords, as you'll never get the requisite error message. But, for most production environments, you really should disable that Guest account, you know. You can do that at the command line with:

C:\> net user guest /active:no


Be careful, though, to make sure that you don't have any apps that actually rely on the Guest account being enabled.

Now, let's see what else Hal has in store for us in his initial challenge...

Ahhh... userID numbers, better known as SIDs in Windows. Well, Windows assigns those at account creation, attempting to make sure that they are all unique. Therefore, we should never have the same value for two different accounts at the same time... right? Just to make sure, we can dump them using:

C:\> wmic useraccount get sid, name


Unfortunately, the output shows name first followed by sid. That's a crummy aspect of wmic... it shows you attributes in its output alphabetically by attribute name. "Name" comes before "Sid" alphabetically, so we get name, sid even though we asked for sid, name. We can reverse them using a FOR /F loop to parse, and then sort them, using the following command:

C:\> (for /F "tokens=1,2 skip=1" %i in ('"wmic useraccount get sid, name"')
do @echo %j %i) | sort



So, here, I'm running the wmic command inside a FOR /F loop. I've embedded the command in a single quote followed by a double quote at the beginning, and a double quote followed by a single quote at the end. The reason is twofold. Normally, we need just single quotes at the beginning and end to run a command inside the parens of a FOR /F loop. But, if the command contains a comma or quote, we must either escape it with a ^ or wrap the whole thing in single-quote double-quote pairs as I have here. I used the latter because of the parens around the entire FOR /F loop, which I added so I could pipe the whole thing through the sort command. I've found that the ^, and ^" escapes misbehave when you put parens around the entire FOR /F loop, whereas the ' " and " ' trick works regardless of the ( ) around the loop. It's a little trick I figured out on a flight to Defcon years ago.

So, where was I? Oh yeah... we've now got a list of SIDs and usernames, sorted. Our sort is alphabetic, not numeric, which kinda stinks. Still, you could eyeball the resulting list and see if any of them are identical. Sadly, there is no built-in "uniq" command in Windows. Man, Hal and Paul have it easy, don't they?

If you really want a uniq, you could download a "uniq" command for Windows. Or, you could simulate one. Are you ready for a sick little trick to detect whether a file has all unique lines using built-in tools in Windows?

For this stunt, we'll rely on the built-in Windows fc command, which compares two files (fc stands for "file compare"). We can use it as follows:

C:\> (for /F "tokens=1,2 skip=1" %i in ('"wmic useraccount get sid, name"')
do @echo %j %i) | sort > accounts.txt & sort /r accounts.txt > accountsr.txt
& fc accounts.txt accountsr.txt & del accounts.txt & del accountsr.txt


The idea here is to use the sort /r command to create a list of accounts in reverse order, and then compare it to the original list of accounts. If there are no duplicate SIDs, your output will simply show the list of accounts forward, followed by the list of accounts backward. If there are one or more duplicate SIDs, you will see a blank line in the middle of your output as fc tries to show you the differences. Let me illustrate with an example.

Here is the output when we have all unique SIDs:

Comparing files accounts.txt and ACCOUNTSR.TXT
***** accounts.txt
S-1-5-21-2574636452-2948509063-3462863534-1002 SUPPORT_388945a0
S-1-5-21-2574636452-2948509063-3462863534-1003 ASPNET
S-1-5-21-2574636452-2948509063-3462863534-1004 HelpAssistant
S-1-5-21-2574636452-2948509063-3462863534-1005 skodo
S-1-5-21-2574636452-2948509063-3462863534-1006 nonadmin
S-1-5-21-2574636452-2948509063-3462863534-1064 __vmware_user__
S-1-5-21-2574636452-2948509063-3462863534-1072 frank
S-1-5-21-2574636452-2948509063-3462863534-1073 dog
S-1-5-21-2574636452-2948509063-3462863534-1074 fred
S-1-5-21-2574636452-2948509063-3462863534-500 Administrator
S-1-5-21-2574636452-2948509063-3462863534-501 Guest
***** ACCOUNTSR.TXT
S-1-5-21-2574636452-2948509063-3462863534-501 Guest
S-1-5-21-2574636452-2948509063-3462863534-500 Administrator
S-1-5-21-2574636452-2948509063-3462863534-1074 fred
S-1-5-21-2574636452-2948509063-3462863534-1073 dog
S-1-5-21-2574636452-2948509063-3462863534-1072 frank
S-1-5-21-2574636452-2948509063-3462863534-1064 __vmware_user__
S-1-5-21-2574636452-2948509063-3462863534-1006 nonadmin
S-1-5-21-2574636452-2948509063-3462863534-1005 skodo
S-1-5-21-2574636452-2948509063-3462863534-1004 HelpAssistant
S-1-5-21-2574636452-2948509063-3462863534-1003 ASPNET
S-1-5-21-2574636452-2948509063-3462863534-1002 SUPPORT_388945a0
*****


And, here is the output when we have a dupe:


Comparing files accounts.txt and ACCOUNTSR.TXT
***** accounts.txt
S-1-5-21-2574636452-2948509063-3462863534-1002 SUPPORT_388945a0
S-1-5-21-2574636452-2948509063-3462863534-1003 ASPNET
S-1-5-21-2574636452-2948509063-3462863534-1004 HelpAssistant
S-1-5-21-2574636452-2948509063-3462863534-1005 skodo
S-1-5-21-2574636452-2948509063-3462863534-1006 nonadmin
S-1-5-21-2574636452-2948509063-3462863534-1064 __vmware_user__
S-1-5-21-2574636452-2948509063-3462863534-1072 frank
***** ACCOUNTSR.TXT
S-1-5-21-2574636452-2948509063-3462863534-501 Guest
S-1-5-21-2574636452-2948509063-3462863534-500 Administrator
S-1-5-21-2574636452-2948509063-3462863534-1074 fred
S-1-5-21-2574636452-2948509063-3462863534-1072 frank
*****

***** accounts.txt
S-1-5-21-2574636452-2948509063-3462863534-1072 dog
S-1-5-21-2574636452-2948509063-3462863534-1074 fred
S-1-5-21-2574636452-2948509063-3462863534-500 Administrator
S-1-5-21-2574636452-2948509063-3462863534-501 Guest
***** ACCOUNTSR.TXT
S-1-5-21-2574636452-2948509063-3462863534-1072 dog
S-1-5-21-2574636452-2948509063-3462863534-1064 __vmware_user__
S-1-5-21-2574636452-2948509063-3462863534-1006 nonadmin
S-1-5-21-2574636452-2948509063-3462863534-1005 skodo
S-1-5-21-2574636452-2948509063-3462863534-1004 HelpAssistant
S-1-5-21-2574636452-2948509063-3462863534-1003 ASPNET
S-1-5-21-2574636452-2948509063-3462863534-1002 SUPPORT_388945a0


See, the frank and dog accounts have the same SID, as indicated on either side of that blank line in the middle there. If there are any dupe SIDs, you'll see that tell-tale blank line in the middle of the output. Sure, it's a kluge, but it's a quick and dirty way of determining whether there is a duplicate in a stream of information.

And, we end on a much simpler note. Hal wants to find accounts with superuser privileges (UID 0 on Linux). We can simply look for accounts in the administrators group using:

C:\> net localgroup administrators
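On a hypothetical standalone box, the output looks something like this:

Alias name     administrators
Comment        Administrators have complete and unrestricted access to the computer/domain

Members

-------------------------------------------------------------------------------
Administrator
fred
The command completed successfully.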


So, there you have it. I warned you that it would get ugly, and I aim to deliver on my promises. In the end, we were able to achieve nearly all of what Hal did in Linux, making certain assumptions.

Friday, May 8, 2009

Episode #33: Recognizing Sub-Directories

Hal takes requests:

Loyal reader Lloyd Alvarez writes in with a little problem. He's writing Javascript that needs to get a directory listing and be able to easily discriminate sub-directories from other objects in the directory. The trick is that he needs code for both Unix/Linux and Windows. Well, where else would you come for that kind of service than the Command Line Kung Fu blog?

Turns out that Lloyd had doped out a Unix solution on his own:

$ ls -l | gawk '{printf("%s,",$1);for(i=9;i<=NF;i++) printf("%s ",$i);printf(":")}'
total,:drwxr-xr-x,dir1 :drwxr-xr-x,dir2 :-rw-r--r--,file1 :-rw-r--r--,file2 ...

Yow! That's some pretty ugly awk and even uglier output, but Lloyd was able to parse the resulting stream in his Javascript and easily pick out the directories.

But let me make that even easier for you, Lloyd old buddy:

$ ls -F
dir1/ dir2/ file1 file2 otherdir1/ otherdir2/ otherfile1 otherfile2

Yep, the "-F" option causes ls to append a special character to each object in the directory to let you know what that object is. As you can see, directories have a "/" appended-- should be easy to pick that out of the output! Other suffix characters you might see include "@" for symbolic links, "*" for executables, and so on. Regular files get no special suffix (see the above output).

Maybe Lloyd would prefer to just get a list of the sub-directories without any other directory entries:

$ find * -maxdepth 0 -type d
dir1
dir2
otherdir1
otherdir2

Or if you're not into the find command:

$ for i in *; do [ -d "$i" ] && echo "$i"; done
dir1
dir2
otherdir1
otherdir2

Too many choices, but then that's Unix for you! I'll step out of the way now and let Ed shock Lloyd with the Windows madness.

Ed jumps in:

Good stuff, Lloyd! Thanks for writing in.

In Windows, the easiest way to do this is to use the dir command, with the /d option. That option is supposed to simply list things in columns, but it adds a nice little touch -- directory names now have square brackets [ ] on either side of them. Check it out:

C:\> dir /d
[.] [dir1] file1 [otherdir1] otherfile1
[..] [dir2] file2 [otherdir2] otherfile2

So, just find and parse out those brackets, Lloyd, and you should be good to go. Oh, and remove the . and .. if you don't want them. Unfortunately, we cannot eliminate them with a /b (for bare), because that removes the brackets.

For Hal's additional fu for looking for just directories, we can rely on the fact that Windows considers "directoriness" (is that a word?) as an attribute. So, we can list only the directories using:

C:\> dir /ad /b
dir1
dir2
otherdir1
otherdir2

Or, if you want only files (i.e., NOT directories):
C:\> dir /a-d /b
file1
file2
otherfile1
otherfile2


You could do this kinda stuff with FOR /D loops as well, which give you even more flexibility. For example, if you just want directory names with a slash after them, to give you similar output to Hal's "ls -F", you could run:

C:\> FOR /D %i in (*) do @echo %i/
dir1/
dir2/
otherdir1/
otherdir2/

Or, if you really like the colons that your current scripts parse, you could do:

C:\> FOR /D %i in (*) do @echo :%i

By altering that echo statement, you can roll the output however you'd like.

Fun, fun, fun!

Wednesday, May 6, 2009

Episode #32: Wiping Securely

Ed gets his groove on:

The Encrypting File System on Windows stinks. It's something of an embarrassment for me that it is called EFS, because those are my initials too. My biggest beef with EFS is that it leaves cleartext copies of files around in unallocated space if you simply drag and drop a file into an EFS-protected directory. No, seriously... isn't that awful? Doh!

But, included with all the stinkatude of EFS is one gem: the cipher command, when used with the /w: option. I don't use EFS to encrypt, but often rely on cipher to wipe. I used it just last week to help a buddy who was trying to recondition his computer so he could give it to his son for school. He had some work files he needed to clear out, and was planning on simply deleting them through the recycle bin. Ouch! That won't work very well, as those files would still be recoverable. I broke into a mini-lesson about file systems and secure deletion options.

For wiping files, there are many good options available for download, such as sdelete from Microsoft SysInternals or DBAN to completely blow away a file system. But, what if you are stranded on a desert island and need to securely delete something using only built-in tools? Windows 2000 and better (not XP Home... Microsoft purposely cripples that version) have the cipher command, which includes the /w: option to wipe the unallocated space on a volume that you specify like this:

C:\> cipher /w:c:\folder

This command will cause Windows to overwrite all the unallocated space on the volume with c:\folder three times. First, it overwrites with zeros, then with ones, and then random numbers. Unfortunately, there's no option to specify overwriting any number of times other than three. Well, unless you want to... uh... do the obvious:

C:\> for /L %i in (1,1,9) do @cipher /w:c:\folder

This command will overwrite your unallocated space 27 times. Oh, and it'll take a long time on any reasonably sized partition with a lot of open space.

Whenever using cipher to wipe files, there are some hugely important notes to keep in mind.

First off, you have to include the colon between the /w and the folder name. Do people at Microsoft stay up late at night thinking of ways to make their syntax more horrible and inconsistent than ever, or does it just happen?

Second, and this one is huge.... note that cipher won't actually delete any current files in c:\folder or that folder itself! A lot of people think cipher will securely delete the folder (c:\folder) you specify, and that's not right. It securely deletes all unallocated (already deleted) files and folders on the entire partition that c:\folder inhabits. That's a much more thorough (and likely time consuming) process, but realize that it will leave behind c:\folder and its contents. If you want to get rid of them, delete them, and then do a cipher /w:c:\ to wipe the whole partition.
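So, the full clean-up for our hypothetical c:\folder looks like this: delete the folder tree, then wipe the free space it used to occupy:

C:\> rmdir /s /q c:\folder
C:\> cipher /w:c:\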

Now, there are major debates as to whether overwriting three times is enough for a good wipe. I've read the debates, and am comfortable that, for modern media, three times overwriting is good enough for most uses. If I need stronger destruction of data, it's best to simply take a hammer to the hard drive.

Hal wipes out:

Most Linux distros these days ship with the "shred" command, which overwrites files and then optionally deletes them:

# shred -n 3 -z -u myfile

Here "-n 3" specifies three overwrite passes, "-z" means to do a final overwrite with zeroes (nulls) to make it less obvious you've been shredding your files, and "-u" means to remove the file once the overwrites are performed.

But you should be aware that using "shred" on an individual file like we do in the above example may still leave traces of the file on disk. That's because most Linux systems these days use the ext3 file system, which has a file system transaction journal. Even after using "shred" on the file, the contents of the file may be recoverable from the journal using a tool like "ext3grep".

So the most secure option is to "shred" the entire disk (which overwrites the journal as well):

# shred -n 3 -z /dev/sdb

In these cases, you don't want to remove the disk device file itself once you're done with the overwriting so we leave off the "-u" option. This is also why "-u" is a separate option that must be explicitly set-- overwriting entire disks is the more common use case.

What if you're on a non-Linux system and don't have "shred" installed? Well, you could certainly download the "shred" source code (or "srm", another popular file deletion tool). But don't forget that you also have "dd", which I often use to wipe disks:

# dd if=/dev/urandom of=/dev/sdb bs=4096
# dd if=/dev/zero of=/dev/sdb bs=4096

The first command overwrites /dev/sdb with pseudo-random data-- use /dev/urandom instead of /dev/random for this because /dev/random can block waiting for additional entropy. The second overwrites your disk with zeroes. Run the commands multiple times depending on the number of overwrites you're most comfortable with.
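If you want to script the repeated passes rather than babysit them, a simple loop does the trick (a sketch; adjust the device name and pass count to taste):

# for i in 1 2 3; do dd if=/dev/urandom of=/dev/sdb bs=4096; done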

Loyal reader Jeff McJunkin also points out that you can use "dd" to wipe the unallocated space in a partition, just like Ed is doing with "cipher":

# dd if=/dev/urandom of=junk bs=4096; rm junk

This will consume all remaining disk space in the partition with a file called junk-- the "dd" command will stop when the partition fills-- and then removes it immediately. Be sure to do this command as root, because the last 5% of the space in the file system is normally reserved for root-owned processes and not accessible to normal users.

Monday, May 4, 2009

Episode #31: Remote Command Execution

Ed starts out:

One of the most frequent questions I get regarding the Windows command line involves how to run commands on a remote Windows machine and get access to the standard output of the command. Sure, Microsoft SysInternals psexec rocks, but it's not built in. On Linux and Unix, ssh offers some great possibilities here, but neither ssh nor sshd is built into Windows (and what's with that? I mean... we need that. Call Microsoft right now and demand that they build ssh and sshd into Windows. Installing a third-party version is certainly doable, but we need it built in... starting about 5 years ago, thank you very much.)

Anyway, while there are many options for running a command on a remote Windows machine using built in tools (such as using at, schtasks, or sc), one of my faves is good old WMIC:

C:\> wmic /node:[targetIPaddr] /user:[admin] process call create "cmd.exe /c [command]"


That'll run [command] on the target, after prompting you for the given admin's password.

You won't see the standard output, though.

To get that, change it to:


C:\> wmic /node:[targetIPaddr] /user:[admin] process call create "cmd.exe /c [command] >> 
\\[YourIPaddr]\[YourShare]\results.txt"


Make sure you have [YourShare] open on your box so the target machine and [admin] user can write to your share. The results.txt file will have your standard output of the command once it is finished.
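If you don't already have a share handy, you can stand one up quickly with something like this (hypothetical share name and path; in real life, scope the permissions down from Everyone):

C:\> mkdir c:\results
C:\> net share results=c:\results /grant:everyone,full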

Oh, and to execute a command en masse on a bunch of targets, you could use /node:@[filename.txt], in which the filename has one line per machine name or IP address on which you want to run the given command.
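For example, with a hypothetical targets.txt like this:

C:\> type targets.txt
10.1.1.5
10.1.1.6

C:\> wmic /node:@targets.txt /user:[admin] process call create "cmd.exe /c ipconfig >>
\\[YourIPaddr]\[YourShare]\results.txt"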

Not nearly as elegant as what I'm sure my sparring partners will come up with for Linux, but it is workable.

Hal Replies:

Thanks for throwing us a bone here, Ed. With SSH built into every modern Unix-like operating system, remote commands are straightforward:

$ ssh remotehost df -h

Sometimes, however, you need to SSH as a different user-- maybe you're root on the local machine, but the remote system doesn't allow you to SSH directly as root, so you have to use your normal user account. There's always the "-l" option:

$ ssh -l pomeranz remotehost df -h

But what if you want to scp files as an alternate user? The scp command doesn't have a command line option like "-l" to specify an alternate user.

One little-known trick is that both ssh and scp support the old "user@host" syntax that's been around since the rlogin days. So these commands are equivalent:

$ ssh -l pomeranz remotehost df -h
$ ssh pomeranz@remotehost df -h

Personally, I never use "-l"-- I find "user@host" more natural to type and it works consistently across a large number of SSH-based utilities, including rsync.
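That's the answer to the scp question above, too:

$ scp pomeranz@remotehost:/etc/hosts /tmp/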

Unlike wmic, SSH does not have built-in support for running the same command on several targets. The "Unix design religion" is that you're supposed to do this with other shell primitives:

$ for h in $(< targets); do echo ===== $h; ssh $h df -h; done

By the way, note the "$(< targets)" syntax in the above loop, which is just a convenient alternate form of "`cat targets`".

Unfortunately, the above loop is kind of slow if you have a lot of targets, because the commands are run in serial fashion. You could add some shell fu to background each ssh command so that they run in parallel:

$ for h in $(< targets); do (echo ===== $h; ssh $h df -h) & done

Unfortunately, this causes the output to be all garbled because different commands return at different speeds.

Frankly, you're better off using any of the many available Open Source utilities for parallelizing SSH commands. Some examples include sshmux, clusterssh, and fanout (which was written by our friend and fellow SANS Instructor, Bill Stearns). Please bear in mind, however, that while remote SSH commands allow you to easily shoot yourself in the foot, these parallelized SSH tools allow you to simultaneously shoot yourself in both feet, both hands, the head, and every major internal organ all at the same time. Take care when doing these sorts of things as root.

Friday, May 1, 2009

Episode #30: Twiddling with the Firewall

Ed kicks it off:

One of the real gems of the Windows command line is netsh. I use it all the time. In Episode #2, about a thousand years ago (gosh... can it be only 2 months?), we talked about using netsh (yeah, and iptables) to display firewall config information. But, you know, there are some commands I run even more often that alter the firewall.

In particular, if I'm in my lab doing an analysis that requires me to shut off the built-in Windows firewall, or when I'm conducting a pen test and want to hack naked without a firewall, I drop the firewall with:

C:\> netsh firewall set opmode disable


That's soooo much easier than digging through the blasted GUI to find where Microsoft buried the firewall configuration in the given version of Windows.

Simple command, but I use it all the time.

To turn it back on, you'd run:

C:\> netsh firewall set opmode enable


If you want to poke holes through the firewall based on port, you could run:

C:\> netsh firewall add portopening protocol = [TCP|UDP] port = [portnum] name = [NameOfRule]
mode = enable scope = custom addresses = [allowedIPaddr]


This syntax would configure the firewall to allow traffic in for the port you specify only if it comes from allowedIPaddr, to make your rule a little safer.

And, to remove the rule, just change "add" to "del", and only type in the command up to the port number.
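In other words, something like:

C:\> netsh firewall del portopening protocol = [TCP|UDP] port = [portnum]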

Finally, as we mentioned in Episode #2, to show the overall config of the firewall, you can run:

C:\> netsh firewall show config


Hal chimes in:

Firewall software is another one of those areas where there are a number of competing options on different Unix/Linux platforms. For example you've got ipfilter on Solaris and {Free,Net}BSD, pf on OpenBSD, and IP Tables on Linux. I'm going to stick with IP Tables for purposes of this discussion, since it's probably what most of you all deal with the most.

Unfortunately, IP Tables is rather cranky to work with from the command line if you stick with the basic "iptables" command. The developers tried to cram all of the possible functionality and configuration options into a single command, and IP Tables is capable of some complicated stuff. The result, however, is that there's a pretty steep learning curve for doing even basic operations. This is why simplified firewall configuration GUIs are provided by the major Linux distributions.

But let's try and cover some of the same command-line territory that Ed does in his Windows examples. First, as far as starting and stopping the firewall goes, your best bet is to just run "/etc/init.d/iptables [start|stop]" as root. The init script hides a lot of complexity around loading and unloading kernel modules, firewall rules, and default packet handling policies. For example, here are the manual command equivalents for "/etc/init.d/iptables stop":


# iptables -P INPUT ACCEPT
# iptables -P OUTPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -F

Yep, four commands-- and that's without even showing you the commands that some init scripts use to unload the IP Tables kernel modules. The first three commands above set the default permit ("ACCEPT") policy ("-P") for inbound ("INPUT") and outbound ("OUTPUT") packets, as well as for packets being passed through the system ("FORWARD"). It's possible that your particular firewall configuration doesn't change the default policies to block packets ("DROP"), but it's best to be sure. The final "iptables -F" command flushes all filtering rules, which means all packets will now simply be handled by the default "ACCEPT" policies.

The simplest possible example of adding a rule to allow traffic into your system on a particular port would be something like:

# iptables -A INPUT -p tcp --dport 80 -j ACCEPT

This allows ("-j ACCEPT") inbound traffic ("-A INPUT") on 80/tcp ("-p tcp --dport 80"). Deleting the rule is as simple as running the same command but with a "-D INPUT" option (delete from the "INPUT" chain) instead of "-A INPUT" (add to the "INPUT" chain).

However, depending on how your particular Linux vendor sets up their firewall, adding rules directly to the "INPUT" chain may not be the right thing to do. Many vendors set up their own rule chains that pre-empt the default chains. Also, you may have to add rules to the "OUTPUT" chain (or vendor-specific equivalent) to allow the return packets to escape your host, unless you have a "default permit" configuration in the outgoing direction. For these reasons, it's best to inspect your current rule sets ("iptables -L -v") before making changes.

I should mention that a simplified command-line interface is now becoming available in some Linux distributions, notably Ubuntu. If all you're interested in is a host-based firewall, the "ufw" command makes configuration and maintenance much easier. Here are some sample "ufw" commands:

# ufw enable                 # enable filtering
# ufw disable                # turn off firewall
# ufw status                 # show current rules
# ufw allow 80/tcp           # allow 80/tcp to any IP on this host
# ufw delete allow 80/tcp    # delete above rule

You can also do more complex rules like:

# ufw allow proto tcp from 1.2.3.4 to any port 25

If you're more used to Cisco extended ACL syntax, or use ipfilter on other Unix systems, the "ufw" command-line syntax will be pretty natural for you.