Tuesday, April 27, 2010

Episode #92: Shifty Passwords

Hal blames Ed

A recent article on lifehacker.com suggested an interesting way of generating difficult to guess passwords: simply move your fingers over to the right one key. Thus, the password "12345" becomes "23456". Not very useful for purely numeric passwords, but with a reasonably large character set it starts to be useful. For example, "CommandLineKungFu" becomes "Vp,,smf:omrLimhGi".

On the GIAC Pen Testing Alumni mailing list (affectionately known as the "GPWN" mailing list), somebody suggested creating a tool to "right shift" strings in a password cracking dictionary to catch folks who relied on this trick for creating passwords. So of course Ed had to go looking for a Windows CMD.EXE solution. I'll let Ed explain later just how well that crazy notion ended up working out for him.

In the Unix world, this is clearly a job for the tr command. tr simply converts one list of characters to another list, so all we have to do is create a list of the characters in normal keyboard order and a corresponding list of the characters "right shifted" one place.

Yeah, I could do this manually. But then I thought it might be fun to throw a little shell fu at the problem:

$ r1='`1234567890-='
$ r1s=`echo "$r1" | sed -r 's/(.)(.*)/\2\1/'`
$ echo "$r1s"
1234567890-=`

Here I'm defining a variable called $r1 which is all of the characters from the top row of the keyboard from left to right. I then create a new variable called $r1s ("$r1 shifted") by using sed to pop the first character off of the front of $r1 and shift it around to the end of the string. In other words, everything will shift right one place when we call tr and the "=" character (the last character in the row) will "wrap around" and become a backtick (the first character in the row).

Now we need to do the same thing again for the first row, but this time we'll be holding the shift key down. Since this is the "upper-case" version of the row, we'll name the variables $R1 and $R1s:

$ R1='~!@#$%^&*()_+'
$ R1s=`echo "$R1" | sed -r 's/(.)(.*)/\2\1/'`
$ echo "$R1s"
!@#$%^&*()_+~

We'll need to repeat this process six more times for the lower- and upper-case versions of the remaining three rows on the keyboard. Be careful on row #3 where the quote characters are! You'll need to do something like this:

$ r3="asdfghjkl;'"
$ r3s=`echo "$r3" | sed -r 's/(.)(.*)/\2\1/'`
$ R3='ASDFGHJKL:"'
$ R3s=`echo "$R3" | sed -r 's/(.)(.*)/\2\1/'`

Because $r3 is going to contain a single quote, we quote the string of characters using a double quote, which is fine since the string contains no other special characters that might be interpolated in the double quotes.

When all is said and done, you'll end up with eight variables-- $r1, $R1, $r2, $R2, $r3, $R3, $r4, and $R4-- plus their eight "right shifted" versions-- $r1s, $R1s, ... and so on. We can now use these in our tr expression:

$ echo CommandLineKungFu | tr "$r1$R1$r2$R2$r3$R3$r4$R4" "$r1s$R1s$r2s$R2s$r3s$R3s$r4s$R4s"
$ cat dict.txt | tr "$r1$R1$r2$R2$r3$R3$r4$R4" "$r1s$R1s$r2s$R2s$r3s$R3s$r4s$R4s" >shift-dict.txt

The first command shows you how to "right shift" a single word-- useful for testing to make sure you got your variable settings right. The second command is what you would use to "right shift" an entire password dictionary.

Note that you can easily "unshift" text by simply reversing the order of the arguments to tr:

$ echo CommandLineKungFu | \
tr "$r1$R1$r2$R2$r3$R3$r4$R4" "$r1s$R1s$r2s$R2s$r3s$R3s$r4s$R4s" | \
tr "$r1s$R1s$r2s$R2s$r3s$R3s$r4s$R4s" "$r1$R1$r2$R2$r3$R3$r4$R4"


So we could even use our little tr hack for trivial obfuscation, similar to the old tried and true ROT-13 "cipher".
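
To see the whole recipe end to end, here is a condensed sketch using only the letter rows (plus a colon and a comma so the home and bottom rows wrap the way the full rows would). It deliberately leaves out the full eight rows: GNU tr gives "-" and "\" special meaning inside its sets (range and escape characters), so the complete version needs those characters escaped.

```shell
#!/bin/sh
# Condensed, letters-only sketch of the "right shift" tr trick.
# rot1 rotates the first character of a row around to the end,
# exactly as the sed one-liner in the text does.
rot1() { printf '%s' "$1" | sed -E 's/(.)(.*)/\2\1/'; }

r2='qwertyuiop';   R2='QWERTYUIOP'
r3='asdfghjkl';    R3='ASDFGHJKL:'
r4='zxcvbnm,';     R4='ZXCVBNM'

set1="$r2$R2$r3$R3$r4$R4"
set2="$(rot1 "$r2")$(rot1 "$R2")$(rot1 "$r3")$(rot1 "$R3")$(rot1 "$r4")$(rot1 "$R4")"

echo CommandLineKungFu | tr "$set1" "$set2"   # prints Vp,,smf:omrLimhGi
```

Swapping "$set1" and "$set2" in the tr call unshifts the text again, just as described above.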

OK, that's it from the world of the Unix command line. Now get ready to enter some strange waters on the Windows side of the blog...

Tim agreed with Ed that this would be a good idea...it wasn't.

So we need to shift some characters; it is really easy to do by hand. In fact, it is so easy I do it by accident every now and then. Doing the same task in a Windows shell, however, is not so easy.

My first attempt at this problem involved arrays. I would load up an array with a row of characters, then make another array with the characters shifted by one. So far so good, but it went downhill fast. Once we have the arrays, it turns into a programming exercise: iterate through each character in the string to be transformed, find the character in the first array, figure out its index, get the character at the same index in the second array, output the new character, rinse, and repeat. This solution required a visa to visit Scriptland, and it was denied. I needed a better approach.

My second attempt involved hash tables. If you aren't familiar with the hash table data structure, it maps (unique) keys to associated values. For example, if the phone book were a hash table, the keys would be the names and the values would be the phone numbers. There is a one-to-one mapping of names to phone numbers. It is designed to look up names to get phone numbers, but not the other way around.

We want to create a hash table that looks something like this.

Key  Value
---  -----
` 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 0
0 -
- =
= `

If we look up 1 in the hash table, the value returned is 2. We can use this to transform our password. So let's load up the first row.

PS C:\> $ht = @{}
PS C:\> $row = "``1234567890-="
PS C:\> 1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }

We create the hash table, then we create a variable that holds the first row of characters. Actually loading the hash table is a little tricky.

The range operator is used to count from 1 to the length of the $row variable. We then load the hash table one character at a time. We add the first pair, where the 0th character in the string (remember, base 0) is the key and the 1st character is the associated value. We continue until we get to the last number in our range, where the key is the last character in the string and the value is the first character. The modulus operator (%) gives us the wrap-around, since 13 mod 13 = 0.
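
The same table-building idea can be sketched in a bash 4 associative array, which may make the modular wrap-around easier to see (this is an illustration of the idea, not the PowerShell code):

```shell
#!/bin/bash
# Build a right-shift lookup table for one keyboard row using modular indexing.
declare -A ht
row='`1234567890-='
n=${#row}
for ((i = 1; i <= n; i++)); do
    key=${row:i-1:1}       # keys are characters 0 .. n-1 (base 0)
    val=${row:i % n:1}     # i = n wraps around to index 0, the backtick
    ht[$key]=$val
done

echo "${ht[1]}"            # prints 2
c='-'; echo "${ht[$c]}"    # prints =
```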

We continue by loading the "capitalized" version of the first row.

PS C:\> $row = "~!@#$%^&*()_+"
PS C:\> 1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }

We have a problem: after the second row (qwerty) is loaded, the capitalized version of the row stomps on the lowercase version. By default, hash table keys are case insensitive, but we can create a case-sensitive hash table.

PS C:\>$ht = New-Object Collections.Hashtable ([StringComparer]::CurrentCulture)

And then load the hash table:

PS C:\>$ht[" "] = " "
PS C:\>$row = "``1234567890-="
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "~!@#$%^&*()_+"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "qwertyuiop[]\"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "QWERTYUIOP{}`|"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "asdfghjkl;'"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "ASDFGHJKL:`""
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "zxcvbnm,./"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }
PS C:\>$row = "ZXCVBNM<>?"
PS C:\>1..($row.Length) | % { $ht[$row[$_ - 1]] = $row[$_ % $row.Length] }

So now we have a hash table that contains our character mapping, and we can use it for our transformations.

PS C:\> "CommandLineKungFu".ToCharArray() | % { $ht[$_] }

Each character is output separately, which is not what we wanted. We can use some .NET to put the string back together.

PS C:\>[system.string]::join("",("CommandLineKungFu".ToCharArray() |
% { $ht[$_] }))

The Join function takes two parameters, a separator and an array of elements to concatenate. This gives us pretty output.
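
For comparison, the same split / map / join pattern can be sketched in bash with a toy mapping (just a handful of keys, not the full keyboard table):

```shell
#!/bin/bash
# Split a word into characters, map each through a lookup table, re-join.
declare -A ht=([C]=V [o]=p [m]=',' [a]=s [n]=m [d]=f)   # toy right-shift map
word='Command'
out=''
for ((i = 0; i < ${#word}; i++)); do
    ch=${word:i:1}
    out+=${ht[$ch]:-$ch}    # unmapped characters pass through unchanged
done
echo "$out"    # prints Vp,,smf
```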

We can even take the passwords from a file and shift them:

PS C:\> cat password.lst | % { [system.string]::join("",($_.ToCharArray() |
% { $ht[$_] })) }

Now that was pretty ugly, but it can only get worse.

Ed Gets Worse
Sometimes, when dealing with cmd.exe, I feel like I'm working with the Ruprecht of shells, a reference to the movie Dirty Rotten Scoundrels. I mean, Hal's auto-generating his shifts with bash and Tim is building frickin' hash tables. Ruprecht-shell and I, though, don't have such fancy capabilities and constructs. As I mentioned in Episode #82 (under the headline Ed's Got Sed), we can make substitutions of individual characters or groups of characters. That is, if we want to change every a in a password to s, we could run:

C:\> cmd.exe /v:on /c "set stuff=abcdef & set stuff=!stuff:a=s! & echo !stuff!"

When I initially approached this problem, I started at that a, and went forward in my abc's, making a into s, b into n, c into v, and so on for the shift right. Big mistake. Consider:

C:\> cmd.exe /v:on /c "set stuff=abcdef & set stuff=!stuff:a=s! & set stuff=!stuff:b=n!
& set stuff=!stuff:c=v! & set stuff=!stuff:d=f! & set stuff=!stuff:e=r!
& set stuff=!stuff:f=g! & echo !stuff!"

Uh-oh. See those two g's? The first comes from the d that was shifted to an f, which was then later shifted to a g. The second comes from the f, which was shifted to a g. Doh! Shifting alphabetically just won't work, because of the potential for multi-shifts. Tim then pointed out that we could avoid this problem if we shifted more carefully: not going alphabetically, but instead relying on the keyboard order itself to prevent double shifting entirely. If we are going to shift right one slot on the keyboard, we want to start with the keys on the right-hand side of the keyboard and move our way leftward.
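
The same trap exists with any tool that applies substitutions sequentially; here is a quick sed illustration of the principle (not Ed's cmd.exe syntax):

```shell
# Alphabetical order: the d that became f gets hit again by f->g.
echo abcdef | sed 's/a/s/g; s/b/n/g; s/c/v/g; s/d/f/g; s/e/r/g; s/f/g/g'
# prints snvgrg -- two g's, one of them wrong

# Keyboard order, rightmost keys first: no character is touched twice.
echo abcdef | sed 's/e/r/g; s/f/g/g; s/d/f/g; s/a/s/g; s/b/n/g; s/c/v/g'
# prints snvfrg -- the correct right-shift
```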

Putting all of that together, here is the command to perform the alteration:

C:\> cmd.exe /v:on /c "set stuff=abcdefghijklmnopqrstuvwxyz0123456789abcdefghijklmnopqrstuvwxyz0123456789
& set stuff=!stuff:^-=^=! & set stuff=!stuff:0=^-! & set stuff=!stuff:9=0!
& set stuff=!stuff:8=9! & set stuff=!stuff:7=8! & set stuff=!stuff:6=7! & set stuff=!stuff:5=6!
& set stuff=!stuff:4=5! & set stuff=!stuff:3=4! & set stuff=!stuff:2=3! & set stuff=!stuff:1=2!
& set stuff=!stuff:^[=^]! & set stuff=!stuff:p=^[! & set stuff=!stuff:o=p! & set stuff=!stuff:i=o!
& set stuff=!stuff:u=i! & set stuff=!stuff:y=u! & set stuff=!stuff:t=y! & set stuff=!stuff:r=t!
& set stuff=!stuff:e=r! & set stuff=!stuff:w=e! & set stuff=!stuff:q=w! & set stuff=!stuff:^;=^'!
& set stuff=!stuff:l=^;! & set stuff=!stuff:k=l! & set stuff=!stuff:j=k! & set stuff=!stuff:h=j!
& set stuff=!stuff:g=h! & set stuff=!stuff:f=g! & set stuff=!stuff:d=f! & set stuff=!stuff:s=d!
& set stuff=!stuff:a=s! & set stuff=!stuff:^.=^/! & set stuff=!stuff:^,=^.! & set stuff=!stuff:m=^,!
& set stuff=!stuff:n=m! & set stuff=!stuff:b=n! & set stuff=!stuff:v=b! & set stuff=!stuff:c=v!
& set stuff=!stuff:x=c! & set stuff=!stuff:z=x! & echo !stuff!"

See? The command is so simple that it practically types itself. Actually, I think this wins the record for the longest command we've ever had on this blog.

Of course, this only shifts lower-case characters (and unshifted numbers). If you really want to shift upper case as well, you can add those transforms to the above syntax. I can think of no better way to spend a Spring day.

And, in the immortal words of Ruprecht.... THANK YOU. ;)

Tuesday, April 20, 2010

Episode #91: How Much Per Day?

Hal has an issue

In another instalment of "True Stories of the Shell Patrol", here's a useful little bit of shell fu I recently threw together for a customer. This customer had an application that wrote copious log files, automatically generating a new log file for each day. The trick was that if an individual log file reached 10M in size, the app would start a new log file. So some days there would be just a log file named "YYYYMMDD.log", and yet on other days there would be not just "YYYYMMDD.log" but also "YYYYMMDD-01.log", "YYYYMMDD-02.log", and so on.

The customer was interested in knowing how many bytes of logs the app had written each day. I thought about it for a minute and came up with:

$ for d in `ls | cut -c1-8 | uniq`; do
   echo -en "$d\t"
   cat $d* | wc -c
  done
20100414 9110137
20100415 23232501
20100416 34485619
20100417 6052615

Here we're simply using cut to pull off the date strings from the front of each file name, and then using uniq to filter out the duplicates that we'll get from days where there are multiple log files. Inside the loop, we use echo to output the date string and a tab. The "-e" option gets echo to recognize "\t" as a tab, and the "-n" means "don't output a newline", so that way our byte count appears on the same line as the date string. Next we push all of the files for the given date through "wc -c" to count the number of bytes for each day. Note that "wc -c $d*" would not work, since this would give us the byte counts for the individual log files for each day.
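
For comparison, the same per-day totals can also be computed in a single pass with awk (a sketch that assumes simple YYYYMMDD*.log names with no spaces in them):

```shell
# Sum file sizes grouped by the first 8 characters of each file name.
# $5 is the size column and $NF the name column in "ls -l" output.
ls -l *.log | awk '{ total[substr($NF, 1, 8)] += $5 }
                   END { for (d in total) printf "%s\t%d\n", d, total[d] }' | sort
```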

We ended up getting interested in the top ten days by log file size. But once I had created the loop above, answering the "Top 10" question was easy:

$ for d in ... done | sort -nr -k2 | head

Just use our original loop and pipe the output to sort. Here we're doing a descending ("reversed" or "-r") numeric ("-n") sort on the second column ("-k2"). The head command pulls off the first 10 lines.

Ah, so easy for the Unix shell! CMD.EXE and Powershell? Let's see what the guys can come up with...

Tim has lots of issues

I've had to do the same thing a few times. This command can be very helpful. Here is the command we can use in PowerShell:

PS C:\> ls | group {$_.Name.Substring(0,8)} | select Name,@{Name="Size";
Expression={ ($_.group | Measure-Object Length -Sum).Sum }}

Name           Size
----           ----
20100414    9110137
20100415   23232501
20100416   34485619
20100417    6052615

We take the directory listing and pipe it into Group-Object, where the grouping is based on the first eight characters of the Name property. To get the first eight characters we use the Substring method, which is available on any string, and the Name property is a string.

The groups are piped into Select-Object where we use a calculated property to compute the size of all the files in the group. To specify a calculated property we create a hash table using the @{} syntax. Inside the curly braces we need two elements: Name and Expression. Inside the Expression's script block we pipe the group into Measure-Object where the length property is summed. We now have a new property named Size.

While getting the size is a little goofy, retrieving the top 10 is pretty easy. All we need to do is use Sort-Object followed by Select-Object to grab the top 10.

... | sort size -Descending | select -First 10

Now what does Ed have for us?

Ed's Issues Go Far, Far Deeper

When I first saw Hal's challenge this time around, I winced. Whenever he wants, Hal can pull "uniq" or "wc -c" out of his butt and use them. Me? Well, I don't have the luxury of such useful commands. And, given our rule around here for only using built-in commands and features, I've often gotta make what I don't have, using only my bare hands, spit, bisquick, duct tape, chewing gum, and copious time. Adhering to the old "teach a person to fish & feed him for life" adage, let me show you how I built the two piece parts ("uniq" and "wc -c") that I needed to make this one work.

For peeling off the file name's unique components, we can use the following command:

C:\> cmd.exe /v:on /c "set previous= & for %i in (*) do @set name=%i & 
     set current=!name:~0,8!& if NOT !previous!==!current! (echo !current! & 
     set previous=!current!)"

I start here by invoking cmd.exe with delayed variable expansion (cmd.exe /v:on /c) so that my variables can change values as my command runs. Then, I clear out a variable called "previous" (set previous= ), where I will later store the previous value of the file name component as I loop through my directory. I then invoke a FOR loop with an iterator variable of %i. Note that I just want to iterate through files in my current directory, so I use a plain, vanilla FOR loop, without /L, /F, /R, or /D. I'm going to iterate through all files in my current directory (in (*)).

At each iteration through my loop, I turn off command echo (@) and use the set command to store the current file's name (%i) in a variable called name. I have to do this because we can't perform substring operations on iterator variables. I then use set again to do my substring operation, pulling the first 8 characters of the file name by starting at offset zero and going up 8 spaces, storing the results in a variable called "current" (set current=!name:~0,8!&). It's really important to leave no space between that ! and the &. If you put a space there, that space will show up in your current file's name and will cause problems later on when we need to measure total file size.

Then, I have an IF statement, checking to see if my previous name value is the same as my current name value. If they are NOT the same, it means I've not encountered this name previously, so it's unique. I simply echo it out and set my new value of previous to current. Then, I iterate around. The result is a unique list of file name prefixes. Note that this approach depends on the names being sorted so that similarly prefixed file names come near each other, which the FOR loop does automatically (in alphabetical order).
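
For reference, the bash spelling of the same prefix-dedup trick looks like this (a sketch that relies on the shell's sorted glob expansion, just as Ed's FOR loop relies on sorted iteration):

```shell
#!/bin/bash
# Print each distinct 8-character filename prefix exactly once.
prev=''
for f in *; do
    cur=${f:0:8}
    if [ "$cur" != "$prev" ]; then
        echo "$cur"
        prev=$cur
    fi
done
```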

Now that we've spewed out the unique name prefixes, for the next part we have to calculate the total size of everything that starts with each prefix (roughly mimicking "wc -c", at least focused on file sizes). My first approach to doing this involved simply having the dir command itself calculate the size, as follows:

C:\> cmd.exe /v:on /c "set previous= & for %i in (*) do @set name=%i & 
     set current=!name:~0,8!& if NOT !previous!==!current! (echo !current! & 
     dir !current!* | find "File(s)") & set previous=!current!"
              3 File(s)      2,080,000 bytes
              1 File(s)        389,120 bytes
              2 File(s)         72,704 bytes

Here, after the IF statement of my uniquifier command, I simply echo the current prefix (echo !current!) followed by a dir command to show the directory listing of !current!*. That matches all of the files that start with the prefix. I then pipe the dir output through the find command to locate the line with the string "File(s)" in it, because that line shows the total size of everything that matched the dir wild-card search. I have to follow this up by setting my previous to my current prefix, so that my home-brewed uniq still works. I really kinda like the format of this output, as it shows the file counts plus the full size.

But, we aim to please here, matching Hal's command output as closely as we can. To get a step closer, I'll simply do a little parsing on the output of the dir command:

C:\> cmd.exe /v:on /c "set previous= & for %i in (*) do @set name=%i & 
     set current=!name:~0,8!& if NOT !previous!==!current! (for /f "tokens=3" 
     %a in ('dir !current!* ^| find "File(s)"') do @echo !current! %a) & 
     set previous=!current!"
20100414 2,080,000
20100415 389,120
20100416 72,704

Here, I've simply placed a FOR /F loop after my uniquifier IF, parsing out the third item ("tokens=3") into iterator variable %a from that dir command. In my do clause, I display the current prefix followed by the %a value, which is the total size.

I know what you are thinking… You are thinking that I’ve got commas in my sizes, and Hal doesn’t. Sigh… You always did love Hal more. Ever since we were kids, it was always “Hal, Hal, Hal.” I could never understand why you favored him, Mom. That’s exactly what I told my shrink on the couch in our last session, and… uh… never mind.

Anyway, we can get rid of the commas by taking a different approach to calculating the size. Instead of parsing through dir output, we could use a FOR loop to iterate through file names associated with our prefix, and then use %~za to represent the file size, which we’ll total up. Sounds crazy, I know, but YOU were the one that wanted to lose the commas.

So, without further ado, I give you the command that mimics Hal's, including his output format:

C:\> cmd.exe /v:on /c "set previous= & for %i in (*) do @set name=%i & 
     set current=!name:~0,8!& if NOT !previous!==!current! (set /a totalsize=0 >nul 
     & (for %a in (!current!*) do @set /a totalsize=!totalsize! + %~za >nul) 
     & echo !current! !totalsize!) & set previous=!current!"
20100414 2080000
20100415 389120
20100416 72704

Here, after my IF statement, I set a variable called totalsize to zero (set /a totalsize=0). I use set /a because I want to do math here, not string assignment. The set /a command displays its result on standard out, which I don't want to see, so I throw it away (>nul). I then run a FOR loop, again one that iterates through file names in my current directory, with an iterator variable of %a. The files I want to iterate through are my current prefix followed by a *. At each iteration through the loop, I do some math, setting my totalsize to its previous value plus %~za, which expands to the number of bytes in the file represented by %a.

After that loop is done, I then echo my current prefix plus the totalsize of everything with that prefix.

Voila! Easy as pie, you see! As long as you’ve got enough bisquick.

Now, my sort in cmd.exe doesn't sort numerically at all, and there's no built-in way to do it. I would likely just dump my results in a file and then open them in a spreadsheet, where I'd do the sort.

Tuesday, April 13, 2010

Episode #90: pwnership

71M kicks off this episode:

This week's episode idea comes from Carlos "Dark0perator" Perez, one of the Metasploit developers. He asked me about using PowerShell for post-exploitation enumeration. For those of you who don't know what that is, it is pillaging a compromised host for useful information. Oh, and don't call it PEE for short. This doesn't have to be specific to attacking systems; the same information can be useful for incident response.

PowerShell supports all of the classic Windows shell commands and executables, so I won't reinvent the wheel in an attempt to create a PowerShell version of qwinsta, netstat, ipconfig, or the net commands (users, accounts, session,...). PowerShell is very useful for parsing the output of those commands, but our focus for this episode is the information, not parsing it.

Enumeration involves pulling information from the system. The PowerShell naming scheme is Verb-Noun where the Verbs are standardized. The standard verb for retrieving information is Get. In the default shell in PowerShell v2 there are approximately 50 cmdlets that use the Get verb. That is the base of what we have to work with to retrieve information that may be useful to us.

PowerShell Version

First, we need to know which version of PowerShell is running on the machine. This may make a difference later since some cmdlets have been added or changed in version 2. The Get-Host cmdlet is what we will use. The only output that is really useful is the version, so we will just look at that.

PS C:\> Get-Host | Select Version



Another potentially useful bit of information is the patches that are installed. We can use Get-HotFix to retrieve that information, but unfortunately, Get-HotFix is only available in v2.

PS C:\> Get-HotFix

Source  Description      HotFixID  InstalledBy          InstalledOn
------  -----------      --------  -----------          -----------
ROCKY7  Update           KB958830  MYDOM\tim            10/27/2009 12:00:00 AM
ROCKY7  Security Update  KB971468  NT AUTHORITY\SYSTEM  2/23/2010 12:00:00 AM
ROCKY7  Security Update  KB973525  MYDOM\jmallen        10/15/2009 12:00:00 AM
ROCKY7  Update           KB973874  MYDOM\jmallen        10/15/2009 12:00:00 AM

We know what patches were installed, by whom, and when. At first glance this might not seem very useful, but it is. We can see what patches were installed, but more importantly, we can figure out what patches weren't installed. One could guess that if the patches weren't installed on this machine, they might not be installed on others, and that could give you a good path for exploiting other systems.

This output also provides some juicy usernames. The users who do the patching need administrative rights. If we can get hold of one of these users' credentials, hashes, or tokens, we will likely be able to get elevated privileges elsewhere.

Finally, we can take a look at the installation date to determine the organization's deployment schedule. Microsoft pushes patches on the 2nd Tuesday of the month. From the output above we can see that KB971468 wasn't installed until two weeks later. If we looked at the full list of installation dates we might see a pattern and be able to confirm that the patch cycle is two weeks. You could probably assume that the patch cycle is the same across the entire organization and that gives you, the penetration tester, two weeks between when a vulnerability is announced and when the patch is applied to the system. That two week window can be a good time to compromise a bunch of machines.

The cmdlet also has a ServicePackInEffect property we can use to determine the service pack. Windows 7 and Windows 2008 R2 both come with PowerShell v2, but neither has a service pack (yet), so this isn't particularly useful on those platforms. I tested it on a few Windows 2003 boxes with PowerShell v2 and the results were really weird, as if the values were assigned to the wrong properties. The ServicePackInEffect property sometimes contained a KB# while the KB property was blank. The later entries were correct and did show the service pack. It was rather odd, but it worked.

Users and Groups from ACLs

If you have access to a shared drive, you can use the ACLs to pull group and user names.

PS C:\> ls -r | Get-Acl | select -ExpandProperty Access | select IdentityReference -Unique
MYDOM\Domain Admins
MYDOM\Domain Users

This next command is similar, but returns a smaller scope of users. It typically returns a subset of the above command but in rare cases it could contain additional users.

PS C:\> ls -r | Get-Acl | select Owner -Unique


These commands do a recursive directory listing, so be careful! It is noisy and painfully slow. Although, when it is done, you do have a nice list of security principals.
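
On the Unix side, a rough analog of harvesting identities from a file tree is pulling owners and groups out of the inode metadata (a sketch; GNU stat assumed):

```shell
# List the unique owner/group pairs for every file under the current tree --
# the Unix cousin of mining ACLs for security principals.
find . -type f -exec stat -c '%U %G' {} + | sort -u
```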

Available Snap-ins

Getting a list of available snap-ins lets you know what the machine is used for.

PS C:\> Get-PSSnapin -Registered

Name : Microsoft.Exchange.Management.PowerShell.Admin
PSVersion : 1.0
Description : Admin Tasks for the Exchange Server

Name : Microsoft.Exchange.Management.Powershell.Support
PSVersion : 1.0
Description : Support Tasks for the Exchange Server

Name : VMware.VimAutomation.Core
PSVersion : 2.0
Description : This Windows PowerShell snap-in contains Windows
PowerShell cmdlets used to manage vSphere.

A machine that has additional systems-admin PowerShell tools is probably allowed to talk through the firewall to some juicy servers. If the machine has the Exchange snap-ins, you can bet it is a mail server and a box that an admin uses. It also means that you might be able to send mail, control mail flow, or get a list of *all* the users in the domain. If it has the VMware snap-in, you can use it as a launching point to pwn an entire virtual infrastructure. In short, additional snap-ins mean additional cmdlets and additional fun, fun, fun!


Another important piece is the .NET Framework. This has the potential to be the most powerful tool in PowerShell, since it gives you access to the power of a full programming language. On some assessments, the rules of engagement may state that you are not allowed to install software or download third-party executables, but it may be acceptable to use PowerShell, so you can now run a sniffer.


Ok, I know I said I wasn't going to reinvent the wheel, but this wheel is Z-rated. While Get-WmiObject (alias gwmi) provides the same functionality as wmic, the implementation is much better. Instead of receiving weirdly formatted results, you get nice objects with nicely named properties. Gone are the days of parsing wide results and trying to figure out which property value goes with which heading.

Carlos' enumeration script for MetaSploit has 15 wmi commands it runs to pull information. Here are the PowerShell equivalents.

gwmi win32_useraccount
gwmi win32_group
gwmi win32_volume
gwmi win32_logicaldisk | select description,filesystem,name,size
gwmi win32_networkloginprofile | select name,lastlogon,badpasswordcount
gwmi win32_networkclient
gwmi win32_networkconnection | select name,username,connectiontype,localname
gwmi win32_share | select name,path
gwmi win32_ntlogevent | select path,filename,writeable
gwmi win32_startupcommand
gwmi win32_product | select name,version
gwmi win32_qfe
gwmi win32_service      (or just use Get-Service)
gwmi win32_process      (or just use Get-Process)
gwmi win32_rdtoggle     (no cmdlet equivalent in PowerShell)

The output of these commands can be rather long, so I recommend piping the results into a CSV file. It keeps all the property names and values and allows for easy sorting and parsing later.

PS C:\> gwmi win32_useraccount | Export-Csv C:\output\accounts.csv

And if you want to use the results you can use Import-Csv to read the file and get all the object properties.

While there aren't a lot of PowerShell-specific commands here, Microsoft is continually shifting to PowerShell, and all of their server products are going to support it. Exchange can be entirely controlled with PowerShell, SQL and System Center are coming, and SharePoint 2010 supports it. PowerShell support isn't limited to Microsoft -- Citrix and VMware are among the other companies adding support. Mastering PowerShell will be an essential skill for network admins, security admins, and pen testers. While there isn't a wide breadth of cmdlets useful for obtaining this information, PowerShell is a handy thing to have in your tool belt. Worst case, it is a fancy way to call the classic Windows commands.

Now get ready for the Ninjas to do some serious fu.

3D (a name that is entirely in Hexadecimal, thank you very much) Responds:

Ahhhh…. Post Exploitation Enumeration. I love the topic. In fact, it’s the primary reason I’ve devoted so much of my time over the years to command line kung fu. In my penetration tests, I’d often find myself staring at a C:\> shell prompt on a successfully compromised Windows box. I decided to devote all of my spare time to maximizing my ability to manipulate a system from such a prompt so that I could really shine during a penetration test. Gaining shell access isn’t the end of your penetration test… it’s just when things start to get interesting. If you really want to express the true business risk associated with the vulnerabilities you’ve found, gaining shell access and performing solid post-exploitation analysis and pivoting is the way to do it. I’ve got a huge section in my SANS 560 course devoted to the art of using shells for successful pillaging in post-exploitation.

You know, this blog is in some way partly an offspring of my post-exploitation musings. I started to tweet about some of the commands I was using after exploiting a system, Hal started to respond to those tweets, and we opted to take our little spats into blog form. The rest, as they say, is history. Well, maybe “doskey /history” in this case, but you get my point.

One of the big things I like to do on a system after I’ve compromised it is to find out what other systems it has recently communicated with. I do this with a quick barrage of commands, starting with searching for established TCP connections:
C:\> netstat -na | find "EST"

Next, I look to see which machines on the same subnet we’ve talked with recently by dumping the ARP cache:
C:\> arp -a

Then, I look at the DNS cache, to see which names we’ve resolved recently:
C:\> ipconfig /displaydns

Also, I like to check out the current SMB connections made from this machine to file servers and related systems:
C:\> net use

And those made to this machine from Windows clients and such:
C:\> net sessions

Tim says he likes to get information about installed patches and software. I do too. To get a list of hotfixes, I simply run WMIC asking for Quick Fix Engineering information:
C:\> wmic qfe list full

That'll show me the list of patches, their install dates, and the effective Service Pack level of the system (which is usually one more than the latest Service Pack Microsoft has released, because applying even a single hotfix after a given Service Pack bumps the effective Service Pack number up by one).

Getting a list of installed software is pretty straightforward too. I usually start by checking where my system partition is:
C:\> echo %systemroot%

You see, a lot of attackers just assume that the operating system is installed in C:\. If the sysadmin moved it to D:\ or elsewhere, they'll get confused when their commands for pillaging don't seem to find much stuff in C:\. Once I've checked that systemroot, I can then run:
C:\> dir "C:\Program Files" > inventory.txt

I'll transfer the inventory.txt file back to my own machine for more detailed analysis, looking for vulnerable software installed on the machine that may be included on other systems I want to attack next, pivoting through my conquered host. I focus on vulnerable third-party apps, such as Adobe Reader, the Java Runtime Environment, the Office suite, and browsers. The nicest part here is that the output of my dir command will include the last-modified date for each of those directories, so I can tell how far out of patch each program is.

For a list of users and groups defined on my local box, as well as members of the administrators group, I run:
C:\> net user
C:\> net localgroup
C:\> net localgroup administrators

To find installed administrative snap-ins and the things the given machine was used to administer, I often hunt for custom MSC files saved by administrators for quick access to various administrative functions in the GUI. Such admins often save their files in their own user directories, so I look inside of C:\Documents and Settings, or C:\Users, depending on the version of Windows I’ve gained access to. Note that I don’t particularly care about the “normal” msc files, such as lusrmgr.msc (for local user management) or eventvwr.msc (the Event Viewer), which are stored in system32. I want the custom msc files, because they’ll hold the additional installed snap-ins that I’m focused on:
C:\> dir /b /s C:\users\*.msc

Once I’ve found an msc file, I hunt for the string “String ID” in the file, which shows me the administrative snap-ins that msc file was used to administer:
C:\> type Console1.msc | find /i "String ID"
<string id="1" refs="1">Favorites</string>
<string id="2" refs="1">Event Viewer (Local)</string>
<string id="3" refs="2">Console Root</string>
<string id="4" refs="1">Local Computer Policy</string>
<string id="5" refs="1">IP Security Policies on Local Computer</string>

Nice… it looks like this administrator used this machine to configure the Local Computer Policy GPOs and the IP Security Policies.

For the WMI stuff, which is accessed at cmd.exe via the WMIC command, I've written gobs of articles over the years. There is so much chocolatey goodness in the WMIC command that I've already documented, I'll just refer you to a simple Google search of my work in that arena. On second thought, let's do a Bing search, since that's the same company that gave us the wonderful WMIC command:


These are simply some of the highlights of the things you may want to do in post-exploitation during a penetration test. I always try to keep an open and inquisitive mind once I gain access, looking for other items that are system-specific to the machine for plundering. But, remember… always stay within scope and follow your rules of engagement for the test!

4a1 Takes a Trip Down Memory Lane

As 3d points out, PEE is almost the raison d'être for this blog. Long-time readers will recognize a lot of tips and tricks in this article that have cropped up in previous Episodes. But this article gives us a chance to consolidate a lot of those scattered tricks into one place, and of course we also get to add in Tim's tasty PowerShell confections.

First it's probably useful to figure out what kind of machine you're on and what privilege level you currently have. "uname -a" and "id" are useful for this:

$ uname -a
Linux elk 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 04:38:19 UTC 2010 x86_64 GNU/Linux
$ id
uid=1000(hal) gid=1000(hal) groups=4(adm),20(dialout),24(cdrom),46(plugdev),107(lpadmin),...

On Linux systems, "cat /etc/*-release" will usually produce additional output that describes the specific distro and version number you've cracked.

Now let's look at getting network-related information for the machine. First, "ifconfig -a" will dump out the current configuration of all network interfaces on the system, and "netstat -in" will give you some network interface statistics that will show you which interfaces are being used most heavily. You'll also probably want to look at the network routing table ("netstat -rn") and local DNS servers ("cat /etc/resolv.conf"). If you happen to have already broken root on the system you're exploiting, then using a command like "iptables -vnL" to dump the current firewall config on the system can be useful as well.

Similar to Windows, we can use "arp -an" to dump the current ARP cache on the system, which can give you some idea of systems that your pwned host has been talking to recently. But perhaps even more interesting is the kernel routing cache ("route -Cn"), which actually shows you the per-host routes taken by recent network traffic.

There's "netstat -an" for showing current network connections and services listening on network ports. Linux systems allow you to add the "-p" option to output process information associated with these network ports. Or, if you've already broken root, you can use "lsof -i" to dump much more detailed information about network connections and their associated processes.
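If you want to boil that down to a quick "who has this box been talking to" list, a little awk on the netstat output does the trick. Here's a sketch run against canned netstat-style lines (the addresses are made up, and the field layout assumes typical Linux netstat output):

```shell
# Sketch: pull unique remote addresses out of `netstat -an`-style output.
# Field 5 is the foreign address:port, field 6 is the connection state.
netstat_sample='tcp 0 0 10.0.0.5:22 192.168.1.9:51515 ESTABLISHED
tcp 0 0 10.0.0.5:80 192.168.1.7:40404 ESTABLISHED'

echo "$netstat_sample" |
  awk '$6 == "ESTABLISHED" {sub(/:[0-9]+$/, "", $5); print $5}' |
  sort -u
```

Against a live box you'd feed it "netstat -an" instead of the canned sample, of course.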

Speaking of processes, there's the venerable "ps -ef" ("ps auxww" on BSD) command for listing currently running processes. Figuring out which services are configured to start at boot time is one of those crazy OS-specific issues-- every Unix-like OS seems to provide slightly different interfaces for figuring out this information. Episode #57 does a pretty good job of delving into this, so I won't repeat myself here.
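That said, on SysV-init style systems, which cover a lot of ground, a rough approximation is to look at the S* start links in the rc directories. This is only a sketch and won't cover every distro's init scheme:

```shell
# Sketch: on SysV-init systems, the S?? symlinks in the rc directories are
# the services started at boot; strip the ordering prefix and dedupe
ls /etc/rc[0-6].d/S* 2>/dev/null | sed 's!.*/S[0-9][0-9]!!' | sort -u
```

On a box using a different init system this will simply print nothing, which is your cue to go read Episode #57.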

If you're interested in understanding how the file systems on the machine are laid out and what network shares might be in use, just run the "mount" command with no arguments. There are some additional subtleties that you may run into here related to logical volume managers and encrypted file systems, so you might want to check out Episode #59 for more details.

If the machine you're attacking is a file server, you might also be interested in getting information about the file systems that remote clients are mounting from the local server. On NFS servers, "showmount -a" will normally display this information ("showmount -e" should normally list all exported file systems, or you could just look at the /etc/exports file). If the machine is a Samba server for Windows clients, just run "smbstatus".

As far as enumerating users and groups goes, Episode #44 has this useful little tidbit for listing users and their group memberships:

$ for u in `cut -f1 -d: /etc/passwd`; do echo -n $u:; groups $u; done | sort
hal:hal adm dialout cdrom plugdev lpadmin admin sambashare

The tricks in Episode #34 for finding UID 0 accounts and accounts with null passwords can also be useful if you're looking for accounts to exploit.
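The UID 0 check from that Episode boils down to a one-line awk filter on /etc/passwd. Here's a sketch run against a canned sample so the logic is visible (the "toor" backdoor account is invented for illustration):

```shell
# Sketch: flag every account with UID 0 in passwd-format data.
# The sample below is made up; on a real box you'd read /etc/passwd.
sample_passwd='root:x:0:0:root:/root:/bin/bash
hal:x:1000:1000:Hal:/home/hal:/bin/bash
toor:x:0:0:backdoor?:/root:/bin/sh'

echo "$sample_passwd" | awk -F: '$3 == 0 {print $1}'
```

On a live system, "awk -F: '$3 == 0 {print $1}' /etc/passwd" does the same job in one shot.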

Listing all installed software and the patch status of the machine is another one of those annoying distro-specific problems in Unix. For Red Hat derived systems, you're looking at "rpm -qa" to list all installed packages and their version numbers, and "yum list updates" to show pending updates that have yet to be applied ("yum list-security" may work on newer Red Hat releases to show only the security-related updates). On Debian systems, the corresponding commands are "dpkg --list" and "apt-show-versions -u".
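If you don't know ahead of time which family of distro you've landed on, a small wrapper that sniffs for the available package manager saves some guesswork. This is just a sketch using the commands mentioned above:

```shell
# Sketch: pick the right package-inventory command for whatever distro
# you've landed on, based on which package manager is present
list_packages() {
  if command -v dpkg >/dev/null 2>&1; then
    dpkg --list            # Debian and friends
  elif command -v rpm >/dev/null 2>&1; then
    rpm -qa                # Red Hat and friends
  else
    echo "neither dpkg nor rpm found" >&2
  fi
}

list_packages 2>/dev/null | head -3
```

From there you'd redirect the full listing to a file and haul it home for analysis, just as with the Windows inventory.txt trick above.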

Whew! That's a whole lot of fu! But it's also been a nice trip down memory lane with some of our more useful postings.

Tuesday, April 6, 2010

Episode #89: Let's Scan Us Some Ports

Ed mulls:

I'm sure it happens all the time. One of our readers is sitting at a command shell pondering their activities for the day, when the urge suddenly hits them -- "I wanna port scan something... just because." I know I feel that urge a lot, which I often sate with the wonderful Nmap port scanner. But, what if you don't have Nmap handy, and aren't allowed to install it? How can you do a TCP port scan using only built-in tools? Maybe you're on a very restricted penetration test, and you've just popped a box to get shell access, but aren't allowed to install any other software on the machine. After your shell happy-dance is complete, how can you do a port scan from your newly conquered territory against other hosts?

At first blush, you'd probably think about using the telnet client, which can make a connection to arbitrary ports. I'll walk you through the logic and approach I used to try to make the Windows telnet client bend to our will, so you can see how I approach these kinds of problems. It's ugly, but perhaps the steps will interest you. After we survey the utter heap of ruin that is the completely awful Windows telnet client, I'll then show you a technique that'll actually work.

Ahhh... but the telnet client was removed from Vista and Win7, now wasn't it? Well, while telnet.exe isn't there on the machine ready to run, the install package for it is included in most versions of Vista and Win7 without a further download. The telnet client is there, just waiting for you to install it via the package manager command:

C:\> pkgmgr /iu:"TelnetClient"

The /iu there stands for "install update". Also, on most versions of Windows, you have to put the quotes around "TelnetClient" and make sure your T & C are caps. Later, if you want to remove this package, you can run:

C:\> pkgmgr /uu:"TelnetClient"

That command uninstalls update (uu). Again, remember that the telnet client software is already on the box -- no download happens here. You are just applying the package and then removing it.

So, with our telnet client ready to rock, we can test individual ports easily. Let's try port 3 on our target machine:

C:\> telnet 3
Connecting To ...Could not open connection to the host, on port 3: Connect failed

Ahh, this looks promising, right? Well, actually, it turns out it isn't, as we can see when we wrap it into a FOR loop to scan ports 1 through 1024:

C:\> for /L %i in (1,1,1024) do telnet %i

This looks nice on the surface, but when you reach an open port, it just hangs. And, it'll hang for as long as the remote port keeps the connection open, which could be forever. Hmmm... How can we get around that? Well, let's echo nothing into telnet and see what happens:

C:\> for /L %i in (1,1,1024) do echo | telnet %i

Ahh... that's better. Now, when it encounters an open port, it clears the screen, and prints:
Welcome to Microsoft Telnet Client

Escape Character is 'CTRL+]'

Almost there, right? Uhh... You wish! We've got another problem. We've gotta record our result somewhere. Unfortunately, if you try to do anything with Standard Output when using the Microsoft telnet client, the telnet client refuses to connect! It's strange... it's almost like the telnet client is psychic and can peek ahead in your command line to see that you are doing something with its Standard Output, so it refuses to run. Thus, you can't pipe or redirect that output anywhere. We've got to consider other ways.

How about checking the error condition? Yeah, we can try the little && or || to conditionally execute a command after the telnet client, which will tell us if telnet succeeded or not. Let's try that:
C:\> for /L %i in (1,1,1024) do @echo | telnet %i && echo Port %i is open
Connecting To ...Could not open connection to the host, on port 1: Connect failed
Port 1 is open
Connecting To ...Could not open connection to the host, on port 2: Connect failed
Port 2 is open
Connecting To
Welcome to Microsoft Telnet Client

Escape Character is 'CTRL+]'

Port 3 is open

Bah! Infernal Microsoft... you don't properly set the error condition here so our && trick doesn't work. Telnet always succeeds, even when it fails. Unlike the Vista marketing campaign, but I digress.

Ok... let's give it one more shot, shall we? How about we activate logging in the telnet client (with -f logfile). We can just log the status of each of our connection attempts into a file for each port called portN.txt. That will surely tell us if it can connect or not, right? Let's try:

C:\> for /L %i in (1,1,1024) do @echo | telnet -f port%i.txt %i && echo Port %i is open

I let that run for 7 ports, and then I stopped it. Port number 3 was listening on the target while I ran it. Let's look at what our telnet client actually logged:

C:\> dir port*
Volume in drive C has no label.
Volume Serial Number is E008-2D0C

Directory of C:\

04/03/2010 03:59 AM 0 port1.txt
04/03/2010 03:59 AM 0 port2.txt
04/03/2010 03:59 AM 0 port3.txt
04/03/2010 03:59 AM 0 port4.txt
04/03/2010 03:59 AM 0 port5.txt
04/03/2010 03:59 AM 0 port6.txt
04/03/2010 03:59 AM 0 port7.txt
7 File(s) 0 bytes
0 Dir(s) 696,209,408 bytes free

Remember, port 3 was open, the others are closed. But, it created an empty file for every port! There is no way to differentiate from this output which ports are open or closed.

The built-in Windows telnet client is absolute garbage... really one of the worst written tools ever. Microsoft should be ashamed of itself.

So, we'll have to find something else that can make a TCP connection for us so we can do our little scan. We need a simple command-line tool that's built-in to Windows that can make a TCP connection to an arbitrary port. How about the FTP client! Yeah, everyone loves the FTP client. Let's try it on port 3:

C:\> ftp 3

Transfers files to and from a computer running an FTP server service
(sometimes called a daemon). Ftp can be used interactively.

FTP [-v] [-d] [-i] [-n] [-g] [-s:filename] [-a] [-w:windowsize] [-A] [host]

Oh wait... that's just displaying the help of the ftp command. We can't simply invoke the FTP client with an IP address and a port number on the command line. But, notice that -s:filename option. We could make the ftp client connect to port 3 by creating a file called 3.txt containing:
open 3

Then, we invoke the ftp client to simply read its command from this file:

C:\> ftp -s:3.txt
ftp> open 3
Connected to

Nice! We've got something here... Let's write a loop that will create a file called ftp.txt, which contains our connection commands (open IPaddr portnum followed by quit). We'll store Standard Error from our FTP command, because that'll give us an indication of which ports are closed, by saying "Connection closed by remote host." Here is our command:

C:\> for /L %i in (1,1,1024) do @echo Checking Port %i: >> ports.txt & echo open %i > ftp.txt & echo quit >> ftp.txt & ftp -s:ftp.txt 2>>ports.txt

Now, when the scan is done, we can look at ports.txt. Wherever it shows the text "Connection closed by remote host", we know that port is open. You see, there was a connection, which was then closed. So, the port was open. By the way, interestingly enough, when it does find an open port, the ftp client will hang for about 30 seconds. Then, the client drops the connection (not the remote host, as is erroneously reported in the error message!), and the client prints the message "Connection closed by remote host." on Standard Error, which we capture.

Sadly, there is one more little wrinkle here, though. The command we just showed works great on XP and 2003. But, Microsoft, in its infinite wisdom, decided to change the FTP client in Windows Vista and Windows 2008 server. In those two versions of our beloved OS, the FTP client doesn't display its error or success messages on Standard Error any more. Nor does it display them on Standard Output! I'm sure they did this in response to a lot of customer requests. I can picture it now: "Hello, Microsoft? Yeah, I want you to change the built-in FTP client so that its connection messages aren't displayed on Standard Error. Oh, and I don't want them on Standard Output either. Yes... Yes... just display them on the screen so that we can't capture them. Uh huh... I want you to change this 15 year old tool to make it even more cumbersome and less usable than ever. That's what I'm paying you for, isn't it, Microsoft? Think about MS Word as an example." I'm sure there was such an outcry for this change, Microsoft bent to the will of its customers. Still, despite this little change from Microsoft, our port scanner will function on Win Vista and 2008 server... it just won't store its results into a file for us.

Wanna hear something even weirder? When I originally wrote the previous paragraph, I had included Win7 among the versions with the lame change to ftp.exe. I tested it in Win Vista and Win2008, and figured Windows 7 would have the same behavior. But, our crack team of Bodacious Research Assistants here at CLKF labs, working diligently through the night (we do feed them now, though, by order of court) mentioned to me that on Windows 7, the ftp client actually does display the "Connection closed by remote host." on Standard Error, so our little command works well on Win7, diligently recording results. So, our port scanner works great on XP, 2003, and Win7, storing its results. But, on Vista and 2008, it doesn't. Yet another reason for confining Vista to the ash heap of history, and just avoiding doing port scans from a Win2K8 box using FTP.

Truth be told, it wasn't the Bodacious Research Assistants (and no, we don't ever convert that designation to an acronym, thank you very much... again by court order) that pointed out the change in Win7. It was Tim Medin himself, who, while Bodacious, is not a mere Research Assistant.

Diligent reader Matthew writes in about using netsh to check ports:

I was reading your recent post on port scanning from the command line and I like it a lot. In my job I have to do a lot from the Windows and Linux command line and one day while cursing at Windows for not having netcat I stumbled upon a great way to check whether a port was listening or not. You can use netsh. To check if a port is listening or not just run "netsh diag connect iphost ipaddr portnum". Here is the output when something is listening on the port.

IPHost (www.google.com)
IPHost = www.google.com
Port = 80
Server appears to be running on port(s) [80]

And here's what the command looks like when something isn't running on the port

IPHost (www.google.com)
IPHost = www.google.com
Port = 3389
Server appears to be running on port(s) [NONE]

It can't tell whether a port is open with nothing listening but for most purposes I think it works pretty good.

Awesome stuff, Matt. Thanks for the info! It should be noted that your commands work like a champ on XP and 2003 Server. However, Microsoft removed the diag context from netsh in Vista and Windows 7. You gotta love 'em. :)

Let's see what else Mr. Medin (whose last name is actually pronounced "Sally") has up his lab coat sleeve.

Tim joins the mess

While working as a starving BRA in the basement of <redacted by court order>'s house, I decided to get back at him by taking his work and one-upping him. I tried for countless hours to implement the Windows command line equivalent of "strings", but to no avail. I made attempts to solve other problems in Windows, such as lack of netcat, but there were no new breakthroughs. Before I collapsed from hunger, I decided to find a better way to port scan in Windows. Eureka! Ed did all the hard work, but there is a slightly shorter way of doing it. Finally, FOOD!

Ed's port scanning technique in Windows is really ingenious. The only problem is that annoying script file, but there is a way around it using some echo magic and the pipeline.

C:\> for /L %i in (1,1,1024) do @((echo open %i)&(echo quit)) | ftp 2>&1 | find "host" && @echo %i is open

Connection closed by remote host.
3 is open

In this version, the commands to be executed by our ftp client are sent down the pipeline together, well, sort of. The two commands are bundled by wrapping them in parentheses and using an ampersand between each group. Unfortunately, we can't send more than two strings down the pipeline (e.g. opening the connection, sending a username, and then a password) since the third and later commands are dropped. My guess is that the ftp client isn't expecting or can't handle commands arriving that quickly.

Now to look at the results of our ftp commands. As Ed mentioned, an open port is found when "Connection closed by remote host" is shown on standard error. To find this message we need to first redirect standard error to standard out (2>&1). We can then use the Find command to filter for output that contains the word "host", which is a short version of "Connection closed by remote host."

Finally, we use a logical And (&&) before displaying the open port number. The use of the logical And ensures that the "port is open" text is only displayed if all of the previous commands, including Find, were successful.

No matter how you look at it, that is ugly. PowerShell is a bit better, but still not great. Here is the command in PowerShell:

PS C:\> 1..1024 | % { echo ((new-object Net.Sockets.TcpClient).Connect("",$_)) "$_ is open" } 2>$null
3 is open

We start off using the range operator to create an array of the ports we want to scan. The array is then piped into the ForEach-Object cmdlet. Inside the ForEach-Object cmdlet's script block is where the magic is done.

The connection attempt is made using the .NET TcpClient class. The Connect method requires an address and a port.

Now here is where we do a bit of Write-Output (alias echo) magic. You'll notice the echo command has been given two parameters, the connection attempt and the text that says the port is open. The trick is, if the first command fails then the output of the second command is not displayed either. Here is an example:

PS C:\> echo (1+1) (2+2)
2
4

If we replace the (1+1) with (1/0) then the only output is the error:

PS C:\> echo (1/0) (2+2)
Attempted to divide by zero.

The Connect method doesn't have any default output. If it fails it throws an error, if it works then there is no output and no error. We need a way to display the status, and we can use the technique above to our advantage. If the connection is successful then the "$_ is open" is displayed. If our connection fails, an error is thrown and we don't output the "$_ is open" portion, but we have error messages to clean up. Clean-up is done by redirecting standard error to $null.

This is much easier than the classic Windows shell, but still not super easy or clean. We could use something like nmap, but that isn't installed in Windows. Besides, wouldn't you rather impress your boss with this crazy Windows fu than something that looks easy?

Now that Ed and I have made a major mess of things let's see what Hal has for us.

Hal plays along

While I think it would actually be reasonable for me to claim that nmap is almost a de facto standard command in Unix-like operating systems these days, I'll play along with Ed's arbitrary little scenario. So in the spirit of Episode #61, I'm going to show you how to make a port scanner using just built-in bash primitives.

The main trick here is to use the /dev/tcp/... output redirection that I showed you way back in Episode #47:

$ echo >/dev/tcp/localhost/22
$ echo >/dev/tcp/localhost/23
bash: connect: Connection refused
bash: /dev/tcp/localhost/23: Connection refused

The general syntax is "/dev/tcp/<host|ip>/<port>". And as you can see, it's pretty easy to tell from the output whether the port is open or not. But rather than parsing the output, we can just use the short-cut "&&" operator to produce more useful output:

$ echo >/dev/tcp/localhost/22 && echo 22 open
22 open

However, to be clean about this, I'd also like to make the "Connection refused" messages disappear from the output so I just get a list of the open ports. Your first attempt might be to simply add a "2>/dev/null" before the "&&":

$ echo >/dev/tcp/localhost/23 2>/dev/null && echo 23 open
bash: connect: Connection refused
bash: /dev/tcp/localhost/23: Connection refused

Why didn't that work? The problem is that the "Connection refused" messages are the result of the attempted output redirection to "/dev/tcp/...". Your "2>/dev/null" is applying to the standard error output of the original echo command, so it's not going to suppress the error messages we want to go away.

The easiest work-around is to wrap the original echo command and /dev/tcp/... output redirection in parentheses, forcing them into a sub-shell. We can then redirect the standard error output of the entire sub-shell to /dev/null to remove the offending error messages:

$ (echo >/dev/tcp/localhost/23) 2>/dev/null && echo 23 open

Awesome! Now the only thing we need to do to turn this into a real port scanner is to wrap our command up inside a loop:

$ for ((i=1; $i < 65535; i++)); do 
(echo > /dev/tcp/localhost/$i) 2>/dev/null && echo $i open;
done

22 open
631 open
29754 open
46783 open

It takes about 90 seconds to sweep all 64K TCP ports on my local machine over the loopback interface. Scanning another host on the same LAN takes about twice as long. Of course, I could modify my loop so that I start several probes in parallel by backgrounding tasks, which would speed up the process. But really, if speed were a factor you'd be using nmap instead.
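For the curious, here's a sketch of that backgrounding idea. The batch size of 100 is arbitrary, and "localhost" stands in for your real target:

```shell
# Sketch: launch probes as background jobs in batches, so we don't fork
# tens of thousands of subshells at once; wait flushes each batch
for ((i=1; i<=1024; i++)); do
  ( (echo >/dev/tcp/localhost/$i) 2>/dev/null && echo "$i open" ) &
  (( i % 100 == 0 )) && wait
done
wait
```

Note that the open-port messages can come out in a scrambled order this way, so pipe the whole thing through "sort -n" if that bothers you.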

Jeff Haemer wrote in to point out that my loop would be much faster if I did away with the sub-shells and handled the standard error redirection outside of the loop:

$ for ((i=0; $i < 65535; i++)); do 
echo >/dev/tcp/localhost/$i && echo $i open;
done 2>/dev/null

22 open
631 open
29754 open
46783 open

This version takes under 10 seconds to scan all ports over the loopback interface and still gives us the clean output we're looking for. Thus we see the perils of just throwing a loop around a simple command line without considering the possibilities for optimization.

By the way, bash also supports similar syntax with /dev/udp/... Can we use this to create a UDP port scanner as well? Unfortunately not:

$ echo >/dev/udp/localhost/53 && echo 53 open
53 open
$ echo >/dev/udp/localhost/54 && echo 54 open
54 open

Note that I'm running the above commands on a machine that has a name server listening on 53/udp, but nothing bound to 54/udp. As you can see, the output from both is the same. Apparently bash doesn't distinguish the ICMP port unreachable messages that the second command is generating. Oh well.
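There is a partial workaround, though it's timing-dependent and Linux-specific, so treat this as a hedged sketch rather than a reliable UDP scanner: the ICMP port unreachable triggered by the first datagram is typically reported as an error on the next write to the same connected socket.

```shell
# Sketch: probe a UDP port twice; on Linux the second write usually fails
# with "Connection refused" if the first probe drew back an ICMP port
# unreachable, so silence here suggests the port is closed
udp_probe() {
  exec 3>/dev/udp/127.0.0.1/"$1" 2>/dev/null || return 0
  echo probe >&3 2>/dev/null   # first send; may trigger ICMP unreachable
  sleep 1                      # give the kernel time to queue the error
  echo probe >&3 2>/dev/null && echo "$1 open|filtered"
  exec 3>&-                    # close the socket
}

udp_probe 54
```

Even then, an "open|filtered" result is ambiguous, just as it is in nmap's UDP scan -- a firewall silently dropping the probe looks the same as a live service.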