Monday, March 31, 2014

Episode #176: Step Up to the WMIC

Tim grabs the mic:

Michael Behan writes in:

Perhaps you guys can make this one better. Haven’t put a ton of thought into it:

C:\> (echo HTTP/1.0 200 OK & wmic process list full /format:htable) | nc -l -p 3000

Then visit http://127.0.0.1:3000

This could of course be used to generate a lot more HTML reports via wmic that are quick to save from the browser. The downside is that in its current state the page can only be visited once. Adding something like /every:5 just pollutes the web page with mostly duplicate output.

Assuming you already have netcat (nc.exe) on the system, the command above will work fine, but it will only work once. After the browser receives the data, the connection has been used and the command is done. To serve the page multiple times you must wrap it in an infinite For loop.

C:\> for /L %i in (1, 0, 2) do (echo HTTP/1.0 200 OK & wmic process list full /format:htable) | nc -l -p 3000

This counts from 1 to 2 in steps of 0, which will never finish (except for very large values of 0), giving us our infinite loop. Alternatively, we could use the wmic command to request this information from a remote machine and view it in our browser. This method authenticates to the remote machine instead of allowing anyone to access the information.

C:\> wmic /node:joelaptop process list full /format:htable > joelaptopprocesses.html && start joelaptopprocesses.html

This will use your current credentials to authenticate to the remote machine, request the remote process list in HTML format, save it to a file, and finally open the file in your default viewer (likely your browser). If you need to use separate credentials you can specify /user:myusername and /password:myP@assw0rd.

Hal, your turn, and I want to see this in nice HTML format. :)

Hal throws up some jazz hands:

Wow. Tim seems a little grumpy. Maybe it's because he can make a simple web server on the command line but has no way to actually request data from it via the command line. Don't worry Little Tim, maybe someday...

Heck, maybe Tim's grumpy because of the dumb way he has to code infinite loops in CMD.EXE. This is a lot easier:

$ while :; do ps -ef | nc -l 3000; done

Frankly, most browsers will interpret this raw output as "text/plain" by default and display it correctly, even without the fake HTTP header Tim sent.
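
If you want to be explicit about the content type anyway, a small variation sends a real status line and header ahead of the output (just a sketch, using the same "nc -l 3000" invocation as the loop above):

$ while :; do (printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n'; ps -ef) | nc -l 3000; done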

But the above loop got me thinking that we could actually stack multiple commands in sequence:

while :; do
    ps -ef | nc -l 3000
    netstat -anp | nc -l 3000
    df -h | nc -l 3000
    ...
done

Each connection will return the output of a different command until you eventually exhaust the list and start all over again with the first command.

OK, now let's deal with grumpy Tim's request for "nice HTML format". Nothing could be easier, my friends:

$ while :; do (echo '<pre>'; ps -ef; echo '</pre>') | nc -l 3000; done

Hey, it's accepted by every major browser I tested it with! And that's the way we do it downtown... (Hal drops the mic)

Friday, February 28, 2014

Episode #175: More Time! We Need More Time!

Tim leaps in

Every four years (or so) we get an extra day in February, leap year. When I was a kid this term confused me. Frogs leap, they leap over things. A leap year should be shorter! Obviously, I was wrong.

This extra day can give us extra time to complete tasks (e.g., write a blog post), so we are going to use our shells to check whether the current year is a leap year.

PS C:\> [DateTime]::IsLeapYear(2014)
False

Sadly, this year we do not have extra time. Let's confirm that this command does indeed work by checking a few other years.

PS C:\> [DateTime]::IsLeapYear(2012)
True
PS C:\> [DateTime]::IsLeapYear(2000)
True
PS C:\> [DateTime]::IsLeapYear(1900)
False

Wait a second! Something is wrong. The year 1900 is a multiple of 4, so why is it not a leap year?

The Earth does not take exactly 365.25 days to get around the Sun; it actually takes 365.242199 days. This means that if we always leaped every four years we would slowly get off course, so every 100 years we skip the leap year.

Now you are probably wondering why 2000 had a leap year. That is because it is actually the exception to the exception. Every 400 years we skip skipping the leap year. What a cool bit of trivia, huh?
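
If you run the numbers, the 4/100/400 rule gives an average calendar year of 365 + 1/4 - 1/100 + 1/400 = 365.2425 days, which lines up nicely with the 365.242199 days mentioned above.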

Hal, how high can your shell jump?

Hal jumps back

I should have insisted Tim do this one in CMD.EXE. Isn't it nice that PowerShell has an IsLeapYear() built-in? Back in my day, we didn't even have zeroes! We had to bend two ones together to make zeroes! Up hill! Both ways! In the snow!

Enough reminiscing. Let's make our own IsLeapYear function in the shell:

function IsLeapYear {
    year=${1:-$(date +%Y)};
    [[ $(($year % 400)) -eq 0 || ( $(($year % 4)) -eq 0 && $(($year % 100)) -ne 0 ) ]]
}

There's some fun stuff in this function. First we check to see if the function is called with an argument (the "${1:-...}" construct). If so, then that's the year we'll check. Otherwise we check the current year, which is the value returned by "$(date +%Y)".

The other line of the function is the standard algorithm for figuring leap years. It's a leap year if the year is evenly divisible by 400, or divisible by 4 and not divisible by 100. Since shell functions return the value of the last command or expression executed, our function returns whether or not it's a leap year. Nice and easy, huh?
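
Since the answer comes back as an exit status, you can also peek at it directly with "$?" (remember that 0 means true in shell-land):

$ IsLeapYear 2000; echo $?
0
$ IsLeapYear 1900; echo $?
1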

Now we can run some tests using our IsLeapYear function, just like Tim did:

$ IsLeapYear && echo Leaping lizards! || echo Arf, no
Arf, no
$ IsLeapYear 2012 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 2000 && echo Leaping lizards! || echo Arf, no
Leaping lizards!
$ IsLeapYear 1900 && echo Leaping lizards! || echo Arf, no
Arf, no

Assuming the current year is not a Leap Year, we could even wrap a loop around IsLeapYear to figure out the next leap year:

$ y=$(date +%Y); while :; do IsLeapYear $((++y)) && break; done; echo $y
2016

We begin by initializing $y to the current year. Then we go into an infinite loop ("while :; do..."). Inside the loop we add one to $y and call IsLeapYear. If IsLeapYear returns true, then we "break" out of the loop. When the loop is all done, simply echo the last value of $y.

Stick that in your PowerShell pipe and smoke it, Tim!

Tuesday, January 28, 2014

Episode #174: Lightning Lockdown

Hal firewalls fast

Recently a client needed me to quickly set up an IP Tables firewall on a production server that was effectively open on the Internet. I knew very little about the machine, and we couldn't afford to break any of the production traffic to and from the box.

It occurred to me that a decent first approximation would be to simply look at the network services currently in use, and create a firewall based on that. The resulting policy would probably be a bit looser than it should be, but it would be infinitely better than no firewall at all!

I went with lsof, because I found the output easier to parse than netstat:

# lsof -i -nlP | awk '{print $1, $8, $9}' | sort -u
COMMAND NODE NAME
httpd TCP *:80
named TCP 127.0.0.1:53
named TCP 127.0.0.1:953
named TCP [::1]:953
named TCP 150.123.32.3:53
named UDP 127.0.0.1:53
named UDP 150.123.32.3:53
ntpd UDP [::1]:123
ntpd UDP *:123
ntpd UDP 127.0.0.1:123
ntpd UDP 150.123.32.3:123
ntpd UDP [fe80::baac:6fff:fe8e:a0f1]:123
ntpd UDP [fe80::baac:6fff:fe8e:a0f2]:123
portreser UDP *:783
sendmail TCP 150.123.32.3:25
sendmail TCP 150.123.32.3:25->58.50.15.213:1526
sendmail TCP *:587
sshd TCP *:22
sshd TCP 150.123.32.3:22->121.28.56.2:39054

I could have left off the process name, but it helped me decide which ports were important to include in the new firewall rules. Honestly, the output above was good enough for me to quickly throw together some workable IP Tables rules. I simply saved the output to a text file and hacked things together with a text editor.
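
The hand-edited rules themselves aren't shown here, but a minimal sketch of what that listener list might translate into looks something like this (iptables syntax; a real policy would also want a default DROP and a rule for established connections):

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -A INPUT -p tcp --dport 25 -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p tcp --dport 53 -j ACCEPT
# iptables -A INPUT -p udp --dport 53 -j ACCEPT
# iptables -A INPUT -p udp --dport 123 -j ACCEPT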

But maybe you only care about the port information:

# lsof -i -nlP | awk '{print $9, $8, $1}' | sed 's/.*://' | sort -u
123 UDP ntpd
1526 TCP sendmail
22 TCP sshd
25 TCP sendmail
39054 TCP sshd
53 TCP named
53 UDP named
587 TCP sendmail
783 UDP portreser
80 TCP httpd
953 TCP named
NAME NODE COMMAND

Note that I inverted the field output order, just to make my sed a little easier to write.

If you wanted to go really crazy, you could even create and load the actual rules on the fly. I don't recommend this at all, but it will make Tim's life harder in the next section, so here goes:

lsof -i -nlP | tail -n +2 | awk '{print $9, $8}' | 
    sed 's/.*://' | sort -u | tr A-Z a-z | 
    while read port proto; do ufw allow $port/$proto; done

I added a "tail -n +2" to get rid of the header line. I also dropped the command name from my awk output. There's a new "tr A-Z a-z" in there to lower-case the protocol name. Finally we end with a loop that takes the port and protocol and uses the ufw command line interface to add the rules. You could do the same with the iptables command and its nasty syntax, but if you're on a Linux distro with UFW, I strongly urge you to use it!

So, Tim, I figure you can parse netstat output pretty easily. How about the command-line interface to the Windows firewall? Remember, adversity builds character...

Tim builds character

When I first saw this I thought, "Man, this is going to be easy with the new cmdlets in PowerShell v4!" There are a lot of new cmdlets available in PowerShell version 4, and both Windows 8.1 and Server 2012R2 ship with PowerShell version 4. In addition, PowerShell version 4 is available for Windows 7 SP1 (and later) and Windows Server 2008 R2 SP1 (and later).

The first cmdlet that will help us out here is Get-NetTCPConnection. According to the help page this cmdlet "gets current TCP connections. Use this cmdlet to view TCP connection properties such as local or remote IP address, local or remote port, and connection state." This is going to be great! But...

It doesn't mention the process ID or process name. Nooooo! This can't be. Let's look at all the properties of the output objects.

PS C:\> Get-NetTCPConnection | Format-List *

State                    : Established
AppliedSetting           : Internet
Caption                  :
Description              :
ElementName              :
InstanceID               : 192.168.1.167++445++10.11.22.33++49278
CommunicationStatus      :
DetailedStatus           :
HealthState              :
InstallDate              :
Name                     :
OperatingStatus          :
OperationalStatus        :
PrimaryStatus            :
Status                   :
StatusDescriptions       :
AvailableRequestedStates :
EnabledDefault           : 2
EnabledState             :
OtherEnabledState        :
RequestedState           : 5
TimeOfLastStateChange    :
TransitioningToState     : 12
AggregationBehavior      :
Directionality           :
LocalAddress             : 192.168.1.167
LocalPort                : 445
RemoteAddress            : 10.11.22.33
RemotePort               : 49278
PSComputerName           :
CimClass                 : ROOT/StandardCimv2:MSFT_NetTCPConnection
CimInstanceProperties    : {Caption, Description, ElementName, InstanceID...}
CimSystemProperties      : Microsoft.Management.Infrastructure.CimSystemProperties

Dang! This will get most of what we want (where "want" was defined by that Hal guy), but it won't get the process ID or the process name. So much for rubbing the new cmdlets in his face.

Let's forget about Hal for a second and get what we can with this cmdlet.

PS C:\> Get-NetTCPConnection | Select-Object LocalPort | Sort-Object -Unique LocalPort
LocalPort
---------
    135
    139
    445
   3587
   5357
  49152
  49153
  49154
  49155
  49156
  49157
  49164

This is helpful for getting a list of ports, but not useful for making decisions about what should be allowed. Also, we would need to run Get-NetUDPEndpoint to get the UDP connections. This is so close, yet so bloody far. We have to resort to the old school netstat command and the -b option to get the executable name. In episode 123 we needed parsed netstat output. I recommended the Get-Netstat script available at poshcode.org. Sadly, we are going to have to resort to that again. With this script we can quickly get the port, protocol, and process name.

PS C:\> .\get-netstat.ps1 | Select-Object ProcessName, Protocol, LocalPort | 
   Sort-Object -Unique LocalPort, Protocol, ProcessName

ProcessName   Protocol      Localport
-----------   --------      ---------
svchost       TCP           135
System        UDP           137
System        UDP           138
System        TCP           139
svchost       UDP           1900
svchost       UDP           3540
svchost       UDP           3544
svchost       TCP           3587
dasHost       UDP           3702
svchost       UDP           3702
System        TCP           445
svchost       UDP           4500
...

It should be pretty obvious that ports 135-139 and 445 should not be accessible from the Internet. We can filter these ports out so that we don't allow them through the firewall.

PS C:\> ... | Where-Object { (135..139 + 445) -NotContains $_.LocalPort }
ProcessName   Protocol      Localport
-----------   --------      ---------
svchost       UDP           1900
svchost       UDP           3540
svchost       UDP           3544
svchost       TCP           3587
dasHost       UDP           3702
svchost       UDP           3702
svchost       UDP           4500
...

Now that we have the ports and protocols we can create new firewall rules using the new New-NetFirewallRule cmdlet. Yeah!

PS C:\> .\get-netstat.ps1 | Select-Object Protocol, LocalPort | Sort-Object -Unique * | 
 Where-Object { (135..139 + 445) -NotContains $_.LocalPort } | 
 ForEach-Object { New-NetFirewallRule -DisplayName AllowedByScript -Direction Outbound 
 -Action Allow  -LocalPort $_.LocalPort -Protocol $_.Protocol }
Name                  : {d15ca484-5d16-413f-8460-a29204ff06ed}
DisplayName           : AllowedByScript
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Outbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local
...

These new firewall cmdlets really make things easier, but if you don't have PowerShell v4 you can still use the old netsh command to add the firewall rules. Also, the Get-Netstat script supports older versions of PowerShell, so this is nicely backwards compatible. All we need to do is replace the command inside the ForEach-Object cmdlet's script block.

PS C:\> ... | ForEach-Object { netsh advfirewall firewall add rule 
 name="AllowedByScript" dir=in action=allow protocol=$($_.Protocol) 
 localport=$($_.LocalPort) }

Tuesday, December 31, 2013

Episode #173: Tis the Season

Hal finds some cheer
From somewhere near the borders of scriptistan, we send you:
function t { 
    for ((i=0; $i < $1; i++)); do
        s=$((8-$i)); e=$((8+$i));
        for ((j=0; j <= $e; j++)); do [ $j -ge $s ] && echo -n '^' || echo -n ' '; done;
        echo;
    done
}
function T {
    for ((i=0; $i < $1; i++)); do
        for ((j=0; j < 10; j++)); do [ $j -ge 7 ] && echo -n '|' || echo -n ' '; done;
        echo;
    done
    echo
}
t 3; t 5; t 7; T 2; echo -e "Season's Greetings\n    from CLKF"


Ed comes in out of the cold:

Gosh, I missed you guys.  It's nice to be home with my CLKF family for the holidays.  I brought you a present:

c:\>cmd.exe /v:on /c "echo. & echo A Christmas present for you: & color 24 & 
echo. & echo     0x0& for /L %a in (1,1,11) do @(for /L %b in (1,1,10) do @ set /a
%b%2) & echo 1"& echo. & echo Merry Christmas!

Tim awaits the new year:

Happy New Year from within the borders of Scriptistan!

Function Draw-Circle {
    Param( $Radius, $XCenter, $YCenter )
    
    for ($x = -$Radius; $x -le $Radius ; $x++) {
        $y = [int]([math]::sqrt($Radius * $Radius - $x * $x))
        Set-CursorLocation -X ($XCenter + $x) -Y ($YCenter + $y)
        Write-Host "*" -ForegroundColor Blue -NoNewline
        Set-CursorLocation -X ($XCenter + $x) -Y ($YCenter - $y)
        Write-Host "*" -ForegroundColor Blue -NoNewline
    }
}

Function Draw-Hat {
    Param( $XCenter, $YTop, $Height, $Width, $BrimWidth )
    
    $left = Round($XCenter - ($Width / 2))
    $row = "#" * $Width
    for ($y = $YTop; $y -lt $YTop + $Height - 1; $y++) {
        Set-CursorLocation -X $left -Y $y
        Write-Host $row -ForegroundColor Black -NoNewline
    }
    
    Set-CursorLocation -X ($left - $BrimWidth) -Y ($YTop + $Height - 1)
    $row = "#" * ($Width + 2 * $BrimWidth)
    Write-Host $row -ForegroundColor Black -NoNewline
}

Function Set-CursorLocation {
    Param ( $x, $y )

    $pos = $Host.UI.RawUI.CursorPosition
    $pos.X = $x
    $pos.Y = $y
    $Host.UI.RawUI.CursorPosition = $pos
}

Function Round {
    Param ( $int )
    # Stupid banker's rounding
    return [Math]::Round( $int, [MidpointRounding]'AwayFromZero' )
}

Clear-Host
Write-Host "Happy New Year!"
Draw-Circle -Radius 4 -XCenter 10 -YCenter 8
Draw-Circle -Radius 5 -XCenter 10 -YCenter 17
Draw-Circle -Radius 7 -XCenter 10 -YCenter 29
Draw-Hat -XCenter 10 -YTop 2 -Height 5 -Width 7 -BrimWidth 2
Set-CursorLocation -X 0 -Y 38

Tuesday, November 26, 2013

Episode #172: Who said bigger is better?

Tim sweats the small stuff

Ted S. writes in:

"I have a number of batch scripts which turn a given input file into a configurable amount of versions, all of which will contain identical data content, but none of which, ideally, contain the same byte content. My problem is, how do I, using *only* XP+ cmd (no other scripting - PowerShell, jsh, wsh, &c), replace the original (optionally backed up) with the smallest of the myriad versions produced by the previous batch runs?"

This is pretty straightforward, but it depends on what we want to do with the files. I assumed that the larger files should be deleted since they are redundant. This will leave us with only the smallest file in the directory. Let's start off by listing all the files in the current directory, sorted by size.

C:\> dir /A-D /OS /b
file3.txt
file2.txt
file1.txt
file4.txt

Sorting the files, and only the files, in the current directory by size is pretty easy. The "/A" option filters on attributes, and "-D" excludes directories. Next, the "/O" option sorts the output and the "S" puts the smallest files first. Finally, "/b" gives us the bare format.

At this point we have the files in the proper order and in a nice light format. We can now use a For loop to delete everything while skipping the first file.

C:\> for /F "tokens=* skip=1" %i in ('dir /A-D /OS /b') do @del %i

Here is the same functionality in PowerShell:

PS C:\> Get-ChildItem | Where-Object { -not $_.PSIsContainer } | Sort-Object -Property Length | Select-Object -Skip 1 | Remove-Item

This is mostly readable. The only exception is "PSIsContainer". Directories are container objects but files are not, so we filter out the containers (directories). Here is the same command shortened using aliases and positional parameters:

PS C:\> ls | ? { !$_.PSIsContainer } | sort Length | select -skip 1 | rm

There you go, Ted, and in PowerShell too, even though you didn't want it. Here comes Hal bringing something even smaller that you don't want.

Hal's is smaller than Tim's... but less sweaty

Tim, how many times do I have to tell you, smaller is better when it comes to command lines:

ls -Sr | tail -n +2 | xargs rm

It's actually not that different from Tim's PowerShell solution, except that my "ls" command has "-S" to sort by size as a built-in. We use the "-r" flag to reverse the sort, putting the smallest file first and skipping it with "tail -n +2".

If you're worried about spaces in the file names, we could tart this one up a bit more:

ls -Sr | tail -n +2 | tr \\n \\000 | xargs -0 rm

After I use "tail" to get rid of the first, smallest file, I use "tr" to convert the newlines to nulls. That allows me to use the "-0" flag to "xargs" to split the input on nulls, and preserves the spaces in the input file names.

What may be more interesting about this Episode is the command line I used to create and re-create my files for testing. First I made a text file with lines like this:

1 3
2 4
3 1
4 2

And then I whipped up a little loop action around the "dd" command:

$ while read file size; do 
      dd if=/dev/zero bs=4K count=$size of=file$file; 
  done <../input.txt 
3+0 records in
3+0 records out
12288 bytes (12 kB) copied, 6.1259e-05 s, 201 MB/s
4+0 records in
4+0 records out
16384 bytes (16 kB) copied, 0.000144856 s, 113 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 3.4961e-05 s, 117 MB/s
2+0 records in
2+0 records out
8192 bytes (8.2 kB) copied, 4.3726e-05 s, 187 MB/s

Then I just had to re-run the loop whenever I wanted to re-create my test files after deleting them.

Tuesday, October 8, 2013

Episode #171: Flexibly Finding Firewall Phrases

Old Tim answers an old email

Patrick Hoerter writes in:
I have a large firewall configuration file that I am working with. It comes from that vendor that likes to prepend each product they sell with the same "well defended" name. Each configuration item inside it is multiple lines starting with "edit" and ending with "next". I'm trying to extract only the configuration items that are in some way tied to a specific port, in this case "port10".

Sample Data:

edit "port10"
        set vdom "root"
        set ip 192.168.1.54 255.255.255.248
        set allowaccess ping
        set type physical
        set sample-rate 400
        set description "Other Firewall"
        set alias "fw-outside"
        set sflow-sampler enable
   next
edit "192.168.0.0"
        set subnet 192.168.0.0 255.255.0.0
    next
    edit "10.0.0.0"
        set subnet 10.0.0.0 255.0.0.0
    next
    edit "172.16.0.0"
        set subnet 172.16.0.0 255.240.0.0
    next
  edit "vpn-CandC-1"
        set associated-interface "port10"
        set subnet 10.254.153.0 255.255.255.0
    next
    edit "vpn-CandC-2"
        set associated-interface "port10"
        set subnet 10.254.154.0 255.255.255.0
    next
    edit "vpn-CandC-3"
        set associated-interface "port10"
        set subnet 10.254.155.0 255.255.255.0
    next
   edit 92
        set srcintf "port10"
        set dstintf "port1"
            set srcaddr "vpn-CandC-1" "vpn-CandC-2" "vpn-CandC-3"            
            set dstaddr "all"            
        set action accept
        set schedule "always"
            set service "ANY"            
        set logtraffic enable
    next
 

Sample Results:

edit "port10"
        set vdom "root"
        set ip 192.168.1.54 255.255.255.248
        set allowaccess ping
        set type physical
        set sample-rate 400
        set description "Other Firewall"
        set alias "fw-outside"
        set sflow-sampler enable
   next
  edit "vpn-CandC-1"
        set associated-interface "port10"
        set subnet 10.254.153.0 255.255.255.0
    next
    edit "vpn-CandC-2"
        set associated-interface "port10"
        set subnet 10.254.154.0 255.255.255.0
    next
    edit "vpn-CandC-3"
        set associated-interface "port10"
        set subnet 10.254.155.0 255.255.255.0
    next
   edit 92
        set srcintf "port10"
        set dstintf "port1"
            set srcaddr "vpn-CandC-1" "vpn-CandC-2" "vpn-CandC-3"            
            set dstaddr "all"            
        set action accept
        set schedule "always"
            set service "ANY"            
        set logtraffic enable
    next

Patrick gave us the full text and the expected output. In short, he wants the text between "edit" and "next" if it contains the text "port10". To begin this task we first need to get each of the edit/next chunks.

PS C:\> ((cat fw.txt) -join "`n") | select-string "(?s)edit.*?next" -AllMatches | 
 select -ExpandProperty matches

This command will read the entire file fw.txt and combine it into one string. Normally, each line is treated as a separate object, but we join them into one big string using the newline (`n) between lines. Now that the text is one big string we can use Select-String with a regular expression to find all the matches. The regular expression matches text across line breaks and allows for very flexible searches, so we can find our edit/next chunks. Here is a breakdown of the pieces of the regular expression:

  • (?s) - Use single line mode where the dot (.) will match any character, including a newline character. This allows us to match text across multiple lines.
  • edit - the literal text "edit"
  • .*? - find any text, but be lazy, not greedy. This means it should match the smallest chunks that will match the criteria.
  • next - the literal text "next"

Now that we have the chunks we use a Where-Object filter (alias ?) to find matching objects to pass down the pipeline.

PS C:\> ((cat .\fw.txt) -join "`n") | select-string "(?s)edit.*?next" -AllMatches | 
 select -ExpandProperty matches | ? { $_.Value | Select-String "port10" }

Inside the Where-Object filter we check the Value property to see if it contains the text "port10". The Value property is piped into Select-String to look for "port10"; if it matches, the object continues down the pipeline, and if not, it is dropped.

At this point, we have the objects we want, so all we need to do is display the results by expanding the Value property. The expansion means that we get just the text, without the data or metadata associated with the parent object. Here is what the final command looks like.

PS C:\> ((cat .\fw.txt) -join "`n") | select-string "(?s)edit.*?next" -AllMatches | 
 select -ExpandProperty matches | ? { $_.Value | Select-String "port10" } | 
 select -ExpandProperty Value

Not so bad, but I have a feeling it is going to be worse for my friend Hal.

Old Hal uses some old tricks

Oh sure, I know what Tim's thinking here. "It's multi-line matching, and the Unix shell is lousy at that. Hal's in trouble now. Mwhahaha. The Command-Line Kung Fu title will finally be mine! Mine! Do you hear me?!? MINE!"

Uh-huh. Well how about this, old friend:

awk -v RS=next -v ORS=next '/port10/' fw.txt

While we're doing multi-line matching here, the blocks of text have nice regular delimiters. That means I can change the awk "record separator" ("RS") from newline to the string "next" and gobble up entire chunks at a time.

After that, it's smooth sailing. I just use awk's pattern-matching operator to match the "port10" strings. Since I don't have an action defined, "{print}" is assumed and we output the matching blocks of text.

The only tricky part is that I have to remember to change the "output record separator" ("ORS") to be "next". Otherwise, awk will use its default ORS value, which is newline. That would give me output like:

$ awk -v RS=next '/port10/' fw.txt
edit "port10"
        set vdom "root"
        set ip 192.168.1.54 255.255.255.248
        set allowaccess ping
        set type physical
        set sample-rate 400
        set description "Other Firewall"
        set alias "fw-outside"
        set sflow-sampler enable
   

  edit "vpn-CandC-1"
        set associated-interface "port10"
        set subnet 10.254.153.0 255.255.255.0
    

    edit "vpn-CandC-2"
        set associated-interface "port10"
...

The "next" terminators get left out and we get extra lines in the output. But when ORS is set properly, we get exactly what we were after:

$ awk -v RS=next -v ORS=next '/port10/' fw.txt
edit "port10"
        set vdom "root"
        set ip 192.168.1.54 255.255.255.248
        set allowaccess ping
        set type physical
        set sample-rate 400
        set description "Other Firewall"
        set alias "fw-outside"
        set sflow-sampler enable
   next
  edit "vpn-CandC-1"
        set associated-interface "port10"
        set subnet 10.254.153.0 255.255.255.0
    next
    edit "vpn-CandC-2"
        set associated-interface "port10"
...

So that wasn't bad at all. Sorry about that Tim. Maybe next time, old buddy.

Friday, September 27, 2013

Episode #170: Fearless Forensic File Fu

Hal receives a cry for help

Fellow forensicator Craig was in a bit of a quandary. He had a forensic image in "split raw" format-- a complete forensic image broken up into small pieces. Unfortunately for him, the pieces were named "fileaa", "fileab", "fileac", and so on while his preferred tool wanted the files to be named "file.001", "file.002", "file.003", etc. Craig wanted to know if there was an easy way to rename the files, using either Linux or the Windows shell.

This one's not too hard in Linux, and in fact it's a lot like something we did way back in Episode #26:

c=1; 
for f in file*; do 
    printf -v ext %03d $(( c++ )); 
    mv $f ${f/%[a-z][a-z]/.$ext}; 
done

You could remove the newlines and make that one big long line, but I think it's a bit easier to read this way. First we initialize a counter variable $c to 1. Then we loop over each of the files in our split raw image.

The printf statement inside the loop formats $c as three digits, with however many leading zeroes are necessary ("%03d"). There are a couple of tricky bits in the printf though. First, we're assigning the output of printf to a variable $ext ("-v ext"). Second, we're doing a little arithmetic on $c at the same time and using the "++" operator to increment the value of $c each time through the loop-- that's the "$(( c++ ))" part.
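
Here's a tiny standalone demo of those two tricks, just to see what ends up in $ext and $c after one pass (not part of the rename loop itself):

$ c=1; printf -v ext %03d $(( c++ )); echo "ext=$ext c=$c"
ext=001 c=2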

Then we use mv to rename our file. I'm using the variable substitution operator like we did in Episode #26. The format again is "${var/pattern/substitution}" and here the "%" after the first slash means "match at the end of the string". So I'm replacing the last two letters in the file name with a dot followed by our $ext value. And that's exactly what Craig wanted!
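
And if the suffix-anchored substitution looks like line noise, here it is in isolation on one of Craig's file names (a throwaway example using .003 as the new extension):

$ f=fileac; ext=003; echo ${f/%[a-z][a-z]/.$ext}
file.003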

All of the symbols in this solution make it appear like a little chunk of line noise, but it's nowhere near as ugly as Ed's CMD.EXE solution in Episode #26. Here's hoping Tim's Powershell solution is a bit more elegant.

Tim finishes before September ends!

Elegance, here we come!

Long Version:
PS C:\> $i=1; Get-ChildItem file?? | Sort-Object -Property Name | 
  ForEach-Object { Move-Item -Path $_ -Destination ("file.{0:D3}" -f $i++) }
Shortened Version:
PS C:\> $i=1; ls file?? | sort name | % { move $_ -dest ("file.{0:D3}" -f $i++) }

We start off by initializing our counter variable ($i) to 1 just like Hal did. Next, we list all the files that start with "file" and are followed by exactly two characters (each ? matches exactly 1 character of any kind). The results are sorted by file name to ensure that the files are renamed in the correct order, and then fed into the ForEach-Object cmdlet (alias %).

The ForEach-Object loop will operate on each object (file) as it moves down the pipeline. One at a time, each file will be represented by the current pipeline object ($_). The Move-Item cmdlet (alias move) is used to rename a file, i.e., to move it to its new name. The source path is provided by the current object and the destination is built using the format operator (-f) and our counter ($i). The format operator prints "file." followed by $i as a three-digit number padded with leading zeros. The ++ after $i increments the counter after it has been used.

That is much cleaner than Ed's example...and even cleaner than Hal's to boot!

Update:

Reader m_cnd writes in with a solution for CMD:

C:\> for /F "tokens=1,2 delims=:" %d in ('dir /on /b file* ^| 
findstr /n "file"') do for /F %x in ('set ext^=00%d^&^& 
cmd /v:on /c "echo !ext:~-3!"') do rename %e file.%x
Nice work!