Tuesday, July 27, 2010

Episode #105: File Triage

Hal answers the mail:

Frank McClain, one of my former SANS For508 students, sent me some email containing a bit of Command-Line Fu that he and his co-worker Mark Hallman had developed. The problem they were trying to solve was to sort out Microsoft Office files carved out of a forensic image using a tool like foremost or scalpel. The file format signatures used by these tools don't distinguish different types of MS Office files, so when I run foremost for example, I end up with a directory of file names with the generic ".ole" extension:

# ls
00003994.ole 00004618.ole 00005146.ole 00005746.ole 00015410.ole
00004162.ole 00004722.ole 00005250.ole 00010994.ole 00051554.ole
00004226.ole 00004826.ole 00005354.ole 00011410.ole 00054226.ole
00004290.ole 00004890.ole 00005418.ole 00012274.ole
00004394.ole 00004954.ole 00005514.ole 00014154.ole
00004498.ole 00005018.ole 00005578.ole 00014858.ole
00004554.ole 00005082.ole 00005642.ole 00015330.ole

Happily, the Linux "file" command is not only able to distinguish which file types you have, but can also show you a wealth of information from the file meta-data:

# file 00004618.ole
00004618.ole: CDF V2 Document, Little Endian, Os: Windows, Version 5.1, Code page: 1252,
Title: SANS Expense Report Spreadsheet, Subject: SANS Expense Report Spreadsheet,
Author: Jason Fossen, Keywords: SANS spreadsheet expense report, Comments: Version 2.0 --
Updated 2/17/02., Last Saved By: Hal Pomeranz, Name of Creating Application: Microsoft Excel,
Last Printed: Fri Oct 31 01:31:31 2003, Create Time/Date: Sat Sep 29 18:28:43 2001,
Last Saved Time/Date: Sat Aug 15 23:05:29 2009, Security: 0

There's lots of interesting information here, but for our purposes "Name of Creating Application: Microsoft Excel" is the helpful bit. Frank and Mark wanted to recognize the file type in the output of the "file" command and change the extension on the file from ".ole" to the appropriate Windows file extension-- ".xls" in this case. Here's their solution:

file -p *.ole | grep -i excel | awk -F: '{print $1}' | rename 's/\.ole/\.xls/'

"file -p" will dump the information about the files while attempting to preserve the file timestamps. Then we use grep to match the Excel files and awk to pull off the file name-- "-F:" splits on colons instead of whitespace and then we print the first field. The file names get fed into the rename command which changes the file extensions.

This example triggers one of my pet peeves: piping grep into awk. awk has built-in pattern matching, so we could rewrite the command line as:

file -p *.ole | awk -F: '/Excel/ {print $1}' | rename 's/\.ole/\.xls/'

Another problem in the context of the Command-Line Kung Fu blog is the use of the rename command, which isn't necessarily available on all Unix operating systems. Also, I'd like a single command that splits out all the different types of MS Office files at once, rather than running one command for Excel spreadsheets, then another for PowerPoint files, and so on.

So here's my solution:

# file -p * | awk -F. '/Excel/ { system("mv "$1".ole "$1".xls") }; 
/PowerPoint/ { system("mv "$1".ole "$1".ppt") }'

# ls
00003994.ole 00004618.xls 00005146.xls 00005746.xls 00015410.ppt
00004162.xls 00004722.xls 00005250.xls 00010994.ppt 00051554.ppt
00004226.xls 00004826.xls 00005354.xls 00011410.ppt 00054226.ppt
00004290.xls 00004890.xls 00005418.xls 00012274.ppt
00004394.xls 00004954.xls 00005514.xls 00014154.ole
00004498.xls 00005018.xls 00005578.xls 00014858.ppt
00004554.xls 00005082.xls 00005642.xls 00015330.ppt

I'm using awk to differentiate between the Excel and PowerPoint files (by default, foremost automatically detects Word files and splits them out into another directory so I don't need to deal with them here). I then call system() which allows me to run a shell command-- in this case a mv command to rename the files with the appropriate extensions. But notice something subtle here: unlike Frank and Mark's command, I'm telling awk to split on period ("-F."). So $1 in this case only contains the "basename" of the file before the ".ole" extension. That makes my mv command a bit simpler, though the crazy quoting rules in awk make the whole thing rather ugly.
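
If the awk quoting bothers you as much as it bothers me, here's a rough sketch of the same renaming logic as a plain shell loop (not part of Frank and Mark's original fu-- just one possible alternative):

# look at each carved file and rename it based on what "file" reports
for f in *.ole; do
  case "$(file -p "$f")" in
    *Excel*)      mv "$f" "${f%.ole}.xls" ;;
    *PowerPoint*) mv "$f" "${f%.ole}.ppt" ;;
    *Word*)       mv "$f" "${f%.ole}.doc" ;;
  esac
done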

Careful readers will notice that there are still a couple of files in the directory with .ole extensions:

# file *.ole
00003994.ole: CDF V2 Document, corrupt: Can't read SSAT
00014154.ole: CDF V2 Document, corrupt: Can't read SSAT

These are files that matched foremost's signature for MS Office files, but which are not really Office docs. They're just random sequences of blocks that happened to match the file signature that foremost uses.

Poor Tim doesn't have anything like the "file" command in Windows. But I'm sure that finding a solution for this week's challenge will be a "character building" experience for him. In fact, I'm expecting him to be positively bursting with character very soon now...

Tim bursts:

Alas, there is no Windows equivalent to the file command, and I don't see it coming. We will have to do this the hard way. Time to build some character.

Before we jump into the episode, let's go over a bit of how the file command works so we can recreate some of its functionality. One of the ways that the command determines the file type is by looking at the 'magic number' of the file. The magic number is a byte sequence towards the beginning of the file and it is typically 16 or fewer bytes in length. Also, each file type has a unique signature.

The reason all the carved files are named *.ole is that the magic number is the same for all Microsoft Office documents for versions 97 through 2003. That means an Excel 97 document has the same magic number as a Word 2003 document. To better classify the document type we have to look elsewhere.
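
If you want to see that shared signature for yourself, here's a quick sketch (using one of the carved file names from Hal's listing, purely as an example) that dumps the first eight bytes in hex. Every one of these OLE carve-outs should start with the same D0 CF 11 E0 A1 B1 1A E1 sequence:

PS C:\> (Get-Content 00004618.ole -Encoding Byte -TotalCount 8 | % { "{0:X2}" -f $_ }) -join " "
D0 CF 11 E0 A1 B1 1A E1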

According to a few websites there is supposed to be a unique identifier between 6 and 8 bytes long at offset 512 (512 bytes into the file, counting from zero). I downloaded a bunch of documents, as well as created some myself, and it seems the websites did not have a full list, nor was the identifier consistent. There are more boring details, but I finally settled on the best way to identify the file type: towards the end of the file there is an indicator of the file type, as shown here.


<0x00>Microsoft Office Word<0x00>
<0x00>Microsoft Office PowerPoint<0x00>
<0x00>Microsoft Office Excel 2003 Worksheet<0x00>


Unfortunately, Excel adds the version number to the end of the name so we aren't just limited to three possibilities. However, if we key off of the null byte, the words Microsoft Office, and the next word, then we can determine our file type.

Since a user can't (normally) type a null byte (0x00) into an Office document, we can be reasonably sure that if we find one of the search strings above it will accurately determine the file type. Here is how we do it.

PS C:\> ls *.ole | % { Select-String -Path $_.FullName
"(?<=`0Microsoft Office )([A-Z]+)" -List } | select Filename, Matches


Filename Matches
-------- -------
00003994.ole {Word}
00004162.ole {PowerPoint}
00004226.ole {Excel}


Cool, we can accurately identify the files, but how does it work? First we use Get-ChildItem (alias ls) to list the .ole files and pipe them into ForEach-Object (alias %), where Select-String reads each file via its -Path parameter. The Select-String cmdlet uses a regular expression with a lookbehind to grab the word following a null byte (`0) plus "Microsoft Office ".
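
If the lookbehind syntax is new to you, here's a quick standalone test (against a made-up string, not a real file) showing that the regex grabs just the word we care about:

PS C:\> "`0Microsoft Office Excel 2003 Worksheet" -match "(?<=`0Microsoft Office )([A-Z]+)"
True
PS C:\> $Matches[0]
Excel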

Before we get really crazy, let's do some simple file renaming.

PS C:\> ls *.ole | ? { Select-String -Path $_.FullName "`0Microsoft Office Word`0" } |
% { mv $_ ($_.Name -replace ".ole", ".doc") }


PS C:\> ls *.ole | ? { Select-String -Path $_.FullName "`0Microsoft Office PowerPoint`0" } |
% { mv $_ ($_.Name -replace ".ole", ".ppt") }


PS C:\> ls *.ole | ? { Select-String -Path $_.FullName "`0Microsoft Office Excel" } |
% { mv $_ ($_.Name -replace ".ole", ".xls") }


First we get a listing of all the .ole files. The files are then filtered before they are passed down the pipeline. The filter looks inside each file, using Select-String, to find a given string. The files are then renamed using Move-Item (alias mv). The Move-Item cmdlet takes two parameters, the original file and the new file name. The original file is the object passed down the pipeline and is designated by $_. The new name is just the old name ($_.Name) where .ole is replaced with the new file extension.
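
One small nit worth mentioning (my aside, not a problem with the fu above): the -replace operator treats its first argument as a regular expression, so the "." matches any character. If you want to be pedantic and only rewrite a literal ".ole" at the very end of the name, you can escape and anchor it:

PS C:\> ls *.ole | ? { Select-String -Path $_.FullName "`0Microsoft Office Word`0" } |
% { mv $_ ($_.Name -replace '\.ole$', '.doc') }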

One problem: a file can be read up to three times before it is renamed, which will obviously slow things down. So let's speed this crap up.

PS C:\> ls *.ole | % { $a = $_; Select-String -Path $_.FullName
"(?<=`0Microsoft Office )([A-Z]+)" -List } | % { switch ($_.Matches[0]) {
"Excel" { mv $a ($a.Name -replace ".ole", ".xls") }
"Word" { mv $a ($a.Name -replace ".ole", ".doc") }
"PowerPoint" { mv $a ($a.Name -replace ".ole", ".ppt") } } }


Let's space this out so it is a bit easier to read:

PS C:\> ls *.ole | % {
$a = $_;
Select-String -Path $_.FullName "(?<=`0Microsoft Office )([A-Z]+)" -List } | % {
switch ($_.Matches[0]) {
"Excel" { mv $a ($a.Name -replace ".ole", ".xls") }
"Word" { mv $a ($a.Name -replace ".ole", ".doc") }
"PowerPoint" { mv $a ($a.Name -replace ".ole", ".ppt") }
}
}


This is similar to the first command line fu. We take all the .ole files and pipe them into the ForEach-Object cmdlet. We use a temporary variable ($a) to store our file object since we will need it further down the command.

Next, the Select-String cmdlet is used to grab the special word (Word/Excel/PowerPoint) following <null byte>Microsoft Office. The -List switch is used to stop searching each file after the first match is found. Select-String returns a MatchInfo object that contains our match.

The first match is the 0th item in the Matches collection. We use this as the input into our switch command. The switch is used to pick the correct file extension.

Ok, that wasn't pretty. That brought the pain. But at least I built a bit of character.

Seth Matheson apparently also needed a bit of character, because he concocted some tasty Mac OS fu to solve this problem using Spotlight. But I still think my awk is prettier than his case statement.

Tuesday, July 20, 2010

Episode #104: Fricken' Users

Hal remembers fondly

I remember it as if it were only last week. There we were, having a quiet little celebration for our 100th Episode on the PaulDotCom Podcast. A little trash talk, some fart jokes, and, of course, Ed's big announcement ("I quit! Hal wins!"). And then one of the folks on the IRC channel had to harsh our mellow by pointing out that we've never done an Episode about adding users via the command line. Well we're not the sort of people to take a challenge like that lying down (at least when we're sober)!

Interestingly, the useradd command is one of the few fairly consistent commands across all flavors of Unix-like operating systems. I've used essentially identical useradd commands to add users on Linux, Solaris, and BSD. Here's a sample of the typical usage:

# useradd -g users -G adm,wheel -c 'Hal Pomeranz' -m -d /home/pomeranz -s /bin/bash pomeranz

The "-g" option is used to specify the primary group for the account and "-G" can be used to set supplemental group memberships in /etc/group. "-c" sets the comment (aka full name or GECOS) field in /etc/passwd. "-d" specifies the user's home directory, and "-m" tells useradd to make this directory (don't use "-m" if the home directory is on a share that you don't have root write access to-- you'll have to make it by hand). "-s" specifies the shell. The final argument is always the name of the user, and somehow I usually manage to forget this vital piece of information the first time I run the command. Stupid computers! Why can't they do what I want them to do instead of what I tell them to do?

Note that the useradd command does not set the user's password. While there's normally a "-p" option on most useradd commands to specify a hashed password string on the command line, this is not terribly secure even though the password is hashed-- the string ends up in your shell history and is visible to anyone watching the process list (hello, password crackers!). If you don't specify the password, then the account will be created with a locked password which you will need to set with the passwd command.
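
So a typical follow-up to the useradd example above looks something like this (the exact prompts vary a bit from OS to OS):

# passwd pomeranz
New password:
Retype new password: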

Actually, there are typically defaults for some or all of the parameters given on the command line above, though what is defaulted and the values of those defaults can vary widely from Unix variant to Unix variant. This is why I tend to use commands like you see above that are very explicit about what I want to set. "useradd -D" will show you the defaults available on your platform. Here's the output from one of my Linux systems:

# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes

You can see the defaults for GROUP and SHELL. The directory given by the HOME parameter is obviously a prefix, so if you don't specify a homedir with "-d", then the default in this case will be "/home/<username>". SKEL is actually worth noting-- any files you place in this directory will automatically be copied into the new user's home directory. This makes it easy to give your new users customized .profile, .bashrc, and other start-up files for your environment.

So how can you change these defaults? Well, that's where things start getting a little squirrelly. Generally, there are options that can follow "-D" in order to reset these defaults. For example, "useradd -D -k /usr/local/etc/skel" would change the default location of the SKEL parameter. But on the Red Hat machine where I'm preparing these examples, there's no option for setting the value of CREATE_MAIL_SPOOL. That means we need to go and edit the configuration file for useradd directly to change this parameter. Typically the path to this file is given in the system manual page (it's /etc/default/useradd on Red Hat systems). You're going to need to look at the manual page anyway, because the defaults and the option letters to reset them vary so widely from OS to OS.
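
To make that concrete, here's a hypothetical sketch on a Red Hat-style box-- change the default shell via "-D", then flip CREATE_MAIL_SPOOL by editing /etc/default/useradd directly (option letters and file locations will differ on other systems):

# useradd -D -s /bin/zsh
# useradd -D | grep -E 'SHELL|CREATE_MAIL_SPOOL'
SHELL=/bin/zsh
CREATE_MAIL_SPOOL=yes
# sed -i 's/^CREATE_MAIL_SPOOL=.*/CREATE_MAIL_SPOOL=no/' /etc/default/useradd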

And, yes Virginia, there is a userdel command for removing users:

# userdel -r pomeranz

The "-r" option tells userdel to go ahead and remove the user's home directory when cleaning up the account. Some sites prefer not to do this, which is why it's an option instead of the default behavior. Note that userdel will got through /etc/group and clean up all of the user's supplemental group listings.

The only problem with useradd/userdel is that they generally only work on the local user database on the system. If you're using LDAP or some other networked user database, then you'll have to use an alternate tool. But in these cases, the user management software you're using will come with some sort of application-specific user adding/removing tool.

Let's see what Tim can come up with this week...

Tim creates an army:

Since Ed is gone, this is a perfect week to conjure up my own army of Eds. We can use the Windows shells to create local computer accounts, as well as domain accounts. First, the local machine.

We can start off with the good old net user command. This command works in cmd and in PowerShell.

C:\> net user Ed Tim15myHero! /add


This creates a user named Ed, with a password of Tim15myHero! on the local machine. What happens if you try to create a user with a longer password?

C:\> net user Ed This15aLOOOOOONGpassword /add
The password entered is longer than 14 characters. Computers
with Windows prior to Windows 2000 will not be able to use
this account. Do you want to continue this operation? (Y/N) [Y]: Y


Windows barks about backwards compatibility. This warning is related to the password storage on older machines, specifically the LAN Manager (LanMan or LM) hash, which can only handle 14 characters. As a side note, when creating passwords (or better, passphrases) in Windows it is advisable to use one that is at least 15 characters long so that Windows can't store the password in this weak form. Windows still stores the LM hash by default in versions prior to Vista and Windows Server 2008.

Ok, so back to creating the army.

When we created the users above, the password was typed on the command line, so it was visible on the screen. If we use * as the password instead, we will be prompted to enter the password and it won't be echoed to the screen.

C:\> net user Ed * /add
Type a password for the user:


Users can also be added to the Windows domain.

C:\> net user Ed Tim15myHero! /add /domain


Deleting the users is pretty easy too.

C:\> net user Ed /delete
C:\> net user Ed /delete /domain


PowerShell v2, via the Active Directory module's New-ADUser cmdlet, gives us the ability to create users on the domain. However, it does require a Windows Server 2008 R2 domain controller.

This cmdlet gives us a lot of power to set a wide range of AD attributes, but we won't use them all. Let's keep it simple and create an account and set a few of the most common attributes.

PS C:\> New-ADUser edskoudis -SamAccountName edskoudis -GivenName Ed -Surname Skoudis -DisplayName "Skoudis, Ed"


This cmdlet allows us to set the common attributes. If there is an attribute you would like to set upon creation of the account that isn't standard, you can use the OtherAttributes parameter to access the attribute.

PS C:\> New-ADUser edskoudis -OtherAttributes @{extendedAttribute1="gone"}


We can't just pass the account password as a plain string, since the AccountPassword parameter expects a secure string. It takes a bit of extra effort to use a clear text password with this cmdlet: we nest a ConvertTo-SecureString call inside the New-ADUser command.

PS C:\> New-ADUser edskoudis -AccountPassword (ConvertTo-SecureString "Tim15myHero!" -AsPlainText -force)


The AsPlainText parameter tells the cmdlet that a plain text string will be used and it must be used with the -Force option.
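
And if you'd rather keep the clear text password off the command line (and out of your command history) entirely, one alternative is to prompt for a secure string instead; Read-Host hands New-ADUser a secure string directly, no conversion required:

PS C:\> New-ADUser edskoudis -AccountPassword (Read-Host -AsSecureString "Password")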

Now that we can create accounts from the command line, I can automate the creation of my Ed army.

PS C:\> 1..5000 | % { New-ADUser "ed$_" }

Tuesday, July 13, 2010

Episode #100: The Lost Episode

Tim, Hal and Ed, collectively known as THE, have traveled from the past to deliver episode #100 live on PaulDotCom Security Weekly. The show goes live on Thursday, July 15th at 7:30 PM EDT (GMT -4). You can follow the stream to listen live.

Please join us on irc as well:
irc.freenode.net
#pauldotcom

We will be back to our regular shenanigans next Tuesday.

Sincerely,

The THE

Tuesday, July 6, 2010

Episode #103: Size Might Matter... But Timing is Everything

Ed checks the mailbag:

Diligent reader and command-line warrior Esther Yee writes in:

Would I get your advice on how to track all the files which I have
accessed in my pc (Window XP) on yesterday, with the time of
accessed pls? what would be the command line then? I only know
about last modified, last created. how about last accessed?

Great question, Esther! On first read, it sounds like a cousin of the issue we faced last week, looking for files based on their size. But, from the cmd.exe perspective, there are some important subtleties here with big implications.

I'm gonna start out by pretending that you asked about last modified time, instead of last accessed. Yup... I'm gonna ignore the substance of the question for now, because we have to build up to it. So, how can we find files that were modified on a given date (such as yesterday) and pluck out their modified time? Taking a cue from Episode #102, we could run:
C:\> cmd.exe /v:on /c "for /r c:\ %i in (*) do @set datetime=%~ti& set 
date=!datetime:~0,10!& if !date! equ 07/03/2010 set time=!datetime:~-8,8!&
echo !time! %~fi"
05:06 AM c:\tmp\what.txt
05:13 AM c:\WINDOWS\WindowsUpdate.log
06:24 AM c:\WINDOWS\Prefetch\CMD.EXE-087B4001.pf
05:32 AM c:\WINDOWS\Prefetch\DEFRAG.EXE-273F131E.pf
05:32 AM c:\WINDOWS\Prefetch\DFRGNTFS.EXE-269967DF.pf


Here, I'm invoking delayed variable expansion (cmd.exe /v:on /c) and then running a FOR /R loop, just like last week's episode, to iterate through files recursively. I'm looking in c:\, with file names assigned to iterator variable %i. I'm looking for any type of file in my set, with in (*). In the body of my loop (do), I'm turning off display of commands (@). I then store the date/time associated with a file (%~ti) in the variable datetime so that I can perform substring operations on it (you can't do substring ops on an iterator variable directly). I've smushed the & right after the %~ti so that I don't get an extraneous space after it. I then set a variable called date to the first 10 characters of datetime (set date=!datetime:~0,10!).

Next, I check to see if the date is equal to the date in question (I put 07/03/2010 here as an example). If that is the case, I set a variable called time to the last eight characters of date time (set time=!datetime:~-8,8!&), and then I simply echo the time as well as the full path to the file. With the filename stored in %i, our FOR /R loop put the full path to the file in %~fi.
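
If the substring syntax makes your eyes cross, here's a tiny standalone example (with made-up values) showing what those two expansions pull out of the datetime string:

C:\> cmd.exe /v:on /c "set datetime=07/03/2010 05:06 AM& echo !datetime:~0,10! and !datetime:~-8,8!"
07/03/2010 and 05:06 AM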

That's not too bad from a complexity perspective... but it doesn't answer Esther's question. This shows files that were last modified on the given date, not last accessed. It turns out getting the last accessed time is more difficult, because FOR /R loops give us the %~ti variable for time only in terms of last modified. If we want last accessed info, we can't rely on our FOR /R loop to give us the time option. We're going to have to rely on "dir /ta" instead (I wrote about using dir with /ta to get last access times in Episode #79). It's important to note right up front that, while dir /ta does show Windows last access date and time, this field is often not updated appropriately on Windows machines. Still, the timestamp given by "dir /ta" is officially the last access time, so we'll work with it.

So, let's toss out our FOR /R loop and just plow through with this, creating something that looks a bit like what we did for last modified time, but instead using a FOR /F loop to iterate over the output of a dir /s /ta command:
C:\> cmd.exe /v:on /c "for /f "tokens=1-4,*" %i in ('dir /a /s /ta c:\') do @set 
date=%i& if !date! equ 07/03/2010 @echo %j %k %m" | more
05:56 AM Documents and Settings
06:19 AM downloads
06:19 AM icecasttemp
06:19 AM Program Files
06:20 AM RECYCLER
05:32 AM System Volume Information

Here, I'm invoking delayed variable expansion again. Then, I start a FOR /F loop, which will let me iterate over the output of a command. I'm using some parsing logic to split up the output of my command, assigning iterator variables starting at %i to the first five fields of output ("tokens=1-4,*" %i). So, %i will get the first field, %j the second, %k the third, %l the fourth. Then, %m will get everything left through the end of the line, which may be a file name with spaces in it. The command whose output I'm iterating over is 'dir /a /s /ta'. I put a /a here so that I can get files with any kind of attributes (including hidden files). My FOR /R loop in the earlier command got files independent of their attributes, so I figured I should make my dir iterator comparable. To tell my FOR /F loop that I want this dir stuff to be interpreted as a command, and not a file or a string, I put it in single quotes in my in () clause. My dir command is recursing (/s) starting from c:\ and displaying the last access time in its output. The output of dir has the following columns, separated by spaces and tabs:
DATE  TIME AM/PM        SIZE NAME
With my parsing logic, %i will be DATE, %j will be TIME, and so on, up to %m, which will hold the name, even if it includes spaces.
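
If you'd like to see that token-splitting in isolation, here's a little stand-alone demo using a literal string (made-up values) in place of real dir output:

C:\> for /f "tokens=1-4,*" %i in ("07/03/2010 05:56 AM 1024 Documents and Settings") do @echo %i / %j / %k / %l / %m
07/03/2010 / 05:56 / AM / 1024 / Documents and Settings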

In my do clause, I simply set the date to %i, the first column of the dir output (if it's not a date because of the cruft at the start or end of dir's output, that's ok, because my next command, an if statement, will find that it doesn't match what I'm looking for, a date). I then check to see if the date equals what we're looking for (if !date! equ 07/03/2010), and if it does, I echo out %j, %k, and %m. What are those? Well, that would be the TIME, AM/PM, and NAME.

Sweet! So, what's the problem here? Well, the name is just, uh, the name. It's not the full path. We don't have access to the full path like we did in the nice little %~fi trick we had with FOR /R.

Now, normally, what you'd do with dir /s to get the full path is to use a /b, which gives the "bare" form of output (no volume name and total size cruft) but also has the useful side effect of showing full paths when used with /s. However, the /b option, when used with the /ta, overrides the /ta, giving us NO DATE OR TIME FIELDS. Don't ya just love cmd.exe?

OK, so we have a dilemma: to get the full path with dir /s, we use /b, which causes us to lose the datetime field, which kinda screws everything up. This happens a lot when using the dir command. You want to access something (like a timestamp with /t or even the owner of a file with /q), and you want the full path, but the /b which gives you the full path removes the other fields you want. Clearly, we need another approach. If this were going to be easy, Esther wouldn't have asked.

We can accomplish this by using some of the piece parts from above. Let's recurse through the file system using FOR /R, which gives us all those marvelous ways of referring to file properties (with the full path of %~fi). Then, we can run dir with /ta on each individual file to pull out the last accessed date and time, which we can check with some if logic. If the date matches what we're looking for, we can then print the time (which we'll parse out of our dir output using a FOR /F loop) and the full path, which will still be hanging around from our FOR /R loop. Yeah, that's the ticket. Here it is:
C:\> cmd.exe /v:on /c "for /r c:\ %a in (*) do @for /f "tokens=1-5" %i in 
('dir /a /ta "%~fa"') do @set date=%i& if !date! equ 07/03/2010 echo %j %k %~fa"
04:34 AM c:\Documents and Settings\All Users\Start Menu\New Office Document.lnk
04:34 AM c:\Documents and Settings\All Users\Start Menu\Open Office Document.lnk
04:34 AM c:\Documents and Settings\All Users\Start Menu\Set Program Access and Defaults.lnk
04:34 AM c:\Documents and Settings\All Users\Start Menu\Windows Catalog.lnk
04:34 AM c:\Documents and Settings\All Users\Start Menu\Windows Update.lnk

So, here, I've invoked delayed variable expansion, kicked off a FOR /R loop to go through c:\ and grab all files, setting each to iterator variable %a. Then, in the do clause of my FOR /R loop, I run a FOR /F loop to do some string parsing on the output of my 'dir /a /ta' command, which is used to pull the directory listing information from the full path of the file assigned by my FOR /R loop ("%~fa"). I have to put that full path in double quotes, or else spaces in directory or file names could cause trouble.

Then, in the do clause of my FOR /F loop, I put the date (%i) in a variable called date. I check to see if the date is equal to the date we had in mind. If it is, I echo out the time (%j) the AM/PM (%k) and the full path of our file (which still lives in %~fa courtesy of the FOR /R loop). Voila! Easy as... uh... pi.

Hal's rocking out

I seriously thought about just posting the Unix solution for this week's Episode without any explanation. But then I thought that would be just rubbing it in, and I'm bigger than that. Here's the solution, though:

find / -type f -atime -1 -print0 | xargs -0 ls -lu

Some commentary is in order here:


  • Notice that I'm just looking for regular files here ("-type f"). If you care about other kinds of files, you might want to suppress directories ("\! -type d"), since the atime on a directory gets updated every time the directory is listed. That means that directory atime info is mostly just noise.


  • The minus sign before the one in "-atime -1" means "less than". So you read that clause as "atime less than one day old". "+1" would mean "greater than one day".


  • Esther was kind enough to want to search for files with one-day granularity, which is all find can handle internally. If you need finer control than that, see the trick in Episode 29 (there's a rough sketch of that approach just after this list).


  • Since I can't be sure whether or not the file names are going to have spaces, quotes, or other funny characters in them, I'm using "-print0" to tell find to output the file names as null-terminated strings. "xargs -0" tells xargs to look for input formatted in this way.


  • "ls -lu" gives a detailed listing ("-l") showing last access times ("-u") instead of last modified times. We had to use "xargs -0 ls -lu" here because the built-in "-ls" operator in find only displays last modified times.


So there you go, a solution that fits easily into a 140 character tweet. Boy, that takes me back to the early days when Paul, Ed, and I were just a group of crazy young kids with visions of command-line glory in our eyes...

Tim does it quickly:

Ah, the glory days. Oh wait, I wasn't around then, but I'm still just as cynical as the old timers. And that vision of command line glory...I lost it moons ago.

If I had been tweeting fu back in the day, I would have had no trouble posting this one. Even the long version of the PowerShell command fits in 140 characters.

PS C:\> Get-ChildItem -Recurse -Force | Where-Object { $_.LastAccessTime -gt (Get-Date).AddDays(-1) }

So how does it work? A recursive directory listing, which includes system and hidden files by using the -Force option, is filtered based on the last access time. The filter looks for files where the LastAccessTime is greater than (-gt) yesterday at this time. The "date math" is accomplished by getting a date object and using its AddDays method to subtract a day.

The only shortcoming is that the default directory listing displays the LastWriteTime property, not the LastAccessTime. To display the LastAccessTime, or other properties we might want, we can pipe the output into the Select-Object cmdlet. Here is how we do just that:

PS C:\> ls -r -fo | ? { $_.LastAccessTime -gt (Get-Date).AddDays(-1) } | select LastAccessTime, Name

That's it. Short and sweet.