On Windows, we can list the size and name of every file on c:\:
C:\> for /r c:\ %i in (*) do @echo %~zi, %i
This can be useful for finding files that are hogging disk space so you can back them up or delete them. To dump the output into a file, append >> filename.csv to the command. You can then open it in a spreadsheet for searching or sorting.
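For example, with the append redirection in place (filename.csv is just a placeholder; any path will do):

C:\> for /r c:\ %i in (*) do @echo %~zi, %i >> filename.csv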
Hal's Comments:
Of course, Unix folks tend to use du for tracking down directories that are consuming large amounts of disk space.
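A typical invocation might look something like this (a sketch; -k is POSIX, though du's options and output vary a bit across platforms):

# du -k / | sort -n | tail -20

That shows the twenty largest directory trees in kilobytes. But if you really want to drill down on individual files: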
# find / -type f -exec wc -c {} \;
How about sorting that list so it's easier to find the big files:
# find / -type f -exec wc -c {} \; | sort -n
Or how about just the 100 largest files, in descending order of size:
# find / -type f -exec wc -c {} \; | sort -nr | head -100
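As an aside, if you happen to have GNU find, you can get the same list without spawning a wc process per file by using -printf (an alternative technique, not part of the one-liner above; %s is the size in bytes and %p is the path):

# find / -type f -printf '%s %p\n' | sort -nr | head -100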
By the way, this is an interesting example of a case where I really want to use "find ... -exec ..." rather than "find ... | xargs ...". If I were to let xargs aggregate the calls to "wc -c", I'd end up with wc interspersing totals in the middle of my output. By calling wc individually on each file with find, I get output that's much nicer for piping into other programs.
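To see the problem concretely, here's a sketch of the xargs version (which also chokes on filenames containing spaces unless you add -print0/-0):

# find / -type f | xargs wc -c | sort -n

When wc is handed multiple files it appends a "total" line, and xargs invokes wc once per batch of arguments, so those spurious "total" lines get sorted right into the middle of the real results.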