One of the most frequent questions I get regarding the Windows command line involves how to run commands on a remote Windows machine and get access to the standard output of the command. Sure, Microsoft SysInternals psexec rocks, but it's not built in. On Linux and Unix, ssh offers some great possibilities here, but neither ssh nor sshd is built into Windows (and what's with that? I mean... we need that. Call Microsoft right now and demand that they build ssh and sshd into Windows. Installing a third-party version is certainly doable, but we need it built in... starting about 5 years ago, thank you very much.)
Anyway, while there are many options for running a command on a remote Windows machine using built-in tools (such as at, schtasks, or sc), one of my faves is good old WMIC:
C:\> wmic /node:[targetIPaddr] /user:[admin] process call create "cmd.exe /c [command]"
That'll run [command] on the target, after prompting you for the given admin's password.
You won't see the standard output, though.
To get that, change it to:
C:\> wmic /node:[targetIPaddr] /user:[admin] process call create "cmd.exe /c [command] >> \\[YourIPaddr]\[YourShare]\results.txt"
Make sure you have [YourShare] open on your box so the target machine and [admin] user can write to your share. The results.txt file will have your standard output of the command once it is finished.
Oh, and to execute a command en masse on a bunch of targets, you could use /node:@[filename.txt], where the file contains one machine name or IP address per line for each target you want to run the given command on.
Not nearly as elegant as what I'm sure my sparring partners will come up with for Linux, but it is workable.
Thanks for throwing us a bone here, Ed. With SSH built into every modern Unix-like operating system, remote commands are straightforward:
$ ssh remotehost df -h
Sometimes, however, you need to SSH as a different user-- maybe you're root on the local machine, but the remote system doesn't allow you to SSH directly as root, so you have to use your normal user account. There's always the "-l" option:
$ ssh -l pomeranz remotehost df -h
But what if you want to scp files as an alternate user? The scp command doesn't have a command line option like "-l" to specify an alternate user.
One little-known trick is that both ssh and scp support the old "user@host" syntax that's been around since the rlogin days. So these commands are equivalent:
$ ssh -l pomeranz remotehost df -h
$ ssh pomeranz@remotehost df -h
Personally, I never use "-l"-- I find "user@host" more natural to type and it works consistently across a large number of SSH-based utilities, including rsync.
Unlike wmic, SSH does not have built-in support for running the same command on several targets. The "Unix design religion" is that you're supposed to do this with other shell primitives:
$ for h in $(< targets); do echo ===== $h; ssh $h df -h; done
By the way, note the "$(< targets)" syntax in the above loop, which is just a convenient alternate form of "`cat targets`".
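To see that loop pattern in action without any live targets, here's a self-contained sketch -- the /tmp/targets_demo path and the hostnames in it are invented purely for illustration (note that "$(< file)" is a bash/ksh/zsh feature, not plain POSIX sh):

```shell
# Build a throwaway targets file (path and names are examples only).
printf 'alpha\nbeta\ngamma\n' > /tmp/targets_demo

# $(< file) expands to the file's contents, just like `cat file`,
# but without forking an extra cat process.
for h in $(< /tmp/targets_demo); do
    echo "host: $h"
done
```

Run under bash, that prints "host: alpha", "host: beta", and "host: gamma", one per line.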
Unfortunately, the above loop is kind of slow if you have a lot of targets, because the commands are run in serial fashion. You could add some shell fu to background each ssh command so that they run in parallel:
$ for h in $(< targets); do (echo ===== $h; ssh $h df -h) & done
The catch is that this garbles the output, because the different commands return at different speeds.
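One way to keep the parallelism without the garbling is to send each host's output to its own file and print the files in order afterward. Here's a runnable sketch of that pattern; since no live SSH targets are assumed here, an echo stands in for the real `ssh "$h" df -h`, and the /tmp/ssh_demo paths and host names are invented for illustration:

```shell
mkdir -p /tmp/ssh_demo
printf 'hostA\nhostB\n' > /tmp/ssh_demo/targets

# Fan out in parallel, one output file per host.
for h in $(< /tmp/ssh_demo/targets); do
    # Real use would be:  ssh "$h" df -h > "/tmp/ssh_demo/$h.out" 2>&1 &
    ( echo "pretend df -h output from $h" > "/tmp/ssh_demo/$h.out" ) &
done
wait    # block until every background job finishes

# Now emit the results in a predictable order.
for h in $(< /tmp/ssh_demo/targets); do
    echo "===== $h"
    cat "/tmp/ssh_demo/$h.out"
done
```

The `wait` is what makes this safe: it holds the shell until all the backgrounded jobs have finished before the ordered printing pass begins.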
Frankly, you're better off using any of the many available Open Source utilities for parallelizing SSH commands. Some examples include sshmux, clusterssh, and fanout (which was written by our friend and fellow SANS Instructor, Bill Stearns). Please bear in mind, however, that while remote SSH commands allow you to easily shoot yourself in the foot, these parallelized SSH tools allow you to simultaneously shoot yourself in both feet, both hands, the head, and every major internal organ all at the same time. Take care when doing these sorts of things as root.