This talks about port forwarding but does not mention using ssh as a SOCKS proxy, which is extremely convenient. Also, although it's tricky to configure, ssh can be used as a VPN with tun/tap support (a good /etc/network/interfaces on Debian can make it as easy as ifup/ifdown tunX).
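For the SOCKS proxy, something like this is all it takes (port and host are placeholders); it opens a local SOCKS listener that tunnels everything through the remote machine, and you just point the browser's SOCKS settings at localhost:1080:

$ ssh -D 1080 -N user@remotehost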
Unfortunately this guide does not mention ssh-agent, which is what makes passphrased keys actually usable. Tip: the Mac OS X Keychain integrates fantastically with ssh-agent.
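If your desktop environment doesn't already start an agent for you, the stock tools are enough; a minimal sketch (key path is whatever yours happens to be), after which the passphrase is only asked for once per agent session:

$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa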
My favorite trick is transparently bouncing via ProxyCommand+netcat:
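Roughly like this in ~/.ssh/config (host names made up, and it assumes netcat is installed on the gateway); "ssh inside" then bounces through the gateway transparently:

Host inside
    ProxyCommand ssh gateway nc %h %p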
Also, authorizing by key but restricting the (passwordless) key to certain commands, allowing for remote action automation. [0]
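The restriction lives in the authorized_keys entry on the remote side. A sketch (the command and the trailing comment are invented, and the key material is elided); the key can then run that one command and nothing else:

command="/usr/local/bin/run-backup",no-port-forwarding,no-pty ssh-rsa AAAA... backup-only-key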
SSH agent forwarding is also particularly awesome, as opposed to naively scattering keys across hosts.
SSH ControlMaster, which lets you reuse connections, can really improve responsiveness. Tip: start the master connection as a daemon (-f), so that you don't mistakenly close the terminal handling it; otherwise you close the channel for all the other currently open slave sessions. I wish ssh would fork and start the master on demand, then close it when the last channel closes.
[0] http://www.cmdln.org/2008/02/11/restricting-ssh-commands/
It gives you most of the benefits of VPN, without requiring tun/tap and without needing root on the remote box -- all you need is the ability to run python. Very useful if you're on an insecure network and you want to tunnel everything over a secure connection, or if you have SSH access to a box inside your firewall and want to access other resources without having to specify each port individually.
> SSH ControlMaster, which lets you reuse connections, can really improve responsiveness. Tip: start the master connection as a daemon (-f), so that you don't mistakenly close the terminal handling it; otherwise you close the channel for all the other currently open slave sessions. I wish ssh would fork and start the master on demand, then close it when the last channel closes.
Good news: as of OpenSSH 5.6p1, it can. Just set "ControlPersist 60" in ~/.ssh/config (in addition to setting ControlMaster auto and ControlPath), and ssh will automatically spawn an SSH master connection in the background, and close it 60 seconds after the last client exits. (You can obviously change the timeout to taste.)
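So the whole thing in ~/.ssh/config ends up roughly like this (socket path and timeout to taste):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlPersist 60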
I don't use rsnapshot, but if you can set the arguments it passes to ssh, you can selectively disable ControlMaster there (-o ControlMaster=no). If you can't, rsnapshot may respect ~/.ssh/config, so you can set up a Host entry with the relevant config. If you also want to use ControlMaster with the host you connect to via rsnapshot, you can set up a Host entry with a dummy name and set its Hostname option to the real host, as sketched below.
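One way that could look (all names hypothetical): you log in via the alias and get connection sharing, while rsnapshot talks to the real host name and doesn't:

Host work
    Hostname server.example.com
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p

Host server.example.com
    ControlMaster no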
You don't mention what's awesome about using ProxyCommand to go through hosts:
Here's what: it uses the intermediate hosts as a tunnel, which means no ssh agent is listening on those hosts (the regular way to do this is ssh -A hostx ssh -A hosty ssh finalhost).
This means no attacker on the intermediate hosts can use your agent while you're connected.
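For completeness, a chained version of the netcat ProxyCommand trick might look like this (hostx/hosty/finalhost as in the example above, netcat assumed present on each hop); your keys and agent never leave the local machine:

Host hosty
    ProxyCommand ssh hostx nc %h %p
Host finalhost
    ProxyCommand ssh hosty nc %h %p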
Is there a "tech blogger of the year" category somewhere so we can nominate this guy? Every single one of his posts is epic. The peri-relational metaphor for shell command composition, this one, and every one before should be required reading.
Hey Matt, to echo yycom's concern below... can you please add dates to your articles? We have context as they're posted today, but it gets muddled 10+ years out.
Re: iOS -- Panic (makers of Coda, etc.) developed a _really_ nice little iOS app for SSH called 'Prompt'. It got some coverage here when it was released, and I immediately replaced iSSH with it and haven't looked back once.
scp has some end-to-end latency for each file transferred. This means that for lots of small files, a single tar file stream is much quicker than 'scp -r'.
if you're piping the output of tar, instead of using "tar f -", you can leave off the f argument (since you don't want to specify a file anyway) and tar will default to stdin/stdout:
$ tar cz foo | ssh remote "cd /where/to/unpack && tar xz"
For GNU tar these days that's true, unless $TAPE happens to be set in its environment. Historically, tar defaulted to a tape device, e.g. /dev/mt0, and you still find vestiges of that, e.g. OpenBSD defaults to /dev/rst0.
Using named pipes (mkfifo) I suspect you could do that. I've not tried it in practice and there will be some warts to work around, e.g. the password prompt comes to mind.
The most fun I ever had was doing exactly this, piping a stream through ssh, but on one end was a CD image and on the other end was a CD burner. It's kind of obvious you could do that, because pipes and ssh are ubiquitous on UNIX, but I still couldn't stop giggling.
I would add the use of the ControlMaster and ControlPath options for connection sharing, as well as keepalive settings for those cases where connections drop when idle.
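The keepalive part is just a couple of lines in ~/.ssh/config; the values below are only a reasonable guess:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3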
The remote port forwarding feature can be very handy. I've used a combination of ssh and daemontools to set up remote access to a machine behind a particularly nasty firewall.
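The basic shape of that, with made-up host and port names: on the firewalled machine run

$ ssh -N -R 2222:localhost:22 user@public-box

and then, from public-box, "ssh -p 2222 localhost" lands you back on the firewalled machine. daemontools (or autossh) just keeps that tunnel process alive.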
I tried enabling this and it seems that support for this on MacOS is pretty iffy. It's not too stable and my issues went away as soon as I disabled it.
My main problem with sshfs lately has been that on a flaky connection it sometimes hangs irretrievably, often taking out the process trying to use it (usually emacs).
My laptop has a built in smart card reader, and my desktop has a USB one plugged in with a hardware pin-pad.
The cryptostick does look cool. I've come across it before. I like the smart card because I can just pop it in my wallet like a credit card.
You can also get keyboards with built-in smart card readers, where the numeric keypad has a mode to operate as a hardware pin pad (rather than sending the keypresses to the computer). I'm thinking of getting one of these at some point.
Do you know of anyone who sells them in the US? (or OpenPGP cards or any other Linux usable tokens for that matter?) or do I have to order them from Germany?
Good list of useful configuration options. I'd also add the "Compression yes" option, which you can set on a per-host basis; it can save some bytes sent over the wire. To see how much it saved, invoke with verbose ("ssh -v") and it prints the number of bytes saved after the session ends.
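Per host that's just (host name made up):

Host slowlink.example.com
    Compression yes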
Another config option that has saved me a lot of time is "ProxyCommand", which lets you specify a command whose stdin/stdout are used as the pipe to the remote server. So, something like:
Host inside
    ProxyCommand ssh gateway nc inside 22
Would allow you to just type "ssh inside" and ssh to a machine behind a gateway, without ssh-ing twice!
Remember also when you're using an SSH SOCKS proxy that all your DNS requests go through the un-proxied connection -- unless you're using Firefox, in which case you can set network.proxy.socks_remote_dns to true in about:config.
Another point is that most firewalls block most ports, but usually not 443 (https). So set up your SSH server on port 443. Since all traffic to 443 is encrypted anyway, you're less likely to raise suspicion.
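On the server side that's just extra Port lines in sshd_config; you can keep listening on 22 as well:

Port 22
Port 443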
One more trick that I really like: using an SSH agent. On Gnome-based systems, Seahorse provides one (sometimes you have to install seahorse-plugins to get it); otherwise, gpg-agent can be an ssh agent. Pageant on Windows, and I'm not sure what's available on Mac.
SSH agents let you keep your key encrypted while only needing to enter your passphrase on first use (with the default ssh-agent, you must load the key manually with ssh-add; gpg-agent and seahorse both prompt you the first time it's needed). Add to that SSH agent forwarding (where multi-hop SSH connections authenticate using the agent on the originating machine), and your key is (A) only on the local machine and (B) encrypted when not in use.
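Forwarding is either "ssh -A" on the command line or, per host in ~/.ssh/config (host name invented), something like:

Host jumphost.example.com
    ForwardAgent yes

Only forward to hosts you trust, though: root on the intermediate box can use your agent while you're connected.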
One good trick that the article could add is getting around NTLM-authenticated HTTP proxies, which are frequently found in schools and workplaces. First, set up cntlm or ntlmaps to run a local HTTP proxy that strips the authentication from the real proxy. This is needed because almost all software that can handle authenticated proxies (including corkscrew) can only handle Basic auth, not NTLM. Then, configure ssh to use corkscrew:
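A sketch of the ssh side in ~/.ssh/config, assuming cntlm is listening on its usual localhost:3128 (adjust to whatever you configured); the host name is a placeholder, and port 443 is used because most proxies only allow CONNECT to https ports:

Host tunnel-out
    Hostname your.ssh.server
    Port 443
    ProxyCommand corkscrew localhost 3128 %h %p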
ssh as a cheap way around firewalls is nifty, but caveat ssh-or, for sure. A colleague of mine used this to connect to Yahoo IM (or somesuch; details not important) for some totally innocent IM'ing. Nothing against the company, no secrets leaked, nothing against any of his signed agreements. (Only that he couldn't use IM.)
While corporate IT couldn't tell exactly what was going on, they did ask him why he was ssh'd to an ISP-provided IP for X hours using Y bytes. So don't try to outclever your company this way; they can still nab you on suspicion of whatever, even if what you did was non-harmful in any way.
SSH is capable of running multiple channels over the same SSH connection. In theory this means you could ssh to a remote host, then establish a second channel to transfer a file over. Why the heck does the ssh tool not actually allow this sort of capability? Why do I have to establish a second scp or sftp connection to transfer a file when I'm already ssh'd into the machine? I don't get it.
Sure, but that's not really relevant to my point. That's just a minor optimization, but my workflow is still exactly the same: start up sftp, re-navigate back to the same directory, and do the transfer.
I actually went on to implement an automated file-sharing service over SSH using these tricks at our college when they blocked the normal protocols. You can see the software on GitHub; it's tailored for my college:
https://github.com/sravfeyn/SparkDC
Another cool trick is setting up a script that constantly tries to set up x2x between your laptop and desktop, tunneled over ssh. As soon as the laptop enters the local network, you can use the keyboard and mouse of the desktop on it, no cable connections necessary.
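A minimal sketch of such a script, run on the desktop (host name and x2x direction are assumptions; swap -west/-east to match your physical layout):

#!/bin/sh
# keep retrying; once "laptop" is reachable, the desktop's keyboard
# and mouse drive the laptop's display via x2x over forwarded X11
while true; do
    ssh -X -o ConnectTimeout=5 laptop x2x -west -to :0
    sleep 10
done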
Be warned that sshfs uses sftp underneath, which doesn't cope well with latency, so effective bandwidth is greatly reduced over the internet. If you want fast file transfers, use scp or rsync instead.
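For bulk transfers, something along these lines (paths are placeholders):

$ rsync -az --progress -e ssh /local/dir/ remote:/dest/dir/

-z compresses on the wire, which also helps on slow links.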