scp -c blowfish -P 2222 user@remote_host:huge_remote_file .
huge_remote_file 43% 182MB 0.0KB/s - stalled -

Yeah - "stalled". At 43%. So you pull out rsync to, at least, restart from where it stopped:

rsync --progress -e 'ssh -c blowfish -p 2222' user@remote_host:huge_remote_file .

But what if it stalls again? You will not be around to restart the transfer.

You can use ssh's ServerAliveInterval option and rsync's exit status to solve this problem. The remote host will need to accept public key authentication, but this is easy to set up, even if only temporarily.

This is the command that does the trick:

until rsync --progress -e 'ssh -o ServerAliveInterval=300 -c blowfish -p 2222' \
    user@remote_host:huge_remote_file . ; do sleep 1; done

Explaining: if there is no traffic over the ssh connection for 300 seconds, the client requests a reply from the server, and drops the connection if the server stops answering (after ServerAliveCountMax unanswered probes, 3 by default). rsync will then exit with a non-zero status (probably 20), so the loop condition fails and the command is run again.
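The exit-status mechanics are easy to check locally with a stand-in for rsync. Here flaky_transfer is a made-up function (not a real tool) that fails twice before succeeding, simulating two stalls:

```shell
#!/bin/sh
# flaky_transfer is a hypothetical stand-in for the rsync command above:
# it exits non-zero (a "stall") on the first two calls and 0 on the third.
attempts=0
flaky_transfer() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Same shape as the real one-liner: keep retrying until exit status 0.
until flaky_transfer; do sleep 1; done
echo "transfer completed after $attempts attempts"
# prints: transfer completed after 3 attempts
```

The loop condition is the transfer command itself, so every non-zero exit restarts it and the first zero exit ends the loop.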
Thus, when the transfer is successfully finished, rsync will exit with status 0, ending the loop. And the file will be waiting for you in the morning. Nice.
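If you are worried about the loop retrying forever (say, the remote host goes away for good), the same idea can be capped at a maximum number of attempts. retry_until below is my own helper, not part of rsync or ssh:

```shell
#!/bin/sh
# retry_until MAX CMD...: run CMD until it exits 0, at most MAX times.
# Returns 0 on success, 1 if every attempt failed.
retry_until() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep 1
    done
}

# With the transfer above it would be invoked as (not run here):
#   retry_until 20 rsync --progress \
#       -e 'ssh -o ServerAliveInterval=300 -c blowfish -p 2222' \
#       user@remote_host:huge_remote_file .
```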
By the way, I use blowfish as the cipher algorithm because it is faster than the default, 3des. In my case it didn't matter too much, since the connection was being tunnelled through a VPN, but you should use a stronger cipher if that's not your case.