Friday 27 April 2007

Restoring NTFS partitions with partimage

There are plenty of articles out there describing how to use partimage to back up and restore your servers. Most of them assume you are using Linux, but the process can be used with Windows servers without much hassle. Except for one big hassle, which is the raison d'ĂȘtre of this post.

I have managed to use partimage version 0.6.4 to back up and restore Windows 2000 servers. Partimage will nag about the experimental status of NTFS support, but it worked fine, at least for me, as long as the MBR (Master Boot Record) was intact.

If you have lost the MBR (for example, you are restoring to a new hard disk), then you have most likely tried partimage's option that says:
( ) Restore an MBR from the imagefile
Only to be presented with an error message:
Can't read block 0 from image (0)
If you have already tried Dave Farquhar's solution without success, or if, like me, you didn't back up the MBR, there might still be hope. You will need your Windows 2000 installation CD, though.
  1. Start the Windows installation by booting up with your Windows 2000 CD.
  2. Have the installer re-create the partitions. If you use the "restore" methods, the installer will not create a new MBR for you.
  3. When the installer reboots to finish the installation, quickly replace the CD in the drive with your Linux emergency boot disk. I used Knoppix v5.1.1.
  4. Restore your data using partimage as described in one of the many tutorials on the Internet.
This method assumes that you have re-created the partitions in the Windows installer to match your previous layout. When you create your image, make sure you also back up your partition information.
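As a sketch, assuming the disk is /dev/hda, the NTFS partition is /dev/hda1 and the backups live under /mnt/backup (all example names, adjust for your system), the backup and restore side might look like this:

```shell
# Dump the partition table so the layout can be re-created exactly
# later with: sfdisk /dev/hda < /mnt/backup/hda.sf
sfdisk -d /dev/hda > /mnt/backup/hda.sf

# Save the partition image; -z1 selects gzip compression and
# -d skips the interactive description prompt
partimage -z1 -d save /dev/hda1 /mnt/backup/hda1.img.gz

# Once the partitions exist again, restore from the first volume
# (partimage appends .000 to the file name it writes)
partimage restore /dev/hda1 /mnt/backup/hda1.img.gz.000
```

These commands need root and a real disk, so treat them as a template rather than something to paste verbatim.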

Tuesday 17 April 2007

Rsync + SSH ServerAliveInterval

Have you ever left scp running overnight to copy a file from a remote server over a slow WAN link, only to find the dreaded message on the console in the morning? (I was using port 2222.)

scp -c blowfish -P 2222 user@remote_host:huge_remote_file .
huge_remote_file 43% 182MB 0.0KB/s - stalled -
Yeah, "stalled". At 43%. So you pull out rsync to, at least, restart from where it stopped:

rsync --partial --progress -e 'ssh -c blowfish -p 2222' user@remote_host:huge_remote_file .
(The --partial flag tells rsync to keep the partly transferred file if the connection drops, so the next run can resume from it.)
But what if it stalls again? You will not be around to restart the transfer.

You can use ssh's ServerAliveInterval option and rsync's exit value to solve this problem. The remote host will need to accept public key authentication, but this is easy to set up, even if only temporarily.
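Setting that up takes only a couple of commands (the key file name transfer_key is just an example, and recent versions of ssh-copy-id accept -p for the port):

```shell
# Generate a throwaway key pair with an empty passphrase
ssh-keygen -q -t rsa -N '' -f ~/.ssh/transfer_key

# Append the public key to the remote account's authorized_keys
ssh-copy-id -i ~/.ssh/transfer_key.pub -p 2222 user@remote_host
```

If it is not your default identity, remember to point ssh at the key with -i ~/.ssh/transfer_key.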

This is the command that does the trick:

until rsync --partial --progress -e 'ssh -o ServerAliveInterval=300 -c blowfish -p 2222' \
user@remote_host:huge_remote_file . ; do sleep 1; done
Explaining: if there is no traffic on the ssh connection for 300 seconds, the client sends a keep-alive probe to the server and drops the connection if no reply arrives (after three missed probes, by default, per ServerAliveCountMax). Rsync will then exit with a non-zero status (probably 20), which makes the until condition fail and run the transfer again. The --partial flag keeps the partly transferred file around, so each retry resumes instead of starting over.

Thus, when the transfer is successfully finished, rsync will exit with status 0, ending the loop. And the file will be waiting for you in the morning. Nice.
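The exit-status logic is easy to test in isolation. In this self-contained sketch, flaky_copy is a made-up stand-in for rsync that fails, like an interrupted transfer, twice before succeeding:

```shell
attempts=0

# Simulates a transfer: exits non-zero on the first two tries,
# zero (success) on the third
flaky_copy() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Same shape as the rsync loop: keep retrying until exit status 0
until flaky_copy; do
    sleep 1
done

echo "finished after $attempts attempts"
```

Run it and you get "finished after 3 attempts": two failed passes through the loop body, then a successful one that ends the loop.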

By the way, I use blowfish as the cipher because it is faster than the default, 3des. In my case it didn't matter much, since the connection was being tunnelled through a VPN, but you should use a stronger cipher if that's not your case.