Welp, this week I got an unrequested opportunity to put my PC technologist hat back on and fix my laptop, whose hard drive had gone nearly tango-uniform.
I flexed my dusty skills and, after much mafipulation, managed to extract the win! Dead 40GB drive out (hey, it's like a 2002 model!), new inexpensive 160GB drive in, all data salvaged, unit operating like nothing ever happened.
My wife and I share the laptop and use it considerably. She noted earlier in the week that it was making strange noises. I didn't hear them, but ran SpinRite across the disk to verify its integrity. It passed with no problems noted. The hard drive is OEM, and my rule of thumb is to expect an HDD failure in a laptop at least once in a 5-year ownership period. They have to take lots of mech-destroying bumps and jolts, and so are more at risk than a well-isolated desktop unit. I've never owned a laptop that made it from purchase to trash without a drive dying.
A few days later, the most horrible noises began to issue from the unit. Not the clicks of head-resets, not the ray-gun of stuck platters or a stuck head armature. This was a lower-pitched metallic growl which sounded almost exactly like a large sleeve-bearing case fan whose aforementioned sleeve bearing has become contaminated and is now loose and buzzing.
During these growls, the drive couldn't move any data, and Windows halted. The situation was serious. I powered down immediately, hoping to preserve a chance to salvage the data, and resigned to procuring a new drive.
We had backups; mine were up to date, but Jane's were weeks old. Just the same, I was hoping not to have to spend endless hours reloading Windows and Ubuntu and getting all the apps reconfigured and so on.
I had no access to a 2.5" IDE drive adapter cable for a desktop or USB enclosure. So instead, with a cold drive, the plan was to boot the laptop with the failing drive to Knoppix and connect it to my network. Then, I would also boot my wife's Windows desktop PC with Knoppix and use it as a repository for rescued data (all my big drives were too full!).
I cleverly hit upon ATA over Ethernet as an expedient means of accessing the faulty drive and moving its content to an image file on the desktop PC.
The process for setting up ATA over Ethernet was as follows:
Verify AoE support in the kernel
grep ATA_OVER /boot/config-`uname -r`
A matching result denotes support, either built in (=y) or as a module (=m). If you've ever configured/compiled your own kernel, this bit is self-explanatory.
Assuming support's been modularized, insert the module
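On both machines, that amounts to loading the driver, which is named `aoe` in mainline kernels:

```shell
# load the ATA over Ethernet driver module
modprobe aoe
```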
Target PC (the PC whose block device you want to make available on the ethernet):
Install the vblade EtherDrive emulator. This software exports any local block device, partition, RAID, LVM volume, or even flat image file, as a network block device.
aptitude install vblade
Then export the chosen block device to the LAN
vbladed 0 1 eth0 /dev/hda
In this case, the above line exports my failing hard drive as shelf 0, slot 1. Shelf and slot numbers are how you differentiate the various block devices that might be available on the same LAN. Most PCs have only one connected ethernet interface, but if you have more, you can specify each interface you want to make your device available on. And if your initiator PC has multiple interfaces on the same network, you'll get automagic channel ganging for faster throughput, provided your device's native throughput is faster than a single interface can carry.
Anyway, on the initiator PC (the PC which is going to access the remotely shared block device):
aptitude install aoetools
That should install the needed support software to attach an AoE-shared block device, and the aoe-stat status command should show you the device which was discovered and made available to you, in my case e0.1.
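On my setup that looked roughly like the following (the e0.1 name follows from the shelf/slot numbers chosen on the target):

```shell
aoe-discover      # prod the LAN for AoE targets
aoe-stat          # list what was found; in my case, e0.1
ls /dev/etherd/   # attached devices appear as nodes under /dev/etherd
```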
The system's already attached to the drive at this point. There's no security and there are no per-machine restrictions; if more than one machine may want to access the device, I think you'll have to coordinate access manually. And while many initiators could probably read from a device simultaneously, mounting a filesystem shared this way read-write on more than one initiator is going to damage the target's data structures.
Everything that happens next is from the initiator PC. I used GNU ddrescue to image off the failing drive in a manner that would allow me to backup and retry unreadable areas if necessary.
ddrescue /dev/etherd/e0.1 fail_drive.img /home/knoppix/transfer.log
Specifying the transfer logfile allows ddrescue to record its progress, skip over errors, resume if interrupted, and using extra options, go back and retry previously failed areas.
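For a drive this sick, a follow-up pass can be worthwhile. Something like the following (exact option spellings vary a bit between ddrescue versions) re-attacks only the areas the logfile marks as failed:

```shell
# -d: use direct disc access, bypassing the kernel cache
# -r3: make up to three additional retry passes over failed areas
ddrescue -d -r3 /dev/etherd/e0.1 fail_drive.img /home/knoppix/transfer.log
```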
Ddrescue is a super handy tool. If SpinRite's dynastat sector-reading abilities were available to this tool, it would be perfect. (Sidebar: I consider it a big failing that SpinRite has no convenient means of redirecting successful sector reads to another device or image file. Often, when dealing with a failing drive, your next good read is likely to be your last, and the mechanism may be continuing to degrade around you. In such situations, the less you have to touch the source drive, the better. You want to transport Scotty's pattern out of the damaged transporter buffer and into a fresh one you've got waiting; you don't want to just reconstitute the degraded pattern inside the same degrading transporter. What if, after all its many hours of beautiful dynastat-recovered and reallocated sectors, the mechanism's occasional clicks become a permanent repeating clank, rendering the drive inaccessible? Steve, think about this for SpinRite 7. Consider the wealth of block device connection options a live Knoppix environment gives us. That's the rich platform for attaching revival storage, and SpinRite's internals would be the muscle.)
Well...this tool ran for about 90 minutes, and the drive occasionally emitted those painful shrieking buzzsaw sounds, but eventually ALL the data was salvaged. Hooray!
The second act was to be far thornier. The original drive was 40GB and held a majority Windows partition with Truecrypt system-volume encryption (to protect private personal data in the event the laptop was ever stolen), a minority Ubuntu partition, and a couple of gigabytes in a partition at the end as Linux swap.
To be sure, things were getting quite cramped, and a happy side effect of the drive replacement was the prospect of much more space to work with. But how to transplant these bootable partitions onto the new drive in such a way as to ensure they would remain usable (in the case of the Truecrypted Windows system volume), bootable, and enlargeable?
This turned out to be a more formidable task than I'd originally bargained for. I won't go into full details, but simply report that in the end, after much experimentation and head scratching over partitions that then failed to boot, I was finally totally successful. Here instead is the shorter story of the process which worked:
I began by using the previous AoE process to write back the 40GB drive image to the new drive.
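That write-back is just the rescue in reverse. With the new drive exported the same way (the shelf/slot numbers here are illustrative), it would look something like:

```shell
# on the laptop, export the new blank drive
vbladed 0 2 eth0 /dev/hda
# on the initiator, pour the rescued image back out
# (--force is required when ddrescue's output is a block device)
ddrescue --force fail_drive.img /dev/etherd/e0.2
```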
In my prior failures, I learned that the best my BIOS was going to do was 28-bit LBA, meaning that only 137GB of the 160GB drive capacity would be addressable by the BIOS (2^28 sectors of 512 bytes each comes to about 137GB; the rest of the 32-bit field is other junk). This had a bearing on my repartitioning plan. Since prior experimentation moving the Windows partition around had left it unbootable for unknown reasons, I decided to leave it in place, but expand it later. Ubuntu's root would live beyond the 137GB barrier, to give Windows the most space, while no boot-critical partition would straddle the barrier. Booting Ubuntu then required a small boot partition below the barrier which could load the kernel. Once running, the kernel can do 48-bit LBA, so mounting the root partition past 137GB would be no problem. After expanding the Ubuntu partition some, the remaining space could be allocated to additional NTFS storage and Linux swap.
I started by noting down the sector layout of the partitions. Frak cylinders; sectors are just fine in LBA, so ignore cylinder-boundary warnings when partitioning.
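A quick way to capture that layout is to have fdisk report in sectors rather than cylinders:

```shell
# -l: just list the partition table; -u: display units are sectors
fdisk -l -u /dev/hda
```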
The structure was:
[30GB Win/Truecrypt][8GB Ubuntu][2GB swap][**unallocated**]
To move the partitions around, I needed something sector-precise and unafraid of JFS or crypted garbage. This ruled out parted, which insists on understanding the underlying filesystem to do moves and resizes, though that isn't strictly necessary. So I went manual.
I would define a temporary 4th partition at the sector positions I desired for my destination, then use ddrescue to copy the data from the current location to the new one. After test-mounting the filesystem on the new temporary partition, I would note the sector locations, then delete that partition, freeing #4 for staking out the next new partition location.
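As a sketch, one round of the shuffle looked like this (the partition numbers and jfs type are from my layout; the destination sector positions go in by hand in fdisk):

```shell
# after defining temporary partition 4 at the destination sectors:
ddrescue --force /dev/hda2 /dev/hda4 shuffle.log   # sector-exact copy
mount -t jfs /dev/hda4 /mnt/test                   # test-mount to verify the copy
umount /mnt/test
```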
In this way I first set up a 250MB boot partition, positioned to end precisely on the last addressable sector inside the 137GB boundary. Next, I used the fake #4 to reposition the Ubuntu root partition to its new home starting after the boot partition. I then mounted both and transplanted /boot to the new boot partition, made arrangements in root to splice this partition back onto its normal /boot location within the VFS once Ubuntu was booted (so automatic kernel updates would work), and then used the grub shell to install the grub bootloader into the boot sector of the boot partition, pointed properly at the needed files in the partition.
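From the grub shell (this was grub legacy; the disk/partition numbers below are illustrative), the install amounted to:

```
grub> root (hd0,1)     # the new ext2 boot partition
grub> setup (hd0,1)    # embed grub in that partition's boot sector
grub> quit
```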
Once satisfied, I could delete all the partitions, recreating them one-by-one in their new larger sizes.
Now the structure looked like this:
[136GB Win/Truecrypt (30GB filesystem)][.25GB ext2 BOOT]|137GB Boundary|[16GB Ubuntu root (8GB filesystem)][6GB extended -> [4GB NTFS][2GB Linux swap]]
At this point, I tested bootability and was pleased to find both OSes bootable by normal grub selection. The Truecrypt MBR password dialog to open up the Windows partition for use worked as normal.
Finally, it remained to resize the filesystems to encompass their entire partitions. With the Truecrypted Windows volume, I originally thought I'd have to decrypt, resize, and recrypt, but I cleverly worked around that inside Ubuntu.
I booted into Ubuntu and installed ntfsprogs (for its ntfsresize utility). ntfsresize works on the filesystem inside a partition. To give it access, I used software by Jan Krueger which does the Truecrypt authentication magic and hands the results to dm-crypt, which then handles the realtime blockwise encrypt/decrypt. The output is a mapped block device which yields decrypted data and accepts clear data for encryption, and works just like any normal, unencrypted block device.
Under normal circumstances, this is what I do to get access to my documents on the NTFS filesystem, as I store finance data there, which I use in Ubuntu on GNUCash.
I passed this device as the argument to ntfsresize, which WAS able to resize the enciphered filesystem to span the whole partition! No laborious decryption necessary for the tool to work! Sweet!
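With the decrypted mapping in place (the mapper name below is illustrative), the resize itself is anticlimactic:

```shell
# dry-run first, to see what ntfsresize makes of the filesystem
ntfsresize --no-action /dev/mapper/truecrypt0
# with no size given, ntfsresize grows the NTFS volume to fill the device
ntfsresize /dev/mapper/truecrypt0
```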
Resizing the Ubuntu partition was far simpler, thanks to the "resize" remount option for jfs. I booted the Knoppix live-CD once more, mounted the Ubuntu root partition, then remounted it with that option specified. Leaving off any size specification to the option causes it to expand the underlying jfs filesystem to fit the whole partition. Unmount.
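Concretely, that amounts to the following (the device name is from my layout; yours will differ):

```shell
mount -t jfs /dev/hda3 /mnt/root
# with no size given, 'resize' grows the jfs filesystem to fill its partition
mount -o remount,resize /mnt/root
umount /mnt/root
```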
I now have a secure Windows partition with much more free space, a larger Ubuntu side, and some residual swap and unencrypted NTFS storage. Best of all, aside from the space benefits, it's like nothing bad ever happened to the lappy.
Finally, a tidbit if you ever need to very quickly create a large image file for the purpose of storing or preparing a filesystem.
The traditional approach uses the trusty dd program with /dev/zero as the source, but this laboriously and needlessly writes zeros across the entire desired span of your diskimage file. This tip instead uses the sparse-file capabilities of most modern filesystems.
dd if=/dev/zero of=your-sparse-diskimage.bin bs=1 count=1 seek=1G
Note how the above command uses a single byte as a block, seeks 1G blocks (1GB, since a block is a byte here) into the new diskimage file, and then writes a single one-byte block there. This results in a file which appears to be 1GB in size, but actually uses only one cluster (generally 4KB on most filesystems) on disk!
By seeking to the desired end before writing, you cause the filesystem to leave a hole between that endpoint and the start of the file. The file appears to be the full seek size, but its size on disk in this case is just the single cluster holding that final byte. Cool! Of course, once you put a filesystem into the image file, mount it, and start throwing data into it, its size on disk will grow toward the reported size.
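You can see the trick for yourself by comparing the file's apparent size against its actual disk usage (the filename is arbitrary):

```shell
# write one byte at offset 1GB; everything before it becomes a hole
dd if=/dev/zero of=your-sparse-diskimage.bin bs=1 count=1 seek=1G
ls -l your-sparse-diskimage.bin   # apparent size: 1GB plus the one byte
du -k your-sparse-diskimage.bin   # actual space used: a few KB at most
```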
While I've posted this mainly for my own future reference, to any passers-by out there who happened to read it, I hope you found it of interest! Cheers!