brent timothy saner on 18 Jul 2015 18:35:23 -0700


Re: [PLUG] Cloning a dying hard drive


On 07/18/2015 09:12 PM, Keith C. Perry wrote:
> I used to use a dd imaging method but it's not the best way to move a filesystem for several reasons:
> 1) on journaling fs', the journal may not be written to disk after shutdown.  This can cause image integrity issues which may render the image useless.

The journal is above the block level, which is what dd/dd-based tools
copy at. If there's a journal, they'll copy it. If there isn't/if it's
broken, they'll still copy whatever's there.

> 2) images will be the size of the drive, not the filesystem, so if you have a lot of free space it's a waste of resources.

This is true, but an image is the only reliable form of workable data
when dealing with a failed disk, as it's a full block-for-block copy of
the target disk. Does it mean you suck up a lot of disk space? Sure, but
you only need it around for as long as the restore takes. Once you're
sure everything's working and all corrupt packages and the like on the
damaged filesystem have been fixed, you can delete the image file.

There are even ways to split an image across multiple files at imaging
runtime, and mount images as writeable, etc.
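For instance, splitting at imaging time is just a matter of piping dd
through split. A minimal sketch on a throwaway file (the 1 MiB source
file stands in for a real disk device):

```shell
# Create a stand-in "disk" (a 1 MiB file of random data).
dd if=/dev/urandom of=disk.img bs=1024 count=1024 2>/dev/null

# Image it in 256 KiB chunks: dd reads the "disk", split writes
# numbered pieces (part.00, part.01, ...).
dd if=disk.img bs=64k 2>/dev/null | split -b 256k -d - part.

# Reassemble the pieces and verify the copy is block-identical.
cat part.* > restored.img
cmp disk.img restored.img && echo "images match"
```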

All very handy if the filesystem is in a damaged state, which
(typically) filesystem-level utilities (tar, cpio, etc.) will fail on if
the damage is bad enough. Vanilla dd will only fail on hardware error.
ddrescue will only fail if you stop the process (and even then, it can
keep a journal of itself- so it can start right back up again at the
last known-good block it found). :P
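That resume behaviour is what ddrescue's mapfile buys you (a real run
looks like `ddrescue /dev/sdX rescued.img rescue.map`, with `/dev/sdX`
standing in for whatever the failing disk is on your system). The idea
can be sketched with plain dd on throwaway files- the difference being
that with dd you have to track the last-good offset yourself:

```shell
# Stand-in source "disk": 8 blocks of 4 KiB.
dd if=/dev/urandom of=src.img bs=4k count=8 2>/dev/null

# Pretend a first pass died after copying 5 blocks.
dd if=src.img of=copy.img bs=4k count=5 2>/dev/null

# Resume at the last known-good block: skip 5 blocks on the input,
# seek 5 blocks on the output, copy the rest without truncating what
# was already written. ddrescue records this position in its mapfile.
dd if=src.img of=copy.img bs=4k skip=5 seek=5 conv=notrunc 2>/dev/null

cmp src.img copy.img && echo "copy complete"
```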

> 3) images may also fail if there is a problem reading a bad location. Those dd conv parameters can help prevent failures, but it doesn't always work, and even when it does the resulting image may be unusable.

This is why Rich M. and I have both suggested GNU ddrescue as an
alternative.

However, filesystem dumping utilities would always fail before a
(vanilla) dd would.
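For reference, the dd conv parameters in question are noerror (keep
going past read errors) and sync (pad short/failed reads with NULs so
offsets stay aligned)- sketched here on a throwaway file rather than a
failing device:

```shell
# Stand-in source; a real rescue would read from the failing /dev/sdX.
dd if=/dev/urandom of=bad-disk.img bs=4k count=16 2>/dev/null

# noerror: don't abort on a read error; sync: pad each short read
# with NULs so every input block lands at the right output offset.
dd if=bad-disk.img of=image.img bs=4k conv=noerror,sync 2>/dev/null

cmp bad-disk.img image.img && echo "imaged"
```

Note that the NUL padding is exactly why the resulting image can be
unusable: whatever the bad sectors held is silently replaced by zeroes.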

The resulting image, however, would be at the *least* just as usable as
the initial target disk was, except frozen at a specific point (before
further corruption could take place). And as I mentioned, you can mount
an image as writeable (detecting the partition table with kpartx) and
attempt an fsck (if you have the disk space, you can create an
"original" image file and a "working" image file- you operate recovery
procedures on the "working" image).
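As a concrete sketch of that original/working split, on a small scratch
ext4 image (mke2fs and e2fsck both happily operate on plain files, no
root or loop device needed; the filenames are just examples):

```shell
# Build a tiny ext4 filesystem image to stand in for the rescued disk.
dd if=/dev/zero of=original.img bs=1M count=8 2>/dev/null
mke2fs -q -F -t ext4 original.img

# Keep the pristine copy; run all recovery against the working copy.
cp original.img working.img

# fsck the working image: -f forces a full check, -y auto-answers yes.
# If this goes badly, original.img is untouched and you start over.
e2fsck -f -y working.img
```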

Or you can use testdisk/photorec on the image as well, etc. Image files
give you the best chances of recovery (you avoid the increased-degradation
problem of physical media, can work around and possibly repair filesystem
errors that would normally prevent filesystem-level dumping, and can
"revert" to an original state if a recovery fails). The only downside is
the amount of storage space actually needed for implementing this- but
hey, disk space is cheap these days.

This is why I keep an eSATA enclosure (4x 5TB in an mdadm RAID-10). The
pricing isn't too bad, and it works great in GNU/Linux (kernel sees each
individual drive in the enclosure so I can still perform S.M.A.R.T.,
badblocks, etc. on them). I can provide Amazon links if y'all are curious.

> So, when moving a filesystem, I use xfsdump/xfsrestore for xfs and cpio for ext4 or anything else.

Which are great!- if the situation supports it. But I'd steer clear of
these for failed/failing drives for the reasons I mentioned above, and
Walt is hitting some errors on his target disk.

Philadelphia Linux Users Group         --