Is it safe to take the drive image of the current working drive?

I have to backup my hard disk. I want to use dd and put the image on an external hdd.

  • Can I do this using dd from the OS that resides on the hdd itself or do I have to boot from another device, for example a LiveCD?
  • Is it safe, in general, to take the image of a device, if the device is mounted and working?
  • What if the device is mounted, but I’m sure there’s no other I/O operation, while dd is running?

I’m sure that rsync is the best tool to use for backups, especially incremental ones.

But I’m interested in dd, because I also want to back up other storage devices, and dd copies data stored in unpartitioned space as well. For example, my e-book reader uses unpartitioned space to store U-Boot, the kernel and other data.

Asked By: Marco Sulla


It depends on what exactly the partition is for, and what the purpose of the copy is. However, I will say that in general dd is an inappropriate tool for backing up filesystems. That’s not what it was intended for, either.

  • It will waste a lot of time copying empty sections of the partition.

  • It may lead to inconsistencies if the filesystem is currently mounted, in part because the filesystem is an OS-level entity and may be out of sync with the underlying block device. Calling sync first won’t help much, since dd takes time and new writes can happen while it runs.

Use cp -a or rsync instead. You then need to create the destination partition, of course, so it is not quite as drop dead easy, but it is much safer and more flexible. If you need to create a filesystem image, see below.
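
A minimal sketch of the file-level approach, demonstrated here on temporary directories (in real use the source and destination would be your mounted filesystems, e.g. /mnt/src and /mnt/dest, which are hypothetical paths):

```shell
set -e
# stand-ins for a mounted source and destination filesystem
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "hello" > "$SRC/file.txt"
mkdir "$SRC/subdir"
ln -s ../file.txt "$SRC/subdir/link"
# -a (archive) = recursive + preserve symlinks, permissions, times, owner
cp -a "$SRC/." "$DEST/"
# rsync equivalent (the trailing slash copies the directory's *contents*):
#   rsync -a "$SRC/" "$DEST/"
ls -l "$DEST"
```

Note that the symlink is copied as a symlink, not followed; that is the main thing `-a` buys you over a plain recursive copy.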

If you are intending to copy the root filesystem, absolutely do not use dd. You must use something like rsync -ax (or cp -ax on individual toplevel directories), because there is a bunch of stuff that must NOT be in the copy. On Linux, this includes:

  • /dev
  • /proc
  • /sys
  • /media
  • /mnt
  • /run
  • /tmp

Some of these are actually kernel interfaces and not real directories on disk. If you copy them, you are copying information that will not apply to the copy; if you run a system from the copy, that data just amounts to wasted space, since the real interfaces will be mounted on top. Others contain temporary information used by running processes, and those are more of a problem, since the system will not be able to sort out the garbage if you copy it.

If you want to create an image file of the root filesystem (or any filesystem), create an empty image file — this is an appropriate use for dd:

dd if=/dev/zero of=whatever.img bs=1024 count=1000000

That’s roughly a 1 GB image (1,000,000 × 1024 bytes). Adjust count if you want some other size. Create, e.g., an ext filesystem in the file:

mke2fs whatever.img

It will warn you this is not a real block device. Proceed. Now mount the image file:

mount -o loop whatever.img /mnt/img

/mnt/img must exist but can be any empty directory. You can now rsync (or cp -a) into /mnt/img. The content will remain inside whatever.img when you unmount it.
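
Incidentally, the empty image file can also be created sparsely, so the zeros never actually get written to disk. A sketch using truncate from coreutils (whatever.img as above; the mke2fs and mount steps are unchanged):

```shell
set -e
cd "$(mktemp -d)"   # scratch directory for the demo
# allocate a 1 GiB sparse file: full apparent size, ~0 blocks used
truncate -s 1G whatever.img
# dd equivalent: write nothing, just seek past the intended end
#   dd if=/dev/zero of=whatever.img bs=1 count=0 seek=1G
stat -c 'apparent: %s bytes, allocated: %b blocks' whatever.img
```

The filesystem allocates real blocks only as data is written into the mounted image, which also makes the initial creation near-instant.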


Just to be clear: only use the filesystem-image method described above if you absolutely need an image file for some reason. If your goal is to copy the partition to another hard drive, you don’t need an image: create a new partition with an empty filesystem on that drive, mount it, and copy into it. You could instead put the filesystem content into an empty directory and archive it:

tar -czf myarchive.tar.gz [the directory path]

You can then deploy this in an existing (empty or otherwise) partition by placing it in the toplevel and using:

tar -xzf myarchive.tar.gz

Beware that this will overwrite existing files if their paths match something in the archive. It will otherwise leave the existing directory hierarchy alone.
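
A round trip of the archive approach might look like this (a sketch; tree and restore are hypothetical directory names). Using -C stores relative paths, so the archive can be unpacked at any toplevel:

```shell
set -e
cd "$(mktemp -d)"
mkdir -p tree/etc
echo "data" > tree/etc/conf
# archive the *contents* of tree so paths inside the archive are relative
tar -czf myarchive.tar.gz -C tree .
# deploy into another (empty or otherwise) toplevel
mkdir restore
tar -xzf myarchive.tar.gz -C restore
```

Without -C, the paths as given on the command line are stored, which ties the archive to one particular layout.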

Answered By: goldilocks

rsync is the tool of choice for backing up a filesystem, and it can make a bootable backup of the current running OS.

Some caveats:

  • you must add the appropriate alphabet-soup options
  • paths are rather critical
  • an exclusion list is required, and will be different for each OS and possibly each configuration

Some advantages of rsync over other methods like tar:

  • you can stop and start the backup at any time
  • many options for handling superseded files, such as delete on demand, delete before, move, and so on
  • resumed (or repeated) backups are much faster than other methods, as previously-copied files are skipped. (20x speed increase is common)
  • the --link-dest option can create versioned backups while only actually copying new files
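
The hard-link trick behind --link-dest can be sketched with plain cp (an illustration with hypothetical directory names; rsync automates this per file):

```shell
set -e
cd "$(mktemp -d)"
mkdir src
echo "v1" > src/file
# first (full) backup
mkdir backup.0
cp -a src/. backup.0/
# second backup: hard-link everything from the previous snapshot,
# so unchanged files cost no extra space...
cp -al backup.0 backup.1
# ...then the real rsync invocation would copy only changed files on top:
#   rsync -a --delete --link-dest=../backup.0 src/ backup.1/
stat -c %h backup.1/file   # link count: shared with backup.0/file
```

Each backup.N directory looks like a complete copy, but files that never change exist on disk only once.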

Image backups have their place, but they copy the drive exactly as-is, including any problems you may have. A file backup makes a fresh directory and has the side effect of linearizing (defragmenting) your drive in the process. If you want to make 10 identical copies of your current OS, I would use rsync for the copy master and then dd (or similar) for the rest.

Answered By: paul

It depends on what you mean by “current working system”. If you simply want to avoid using a boot disk, and don’t care about disruption to services running on the computer, it’s possible:

  1. Shut down all nonessential programs (basically, everything except the root shell you’re working in — don’t try this from an X terminal, use a real console shell). Single-user mode may help for this.
  2. If you’ve got mounted disks other than the system root, unmount them. Don’t unmount virtual filesystems such as /proc, /sys, or /dev.
  3. Flush cached data on the remaining disk: sync
  4. Remount the root filesystem read-only: mount -o remount,ro /
  5. Mount your external hard drive (you’ll probably get a warning about being unable to write to /etc/mtab; ignore it).
  6. Make your backup.
  7. Unmount your external hard drive.
  8. Reboot. You’ve made rather a mess of your system getting here, and rebooting is the fastest way to put it back to normal.
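
For step 6, the dd invocation itself might look like dd if=/dev/sda of=/mnt/backup/disk.img bs=4M (the device name and mount point are hypothetical, and reading /dev/sda requires root). The mechanics can be demonstrated safely on an ordinary file:

```shell
set -e
cd "$(mktemp -d)"
# stand-in for a block device; real use would be if=/dev/sdX as root
dd if=/dev/urandom of=fakedisk bs=64K count=16 2>/dev/null
# take the image; bs only sets the transfer chunk size, not the content
dd if=fakedisk of=disk.img bs=64K 2>/dev/null
cmp fakedisk disk.img && echo "image matches source"
```

A larger bs (e.g. 4M) mainly reduces the number of read/write system calls; the resulting image is byte-identical either way.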

I use this method to make an archive of a computer I’ve just upgraded from and don’t expect to use much more. It’s not a very good method for a system in active use: it’s slow (taking hours or days), the backups are huge (so you can’t keep more than a few), and it’s incredibly disruptive to use of the system being backed up. For day-to-day backups, I recommend something that works at the filesystem level, such as rsnapshot.

Answered By: Mark

In general it is not safe. A filesystem assumes its operations hit the disk in a certain order, so that it can, for example, write a file’s new data and then write a pointer to it from other metadata; the exact details depend on the filesystem. Imagine the following happens:

  1. dd reads location X, which currently contains garbage or stale data
  2. The filesystem writes new data to location X
  3. The filesystem writes to location X+1 a pointer to location X
  4. dd reads from location X+1 the pointer to location X

From the backup’s point of view, the pointer at X+1 refers to garbage. However, there are several ways to work around this:

  • Freeze the filesystem with a filesystem-specific command (xfs_freeze is one; similar options exist, at least in theory, for other filesystems)
  • Create an LVM snapshot and copy from it. The copy will be as if you had pulled the plug on the computer (minus on-disk reordering), so it will be a dirty filesystem, but the copy will be atomic. Note that some filesystems, like XFS, need to be frozen first.
  • Use rsync as suggested by others. Now the copy is safe and you don’t need LVM, but the copy is not atomic. So while it avoids the above problem at the filesystem level, it might still run into problems with individual files (rather unlikely, but one can imagine files going missing while an mv is executed in the background, for example)
  • Use a filesystem with snapshotting, such as btrfs, tux3, zfs, nilfs… Then you avoid both problems: you can create a snapshot and copy from it with rsync, with full atomicity. Note, however, that such filesystems often tend to be experimental.

As a last note, dd might not be the best way to make a backup. It copies the full disk, which is often wasteful, as you copy the ‘garbage’ as well. If you need disk images, something like partimage might be better. If you don’t, a better option is rsync, tar in differential/incremental mode, etc., or a full backup system such as bacula, tarsnap or one of many others. Data deduplication can do wonders for the size of backups.
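
The tar incremental mode mentioned above is GNU tar’s --listed-incremental (-g); a sketch with hypothetical file names:

```shell
set -e
cd "$(mktemp -d)"
mkdir data
echo "one" > data/a
# level-0 (full) backup; snapshot.snar records each file's state
tar -czf full.tar.gz -g snapshot.snar data
echo "two" > data/b
# level-1 backup: stores only what changed since snapshot.snar was written
tar -czf incr.tar.gz -g snapshot.snar data
tar -tzf incr.tar.gz
```

The second archive contains data/b but not the unchanged data/a, so repeated backups stay small; restoring means extracting the full archive, then each incremental in order.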

Answered By: Maciej Piechotka

Use Clonezilla, seriously. It’s the best open-source, Linux-based Norton Ghost-like utility. It will do both partition and full disk cloning, either disk-to-disk or disk-to-filesystem (save as a file). It supports most Linux file systems, NTFS, FAT32 and more. It can save to an internal disk, an external drive or even over the network on SMB or NFS shares.

It is very easy to use and will save you a lot of time.

Edit: to answer the question: no, you cannot safely dd most file systems while they are mounted, because you risk ending up with an inconsistent copy of your file system, as reading from a block device is not atomic. For example, if you are copying 100 blocks, the system might update the first and last blocks while you are only halfway through, which means your copy will include the modified last block but not the modified first one.

Answered By: sleblanc