My dear lemmings,
I discovered Clonezilla a while ago and it is still my main tool to back up and restore the partitions I care about on my computers.
I cannot help but wonder: are there now better, more efficient alternatives, or is it still a solid choice? There’s nothing wrong with it, I’m just curious about others’ practices and habits, and whether there are newer tools or solutions available.
Thank you for your feedback, and keep your drives safe!
The big advantage of Clonezilla or plain dd is that you make a perfect 1:1 copy of the disk, so you can be pretty confident it will restore perfectly, but you need a destination disk of at least the same size. It’s also ideal if you’re trying to do file recovery, because even corrupted or entirely unreachable data is still technically on the disk.
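For reference, the raw 1:1 approach is a one-liner with dd; a minimal sketch, where the device names are placeholders you’d have to triple-check before running anything:

```
# Byte-for-byte clone of one disk onto another
# (/dev/sdX = source, /dev/sdY = destination; both placeholders)
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
```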
That’s very inefficient when you have, say, 5GB used on a 1TB disk, although compression will help a bit. That’s where more specialized tools come in: what if we could back up only the actual data, and end up with a 5GB backup before compression?
That’s useful and nice, but it can’t deal with corrupted or deleted files, since it just skips over them. The backup is also only as good as the set of filesystem features the archiver can encode.

On Linux, tar has us pretty well covered as long as you only need relatively standard features like owners and groups. If you zip your root Linux partition instead, you’ll end up with broken ownership and permissions, because zip doesn’t encode ACLs, xattrs, hardlinks and whatever else. On NTFS, which is proprietary, undocumented and fairly complex, it’s much riskier. If you’re backing up your game library you’re probably fine, but if you want Windows to boot after a restore, you need a much more complete backup, and if you don’t want to take risks, whole-partition backups are much safer. ntfsclone exists, but I just don’t trust it the way I trust tar to back up my ext4 partitions correctly.
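As a rough sketch of what a “complete enough” tar archive of an ext4 root might look like (these flags are from GNU tar; the paths are illustrative):

```
# Archive a Linux root while preserving owners, ACLs and extended attributes
# (hard links are already preserved by tar by default)
sudo tar --create --xattrs --acls --numeric-owner --one-file-system \
    --file /mnt/backup/root.tar /
```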
So it’s all a tradeoff. Do you want efficiency, or do you want reliability? How much of the information can you afford to lose? For example, if you back up your C: drive on Windows but only care about your files and documents, not the Windows install itself, then it makes sense to just archive the files rather than make a block copy.
So, what do you expect from your backups? The answer to that question also answers this thread.
That’s correct for dd, but not for Clonezilla.
Clonezilla uses partclone, which reads the filesystem and copies only the data, for any filesystem supported by partclone.
Source: “Many file systems are supported: (1) ext2, ext3, ext4, reiserfs, reiser4, xfs, jfs, btrfs, f2fs and nilfs2 of GNU/Linux, (2) FAT12, FAT16, FAT32, exFAT and NTFS of MS Windows, (3) HFS+ and APFS of Mac OS, (4) UFS of FreeBSD, NetBSD, and OpenBSD, (5) minix of Minix, and (6) VMFS3 and VMFS5 of VMWare ESX. Therefore you can clone GNU/Linux, MS windows, Intel-based Mac OS, FreeBSD, NetBSD, OpenBSD, Minix, VMWare ESX and Chrome OS/Chromium OS, no matter it’s 32-bit (x86) or 64-bit (x86-64) OS. For these file systems, only used blocks in partition are saved and restored by Partclone. For unsupported file system, sector-to-sector copy is done by dd in Clonezilla.”
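You can also run partclone directly, without the Clonezilla wrapper; a minimal sketch for an ext4 partition (device and file names are placeholders):

```
# Save only the used blocks of the partition to an image file
sudo partclone.ext4 -c -s /dev/sdX1 -o sdX1.pcl
# Restore the image back onto a partition later
sudo partclone.ext4 -r -s sdX1.pcl -o /dev/sdX1
```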
For a large drive with only partial data, you can make dd quicker by shrinking the partition first. Then use fdisk to get the cylinder size in bytes (cylinders × bytes) from the header output, and the end unit of the partition from the table. You then run dd with bs=(cylinder size in bytes) and count=(end unit + 1), so dd stops at the last block of the partition. Once it’s copied, you can resize the partition back. That’s how I fit a duplicate of my NAS OS image onto a 4GB USB stick for redeployment. dd is faster that way, and you resize the partitions afterwards.
That… seems pretty unsafe. If I’m taking a backup, I definitely would avoid resizing it or making any modifications to it during the backup process. What if the resize fails and is the reason you need to restore from backup in the first place?
I guess it’s a handy hack in use cases like yours, or if the backup is a convenience, but it’s important to understand the risks and whether you’re better off with filesystem-level tools.
I’m sure there is potential risk, it just hasn’t been a problem on my end. Just putting it out there as an option if you don’t want to clone a full 16TB drive and want to fit it onto a drive that suits the actual data.
You’d probably be better off running dd if=/dev/zero of=file.zero to zero out the empty space, dd-copying the whole drive, then compressing the copy. I wouldn’t fuck around with partitions on something I want to back up.
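A rough sketch of what I mean, assuming the filesystem is mounted at /mnt/target and sdX is the drive (all placeholders):

```
# Fill the free space with zeros so it compresses to nearly nothing;
# dd exiting with "No space left on device" is expected here
dd if=/dev/zero of=/mnt/target/file.zero bs=1M
rm /mnt/target/file.zero && sync
# Image the whole drive and compress on the fly
sudo dd if=/dev/sdX bs=4M status=progress | gzip > /mnt/backup/drive.img.gz
```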
For sure, but in my case I didn’t want a compressed copy, I wanted a working, fully functional drive image.
Probably safer to image the whole partition and shrink the image afterwards, then. Not sure exactly how I’d go about it, but I’m sure it’s not too bad, probably three arcane shell commands.
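Something like this, maybe, for an ext4 partition image (an untested sketch; the 5G is just an assumed value, the real target is whatever size resize2fs reports):

```
e2fsck -f part.img       # resize2fs insists on a clean fsck first
resize2fs -M part.img    # shrink the filesystem to its minimum; note the size it prints
truncate -s 5G part.img  # cut the file down, staying at or above what resize2fs reported
```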
Yes, zero-filling and compressing. In my case I was building a direct clone backup for when the NAS might fail, so I could swap the drive in immediately, but I did not want to wait hours for dd to copy the empty part of the drive to an image file.
Reposted from a Server Fault thread, author plasmapotential. Note: fdisk -l -u=cylinders /dev/sdX will output cylinder info if it doesn’t by default.
Use dd, with the count option.
In your case you were using fdisk, so I will take that approach. Your “sudo fdisk -l” produced:
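(The actual output didn’t survive the repost; from the figures below, the relevant lines would have read roughly:)

```
Units = cylinders of 16065 * 512 = 8225280 bytes
...
   Device Boot    Start    End    Blocks   Id  System
/dev/sda8           ...    525       ...   ..  ...
```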
The two things you should take note of are 1) the unit size, and 2) the “End” column. In your case you have cylinders that are equal to 8225280 bytes. In the “End” column, sda8 terminates at 525 (which is 525 [units] × 16065 × 512 = ~4.3GB).
dd can do a lot of things, such as starting after an offset, or stopping after a specific number of blocks. We will do the latter using the count option in dd. The command would appear as follows:
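(The command itself was also lost in the repost; reconstructed from the description, with the output path as a placeholder, it would be along the lines of:)

```
sudo dd if=/dev/sda of=/path/to/backup.img bs=8225280 count=526
```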
where bs is the block size (it is easiest to use the unit that fdisk uses, but any unit will do so long as the count option is declared in those units), and count is the number of units we want to copy (note that we increment the count by 1 to capture the last block).