Keith via plug on 13 Jul 2020 14:37:40 -0700
Re: [PLUG] Best Solution for Multiple Volume Backups
On 7/13/20 3:40 PM, Rich Freeman wrote:
> On Mon, Jul 13, 2020 at 3:13 PM Keith C. Perry via plug <plug@lists.phillylinux.org> wrote:
>> 10+TB is a very large change set.
>
> Well, transfer-wise, yes, but one hard drive these days can hold more than that. But I did want to make it clear that I'm not looking for "create a tarball and stick it on a USB stick."
>
>> Have you considered running nilfs2 for your backup system filesystem and dumping that data to an LTO tape set?
>
> LTO seems a bit overkill/expensive for my needs here. While I'm over 10TB, I'm not talking about hundreds of TB. An LTO drive costs hundreds of dollars even for something old like LTO5, and you also need to make sure you're getting the right combo of drives, enclosures, HBAs, and so on (plus a host to stick all this stuff in). For just 10-20TB stored to hard drives I can use modestly-priced USB3 hard drives, which keeps things simple. If I were over 100TB then LTO would make a lot more sense. Or if I wanted to be more rigorous with my backup sets (tapes are marginally cheaper, so you can afford to have a few extra just to rotate them more effectively).
Ok, when you said span I started thinking tapes but I understand now.
>> What I currently do is dump to an 8Tb filesystem container on my LizardFS net.
>
> Much of this data is medium-priority data already on LizardFS with snapshots/redundancy/etc. In theory it isn't the end of the world if I lose it, so I don't want to spend a fortune on backup. However, I'm reaching a point where if I just lost it all that would be pretty frustrating, so having an offline copy of the more valuable stuff would be desirable.
>
> I'm starting to look at Bacula, but reading the docs just serves to remind me why I got away from it in the first place. It is fairly tape-centric, and it seems to be lacking when it comes to the concept of "please insert disk 2". Granted, with USB3 hard drives I guess I could mount more than one at a time if I had to. It is just really clunky. I should look at duplicity and see if that can easily span multiple drives. I've never used it that way.
>
> Oh, I didn't mention it up-front, but encryption would also be useful. If I were desperate I could probably use LUKS on the disks, but if the backup software can natively do encryption that would be ideal. I'm trying to move more to encrypted disks for just about everything because then when a disk dies I don't have to worry so much about wiping/etc - just toss it in the trash...
Makes sense. This is why I do an offline backup as well. Also, all my portable storage devices are encrypted. There is really no reason not to do that for peace of mind (or disposal in your case).
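The "encrypt before it hits the disk" idea can be sketched in shell. This is only an illustration, assuming GNU tar and openssl are on the box; the passphrase and file names are placeholders, and in practice you'd want the backup tool's native encryption (e.g. duplicity's GPG support) or LUKS at the block layer instead of a bare passphrase on the command line:

```shell
# Sketch: encrypt a backup archive before it lands on a removable disk.
# Stand-in for LUKS or the backup software's native encryption; the
# passphrase here is a placeholder and would normally come from a keyfile.
mkdir -p srcdata
echo "important data" > srcdata/notes.txt

# Pack and encrypt in one pipeline; nothing unencrypted touches the disk.
tar -cf - srcdata | openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:example-passphrase -out backup.tar.enc

# Restore: decrypt and unpack.
mkdir -p restore
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:example-passphrase -in backup.tar.enc | tar -xf - -C restore

cmp srcdata/notes.txt restore/srcdata/notes.txt && echo "encrypted round-trip OK"
```

The nice side effect Rich mentions falls out for free: a dead drive holding only ciphertext can go straight in the trash.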
Other than tar, the only thing I know of that does what you want is the xfsdump utility, which is going to be useless if you don't run XFS. That said, I've never done that. Before I upgraded my offline drive sizes (I use internal drives in a dock), I would LVM a bunch of drives together (the dock is 4 bays). I would do this long before spanning drives, if for no other reason than that upgrades are easy. For example, it was easy to go from a 3 x 1Tb RAID to the 2 x 4Tb mirror I'm using now (I know... 5 disks, how'd I do that??!?! - there's a trick for that :-) ). When I outgrow those, moving to, for example, a 2 x 8Tb mirror will be just as easy.
Also, if I lose a drive... I have the mirror. If you span files and lose the 3rd drive out of 5, you typically lose the rest of your data after that - unless these days there is a way to reindex and recover. Been there before - I'd rather not have to deal with such a situation.
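For what it's worth, GNU tar's multi-volume mode does handle the "please insert disk 2" case when you can name all the volumes up front. A toy sketch (the 20 KiB --tape-length only forces spanning for the demo; a real run would point each -f at a different mounted backup drive):

```shell
# Sketch: span one GNU tar archive across several volume files and restore.
# vol*.tar stand in for separate backup drives; -M is multi-volume mode and
# each -f is consumed in turn before tar would prompt for another volume.
mkdir -p srcdir
dd if=/dev/urandom of=srcdir/data.bin bs=1K count=50 2>/dev/null

# -L 20 = change volumes every 20 KiB (tiny, just to force spanning here).
tar -cM -L 20 -f vol1.tar -f vol2.tar -f vol3.tar -f vol4.tar srcdir

# Restore by listing the volumes in the same order; tar detects where
# each volume ends and the next begins.
mkdir -p restore
tar -xM -f vol1.tar -f vol2.tar -f vol3.tar -f vol4.tar -C restore

cmp srcdir/data.bin restore/srcdir/data.bin && echo "multi-volume restore OK"
```

This also shows Keith's caveat concretely: the restore needs every volume in sequence, so losing a middle volume costs you everything after it.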
--
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Keith C. Perry, MS E.E.
Managing Member, DAO Technologies LLC
(O) +1.215.525.4165 x2033
(M) +1.215.432.5167
www.daotechnologies.com
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug