Rich Freeman on 17 Mar 2017 07:19:32 -0700
Re: [PLUG] Avoid Arvixe at all costs!
On Fri, Mar 17, 2017 at 8:40 AM, Rich Kulawiec <rsk@gsp.org> wrote:
> On Thu, Mar 16, 2017 at 11:09:14PM -0400, PaulNM wrote:
>> Contrary to what others may be saying, there's nothing wrong with having
>> multiple backups per day. One (of many) use cases is a database that gets
>> edited by multiple people throughout the day.
>
> There's nothing "wrong" with it, but it's probably not the optimal approach.
>
> To put it another way: if you find yourself in a design situation where
> you feel this necessary, it would probably be best to back up several
> steps, examine what you're doing, and figure out how to provide
> equivalent data recovery capabilities without doing several backups/day.

If you are literally talking about a database, I probably wouldn't be backing it up with dump, especially not multiple times per day. For my MySQL databases I have a script that runs before my backups; it runs mysqldump into a directory that gets picked up by my regular backup. It is far easier to do a partial restore this way, and the dump is atomic (I don't think dump is).
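Something along these lines (a rough, untested sketch; the paths and the credentials file here are placeholders, not what I actually use):

    #!/bin/sh
    # Dump each database into a directory that the regular backup
    # job will pick up.  --single-transaction gets you a consistent
    # dump of InnoDB tables without locking everything.
    DUMPDIR=/var/backups/mysql
    mkdir -p "$DUMPDIR"
    for db in $(mysql --defaults-extra-file=/root/.my.cnf -N -B \
            -e 'SHOW DATABASES' \
            | grep -vE '^(information_schema|performance_schema)$'); do
        mysqldump --defaults-extra-file=/root/.my.cnf \
            --single-transaction --routines "$db" \
            | gzip > "$DUMPDIR/$db.sql.gz"
    done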
If you want to back up very frequently, or while the system is in use, you should probably be thinking about COW filesystems. Granted, on Linux the two options both have a bunch of caveats, but they are designed to make this sort of thing straightforward. If you're not concerned about loss of the system itself, you can store snapshots as often as you want at almost no cost. If you want your backups to be offsite, they're a lot cheaper to create than with dump or rsync if you use the serialization capabilities of the COW filesystem. Both zfs and btrfs can determine what changed in an incremental backup without actually having to read all the inodes in the filesystem, unlike any ext2+ backup solution. As I said, they both have some caveats on Linux, so I wouldn't use them lightly in a production setting, but it is something to keep an eye on.
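To give a flavor of the serialization side (hypothetical dataset names and remote host; assumes the previous snapshot already exists on both ends):

    # Cheap local snapshot on zfs:
    zfs snapshot tank/data@2017-03-17

    # Ship only the blocks that changed since the previous snapshot
    # offsite -- no walk over every inode required:
    zfs send -i tank/data@2017-03-16 tank/data@2017-03-17 \
        | ssh backuphost zfs receive backup/data

    # Roughly the btrfs equivalent (again, made-up paths):
    btrfs subvolume snapshot -r /data /snapshots/2017-03-17
    btrfs send -p /snapshots/2017-03-16 /snapshots/2017-03-17 \
        | ssh backuphost btrfs receive /backup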
An earlier email mentioned git. Git is great for some things, but keep in mind that you can't delete stuff from its history and it is designed for text files, so I wouldn't use it as a normal backup solution for arbitrary data. It is a great thing to set up in /etc, and there are tools like etckeeper which provide hooks for your package manager to automatically create commits for you when it goes touching stuff in /etc. However, it is not really a backup solution per se, even if it is pretty simple to push to a remote repo. I don't even try to replicate my /etc git repos using git; I just back them up with duplicity+rsnapshot (which was also mentioned).
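If you want to try etckeeper, the setup is roughly this (Debian-flavored; the install command will differ per distro):

    apt-get install etckeeper    # or your distro's equivalent
    etckeeper init               # turns /etc into a git repo
    etckeeper commit "initial import"
    # From then on the package-manager hooks commit automatically
    # whenever a package touches /etc; you can inspect history with:
    cd /etc && git log --oneline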
-- 
Rich
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug