Christopher Barry on 24 Aug 2016 10:57:54 -0700



Re: [PLUG] epiphany or stupidity?


On Wed, 24 Aug 2016 11:56:04 -0400
Tone Montone <tonemontone@gmail.com> wrote:

>I was running last night, and for some reason, I had an idea about
>backups. It occurred to me that if I had 100 Red Hat systems, all
>running the same OS and patch level, would I need full backups on all
>the systems? Wouldn't there be static information, like executables,
>that would be the same across all systems? So instead of doing fulls
>x 100, I could do a full x 1, then just differentials or incrementals
>on the others, thereby reducing total storage required on tapes.
>
>Then I thought, if I took the same idea and applied it to the SAN
>storage, could I have fixed images that the systems run on, and only
>require one instance of each, thereby reducing total storage space
>requirements?
>
>Then I thought, either this is a really stupid idea, or it's brilliant
>and most likely already done.
>
>Comments?
>
>Thanks,
>
>Mike

No, it's not stupid, and yes, it's already been done - well over a
decade ago...

Boot a single, common, read-only OS image over iSCSI (don't use NFS
for this anymore) with iPXE on every box, then use a unioning
filesystem like aufs to overlay a writable personality image from
storage on top of it, giving each host its individual configuration.
The hosts can be diskless, except perhaps for an SSD swap device if
required.
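
Roughly, the union looks like this (a minimal sketch; the device names
and paths are made up, and on a modern kernel you'd use the in-tree
overlayfs rather than aufs):

    # shared, read-only OS image arrives over iSCSI, e.g. as /dev/sda1
    mount -o ro /dev/sda1 /ro
    # this host's writable personality image, e.g. a second LUN
    mount /dev/sdb1 /rw
    # union them: writes land in /rw, everything else reads from /ro
    mount -t aufs -o br=/rw=rw:/ro=ro none /newroot
    # overlayfs equivalent:
    # mount -t overlay overlay \
    #   -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot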

iPXE scripting is used to pull the correct OS and personality images
based on each system's MAC address. That gets the box booted as its
own unique system. Your dhcpd server can run apache too, to tell iPXE
where its images are. It's extremely flexible. If a physical box craps
out, you can boot a VM to take its place while you fix or replace it,
simply by changing the config on the apache server to point to the
VM's MAC. Box upgrades are just as simple. Plus, all of this can be
done without even having a high-end SAN, by instead building a Linux
RAID system for storage and exporting it with Linux-IO.
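
The pattern, sketched (the hostnames, IQNs, and paths below are
hypothetical): dhcpd hands iPXE a script URL keyed on the client's
MAC, and the per-host script on the apache box attaches the right
LUNs. Generating one such script might look like:

    # on the dhcpd/apache box: one boot script per host, keyed by MAC;
    # a first-stage iPXE script chainloads
    # http://boot.example.com/by-mac/${net0/mac}.ipxe
    MAC="52:54:00:12:34:56"
    cat > /var/www/html/by-mac/${MAC}.ipxe <<'EOF'
    #!ipxe
    # attach this host's writable personality LUN...
    sanhook iscsi:san.example.com::::iqn.2016-08.com.example:host01-pers
    # ...then boot from the shared read-only OS LUN
    sanboot iscsi:san.example.com::::iqn.2016-08.com.example:os-common
    EOF

Repointing a dead host at a stand-in VM is then just a matter of
writing the same script under the VM's MAC address.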

I did this long ago, but most recently with InfiniBand, using iSER
to a custom Linux-based SSD array, and have seen 3.7+GB/s throughput
to disk - that's bytes, not bits. In that case, all of the systems
were KVM VMs hosted on IB-connected hypervisors, and the VMs could
live-migrate to other hypervisors while sustaining that kind of
throughput. I named the system BubblePlex, and it was designed to give
a SaaS startup here in Philly (recently bought by a large e-commerce
player in SF) a client per VM, each with an overlaid mysql database
personality image. They changed to a Redshift db implementation rather
than continue with mysql and the co-lo I was designing for, so it
never got past POC. But it was extremely high-performance, redundant,
and cool for lots of other reasons too.

After I left that company, I considered turning it into a clustering
distro named infinux[1] that used meta-application images, overlaid
to create, say, a LAMP server, a Dovecot/Postfix mail server, a Samba
server - or whatever you needed. These app images (I named them
stackages :) were read-only too, and only the actual configuration
image was writable on a per-host basis.
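
In aufs terms, a stackage is just another read-only branch (a rough
sketch; the layer names are invented):

    # branches are evaluated top-down: writable config over the
    # read-only app stackage over the read-only OS image
    mount -t aufs \
      -o br=/layers/host01-config=rw:/layers/lamp=ro:/layers/os=ro \
      none /newroot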

The nice thing about this design is that a compromised system can be
corrected with a reboot, and by looking at the host's overlay image
from another admin box, you can immediately see exactly what the
compromise entailed and correct it. The read-only OS image in storage
will be secure and unaffected.
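
Post-mortem is then trivial, something like this from the admin box
(device name and date are placeholders):

    # attach the suspect host's personality LUN read-only
    mount -o ro /dev/mapper/host01-pers /mnt/host01
    # anything the attacker changed had to land in this branch
    find /mnt/host01 -type f -newermt '2016-08-23' -ls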

Now for user data, that's not typically duplicated all that much. For
backups of things that change, I use rsnapshot personally, but if you
have a high-end SAN, its snapshotting capability should be used - and
of course you're using tape for offsite storage... right? ;)
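
rsnapshot is just rsync plus hard-linked rotation, driven from cron
against /etc/rsnapshot.conf (hosts and paths here are examples):

    # /etc/rsnapshot.conf excerpt -- fields must be TAB-separated:
    #   snapshot_root  /backup/snapshots/
    #   retain         daily    7
    #   retain         weekly   4
    #   backup         root@host01:/etc/      host01/
    # crontab: rotate the larger interval shortly before the smaller
    0  3 * * 1  /usr/bin/rsnapshot weekly
    30 3 * * *  /usr/bin/rsnapshot daily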


[1]
But instead, I started a hardware startup creating a wearable
universal audio interface. Symbiaudix is the company name, and I'm
trying to get to an Indiegogo campaign before the end of the year. See
the blog I just stood up at https://blog.symbiaudix.com for a little
background. There's not much there yet, and no pictures of the work to
date, but you'll get the gist of where I'm going...

So, anyone on the list, ping me off-list at cbarry<at>symbiaudix<dot>com
if you're interested in getting involved in this project. I need hw/sw
engineers, web engineers, social engineers, graphics folks, etc. It'll
be cutting-edge, very exciting tech, security- and privacy-focused,
fully open source, and a whole lot of fun to do.


-- 
Regards,
Christopher
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug