Keith C. Perry on 8 Aug 2018 15:04:45 -0700



Re: [PLUG] Virtualization clusters & shared storage


JP, this comes up on the LizardFS list every so often.  I haven't used it for this use case myself, but it is documented here:

https://docs.lizardfs.com/cookbook/hypervisors.html#using-lizardfs-for-virtualization-farms

If you're not familiar with LizardFS, it is a software-defined storage solution.  It has a number of advantages over object-storage-type systems like Ceph and GlusterFS (which I'll not get into here).  What I really like is that LizardFS provides all of its resiliency features while still letting you use traditional filesystems like ext4, xfs, or even zfs (though that's overkill) under the hood.  Essentially, all LizardFS needs is one big volume that is a JBOD or RAID 0.  From there you assign what are called "goals" that define the desired state of your data at any one time.  You can do mirroring or erasure coding, and switch between goals whenever you need to, since LizardFS' job is to make sure that the defined state of your data is met and maintained.  You can lose a disk, a volume, or an entire server, and as long as there is another copy of the data, LizardFS will automatically maintain your goals.  That means the data will automatically get rebuilt where it needs to be when resources are restored.
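To give a feel for what goals look like in practice: they are defined on the master and then assigned per directory or file from a client mount.  The entries, goal names, and paths below are illustrative assumptions, not a tested config — check the LizardFS docs for your version:

```shell
# /etc/mfs/mfsgoals.cfg on the master (illustrative entries):
#   2 2 : _ _          <- goal "2": two copies on any two chunkservers
#   11 ec21 : $ec(2,1) <- goal "ec21": erasure coding, 2 data + 1 parity

# From a client mount, assign a goal to a tree (hypothetical path):
lizardfs setgoal -r ec21 /mnt/lizardfs/vm-images

# Check what goal a tree currently has:
lizardfs getgoal -r /mnt/lizardfs/vm-images
```

Switching a tree between a replication goal and an erasure-coding goal is just another setgoal; the chunkservers converge on the new state in the background.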

It even comes with a GUI that lets you see what is going on with the system at any one time, most importantly including data that is under or over its goal, so you understand what data is at risk or what space will be reclaimed as a result of rebalancing (i.e., meeting goals).

I could go on, but to your question: yes, it is possible.
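Concretely, the cookbook approach boils down to mounting LizardFS on each hypervisor node and pointing VM storage at the mount.  A minimal sketch, assuming a master host named lizardfs-master and a libvirt-based hypervisor (host names and paths are placeholders):

```shell
# On each hypervisor node, mount the LizardFS volume via the client:
mfsmount -H lizardfs-master /mnt/lizardfs

# Tell libvirt to keep VM images there (a plain directory pool):
virsh pool-define-as lizardfs dir --target /mnt/lizardfs/vm-images
virsh pool-start lizardfs
virsh pool-autostart lizardfs
```

Since every node sees the same mount, live migration only has to move the VM's RAM, not its disk.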

(and yes, I will be giving a talk on this at some point along with some other things I hope to be putting into production soon :D )

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 
Keith C. Perry, MS E.E. 
Managing Member, DAO Technologies LLC 
(O) +1.215.525.4165 x2033 
(M) +1.215.432.5167 
www.daotechnologies.com

----- Original Message -----
From: "JP Vossen" <jp@jpsdomain.org>
To: "Philadelphia Linux User's Group Discussion List" <plug@lists.phillylinux.org>
Sent: Wednesday, August 8, 2018 5:13:17 PM
Subject: [PLUG] Virtualization clusters & shared storage

I have a question about virtualization cluster solutions.  One thing 
that has always bugged me is that VM vMotion/LiveMigration features 
require shared storage, which makes sense, but they always seem to 
assume that shared storage is external, as in a NAS or SAN.  What would 
be REALLY cool is a system that uses the cluster members' "local" storage 
as JBOD that becomes the shared storage.  Maybe that's how some of the 
solutions work (via Ceph, GlusterFS or ZFS?) and I've missed it, but 
that seems to me to be a great solution for the lab & SOHO market.

What I mean is, say I have at least 2 nodes in a cluster, though 3+ 
would be better.  Each node would have at least 2 partitions, one for 
the OS/Hypervisor/whatever and the other for shared & replicated 
storage.  The "shared & replicated" partition would be, well, shared & 
replicated across the cluster, providing shared storage without needing 
an external NAS/SAN.
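(The layout described above is what "hyperconverged" setups do; for example, Proxmox ships Ceph integration that turns each node's spare disks into the shared VM storage.  A rough sketch — the cluster network and device names are assumptions for illustration:

```shell
# Run on a Proxmox node that is already joined to a PVE cluster:
pveceph install                       # install the Ceph packages
pveceph init --network 10.10.10.0/24  # dedicated storage network (assumed)
pveceph mon create                    # repeat on 3 nodes for quorum
pveceph osd create /dev/sdb           # one OSD per spare disk, per node
pveceph pool create vmstore           # becomes a storage target for VMs
```

With the pool added as storage, VM disks live on Ceph and live migration between nodes needs no external NAS/SAN.)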

This is important to me because we have a lot of hardware sitting around 
that has a lot of local storage.  It's basically all R710/720/730 with 
PERC RAID and 6x or 8x drive bays full of 1TB to 4TB drives.  While I 
*can* allocate some nodes for FreeNAS or something, that increases my 
required node count and wastes the CPU & RAM in the NAS nodes while also 
wasting a ton of local storage on the host nodes.  It would be more 
resource efficient to just use the "local" storage that's already 
spinning.  The alternative we're using now (that sucks) is that the 
hypervisors are all just stand-alone with local storage.  I'd rather get 
all the cluster advantages without the NAS/SAN issues 
(connectivity/speed, resilience, yet more rack space & boxes).

Are there solutions that work that way that I've just missed?


Related, I'm aware of these virtualization environment tools, any more 
good ones?
1. OpenStack, but this is way too complicated and overkill
2. Proxmox sounds very cool
3. Cloudstack likewise, except it's Java! :-(
4. Ganeti was interesting but it looks like it may have stalled out 
around 2016
5. https://en.wikipedia.org/wiki/OVirt except it's Java and too limited
6. https://en.wikipedia.org/wiki/OpenNebula with some Java and might do 
on-node-shared-storage?
7. Like AWS: https://en.wikipedia.org/wiki/Eucalyptus_(software) except 
it's Java

I'm asking partly for myself to replace my free but not F/OSS ESXi 
server at home and partly for a work lab that my team needs to rebuild 
in the next few months.  We have a mishmash right now, much of it ESXi. 
We have a lot of hardware lying around, but we have *no budget* for 
licenses for anything.  I know Lee will talk about the VMware starter 
packs and deals like that, but not only do we have no budget, that kind 
of thing is a nightmare politically and procedurally and is a no-go; 
it's free or nothing.  And yes I know that free costs money in terms of 
people time, but that's already paid for and while we're already busy, 
this is something that has to happen.

Also we might like to branch out from ESXi anyway...  We are doing 
some work in AWS, but that's not a solution here, though cross-cloud 
tools like Terraform (and Ansible) are in use and the more we can use 
them here too the better.

Thanks,
JP
--  -------------------------------------------------------------------
JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug