Michael Leone on 8 Aug 2018 17:29:57 -0700
Re: [PLUG] Virtualization clusters & shared storage
We just decided to go with Nutanix for our new datacenter, over VMware/vSAN.
We'll see ... We'd still be using VMware as the hypervisor, although the
bosses are all keen to eventually go with Nutanix's native hypervisor,
because it's cheaper in licensing costs! Which means it automatically MUST
be better, right? Right? ... <sigh>

On Wed, Aug 8, 2018 at 8:23 PM, Andy Wojnarek
<andy.wojnarek@theatsgroup.com> wrote:
> Hey JP,
>
> Are you describing a hyperconverged architecture, where there is no
> external storage and each node in the cluster has local storage?
>
> The best commercial offering I have come across in that arena (and have
> used) is Nutanix (https://www.nutanix.com/).
>
> They grew up as a software-defined storage company and grew into the
> hyperconverged product you see today.
>
> (Note: I don't sell or make any money on Nutanix, or do Nutanix
> services; I've just used their product and think they're the bee's knees
> of the hyperconverged space.)
>
> --
> Andy
>
> On 8/8/18, 5:13 PM, "plug on behalf of JP Vossen"
> <plug-bounces@lists.phillylinux.org on behalf of jp@jpsdomain.org> wrote:
>
> I have a question about virtualization cluster solutions. One thing
> that has always bugged me is that VM vMotion/LiveMigration features
> require shared storage, which makes sense, but they always seem to
> assume that shared storage is external, as in a NAS or SAN. What would
> be REALLY cool is a system that uses the cluster members' "local"
> storage as JBOD that becomes the shared storage. Maybe that's how some
> of these solutions work (via Ceph, GlusterFS, or ZFS?) and I've missed
> it, but that seems to me to be a great solution for the lab & SOHO
> market.
>
> What I mean is, say I have at least 2 nodes in a cluster, though 3+
> would be better. Each node would have at least 2 partitions, one for
> the OS/hypervisor/whatever and the other for shared & replicated
> storage.
> The "shared & replicated" partition would be, well, shared & replicated
> across the cluster, providing shared storage without needing an external
> NAS/SAN.
>
> This is important to me because we have a lot of hardware sitting around
> that has a lot of local storage. It's basically all R710/720/730 with
> PERC RAID and 6x or 8x drive bays full of 1TB to 4TB drives. While I
> *can* allocate some nodes for FreeNAS or something, that increases my
> required node count and wastes the CPU & RAM in the NAS nodes while also
> wasting a ton of local storage on the host nodes. It would be more
> resource-efficient to just use the "local" storage that's already
> spinning. The alternative we're using now (which sucks) is that the
> hypervisors are all just stand-alone with local storage. I'd rather get
> all the cluster advantages without the NAS/SAN issues
> (connectivity/speed, resilience, yet more rack space & boxes).
>
> Are there solutions that work that way that I've just missed?
>
> Related: I'm aware of these virtualization environment tools; any more
> good ones?
> 1. OpenStack, but this is way too complicated and overkill
> 2. Proxmox sounds very cool
> 3. CloudStack likewise, except it's Java! :-(
> 4. Ganeti was interesting, but it looks like it may have stalled out
> around 2016
> 5. https://en.wikipedia.org/wiki/OVirt except it's Java and too limited
> 6. https://en.wikipedia.org/wiki/OpenNebula with some Java and might do
> on-node shared storage?
> 7. Like AWS: https://en.wikipedia.org/wiki/Eucalyptus_(software) except
> it's Java
>
> I'm asking partly for myself, to replace my free but not F/OSS ESXi
> server at home, and partly for a work lab that my team needs to rebuild
> in the next few months. We have a mishmash right now, much of it ESXi.
> We have a lot of hardware lying around, but we have *no budget* for
> licenses for anything.
> I know Lee will talk about the VMware starter packs and deals like
> that, but we not only have no budget, that kind of thing is a nightmare
> politically and procedurally and is a no-go; it's free or nothing. And
> yes, I know that free costs money in terms of people time, but that's
> already paid for, and while we're already busy, this is something that
> has to happen.
>
> Also, we might like to branch out from ESXi anyway... We are doing some
> work in AWS, but that's not a solution here, though cross-cloud tools
> like Terraform (and Ansible) are in use, and the more we can use them
> here too the better.
>
> Thanks,
> JP
> --
> -------------------------------------------------------------------
> JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/

___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug
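[Archive note] The on-node shared storage JP describes maps fairly directly onto a GlusterFS replicated volume: each node donates its spare local partition as a "brick," and Gluster replicates writes across the bricks, so the cluster gets shared VM storage without an external NAS/SAN. A minimal sketch for three nodes — the hostnames node1-node3, the device /dev/sdb1, and all paths are hypothetical placeholders, not anything from the thread; adjust to whatever the PERC actually exposes:

```shell
# On every node: format the spare partition and mount it as a brick.
# (/dev/sdb1 and the brick paths are placeholders.)
mkfs.xfs /dev/sdb1
mkdir -p /data/brick1
mount /dev/sdb1 /data/brick1
mkdir -p /data/brick1/vmstore

# Once, from node1: join the peers into a trusted pool.
gluster peer probe node2
gluster peer probe node3

# Create a 3-way replicated volume from the local bricks and start it.
gluster volume create vmstore replica 3 \
  node1:/data/brick1/vmstore \
  node2:/data/brick1/vmstore \
  node3:/data/brick1/vmstore
gluster volume start vmstore

# Each hypervisor then mounts the volume as its shared VM datastore.
mount -t glusterfs node1:/vmstore /var/lib/libvirt/images
```

Proxmox (item 2 on JP's list) can consume a volume like this directly as a GlusterFS storage backend, and it can also run Ceph on the cluster nodes themselves — the same hyperconverged idea with Ceph in place of Gluster.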