Lee H. Marzke on 7 Nov 2017 07:35:32 -0800
Re: [PLUG] small business server virtualization?
----- Original Message -----
> From: "Rich Freeman" <r-plug@thefreemanclan.net>
> To: "Philadelphia Linux User's Group Discussion List" <plug@lists.phillylinux.org>
> Sent: Tuesday, November 7, 2017 5:29:40 AM
> Subject: Re: [PLUG] small business server virtualization?
>
> On Mon, Nov 6, 2017 at 10:08 AM, Greg Helledy <gregsonh@gra-inc.com> wrote:
>> Does the overhead of virtualization make sense for small organizations?
>
> Since there were already a bunch of replies, instead of reiterating all
> of that or posting 8 different replies I'll just add some thoughts
> that I don't think were covered.
>
> First, I would definitely strongly consider containers over
> virtualization where it makes sense. Containers involve a lot less
> overhead. Docker tends to be what everybody uses for this, and on any
> serious scale I'd strongly consider it. I don't personally use it, as
> there are a few things about it I don't like, but I wouldn't benefit
> as much from its upsides.

So containers are great at making developers' lives easier (fewer
dependencies) at the cost of making things more difficult for operations.

> I will point out one downside to containers relative to VMs that
> didn't get a mention: security. In general, Linux containers are not
> considered entirely escape-proof if somebody manages to obtain root
> inside of one. Containers running as non-root on the host are a lot
> more secure in this particular regard. This isn't really anything
> inherent to containers so much as the fact that they're still
> relatively new. A VM would provide more isolation if a malicious
> intruder is part of your threat model. However, containers are great
> for general isolation - a process isn't going to escape from a
> container merely because it has a bug - a human would almost certainly
> have to be behind it.

So VMware Integrated Containers (VIC) runs each Docker container inside
its own lightweight "Photon" Linux VM container host, and provides the
extra security.
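As an aside on the non-root point: with Docker the usual knob is
user-namespace remapping. The sketch below is just an illustration,
assuming a stock Docker Engine install - the /etc/docker/daemon.json path
and the "userns-remap" key are the standard ones, but verify against your
distro's packaging before relying on it:

```shell
# /etc/docker/daemon.json (sketch): enable user-namespace remapping so
# that UID 0 inside every container maps to an unprivileged subordinate
# UID on the host - a container escape then lands as a non-root user.
#
#   { "userns-remap": "default" }
#
# Per-container alternative: just don't run the workload as root at all.
#
#   docker run --rm --user 1000:1000 debian:stable id
```

Either way the idea is the same as Rich's: an intruder who gets "root"
inside the container still isn't root on the host.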
Each is a tiny VM, which appears inside vCenter along with the other VMs.
Instant-clone technology clones VMs with their running memory image to
create a clone that is already running (no need for a boot process), so it
starts nearly immediately. The normal docker command line works with these
images inside their VMs.

> Lee is correct that VM hypervisors themselves do not add much
> overhead, but he neglected the overhead that comes from the overall
> approach. With containers RAM is a completely shared commodity across
> guests (subject to the resource limits that already exist for
> processes in Linux). With VMs it usually is not. If 47 VMs all
> access the same files on the same network filesystems, each of the 47
> VMs will end up keeping its own private cache of those files in RAM.
> If they were containers they would all share the same cache, both for
> reading and writing. When you launch a new container the only cost is
> the RAM used by the process itself and any shared libraries that
> aren't also shared with other containers (to be fair, sharing shared
> libs across containers isn't the typical approach). When you launch a
> new VM the cost is whatever RAM you would need to run the entire
> OS+application. On Linux, launching a container is essentially the
> same as launching any other process as far as the kernel itself is
> concerned - all processes already run "in a default container" on the
> host.
>
> Maybe VMware has some solutions that make guests nicer about RAM
> allocation, or capabilities like dedup. Lee could probably speak to
> this better than I, but since this is one of the biggest limitations
> with RAM I'm sure VMware has focused on it. However, I'd be shocked
> if you could really get a VM down to the same footprint as a
> container. I guess the flip side of this is that if a kernel panics
> in a VM you only lose that one VM, and not the host.
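To make the shared-cache point concrete: the cache in question is the
kernel's page cache, not anything the container runtime owns, so any
process on the host can watch it. A minimal sketch in plain POSIX shell
(Linux-only, since it reads /proc/meminfo):

```shell
#!/bin/sh
# The Linux page cache is global: containerized processes and bare
# processes all share it. Reading a file anywhere on the host warms the
# cache for every other process (or container) that reads the same file.
cached_before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
cat /etc/hosts > /dev/null    # pull a file into the shared cache
cached_after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "page cache: ${cached_before} kB -> ${cached_after} kB"
```

A VM guest, by contrast, caches the same file a second time inside its own
guest kernel, which is exactly where the duplication in the 47-VMs example
comes from.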
> Also, the fact that the whole thing is virtualized down to the
> hardware does let VMware do some tricks with moving guests around to
> different hardware that I don't think Linux supports currently.

I think you're right that more RAM is used, and the segregated RAM space
is partly what provides the extra security. VMware has 'transparent page
sharing' (TPS), which is essentially a scavenger process that runs through
host RAM looking for identical 4K RAM pages to de-dup. However, for
various reasons this is only used when the host is low on RAM. VMware has
always claimed the ability to run more VMs than other hypervisors on the
same amount of RAM, due to: TPS, memory compression, guest ballooning,
and host swapping.

> So, there are a bunch of pros and cons here. For Linux guests you
> would not be out of the mainstream to adopt containers.

No argument that containers may be useful in some cases, but my concerns
are more that there are hidden costs of managing containers and their
security. Just because developers get some benefits doesn't mean that it
is always better for the project as a whole. As you said above, it's
complicated.

Lee

> --
> Rich

-- 
"Between subtle shading and the absence of light lies the nuance of
iqlusion..."  - Kryptos

Lee Marzke, lee@marzke.net
http://marzke.net/lee/

___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug