Rich Freeman on 7 Nov 2017 08:22:18 -0800


Re: [PLUG] small business server virtualization?


On Tue, Nov 7, 2017 at 7:53 AM, Lee H. Marzke <lee@marzke.net> wrote:
>> From: "Rich Freeman" <r-plug@thefreemanclan.net>
>>
>> Why would you run only 1 VM in a situation where you would be running
>> 100 containers?  You would run one container per application instance,
>> and you would run one VM in the same situation.
>>
>
> I'm no expert on Docker containers,

Docker != containers, though Docker is one implementation of containers.

> but if you take a typical VM from VMware
> such as vCenter Appliance, it runs a tomcat web server, a postgreSQL DB, a round-robin DB,
> an inventory service, and a message broker like rabbitMQ.

You COULD do all of that in a single container, using Docker (I think)
or otherwise.  The reason nobody does it that way is that you lose
some of the benefits of containers (or VMs).

Now, if you ran those as 5 separate VMs you'd pay the per-VM RAM
overhead five times over, so that is a tradeoff.  You don't really
have that issue with containers to the same degree, so you have the
luxury of splitting things up.
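
As a rough sketch of what that splitting looks like with the Docker
SDK for Python (the image names and the whole layout here are purely
illustrative; this is not how vCenter is actually packaged):

    import docker

    client = docker.from_env()

    # One user-defined bridge network so the services can reach each
    # other by container name.
    client.networks.create("appstack", driver="bridge")

    # One container per service; each can be updated, rolled back, or
    # replaced on its own.
    client.containers.run("postgres:15", name="db", detach=True,
                          network="appstack",
                          environment={"POSTGRES_PASSWORD": "example"})
    client.containers.run("rabbitmq:3", name="mq", detach=True,
                          network="appstack")
    client.containers.run("tomcat:9", name="web", detach=True,
                          network="appstack",
                          ports={"8080/tcp": 8080})

Swap out or restart any one of those and the others never notice.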

But you can run ANY host in a container.  If you send me your OS root
filesystem in a tarball, I can probably have it running as a container
in a minute.  If that host runs 47 services, so will the container.
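
Roughly, with the Docker SDK for Python, it might look like this (a
sketch only: the tarball path, image name, and init command are
placeholders, and a full-OS container in practice usually wants extra
settings such as privileged mode or a systemd-aware runtime):

    import docker

    client = docker.from_env()

    # Turn a root-filesystem tarball into an image (the SDK
    # equivalent of `docker import`).
    client.api.import_image(src="rootfs.tar", repository="old-host",
                            tag="latest")

    # Boot it.  Whatever init system and services live in that rootfs
    # are what you get.
    container = client.containers.run("old-host:latest", "/sbin/init",
                                      detach=True, privileged=True)
    print(container.id)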

>
> To properly use containers, if I understand it, each of these services would be split out,
> and some services might be split into several sub-services, perhaps at the
> module level, so that you get high re-use.  So a VM like vCenter would take a dozen
> or more containers to implement it.

I'd say that to make ideal use of containers, VMs, or physical hosts
you would want to split out services.  Now, to some degree with VMs,
and to a very large degree with physical hosts, the overhead of doing
that drives people away from it.

Having done it both ways, I definitely prefer splitting out services.
I used to run all my services at home on one host.  Doing updates on
that host was a one-liner.  Now I run most of my services in
containers, which means I have a half-dozen hosts; updating any one of
them is still a one-liner, but I now have a whole bunch to manage (and
back up, etc.).

I would never want to go back to having it all on one host, and the
reason is testing and troubleshooting.

I have a container that does pop3, and that is all it does.  If I
update it, I then hit the check-mail button and see if it works.  If
it doesn't, I roll back.  If pop3 dies three days later, I know which
update did it, and I can STILL roll back with no real impact to
anything else.
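
In script form that whole loop is tiny.  Here's a sketch with the
Docker SDK for Python (the image tags, container name, and mail host
are all made up for illustration):

    import docker
    import poplib

    client = docker.from_env()

    def pop3_alive(host="mail.example.com", port=110):
        # Minimal smoke test: connect and grab the server banner.
        try:
            conn = poplib.POP3(host, port, timeout=10)
            banner = conn.getwelcome()
            conn.quit()
            return bool(banner)
        except (OSError, poplib.error_proto):
            return False

    def deploy(tag):
        # Replace the running pop3 container with the given image tag.
        for old in client.containers.list(all=True,
                                          filters={"name": "pop3"}):
            old.stop()
            old.remove()
        client.containers.run("my-pop3:" + tag, name="pop3",
                              detach=True, ports={"110/tcp": 110})

    deploy("new")                # the freshly updated image
    if not pop3_alive():
        deploy("known-good")     # roll back; the mailboxes live
                                 # elsewhere, so nothing else is touched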

Back when I had one host I'd run an update and get no errors.  Now I
guess I could have a big test script that goes down the list and tests
every service, but of course at home I wouldn't want to deal with
that.  Even in a production environment it would mean that to update
one service I have to test all of them.  Then suppose pop3 breaks.  I
can potentially roll back, but with everything commingled I need a
rollback strategy that doesn't hurt anything that has state.  Rolling
back my pop3 server is trivial (it doesn't host the mailboxes), but if
I had a database server on the same host a rollback might be more
complex.  And maybe I really need an update on that database server,
and it shares the very dependency that breaks pop3.  It rapidly turns
into a headache.

So, sure, you can load 15 services onto one container, and it will
work the same as 15 services on one physical host or one VM.  However,
all the consultants are going to prod you not to do it that way,
because the long-term costs are higher.

I'll just note that at work we saw similar things 10+ years ago when
we rolled out VMs for our physical servers (Windows-based, mostly).
In the initial rollout we would just migrate them all 1:1, because IT
was mainly concerned about utilization.  However, as we rolled out
future services we ended up spreading them across more VMs than we
would have used if we were dealing with physical hosts.  On physical
hosts we hadn't just been bundling closely-related services (like a
LAMP stack); we would bundle completely different applications that
happened to have the same maintainers, because it saved costs
(basically a poor man's solution for utilization before VMs were a
thing).

Now, one factor that probably should be considered, and which I
haven't touched on, is management tools; I don't have much experience
with those, and I suspect Lee does.  I know they exist for Docker/etc,
and of course they exist for VMware.  If you have hundreds of hosts to
deal with, those could be a big factor.  That said, it probably
wouldn't hurt to get the perspective of somebody who is running
large-scale container deployments with Docker/CoreOS/etc.

-- 
Rich
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug