Andrew Libby on 24 Aug 2016 09:28:36 -0700



Re: [PLUG] Docker Best-practices guide / intro?



All good thoughts.

Comments inline.


On 8/24/16 12:09 PM, Jason Plum wrote:
> While I can't help with the btrfs, I can help you with everything in
> here. But I need more free time than I have at the moment :/
> 
> Jason Plum
> WarheadsSE
> 
> On Wed, Aug 24, 2016 at 12:08 PM, Rich Freeman
> <r-plug@thefreemanclan.net <mailto:r-plug@thefreemanclan.net>> wrote:
> 
>     On Wed, Aug 24, 2016 at 10:39 AM, Andrew Libby
>     <andrew.libby@gmail.com <mailto:andrew.libby@gmail.com>> wrote:
>     >
>     > On 8/23/16 9:39 PM, Rich Freeman wrote:
>     >
>     >> 1. How to create/start/stop/snapshot/maintain/etc docker instances.
>     >
>     > At my last job we built a number of docker images to run various
>     > services.  This included things like rails and perl applications,
>     > java apps that were closed source, LDAP, mysql, postgres,
>     > and so forth.
>     >
>     > We initially used systemd units to start/stop containers, but wound
>     > up letting Docker itself manage them, typically passing
>     > --restart=always to docker run.
>     >
>     > Docker images are themselves somewhat like a snapshot, but we're
>     > careful to keep application data out of the image; it's stored either
>     > in a database linked to the container or in a persistent volume.
>     > Generally I used volumes mapped to a directory in /srv devoted to the
>     > container.
> 
>     I can buy that, though often there is caching data that I'd prefer not
>     be re-created every single time an instance starts up, even if it
>     could be.  And I'd prefer not to have a laundry list of bind-mounts to
>     stuff outside the instance for every single instance, or whatever.

I'm with you on the bind mounts.  I've been pretty successful at keeping
the number down.  The Docker folks advise a single service (or even a
single process) per container, and if you keep the workload in a
container simple, in practice you can keep the mounting down as well.
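
For what it's worth, here's roughly what a single-service container of
ours looked like: one data mount under /srv and nothing else (the image
name and paths here are hypothetical):

    # one service per container, one data volume under /srv
    docker run -d \
        --name myapp \
        --restart=always \
        -v /srv/myapp/data:/var/lib/myapp \
        -p 8080:8080 \
        myorg/myapp:latest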

> 
>     It still leaves me with an issue.  I create a home mythtv server
>     image, for example.  In two weeks I want to apply package updates to
>     it.  Even if I roll back to the image file every time I start an
>     instance, I still need to launch an instance, deploy my updates, and
>     then capture that as a new image.  Either that or I'm building a new
>     image from scratch every few weeks, which would be painful.

We never patch live containers.  We rebuild images and re-create the
container.  As long as you adhere to a one-step build and have a good way
to destroy and re-create your containers, this is pretty straightforward.
I've done things like having a script for each container, or used systemd
units.
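
A minimal sketch of what I mean by a per-container script (names are
hypothetical); the point is that rebuilding and replacing a container is
a single step:

    #!/bin/sh
    # rebuild the image, then destroy and re-create the container
    set -e
    docker build -t myorg/myapp:latest /srv/myapp/build
    docker stop myapp 2>/dev/null || true
    docker rm myapp 2>/dev/null || true
    docker run -d --name myapp --restart=always \
        -v /srv/myapp/data:/var/lib/myapp \
        myorg/myapp:latest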


> 
>     For many of my images I'm going to probably end up starting from
>     scratch and not using the registry.  Or, at best I'll end up being the
>     person maintaining the registry, though I don't know if I'm going to
>     bother keeping a generic OS-only image since I'm not using anything
>     like Chef/Ansible.

I did this almost every time.  This was one of the philosophical
differences between me and some folks I worked with: I'd almost always
build an image that was highly specific to my need.  I might extend an
existing image, or, as I commonly did, crib portions of my Dockerfiles
from the work of others.
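
As a sketch of what extending an image looks like, a Dockerfile for one
of these highly specific images is usually just a FROM plus a handful of
our own layers (the package and file names below are made up):

    # Dockerfile: build on someone else's base, add our specifics
    FROM debian:jessie
    RUN apt-get update && apt-get install -y myapp \
        && rm -rf /var/lib/apt/lists/*
    COPY myapp.conf /etc/myapp/myapp.conf
    CMD ["/usr/bin/myapp", "--foreground"]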

> 
>     >
>     > If I were to do application snapshots, I'd use the underlying
>     > filesystem/ storage technology to perform those snapshots.
> 
>     I really don't like the idea of using an application that does image
>     management, and then trying to manage my images outside of it.  I feel
>     like I'd end up fighting Docker.

The snapshots I'm referring to here are not the images but the application
data I'm volume-mounting into containers.  I back this stuff up just
like any other data.  A nice thing about this approach is that we never
back up anything that is in an image.  This reduces the overall backup
burden considerably.
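
Concretely, because everything worth keeping lives under /srv, the
backup job only has to cover that tree.  Something like this (host and
paths are just an illustration):

    # back up per-container data volumes; images are rebuildable, skip them
    rsync -a --delete /srv/ backuphost:/backups/$(hostname)/srv/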

> 
>     If I wanted to manually maintain snapshots at the file level I'd just
>     stick with nspawn.
> 
>     >
>     >> 2. How to build docker images from directory trees or using installers/etc.
>     >
>     > I generally keep copies of configuration files for the service in
>     > git and copy them into the image at build time.  Some folks have
>     > docker RUN statements that use things like sed to edit configuration
>     > files for the services that are already in the base image they're
>     > building from.
> 
>     So if I were going to go that route I'd use Ansible or Chef.  However,
>     I'm content with just setting up a host once and then maintaining the
>     image over time.
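
To make the copy-at-build-time approach concrete, it's usually just a
line or two in the Dockerfile; the sed variant instead edits whatever
config the base image shipped.  File names below are hypothetical:

    # copy our git-tracked config into the image at build time
    COPY config/slapd.conf /etc/openldap/slapd.conf

    # or: edit the base image's stock config in place
    RUN sed -i 's/^#loglevel.*/loglevel 256/' /etc/openldap/slapd.conf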
> 
>     >
>     >> 4. How to leverage btrfs snapshots if possible, since I'm running this
>     >> on btrfs anyway.
>     >
>     > We had some btrfs, and while we didn't snapshot with it, I imagine you
>     > could have subvolumes in /srv (to riff on our approach), and snapshot
>     > those.  I imagine that'd work fine.
>     >
>     > We also experimented with keeping application data that's
>     > non-ephemeral and not appropriate for keeping in images in Ceph.
>     >
> 
>     Actually, it looks like it just uses btrfs natively if built with
>     btrfs support.  I just need to figure out how to get it to snapshot
>     instances into new images and such.

Early on, CoreOS shipped based on btrfs.  They switched to ext4
somewhere along the way, and we wound up having systems using both.
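
If you do end up snapshotting subvolumes in /srv like I suggested above,
it's the standard btrfs incantation (paths hypothetical):

    # one subvolume per container's data, snapshotted read-only
    btrfs subvolume create /srv/myapp
    btrfs subvolume snapshot -r /srv/myapp /srv/.snapshots/myapp-$(date +%F)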

> 
>     Thanks - I'll keep tinkering with it.
> 
>     --
>     Rich

___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug