Keith C. Perry on 12 Oct 2015 12:45:14 -0700


Re: [PLUG] Video link: "Virtualizing Bare-Metal Systems with QEMU" and meeting bookmarks

Agreed.  QEMU used to have an internal CIFS (Samba) type of connectivity, but I seem to recall it went away.

However, it looks like there is now something called VirtFS

Kinda cool to see Plan 9 mentioned in a production context  :D
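For anyone who wants to poke at it, VirtFS sharing (virtio-9p) looks roughly like this -- the share path, image name, and mount tag below are placeholders, so adjust to taste:

```shell
# On the host: export a directory to the guest over virtio-9p (VirtFS).
# "hostshare" is an arbitrary mount tag; /srv/share is a hypothetical path.
qemu-system-x86_64 \
  -enable-kvm -m 2048 \
  -drive file=guest.qcow2,format=qcow2 \
  -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr

# In the guest: mount the tag as a 9p filesystem over the virtio transport.
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare
```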

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Keith C. Perry, MS E.E.
Owner, DAO Technologies LLC
(O) +1.215.525.4165 x2033
(M) +1.215.432.5167

From: "Rich Mingin (PLUG)" <>
To: "Philadelphia Linux User's Group Discussion List" <>
Sent: Monday, October 12, 2015 1:21:15 PM
Subject: Re: [PLUG] Video link: "Virtualizing Bare-Metal Systems with QEMU" and meeting bookmarks

Right, the biggest problem with two systems directly accessing a single filesystem is concurrency. If you can safeguard against that, it's sorta-mostly-kinda-safe, but still strongly advised against.
I'm still transitioning to a more strongly FOSS virtualization stack, but VMware uses a special pseudo-networking mode to share FSes between hosts and guests. It looks like a network filesystem to guests, so they're aware of changes, but behind the curtain it acts more like raw disk access. If Qemu/KVM doesn't have something similar, it might be worth putting together a GoFundMe to raise a bounty to get something like it implemented.

On Mon, Oct 12, 2015 at 1:17 PM, Keith C. Perry <> wrote:
Yep, that's why I made mention of "raw" disk images.  :)

Since I usually keep the system volume separate for my VMs (or at least while I am building them), I can loop mount the partition, do whatever bulk data moves are needed (e.g. move source code for later compiling), and then convert back to qcow2.
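For the curious, that convert / loop-mount round trip is something like this (image name, loop device, and mount point are placeholders; needs root):

```shell
# Convert the qcow2 image to raw so its partitions can be loop-mounted.
qemu-img convert -f qcow2 -O raw vm-disk.qcow2 vm-disk.raw

# Attach the raw image; -P scans the partition table and creates
# /dev/loopNp1, /dev/loopNp2, ... device nodes.
sudo losetup -fP --show vm-disk.raw   # prints e.g. /dev/loop0

# Mount a partition, do the bulk copy, then tear down.
sudo mount /dev/loop0p1 /mnt/vmroot
# ... move data in or out ...
sudo umount /mnt/vmroot
sudo losetup -d /dev/loop0

# Convert back to qcow2 when done.
qemu-img convert -f raw -O qcow2 vm-disk.raw vm-disk.qcow2
```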

Steve, if you're talking about having an ongoing live share between the host and a guest's disk, then sshfs or any other network filesystem is the safest but slowest option.
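The sshfs route is a one-liner from inside the guest (user, host address, and paths below are made up):

```shell
# In the guest: mount a host directory over SSH via FUSE.
# 192.168.122.1 is the usual libvirt default-network host address.
sshfs user@192.168.122.1:/srv/share /mnt/hostshare

# ... work with the files ...

# Unmount the FUSE filesystem when finished.
fusermount -u /mnt/hostshare
```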

As previously mentioned, using the virtio devices is generally the most efficient as well.

Fun fact... sharing a raw disk image between a VM host and guest does seem to work (I did that when I was building some FOSScon USB multiboot keys) but I'm not sure it's "safe".  You would have to take care not to make changes to the same files at the same time. YMMV


On Oct 12, 2015 12:52 PM, Rich Freeman <> wrote:
> On Mon, Oct 12, 2015 at 12:09 PM, Steve Litt <> wrote:
> > On Mon, 12 Oct 2015 10:40:56 -0400 (EDT)
> > "Keith C. Perry" <> wrote:
> >
> >
> >> I'm not sure how many people are familiar with how to use "mount
> >> --bind".  It's one of the things I mentioned that was needed for one
> >> the approaches to virtualizing a system (i.e. reinstalling boot
> >> information) but was out of scope for the talk.
> >
> > Hi Keith,
> >
> > If you know of a way to bind mount a Qemu host directory within a Qemu
> > guest, I'd really love to hear it. Right now I'm doing a sshfs to
> >, but as you can imagine, it's slow as molasses.
> That was what I was getting at with my question around bind mounts.
> There are ways of mounting filesystems through qemu, but while they
> might behave like bind mounts, they aren't actually bind mounts.
> Within the guest you could certainly use bind mounts.
> Bind-mounting from a host into a container isn't a problem since they
> share the same kernel.  A VM does not use the same kernel as the host
> (granted, with paravirtualization it gets a bit more fuzzy but the
> same principles apply).
> --
> Rich
> ___________________________________________________________________________
> Philadelphia Linux Users Group         --
> Announcements -
> General Discussion  --