Message-ID: <1684638419.64320214.1547530506805.JavaMail.zimbra@redhat.com>
Date: Tue, 15 Jan 2019 00:35:06 -0500 (EST)
From: Pankaj Gupta <pagupta@...hat.com>
To: Dave Chinner <david@...morbit.com>
Cc: Dan Williams <dan.j.williams@...el.com>,
Matthew Wilcox <willy@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
Qemu Developers <qemu-devel@...gnu.org>,
linux-nvdimm <linux-nvdimm@...1.01.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
virtualization@...ts.linux-foundation.org,
Linux ACPI <linux-acpi@...r.kernel.org>,
linux-ext4 <linux-ext4@...r.kernel.org>,
linux-xfs <linux-xfs@...r.kernel.org>, Jan Kara <jack@...e.cz>,
Stefan Hajnoczi <stefanha@...hat.com>,
Rik van Riel <riel@...riel.com>,
Nitesh Narayan Lal <nilal@...hat.com>,
Kevin Wolf <kwolf@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Ross Zwisler <zwisler@...nel.org>,
vishal l verma <vishal.l.verma@...el.com>,
dave jiang <dave.jiang@...el.com>,
David Hildenbrand <david@...hat.com>,
jmoyer <jmoyer@...hat.com>,
xiaoguangrong eric <xiaoguangrong.eric@...il.com>,
Christoph Hellwig <hch@...radead.org>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>, lcapitulino@...hat.com,
Igor Mammedov <imammedo@...hat.com>,
Eric Blake <eblake@...hat.com>, Theodore Ts'o <tytso@....edu>,
adilger kernel <adilger.kernel@...ger.ca>,
darrick wong <darrick.wong@...cle.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v3 0/5] kvm "virtio pmem" device
> > > On Mon, Jan 14, 2019 at 02:15:40AM -0500, Pankaj Gupta wrote:
> > > >
> > > > > > Until you have images (and hence host page cache) shared between
> > > > > > multiple guests. People will want to do this, because it means they
> > > > > > only need a single set of pages in host memory for executable
> > > > > > binaries rather than a set of pages per guest. Then you have
> > > > > > multiple guests being able to detect residency of the same set of
> > > > > > pages. If the guests can then, in any way, control eviction of the
> > > > > > pages from the host cache, then we have a guest-to-guest
> > > > > > information
> > > > > > leak channel.
> > > > >
> > > > > I don't think we should ever be considering something that would
> > > > > allow a
> > > > > guest to evict pages from the host's pagecache [1]. The guest
> > > > > should
> > > > > be able to kick its own references to the host's pagecache out of its
> > > > > own pagecache, but not be able to influence whether the host or
> > > > > another
> > > > > guest has a read-only mapping cached.
> > > > >
> > > > > [1] Unless the guest is allowed to modify the host's file; obviously
> > > > > truncation, holepunching, etc are going to evict pages from the
> > > > > host's
> > > > > page cache.
> > > >
> > > > This is correct. The guest does not evict host page cache pages
> > > > directly.
> > >
> > > They don't right now.
> > >
> > > But someone is going to end up asking for discard to work so that
> > > the guest can free unused space in the underlying sparse image (i.e.
> > > make use of fstrim or mount -o discard) because they have workloads
> > > that have bursts of space usage and they need to trim the image
> > > files afterwards to keep their overall space usage under control.
> > >
> > > And then....
> >
> > ...we reject / push back on that patch citing the above concern.
>
> So at what point do we draw the line?
>
> We're allowing writable DAX mappings, but as I've pointed out that
> means we are going to be allowing a potential information leak via
> files with shared extents to be directly mapped and written to.
>
> But we won't allow useful admin operations that allow better
> management of host side storage space similar to how normal image
> files are used by guests because it's an information leak vector?
First of all, thank you for all the useful discussion.
I am summarizing the conclusions here:
- We have to live with the limitation of not supporting fstrim and
  mount -o discard with virtio-pmem, as they would evict host page
  cache pages. We cannot allow this for virtio-pmem for security
  reasons. For now, these filesystem commands will just zero out the
  unused blocks on the guest side (see the first sketch after this
  list).
- If a lot of space is unused and not freed, the guest can ask the
  host administrator to truncate the host backing image. We are also
  planning to support the qcow2 sparse image format on the host side
  with virtio-pmem.
- There is currently no existing solution for QEMU persistent memory
  emulation with write support. This series provides a paravirtualized
  way of emulating persistent memory: it does not emulate ACPI
  structures, but instead uses VIRTIO for communication between guest
  and host. It is fast because of its asynchronous nature, and it
  works well. On the guest side it uses the libnvdimm APIs (see the
  second sketch below).
- If reclaiming unused disk space from guest files via trim/truncate
  is very important for users, they can still use real hardware, which
  gives them both (advanced disk features & page cache bypass).
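To make the first point concrete, below is a minimal userspace sketch
of the request fstrim(8) issues (the mount point is hypothetical and
not part of the patches). With virtio-pmem the resulting discard is
not propagated to the host backing file, so it cannot hole-punch the
image or evict host page cache pages; it is expected to fail or be a
no-op as far as the host image is concerned:

/*
 * Sketch of the FITRIM request that fstrim(8) issues.  The mount
 * point is hypothetical; with virtio-pmem the discard is not
 * propagated to the host backing file.
 */
#include <stdio.h>
#include <fcntl.h>
#include <limits.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>			/* FITRIM, struct fstrim_range */

int main(void)
{
	struct fstrim_range range = {
		.start  = 0,
		.len    = ULLONG_MAX,	/* whole filesystem */
		.minlen = 0,
	};
	int fd = open("/mnt/pmem", O_RDONLY);	/* hypothetical DAX mount */

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FITRIM, &range) < 0)
		perror("FITRIM");	/* expected to be rejected or a no-op */

	close(fd);
	return 0;
}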
Considering all the above reasons, I think this feature is useful
from a virtualization point of view. As Dave rightly said, we should
be careful, and I think we are now careful about the security
implications of this device.
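For completeness, here is a minimal guest-side sketch of the
persistence model from the third point above (the path is
hypothetical). Stores go through the DAX mapping directly to the
host-mapped memory; the fsync() is what is expected to be translated
by the guest driver into an asynchronous flush request to the host
over VIRTIO, which then syncs the backing file. Nothing in this path
lets the guest evict or discard host page cache pages; it only asks
the host to write them back:

/*
 * Guest-side sketch of the virtio-pmem persistence model.  The path
 * is hypothetical.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	const char *path = "/mnt/pmem/data";	/* file on a DAX mount */
	int fd = open(path, O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ftruncate(fd, 4096) < 0) {
		perror("open/ftruncate");
		return 1;
	}

	char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memcpy(buf, "hello", 6);	/* store hits host page cache via DAX */

	if (fsync(fd) < 0)		/* expected to trigger the virtio flush */
		perror("fsync");

	munmap(buf, 4096);
	close(fd);
	return 0;
}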
Thanks again for all the inputs.
Best regards,
Pankaj
>
> That's splitting some really fine hairs there...
>
> > > > In the case of virtio-pmem & DAX, the guest clears guest page cache
> > > > exceptional entries. It is solely the host's decision to take action
> > > > on the host page cache pages.
> > > >
> > > > In the case of virtio-pmem, the guest does not modify the host file
> > > > directly, i.e. it does not perform hole punch or truncate operations
> > > > on the host file.
> > >
> > > ... this will no longer be true, and the nuclear landmine in this
> > > driver interface will have been armed....
> >
> > I agree with the need to be careful when / if explicit cache control
> > is added, but that's not the case today.
>
> "if"?
>
> I expect it to be "when", not if. Expect the worst, plan for it now.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com
>