Date:   Wed, 16 Jan 2019 07:42:23 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Pankaj Gupta <pagupta@...hat.com>
Cc:     Dan Williams <dan.j.williams@...el.com>,
        Matthew Wilcox <willy@...radead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        KVM list <kvm@...r.kernel.org>,
        Qemu Developers <qemu-devel@...gnu.org>,
        linux-nvdimm <linux-nvdimm@...1.01.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        virtualization@...ts.linux-foundation.org,
        Linux ACPI <linux-acpi@...r.kernel.org>,
        linux-ext4 <linux-ext4@...r.kernel.org>,
        linux-xfs <linux-xfs@...r.kernel.org>, Jan Kara <jack@...e.cz>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        Rik van Riel <riel@...riel.com>,
        Nitesh Narayan Lal <nilal@...hat.com>,
        Kevin Wolf <kwolf@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Ross Zwisler <zwisler@...nel.org>,
        vishal l verma <vishal.l.verma@...el.com>,
        dave jiang <dave.jiang@...el.com>,
        David Hildenbrand <david@...hat.com>,
        jmoyer <jmoyer@...hat.com>,
        xiaoguangrong eric <xiaoguangrong.eric@...il.com>,
        Christoph Hellwig <hch@...radead.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>, lcapitulino@...hat.com,
        Igor Mammedov <imammedo@...hat.com>,
        Eric Blake <eblake@...hat.com>, Theodore Ts'o <tytso@....edu>,
        adilger kernel <adilger.kernel@...ger.ca>,
        darrick wong <darrick.wong@...cle.com>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v3 0/5] kvm "virtio pmem" device

On Tue, Jan 15, 2019 at 12:35:06AM -0500, Pankaj Gupta wrote:
> 
> > > > On Mon, Jan 14, 2019 at 02:15:40AM -0500, Pankaj Gupta wrote:
> > > > >
> > > > > > > Until you have images (and hence host page cache) shared between
> > > > > > > multiple guests. People will want to do this, because it means they
> > > > > > > only need a single set of pages in host memory for executable
> > > > > > > binaries rather than a set of pages per guest. Then you have
> > > > > > > multiple guests being able to detect residency of the same set of
> > > > > > > pages. If the guests can then, in any way, control eviction of the
> > > > > > > pages from the host cache, then we have a guest-to-guest
> > > > > > > information leak channel.
> > > > > >
> > > > > > I don't think we should ever be considering something that
> > > > > > would allow a guest to evict pages from the host's pagecache
> > > > > > [1].  The guest should be able to kick its own references to
> > > > > > the host's pagecache out of its own pagecache, but not be
> > > > > > able to influence whether the host or another guest has a
> > > > > > read-only mapping cached.
> > > > > >
> > > > > > [1] Unless the guest is allowed to modify the host's file;
> > > > > > obviously truncation, holepunching, etc are going to evict
> > > > > > pages from the host's page cache.
> > > > >
> > > > > This is correct. The guest does not evict host page cache
> > > > > pages directly.
> > > >
> > > > They don't right now.
> > > >
> > > > But someone is going to end up asking for discard to work so that
> > > > the guest can free unused space in the underlying sparse image (i.e.
> > > > make use of fstrim or mount -o discard) because they have workloads
> > > > that have bursts of space usage and they need to trim the image
> > > > files afterwards to keep their overall space usage under control.
> > > >
> > > > And then....
> > > 
> > > ...we reject / push back on that patch citing the above concern.
> > 
> > So at what point do we draw the line?
> > 
> > We're allowing writable DAX mappings, but as I've pointed out, that
> > means we are going to be allowing a potential information leak via
> > files with shared extents being directly mapped and written to.
> > 
> > But we won't allow useful admin operations that allow better
> > management of host-side storage space, similar to how normal image
> > files are used by guests, because it's an information leak vector?
> 
> First of all, thank you for all the useful discussion.
> I am summarizing here:
> 
> - We have to live with the limitation of not supporting fstrim and
>   the mount -o discard option with virtio-pmem, as they would evict
>   host page cache pages. We cannot allow this for virtio-pmem
>   for security reasons. Currently, these filesystem commands will
>   just zero out the unused pages.

Not sure I follow you here - what pages are going to be zeroed, and
when will they be zeroed? If discard is not allowed, filesystems
just don't issue such commands and the underlying device will never
see them.

> - If a lot of space is unused and not freed, the guest can ask the
>   host administrator to truncate the host backing image.

You can't use truncate to free space in a disk image file. The only
way to do it safely in a generic, filesystem agnostic way is to
mount the disk image (e.g. on loopback) and run fstrim on it. The
loopback device then punches holes in the file where all the free
space is reported by the filesystem via discard requests.
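
At the syscall level each of those discard requests amounts to an
fallocate() hole punch on the backing file. A minimal illustrative
sketch (the image name, offset and length are made up for the
example):

    /*
     * Illustrative only: what a discard request from the guest fs
     * turns into on the backing image file.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("guest-image.raw", O_RDWR);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /*
         * Give 1MB of guest-reported free space at offset 4MB back
         * to the host filesystem. FALLOC_FL_PUNCH_HOLE has to be
         * paired with FALLOC_FL_KEEP_SIZE so the image file's
         * apparent size is unchanged.
         */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      4 * 1024 * 1024, 1024 * 1024) < 0)
            perror("fallocate");

        close(fd);
        return 0;
    }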

Which is kinda my point - this could only be done if the guest is
shut down, which makes it very difficult for admins to manage. 

>   We are also planning to support the qcow2 sparse image format on
>   the host side with virtio-pmem.

So you're going to be remapping a huge number of disjoint regions
into a linear pmem mapping? ISTR discussions about similar things
for virtio+fuse+dax that came up against "large numbers of mapped
regions don't scale" and so it wasn't a practical solution compared
to just using raw sparse files....

> - Currently there is no existing solution for QEMU persistent memory
>   emulation with write support. This solution provides a
>   paravirtualized way of emulating persistent memory.

Sure, but the question is: why do you need to create an emulation
that doesn't actually perform like pmem? The whole point of pmem is
performance, and emulating pmem by mmap() of a file on spinning
disks is going to be horrible for performance. Even on SSDs it's
going to be orders of magnitude slower than real pmem.
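
To make the difference concrete, here's a minimal sketch of what
file-backed "pmem" boils down to (the backing file name is made up):
every durability point becomes an msync(), i.e. a full page writeback
through the host page cache and block layer, where real pmem needs
only a handful of CPU cache flush instructions:

    /*
     * Illustrative only: "persistent memory" emulated by mmap() of
     * a regular file.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096;
        int fd = open("fake-pmem.img", O_RDWR | O_CREAT, 0644);
        void *p;

        if (fd < 0 || ftruncate(fd, len) < 0) {
            perror("setup");
            return 1;
        }

        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memcpy(p, "data", 4);

        /*
         * The durability point: on real pmem this is a few cache
         * flushes; here it's a synchronous writeback through the
         * host's storage stack to the (slow) media.
         */
        if (msync(p, len, MS_SYNC) < 0)
            perror("msync");

        munmap(p, len);
        close(fd);
        return 0;
    }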

So exactly what problem are you trying to solve with this driver?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
