Message-ID: <CADrL8HWH3d2r12xWv+fYM5mfUnnavLBhHDhof0MwGKeroJHWHA@mail.gmail.com>
Date: Thu, 8 Aug 2024 12:04:35 -0700
From: James Houghton <jthoughton@...gle.com>
To: "Wang, Wei W" <wei.w.wang@...el.com>
Cc: Sean Christopherson <seanjc@...gle.com>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Peter Xu <peterx@...hat.com>, 
	Paolo Bonzini <pbonzini@...hat.com>, Oliver Upton <oliver.upton@...ux.dev>, 
	Axel Rasmussen <axelrasmussen@...gle.com>, David Matlack <dmatlack@...gle.com>, 
	Anish Moorthy <amoorthy@...gle.com>
Subject: Re: [ANNOUNCE] PUCK Agenda - 2024.08.07 - KVM userfault
 (guest_memfd/HugeTLB postcopy)

On Thu, Aug 8, 2024 at 5:15 AM Wang, Wei W <wei.w.wang@...el.com> wrote:
>
> On Thursday, August 8, 2024 1:22 AM, James Houghton wrote:
> > 1. For guest_memfd, stage 2 mapping installation will never go through GUP /
> > virtual addresses to do the GFN --> PFN translation, including when it supports
> > non-private memory.
> > 2. Something like KVM Userfault is indeed necessary to handle post-copy for
> > guest_memfd VMs, especially when guest_memfd supports non-private
> > memory.
> > 3. We should not hook into the overall GFN --> HVA translation, we should
> > only be hooking the GFN --> PFN translation steps to figure out how to create
> > stage 2 mappings. That is, KVM's own accesses to guest memory should just go
> > through mm/userfaultfd.
>
> Sorry.. still a bit confused about this one: will gmem finally support GUP and VMA?
> For 1. above, seems no, but for 3. here, KVM's own accesses to gmem will go
> through userfaultfd via GUP?
> Also, how would vhost's access to gmem get faulted to userspace?

Hi Wei,

From what we discussed in the meeting, guest_memfd will be mappable
into userspace (so VMAs can be created for it), and so GUP will be
able to work on it. However, KVM will *not* use GUP to do gfn ->
pfn translations when installing stage 2 mappings. (For guest-private
memory, GUP cannot be used at all, but the claim is that GUP will
never be used, regardless of whether the memory is guest-private or
guest-shared.)
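To make the distinction concrete, here is a rough, purely illustrative
sketch of the two resolution paths (function and field names here are
made up for the sketch, not actual KVM code):

```
/* Illustrative pseudocode only -- names are hypothetical. */
static kvm_pfn_t stage2_fault_to_pfn(struct kvm *kvm, gfn_t gfn)
{
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

	if (slot_is_guest_memfd(slot)) {
		/*
		 * guest_memfd path: resolve the pfn directly from the
		 * backing file at the slot offset for this gfn. No VMA
		 * walk, no get_user_pages() -- and per the discussion,
		 * this holds for both private and shared memory.
		 */
		return gmem_get_pfn(slot, gfn);
	}

	/*
	 * Traditional path: gfn -> hva via the memslot, then
	 * hva -> pfn via GUP.
	 */
	return hva_to_pfn(gfn_to_hva_memslot(slot, gfn));
}
```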

KVM's own accesses to guest memory (i.e., places where it does
copy_to/from_user) will go through GUP; by default, that's just how
it works. What I'm saying is that we aren't going to add anything
extra to have "KVM Userfault" prevent KVM from doing a
copy_to/from_user (unlike in the RFC, where KVM Userfault could block
the gfn -> hva translation).

vhost's accesses to guest memory will work the same way as KVM's: they
will go through copy_to/from_user.
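In other words, an in-kernel guest-memory read would stay as plain a
sketch as this (again purely illustrative pseudocode, not the real
function body):

```
/* Illustrative pseudocode only. */
int read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int len)
{
	unsigned long hva = gfn_to_hva(kvm, gfn);

	/*
	 * No extra "is this gfn userfault-armed?" check here (unlike
	 * the RFC). If userspace registered this range with
	 * userfaultfd, the fault below is delivered to userspace by
	 * the mm layer and resolved (or failed) there.
	 */
	if (copy_from_user(data, (void __user *)hva, len))
		return -EFAULT;
	return 0;
}
```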

Hopefully that's a little clearer. :)
