Message-ID: <aNx0z2XZaJZxQ44W@google.com>
Date: Tue, 30 Sep 2025 17:24:47 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Yan Zhao <yan.y.zhao@...el.com>, Fuad Tabba <tabba@...gle.com>, 
	Binbin Wu <binbin.wu@...ux.intel.com>, Michael Roth <michael.roth@....com>, 
	Ira Weiny <ira.weiny@...el.com>, Rick P Edgecombe <rick.p.edgecombe@...el.com>, 
	Vishal Annapurve <vannapurve@...gle.com>, David Hildenbrand <david@...hat.com>, 
	Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v2 06/51] KVM: Query guest_memfd for private/shared status

On Wed, May 28, 2025, Yan Zhao wrote:
> On Wed, May 28, 2025 at 04:08:34PM +0800, Binbin Wu wrote:
> > On 5/27/2025 11:55 AM, Yan Zhao wrote:
> > > On Wed, May 14, 2025 at 04:41:45PM -0700, Ackerley Tng wrote:
> > > > @@ -2544,13 +2554,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
> > > >   		return false;
> > > >   	slot = gfn_to_memslot(kvm, gfn);
> > > > -	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
> > > > -		/*
> > > > -		 * For now, memslots only support in-place shared memory if the
> > > > -		 * host is allowed to mmap memory (i.e., non-Coco VMs).
> > > > -		 */
> > > > -		return false;
> > > > -	}
> > > > +	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot))
> > > > +		return kvm_gmem_is_private(slot, gfn);
> > > When userspace gets an exit with reason KVM_EXIT_MEMORY_FAULT, it looks like it
> > > needs to update both the KVM memory attributes and gmem shareability via two
> > > separate ioctls?
> > IIUC, when userspace sets the GUEST_MEMFD_FLAG_SUPPORT_SHARED flag when creating
> > the guest_memfd, memory attribute checks go through guest_memfd and the
> > information in kvm->mem_attr_array is not used.
> > 
> > So if userspace sets GUEST_MEMFD_FLAG_SUPPORT_SHARED, it uses
> > KVM_GMEM_CONVERT_SHARED/PRIVATE to update gmem shareability.
> > If userspace doesn't set GUEST_MEMFD_FLAG_SUPPORT_SHARED, it still uses
> > KVM_SET_MEMORY_ATTRIBUTES to update KVM memory attribute tracking.
> Ok, so userspace needs to inspect the memory region and guest_memfd to choose
> the right ioctl.

I don't see any reason to support "split" models like this.  Tracking PRIVATE in
two separate locations would be all kinds of crazy.  E.g. if a slot is temporarily
deleted, memory could unexpectedly toggle between private and shared.  As evidenced
by Yan's questions, the cognitive load on developers would also be very high.
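
Concretely, the split model forces every VMM to do something like the below on
every conversion.  Purely illustrative: KVM_SET_MEMORY_ATTRIBUTES and
KVM_MEMORY_ATTRIBUTE_PRIVATE are existing uAPI, KVM_GMEM_CONVERT_PRIVATE is the
ioctl from this series, and the wrapper around it is made up.

  #include <stdbool.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Hypothetical VMM wrapper around the KVM_GMEM_CONVERT_PRIVATE ioctl. */
  int gmem_convert_to_private(int gmem_fd, uint64_t offset, uint64_t size);

  static int vmm_make_private(int vm_fd, int gmem_fd, uint64_t gpa,
                              uint64_t offset, uint64_t size, bool shared_gmem)
  {
          /* guest_memfd created with GUEST_MEMFD_FLAG_SUPPORT_SHARED? */
          if (shared_gmem)
                  return gmem_convert_to_private(gmem_fd, offset, size);

          /* Otherwise PRIVATE lives in the per-VM kvm->mem_attr_array. */
          struct kvm_memory_attributes attrs = {
                  .address    = gpa,
                  .size       = size,
                  .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
          };
          return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
  }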

Just make userspace choose between per-VM and per-gmem, and obviously allow
in-place conversions if and only if attributes are per-gmem.
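
E.g. enforce it at guest_memfd creation with something like the below (sketch
only; kvm_gmem_has_attributes() is a stand-in for however the per-VM vs.
per-gmem choice ends up being plumbed):

  /*
   * Don't allow mixing the two models: in-place (shared) support implies
   * that shareability/attributes are tracked by guest_memfd itself.
   */
  if ((flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED) && !kvm_gmem_has_attributes())
          return -EINVAL;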

I (or someone else?) suggested adding a capability to disable per-VM tracking, but
I don't see any reason to allow userspace to opt out on a per-VM basis either.
The big selling point of in-place conversions is that it avoids having to double
provision some amount of guest memory.  Those types of changes go far beyond the
VMM.  So I have a very hard time imagining a use case where VMM A will want to
use per-VM attributes while VMM B will want per-gmem attributes.

Using a read-only module param will also simplify the internal code, as KVM will
be able to route memory attribute queries without needing a pointer to the
"struct kvm".

In the future, we might have to swizzle things, e.g. if we want per-VM RWX
attributes, but that's largely a future problem, and a module param also gives us
more flexibility anyway since module params tend not to be treated as rigid ABI
in KVM.
