Message-ID: <aHUrKWaixqJyhsUU@google.com>
Date: Mon, 14 Jul 2025 09:07:05 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: David Hildenbrand <david@...hat.com>
Cc: Yan Zhao <yan.y.zhao@...el.com>, Michael Roth <michael.roth@....com>, pbonzini@...hat.com, 
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org, rick.p.edgecombe@...el.com, 
	kai.huang@...el.com, adrian.hunter@...el.com, reinette.chatre@...el.com, 
	xiaoyao.li@...el.com, tony.lindgren@...el.com, binbin.wu@...ux.intel.com, 
	dmatlack@...gle.com, isaku.yamahata@...el.com, ira.weiny@...el.com, 
	vannapurve@...gle.com, ackerleytng@...gle.com, tabba@...gle.com, 
	chao.p.peng@...el.com
Subject: Re: [RFC PATCH] KVM: TDX: Decouple TDX init mem region from kvm_gmem_populate()

On Mon, Jul 14, 2025, David Hildenbrand wrote:
> On 14.07.25 17:46, Sean Christopherson wrote:
> > On Mon, Jul 14, 2025, Yan Zhao wrote:
> > > On Fri, Jul 11, 2025 at 08:39:59AM -0700, Sean Christopherson wrote:
> > > > The below could be tweaked to batch get_user_pages() into an array of pointers,
> > > > but given that both SNP and TDX can only operate on one 4KiB page at a time, and
> > > > that hugepage support doesn't yet exist, trying to super optimize the hugepage
> > > > case straightaway doesn't seem like a pressing concern.
> > > 
> > > > static long __kvm_gmem_populate(struct kvm *kvm, struct kvm_memory_slot *slot,
> > > > 				struct file *file, gfn_t gfn, void __user *src,
> > > > 				kvm_gmem_populate_cb post_populate, void *opaque)
> > > > {
> > > > 	pgoff_t index = kvm_gmem_get_index(slot, gfn);
> > > > 	struct page *src_page = NULL;
> > > > 	bool is_prepared = false;
> > > > 	struct folio *folio;
> > > > 	int ret, max_order;
> > > > 	kvm_pfn_t pfn;
> > > > 
> > > > 	if (src) {
> > > > 		ret = get_user_pages((unsigned long)src, 1, 0, &src_page);
> > > get_user_pages_fast()?
> > > 
> > > get_user_pages() can't pass the mmap_assert_locked() assertion, since
> > > mmap_lock isn't held here.
> > 
> > Oh, I forgot get_user_pages() requires mmap_lock to already be held.  I would
> > prefer to not use a fast variant, so that userspace isn't required to prefault
> > (and pin?) the source.
> > 
> > So get_user_pages_unlocked()?
> 
> Yes, but likely we really want get_user_pages_fast(), which will fall back
> to GUP-slow (and take the lock) if it doesn't find what it needs in the
> page tables.
> 
> get_user_pages_fast_only() would be the variant that doesn't fallback to
> GUP-slow.

Doh, right, that's indeed what I want.
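
I.e. something like this for the snippet quoted above (completely untested,
and the error handling is just a sketch, not lifted from the actual patch):

	if (src) {
		/*
		 * Try GUP-fast first; it falls back to GUP-slow (taking
		 * mmap_lock internally) if the PTEs aren't already present,
		 * so userspace isn't required to prefault the source.
		 */
		ret = get_user_pages_fast((unsigned long)src, 1, 0, &src_page);
		if (ret != 1)
			return ret < 0 ? ret : -EFAULT;
	}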
