Message-ID: <aSUeniY1WCeaPobT@yzhao56-desk.sh.intel.com>
Date: Tue, 25 Nov 2025 11:12:30 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Ira Weiny <ira.weiny@...el.com>
CC: Michael Roth <michael.roth@....com>, <kvm@...r.kernel.org>,
<linux-coco@...ts.linux.dev>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <thomas.lendacky@....com>,
<pbonzini@...hat.com>, <seanjc@...gle.com>, <vbabka@...e.cz>,
<ashish.kalra@....com>, <liam.merwick@...cle.com>, <david@...hat.com>,
<vannapurve@...gle.com>, <ackerleytng@...gle.com>, <aik@....com>
Subject: Re: [PATCH 3/3] KVM: guest_memfd: GUP source pages prior to
populating guest memory
On Mon, Nov 24, 2025 at 09:53:03AM -0600, Ira Weiny wrote:
> Yan Zhao wrote:
> > On Fri, Nov 21, 2025 at 07:01:44AM -0600, Michael Roth wrote:
> > > On Thu, Nov 20, 2025 at 05:11:48PM +0800, Yan Zhao wrote:
> > > > On Thu, Nov 13, 2025 at 05:07:59PM -0600, Michael Roth wrote:
>
> [snip]
>
> > > > > @@ -2284,14 +2285,21 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
> > > > >  			goto err;
> > > > >  		}
> > > > >
> > > > > -		if (src) {
> > > > > -			void *vaddr = kmap_local_pfn(pfn + i);
> > > > > +		if (src_pages) {
> > > > > +			void *src_vaddr = kmap_local_pfn(page_to_pfn(src_pages[i]));
> > > > > +			void *dst_vaddr = kmap_local_pfn(pfn + i);
> > > > >
> > > > > -			if (copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE)) {
> > > > > -				ret = -EFAULT;
> > > > > -				goto err;
> > > > > +			memcpy(dst_vaddr, src_vaddr + src_offset, PAGE_SIZE - src_offset);
> > > > > +			kunmap_local(src_vaddr);
> > > > > +
> > > > > +			if (src_offset) {
> > > > > +				src_vaddr = kmap_local_pfn(page_to_pfn(src_pages[i + 1]));
> > > > > +
> > > > > +				memcpy(dst_vaddr + PAGE_SIZE - src_offset, src_vaddr, src_offset);
> > > > > +				kunmap_local(src_vaddr);
> > > > IIUC, src_offset is src's offset into the first page, e.g.,
> > > > src could be 0x7fea82684100, with src_offset=0x100, while npages could be 512.
> > > >
> > > > Then it looks like the two memcpy() calls here only work when npages == 1?
> > >
> > > src_offset ends up being the offset into the pair of src pages that we
> > > are using to fully populate a single dest page with each iteration. So
> > > if we start at src_offset, read a page worth of data, then we are now at
> > > src_offset in the next src page and the loop continues that way even if
> > > npages > 1.
> > >
> > > If src_offset is 0 we never have to bother with straddling 2 src pages so
> > > the 2nd memcpy is skipped on every iteration.
> > >
> > > That's the intent at least. Is there a flaw in the code/reasoning that I
> > > missed?
> > Oh, I got you. SNP expects a single src_offset to apply to each src page.
> >
> > So if npages = 2, there are 4 memcpy() calls.
> >
> > src: |---------|---------|---------| (VA contiguous)
> >           ^         ^         ^
> >           |         |         |
> > dst:      |---------|---------|      (PA contiguous)
> >
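To spell out my understanding in code form: each iteration fills one dst
page from (at most) two src pages, with src_offset constant across
iterations. A minimal sketch (my own simplification of the quoted patch,
error handling omitted):

	for (i = 0; i < npages; i++) {
		void *dst_vaddr = kmap_local_pfn(pfn + i);
		void *src_vaddr = kmap_local_pfn(page_to_pfn(src_pages[i]));

		/* Tail of src page i fills the head of dst page i. */
		memcpy(dst_vaddr, src_vaddr + src_offset, PAGE_SIZE - src_offset);
		kunmap_local(src_vaddr);

		if (src_offset) {
			/* Head of src page i + 1 fills the tail of dst page i. */
			src_vaddr = kmap_local_pfn(page_to_pfn(src_pages[i + 1]));
			memcpy(dst_vaddr + PAGE_SIZE - src_offset, src_vaddr, src_offset);
			kunmap_local(src_vaddr);
		}
		kunmap_local(dst_vaddr);
	}

So with npages = 2 and src_offset = 0x100, the 4 memcpy() calls copy src
bytes [0x100, 0x1000), [0x1000, 0x1100), [0x1100, 0x2000), and
[0x2000, 0x2100) into the two dst pages.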
>
> I'm not following the above diagram. Either src and dst are aligned and
Hmm, the src/dst legend in the above diagram just denotes source and target,
not the actual src user pointer.
> src_pages points to exactly one page, OR they are not aligned and
> src_pages points to 2 pages.
>
> src: |---------|---------| (VA contiguous)
>           ^         ^
>           |         |
> dst:      |---------|      (PA contiguous)
>
> Regardless, I think this is all bikeshedding over a feature that I really
> don't think buys us much: trying to allow the src to be misaligned.
>
> >
> > I previously incorrectly thought kvm_gmem_populate() should pass in src_offset
> > as 0 for the 2nd src page.
> >
> > Would you consider checking if params.uaddr is PAGE_ALIGNED() in
> > snp_launch_update() to simplify the design?
>
> I think this would help a lot... ATM I'm not even sure the algorithm
> works if order is not 0.
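Something along these lines in snp_launch_update() might be enough (an
untested sketch on my side, using the existing params.uaddr field):

	/* Sketch: reject a misaligned source buffer up front. */
	if (!PAGE_ALIGNED(params.uaddr))
		return -EINVAL;

Then the src pages could always be GUP'ed with src_offset == 0, and the
second memcpy() above would go away entirely.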
>
> [snip]
>
> >
> > > > Increasing GMEM_GUP_NPAGES to (1UL << PUD_ORDER) is probably not a good idea.
> > > >
> > > > Given both TDX/SNP map at 4KB granularity, why not just invoke post_populate()
> > > > per 4KB while removing the max_order from post_populate() parameters, as done
> > > > in Sean's sketch patch [1]?
> > >
> > > That's an option too, but SNP can make use of 2MB pages in the
> > > post-populate callback so I don't want to shut the door on that option
> > > just yet if it's not too much of a pain to work in. Given the guest BIOS
> > > lives primarily in 1 or 2 of these 2MB regions the benefits might be
> > > worthwhile, and SNP doesn't have a post-post-populate promotion path
> > > like TDX (at least, not one that would help much for guest boot times)
> > I see.
> >
> > So, what about below change?
>
> I'm not following what this change has to do with moving GUP out of the
> post_populate calls?
Without this change, TDX (and possibly SNP) would hit a warning when max_order > 0;
either GUP at 4KB granularity or this change would get rid of that warning.

Since this series already contains changes for 2MB pages (e.g., the batched GUP
that allows SNP to map 2MB pages; in fact, without huge pages in the picture we
wouldn't need the change in patch 1 at all), I don't see any reason to leave
this change out.
Note: kvm_gmem_populate() already contains this logic:

	while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order),
						KVM_MEMORY_ATTRIBUTE_PRIVATE,
						KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
		if (!max_order)
			goto put_folio_and_exit;
		max_order--;
	}
Also, the series is titled "Rework preparation/population flows in prep for
in-place conversion", so it's not just about "moving GUP out of the
post_populate", right? :)
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -878,11 +878,10 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
> >  		}
> >
> >  		folio_unlock(folio);
> > -		WARN_ON(!IS_ALIGNED(gfn, 1 << max_order) ||
> > -			(npages - i) < (1 << max_order));
> >
> >  		ret = -EINVAL;
> > -		while (!kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order),
> > +		while (!IS_ALIGNED(gfn, 1 << max_order) || (npages - i) < (1 << max_order) ||
> > +		       !kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order),
> >  							KVM_MEMORY_ATTRIBUTE_PRIVATE,
> >  							KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
> >  			if (!max_order)
> >
> >
> >
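i.e., with the above applied, the loop would read (sketch of the end
result, just for clarity):

	ret = -EINVAL;
	while (!IS_ALIGNED(gfn, 1 << max_order) || (npages - i) < (1 << max_order) ||
	       !kvm_range_has_memory_attributes(kvm, gfn, gfn + (1 << max_order),
						KVM_MEMORY_ATTRIBUTE_PRIVATE,
						KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
		if (!max_order)
			goto put_folio_and_exit;
		max_order--;
	}

so a gfn that isn't aligned to 1 << max_order (or a short tail of the
range) simply steps max_order down instead of tripping the WARN_ON().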
>
> [snip]