Message-ID: <aAsJZuLjOAYriz8v@yzhao56-desk.sh.intel.com>
Date: Fri, 25 Apr 2025 12:02:46 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>
CC: Vishal Annapurve <vannapurve@...gle.com>, Chenyi Qiang
<chenyi.qiang@...el.com>, <tabba@...gle.com>, <quic_eberman@...cinc.com>,
<roypat@...zon.co.uk>, <jgg@...dia.com>, <peterx@...hat.com>,
<david@...hat.com>, <rientjes@...gle.com>, <fvdl@...gle.com>,
<jthoughton@...gle.com>, <seanjc@...gle.com>, <pbonzini@...hat.com>,
<zhiquan1.li@...el.com>, <fan.du@...el.com>, <jun.miao@...el.com>,
<isaku.yamahata@...el.com>, <muchun.song@...ux.dev>, <erdemaktas@...gle.com>,
<qperret@...gle.com>, <jhubbard@...dia.com>, <willy@...radead.org>,
<shuah@...nel.org>, <brauner@...nel.org>, <bfoster@...hat.com>,
<kent.overstreet@...ux.dev>, <pvorel@...e.cz>, <rppt@...nel.org>,
<richard.weiyang@...il.com>, <anup@...infault.org>, <haibo1.xu@...el.com>,
<ajones@...tanamicro.com>, <vkuznets@...hat.com>,
<maciej.wieczor-retman@...el.com>, <pgonda@...gle.com>,
<oliver.upton@...ux.dev>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <kvm@...r.kernel.org>,
<linux-kselftest@...r.kernel.org>
Subject: Re: [RFC PATCH 39/39] KVM: guest_memfd: Dynamically
split/reconstruct HugeTLB page
On Thu, Apr 24, 2025 at 11:15:11AM -0700, Ackerley Tng wrote:
> Vishal Annapurve <vannapurve@...gle.com> writes:
>
> > On Thu, Apr 24, 2025 at 1:15 AM Yan Zhao <yan.y.zhao@...el.com> wrote:
> >>
> >> On Thu, Apr 24, 2025 at 01:55:51PM +0800, Chenyi Qiang wrote:
> >> >
> >> >
> >> > On 4/24/2025 12:25 PM, Yan Zhao wrote:
> >> > > On Thu, Apr 24, 2025 at 09:09:22AM +0800, Yan Zhao wrote:
> >> > >> On Wed, Apr 23, 2025 at 03:02:02PM -0700, Ackerley Tng wrote:
> >> > >>> Yan Zhao <yan.y.zhao@...el.com> writes:
> >> > >>>
> >> > >>>> On Tue, Sep 10, 2024 at 11:44:10PM +0000, Ackerley Tng wrote:
> >> > >>>>> +/*
> >> > >>>>> + * Allocates and then caches a folio in the filemap. Returns a folio with
> >> > >>>>> + * refcount of 2: 1 after allocation, and 1 taken by the filemap.
> >> > >>>>> + */
> >> > >>>>> +static struct folio *kvm_gmem_hugetlb_alloc_and_cache_folio(struct inode *inode,
> >> > >>>>> + pgoff_t index)
> >> > >>>>> +{
> >> > >>>>> + struct kvm_gmem_hugetlb *hgmem;
> >> > >>>>> + pgoff_t aligned_index;
> >> > >>>>> + struct folio *folio;
> >> > >>>>> + int nr_pages;
> >> > >>>>> + int ret;
> >> > >>>>> +
> >> > >>>>> + hgmem = kvm_gmem_hgmem(inode);
> >> > >>>>> + folio = kvm_gmem_hugetlb_alloc_folio(hgmem->h, hgmem->spool);
> >> > >>>>> + if (IS_ERR(folio))
> >> > >>>>> + return folio;
> >> > >>>>> +
> >> > >>>>> + nr_pages = 1UL << huge_page_order(hgmem->h);
> >> > >>>>> + aligned_index = round_down(index, nr_pages);
> >> > >>>> Maybe a gap here.
> >> > >>>>
> >> > >>>> When a guest_memfd is bound to a slot where slot->base_gfn is not aligned to
> >> > >>>> 2M/1G and slot->gmem.pgoff is 0, even if an index is 2M/1G aligned, the
> >> > >>>> corresponding GFN is not 2M/1G aligned.
> >> > >>>
> >> > >>> Thanks for looking into this.
> >> > >>>
> >> > >>> In 1G page support for guest_memfd, the offset and size are always
> >> > >>> hugepage aligned to the hugepage size requested at guest_memfd creation
> >> > >>> time, and it is true that when binding to a memslot, slot->base_gfn and
> >> > >>> slot->npages may not be hugepage aligned.
> >> > >>>
> >> > >>>>
> >> > >>>> However, TDX requires that private huge pages be 2M aligned in GFN.
> >> > >>>>
> >> > >>>
> >> > >>> IIUC other factors also contribute to determining the mapping level in
> >> > >>> the guest page tables, like lpage_info and .private_max_mapping_level()
> >> > >>> in kvm_x86_ops.
> >> > >>>
> >> > >>> If slot->base_gfn and slot->npages are not hugepage aligned, lpage_info
> >> > >>> will track that and not allow faulting into guest page tables at higher
> >> > >>> granularity.
> >> > >>
> >> > >> lpage_info only checks the alignments of slot->base_gfn and
> >> > >> slot->base_gfn + npages. e.g.,
> >> > >>
> >> > >> if slot->base_gfn is 8K, npages is 8M, then for this slot,
> >> > >> lpage_info[2M][0].disallow_lpage = 1, which is for GFN [4K, 2M+8K);
> >> > >> lpage_info[2M][1].disallow_lpage = 0, which is for GFN [2M+8K, 4M+8K);
> >> > >> lpage_info[2M][2].disallow_lpage = 0, which is for GFN [4M+8K, 6M+8K);
> >> > >> lpage_info[2M][3].disallow_lpage = 1, which is for GFN [6M+8K, 8M+8K);
> >> >
> >> > Should it be?
> >> > lpage_info[2M][0].disallow_lpage = 1, which is for GFN [8K, 2M);
> >> > lpage_info[2M][1].disallow_lpage = 0, which is for GFN [2M, 4M);
> >> > lpage_info[2M][2].disallow_lpage = 0, which is for GFN [4M, 6M);
> >> > lpage_info[2M][3].disallow_lpage = 0, which is for GFN [6M, 8M);
> >> > lpage_info[2M][4].disallow_lpage = 1, which is for GFN [8M, 8M+8K);
> >> Right. Good catch. Thanks!
> >>
> >> Let me update the example as below:
> >> slot->base_gfn is 2 (for GPA 8KB), npages is 2048 (for an 8MB range)
> >>
> >> lpage_info[2M][0].disallow_lpage = 1, which is for GPA [8KB, 2MB);
> >> lpage_info[2M][1].disallow_lpage = 0, which is for GPA [2MB, 4MB);
> >> lpage_info[2M][2].disallow_lpage = 0, which is for GPA [4MB, 6MB);
> >> lpage_info[2M][3].disallow_lpage = 0, which is for GPA [6MB, 8MB);
> >> lpage_info[2M][4].disallow_lpage = 1, which is for GPA [8MB, 8MB+8KB);
> >>
> >> lpage_info indicates that a 2MB mapping is allowed to cover GPA 4MB and GPA
> >> 4MB+16KB. However, their aligned_index values lead guest_memfd to allocate two
> >> 2MB folios, whose physical addresses may not be contiguous.
> >>
> >> Additionally, if the guest accesses two GPAs, e.g., GPA 2MB+8KB and GPA 4MB,
> >> KVM could create two 2MB mappings to cover GPA ranges [2MB, 4MB), [4MB, 6MB).
> >> However, guest_memfd just allocates the same 2MB folio for both faults.
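(A minimal userspace model of the index arithmetic above, for illustration
only; it is not the guest_memfd code. It assumes the example's
slot->base_gfn = 2, slot->gmem.pgoff = 0 and 2MB hugepages, and round_down()
here mirrors the aligned_index computation in the patch quoted earlier.)

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT      12
  #define HPAGE_2M_PAGES  512ULL          /* 2MB / 4KB */

  /* Same rounding as the kernel's round_down() for power-of-2 alignments. */
  static uint64_t round_down(uint64_t x, uint64_t align)
  {
          return x & ~(align - 1);
  }

  int main(void)
  {
          uint64_t base_gfn = 2, pgoff = 0;
          uint64_t gpas[] = {
                  0x200000 + 0x2000,      /* GPA 2MB + 8KB  */
                  0x400000,               /* GPA 4MB        */
                  0x400000 + 0x4000,      /* GPA 4MB + 16KB */
          };

          for (int i = 0; i < 3; i++) {
                  uint64_t gfn = gpas[i] >> PAGE_SHIFT;
                  uint64_t index = gfn - base_gfn + pgoff;
                  uint64_t aligned_index = round_down(index, HPAGE_2M_PAGES);

                  printf("GPA 0x%llx: index %llu, aligned_index %llu\n",
                         (unsigned long long)gpas[i],
                         (unsigned long long)index,
                         (unsigned long long)aligned_index);
          }

          /*
           * GPA 2MB+8KB and GPA 4MB both get aligned_index 512 (same huge
           * folio), while GPA 4MB and GPA 4MB+16KB get aligned_index 512 and
           * 1024 (two different huge folios), matching the text above.
           */
          return 0;
  }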
> >>
> >>
> >> >
> >> > >>
> >> > >> ---------------------------------------------------------
> >> > >> |     |     |       |     |       |     |       |     |
> >> > >> 8K    2M    2M+8K   4M    4M+8K   6M    6M+8K   8M    8M+8K
> >> > >>
> >> > >> For GFN 6M and GFN 6M+4K, as they both belong to lpage_info[2M][2], huge
> >> > >> page is allowed. Also, they have the same aligned_index 2 in guest_memfd.
> >> > >> So, guest_memfd allocates the same huge folio of 2M order for them.
> >> > > Sorry, sent too fast this morning. The example is not right. The correct
> >> > > one is:
> >> > >
> >> > > For GFN 4M and GFN 4M+16K, lpage_info indicates that 2M is allowed. So,
> >> > > KVM will create a 2M mapping for them.
> >> > >
> >> > > However, in guest_memfd, GFN 4M and GFN 4M+16K do not correspond to the
> >> > > same 2M folio and physical addresses may not be contiguous.
> >
> > Then during binding, the guest_memfd offset's misalignment within the
> > hugepage should be the same as the gfn's misalignment, i.e.
> >
> > (offset & ~huge_page_mask(h)) == ((slot->base_gfn << PAGE_SHIFT) &
> > ~huge_page_mask(h));
> >
> > For non-guest_memfd-backed scenarios, KVM allows slot gfn ranges that
> > are not hugepage aligned, so guest_memfd should also be able to
> > support non-hugepage aligned memslots.
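(A rough userspace sketch of the constraint above, for illustration only:
gmem_offset_gfn_aligned() is a made-up name, hpage_size stands in for
huge_page_size(h), and hpage_size - 1 plays the role of ~huge_page_mask(h).
This is not the actual guest_memfd binding code.)

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12

  /*
   * Proposed binding-time constraint: the guest_memfd offset and the slot's
   * base GPA must have the same misalignment from the hugepage boundary.
   */
  static bool gmem_offset_gfn_aligned(uint64_t offset, uint64_t base_gfn,
                                      uint64_t hpage_size)
  {
          uint64_t hpage_mask = hpage_size - 1;   /* ~huge_page_mask(h) */

          return (offset & hpage_mask) ==
                 ((base_gfn << PAGE_SHIFT) & hpage_mask);
  }

  int main(void)
  {
          /* The earlier example: gmem offset 0, base_gfn 2 (GPA 8KB), 2MB pages. */
          printf("offset 0, base GPA 8KB: %s\n",
                 gmem_offset_gfn_aligned(0, 2, 2ULL << 20) ?
                 "bind ok" : "reject at bind");
          return 0;
  }

With the earlier example's numbers (offset 0, base GPA 8KB) this rejects the
bind, which is the configuration that produced the non-contiguous folios above.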
> >
>
> I drew up a picture [1] which hopefully clarifies this.
>
> Thanks for pointing this out, I understand better now and we will add an
> extra constraint during memslot binding of guest_memfd to check that gfn
> offsets within a hugepage match guest_memfd offsets within the hugepage.
I'm a bit confused.
As "index = gfn - slot->base_gfn + slot->gmem.pgoff", do you mean you are going
to force "slot->base_gfn == slot->gmem.pgoff" ?
For some memory regions, e.g. "pc.ram", the memory is divided into 2 parts:
- one with offset 0, size 0x80000000 (2G),
  positioned at GPA 0, which is below GPA 4G;
- one with offset 0x80000000 (2G), size 0x80000000 (2G),
  positioned at GPA 0x100000000 (4G), which is above GPA 4G.
For the second part, its slot->base_gfn is 0x100000 (GPA 4G), while
slot->gmem.pgoff is 0x80000 (offset 2G).
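(For illustration only, reading the check quoted above as comparing offsets
within a hugepage rather than requiring slot->base_gfn == slot->gmem.pgoff:
for this second slot, offset 2G and base GPA 4G are both 2MB-aligned (and
1GB-aligned), so their misalignments within a hugepage are both 0 and match
even though base_gfn != pgoff. The earlier base_gfn = 2, pgoff = 0 example is
the kind of configuration such a check would reject.)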
> Adding checks at binding time will allow hugepage-unaligned offsets (to
> be at parity with non-guest_memfd backing memory) but still fix this
> issue.
>
> lpage_info will make sure that ranges near the bounds will be
> fragmented, but the hugepages in the middle will still be mappable as
> hugepages.
>
> [1] https://lpc.events/event/18/contributions/1764/attachments/1409/3706/binding-must-have-same-alignment.svg