Message-ID: <3bd7936624b11f755608b1c51cc1376ebf2c3a4f.camel@amd.com>
Date: Tue, 7 Jan 2025 12:11:22 +0000
From: "Shah, Amit" <Amit.Shah@....com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "david@...hat.com"
<david@...hat.com>, "Roth, Michael" <Michael.Roth@....com>
CC: "liam.merwick@...cle.com" <liam.merwick@...cle.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "jroedel@...e.de" <jroedel@...e.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "Sampat, Pratik Rajesh"
<PratikRajesh.Sampat@....com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "Lendacky, Thomas" <Thomas.Lendacky@....com>,
"vbabka@...e.cz" <vbabka@...e.cz>, "pbonzini@...hat.com"
<pbonzini@...hat.com>, "linux-coco@...ts.linux.dev"
<linux-coco@...ts.linux.dev>, "quic_eberman@...cinc.com"
<quic_eberman@...cinc.com>, "Kalra, Ashish" <Ashish.Kalra@....com>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "vannapurve@...gle.com"
<vannapurve@...gle.com>
Subject: Re: [PATCH RFC v1 0/5] KVM: gmem: 2MB THP support and preparedness
tracking changes
On Fri, 2024-12-20 at 12:31 +0100, David Hildenbrand wrote:
> On 12.12.24 07:36, Michael Roth wrote:
> > This patchset is also available at:
> >
> > https://github.com/amdese/linux/commits/snp-prepare-thp-rfc1
> >
> > and is based on top of Paolo's kvm-coco-queue-2024-11 tag which
> > includes a snapshot of his patches[1] to provide tracking of whether
> > or not sub-pages of a huge folio need to have kvm_arch_gmem_prepare()
> > hooks issued before guest access:
> >
> >   d55475f23cea KVM: gmem: track preparedness a page at a time
> >   64b46ca6cd6d KVM: gmem: limit hole-punching to ranges within the file
> >   17df70a5ea65 KVM: gmem: add a complete set of functions to query page preparedness
> >   e3449f6841ef KVM: gmem: allocate private data for the gmem inode
> >
> > [1] https://lore.kernel.org/lkml/20241108155056.332412-1-pbonzini@redhat.com/
> >
> > This series addresses some of the pending review comments for those
> > patches (feel free to squash/rework as-needed), and implements a first
> > real user in the form of a reworked version of Sean's original 2MB THP
> > support for gmem.
> >
> > It is still a bit up in the air as to whether or not gmem should
> > support THP at all, rather than moving straight to 2MB/1GB hugepages
> > in the form of something like HugeTLB folios[2] or the lower-level PFN
> > range allocator presented by Yu Zhao during the guest_memfd call last
> > week. The main argument against THP, as I understand it, is that THPs
> > will become split over time due to hole-punching and will rarely have
> > an opportunity to get rebuilt due to the lack of memory migration
> > support in current CoCo hypervisor implementations like SNP (and
> > adding migration support to resolve that would not necessarily result
> > in a net gain performance-wise). The current plan for SNP, as
> > discussed during the first guest_memfd call, is to implement something
> > similar to 2MB HugeTLB, and disallow hole-punching at sub-2MB
> > granularity.
> >
> > However, there have also been some discussions during recent PUCK
> > calls where the KVM maintainers have still expressed some interest in
> > pulling in gmem THP support in a more official capacity. The thinking
> > there is that hole-punching is a userspace policy, and that userspace
> > could in theory avoid hole-punching for sub-2MB GFN ranges to avoid
> > degradation over time. And if there's a desire to enforce this from
> > the kernel side by blocking sub-2MB hole-punching from the host side,
> > this would provide similar semantics/behavior to the 2MB HugeTLB-like
> > approach above.
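(Just to make that userspace policy concrete for myself - this is only a
sketch, nothing from this series, and the helper name is made up: the VMM
would simply refuse to punch holes in the guest_memfd that aren't
2MB-aligned.)

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdint.h>

  #define HUGE_SZ (2UL * 1024 * 1024)

  /* Punch a hole in a guest_memfd only if it keeps 2MB folios intact. */
  static int gmem_punch_hole_2m(int gmem_fd, uint64_t offset, uint64_t len)
  {
          if ((offset | len) & (HUGE_SZ - 1))
                  return -EINVAL; /* policy: refuse sub-2MB punches */

          return fallocate(gmem_fd,
                           FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                           offset, len);
  }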
> >
> > So maybe there is still some room for discussion about these approaches.
> >
> > Outside that, there are a number of other development areas where it
> > would be useful to at least have some experimental 2MB support in
> > place so that those efforts can be pursued in parallel, such as the
> > preparedness tracking touched on here, and exploring how that will
> > intersect with other areas like using gmem for both shared and private
> > memory, mmap support, a guest_memfd library, etc. So my hope is that
> > this approach could be useful for that purpose at least, even if only
> > as an out-of-tree stop-gap.
> >
> > Thoughts/comments welcome!
>
> Sorry for the late reply, it's been a couple of crazy weeks, and I'm
> trying to give at least some feedback on stuff in my inbox before even
> more piles up over Christmas :) . Let me summarize my thoughts:
My turn for the lateness - back from a break.
I should also preface that Mike is off for at least a month more, but
he will return to continue working on this. In the meantime, I've had
a chat with him about this work to keep the discussion alive on the
lists.
> THPs in Linux rely on the following principle:
>
> (1) We try allocating a THP; if that fails, we rely on khugepaged to fix
>     it up later (shmem+anon). So if we cannot grab a free THP, we defer
>     it to a later point.
>
> (2) We try to be as transparent as possible: punching a hole will
>     usually destroy the THP (either immediately for shmem/pagecache or
>     deferred for anon memory) to free up the now-free pages. That's
>     different from hugetlb, where partial hole-punching will only
>     zero-out the memory; the partial memory will not get freed up and
>     will get reused later.
>
>     Destroying a THP for shmem/pagecache only works if there are no
>     unexpected page references, so there can be cases where we fail to
>     free up memory. For the pagecache that's not really an issue,
>     because memory reclaim will fix that up at some point. For shmem,
>     there were discussions to scan for zeroed pages and free them up
>     during memory reclaim, just like we do now for anon memory as well.
>
> (3) Memory compaction is vital for guaranteeing that we will be able to
>     create THPs the longer the system has been running.
>
>
> With guest_memfd we cannot rely on any daemon to fix it up as in (1) for
> us later (would require page memory migration support).
True. And not having a huge page when requested to begin with (as in 1
above) defeats the purpose entirely -- the point is to speed up SEV-SNP
setup and guests by having fewer pages to work with.
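(Back-of-the-envelope rather than measured numbers: a 2MB page covers 512
4KB pages, so RMP updates on the host and PVALIDATE work in the guest can,
in the ideal case, be done with roughly 1/512th of the operations when the
2MB page size is used.)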
> We use truncate_inode_pages_range(), which will split a THP into small
> pages if you partially punch-hole it, so (2) would apply; splitting
> might fail as well in some cases if there are unexpected references.
>
> I wonder what would happen if user space were to punch a hole in private
> memory, making truncate_inode_pages_range() overwrite it with 0s if
> splitting the THP failed (memory write to private pages under TDX?).
> Maybe something similar would happen if a private page were to get 0-ed
> out when freeing+reallocating it, not sure how that is handled.
>
>
> guest_memfd currently actively works against (3) as soon as we (A) fall
> back to allocating small pages or (B) split a THP due to hole punching,
> as the remaining fragments cannot get reassembled anymore.
>
> I assume there is some truth to "hole-punching is a userspace policy",
> but this mechanism will actively work against itself as soon as you
> start falling back to small pages in any way.
>
>
>
> So I'm wondering if a better start would be to (A) always allocate huge
> pages from the buddy (no fallback) and
That sounds fine...
> (B) partial punches are either disallowed or only zero-out the memory.
> But even a sequence of partial punches that cover the whole huge page
> will not end up freeing all parts if splitting failed at some point,
> which I quite dislike ...
... this basically just looks like hugetlb support (i.e. without the
"transparent" part), doesn't it?
> But then we'd need memory preallocation, and I suspect to make this
> really useful -- just like with 2M/1G "hugetlb" support -- in-place
> shared<->private conversion will be a requirement. ... at which point
> we'd have reached the state where it's almost the 2M hugetlb support.
Right, exactly.
> This is not a very strong push back, more a "this does not quite sound
> right to me" and I have the feeling that this might get in the way of
> in-place shared<->private conversion; I might be wrong about the latter
> though.
TBH, my 2c is that getting hugepage support in, and disabling THP for
SEV-SNP guests, will work fine.
But as Mike mentioned above, this series adds a user on top of Paolo's
work - and that seems a more straightforward way to experiment with and
figure out hugepage support in general, while the other hugepage details
get worked out in parallel.
> With memory compaction working for guest_memfd, it would all be easier.
... btw do you know how well this is coming along?
> Note that I'm not quite sure about the "2MB" interface; should it be a
> "PMD-size" interface?
I think Mike and I touched upon this aspect too - and I may be
misremembering - but Mike suggested adding 1M, 2M, and bigger page sizes
in increments, and then fitting in PMD sizes once we've had enough of
those. That is to say, he didn't want to preclude it, or gate the PMD
work on enabling all sizes first.
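(Side note on why "2MB" and "PMD-size" aren't interchangeable - just a
quick standalone illustration, nothing to do with the patches: with the
usual 8-byte page-table entries a PMD maps page_size * (page_size / 8)
bytes, i.e. 2MB only on 4K-page kernels; an arm64 kernel with a 64K
granule has 512MB PMDs.)

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);
          /* entries per table = page_size / 8 (8-byte descriptors) */
          long long pmd = (long long)page * (page / 8);

          printf("base page: %ld KiB, PMD maps: %lld MiB\n",
                 page / 1024, pmd / (1024 * 1024));
          return 0;
  }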
Amit