Message-ID: <diqzcy7d60e2.fsf@google.com>
Date: Fri, 26 Sep 2025 09:14:45 +0000
From: Ackerley Tng <ackerleytng@...gle.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: kvm@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
x86@...nel.org, linux-fsdevel@...r.kernel.org, aik@....com,
ajones@...tanamicro.com, akpm@...ux-foundation.org, amoorthy@...gle.com,
anthony.yznaga@...cle.com, anup@...infault.org, aou@...s.berkeley.edu,
bfoster@...hat.com, binbin.wu@...ux.intel.com, brauner@...nel.org,
catalin.marinas@....com, chao.p.peng@...el.com, chenhuacai@...nel.org,
dave.hansen@...el.com, david@...hat.com, dmatlack@...gle.com,
dwmw@...zon.co.uk, erdemaktas@...gle.com, fan.du@...el.com, fvdl@...gle.com,
graf@...zon.com, haibo1.xu@...el.com, hch@...radead.org, hughd@...gle.com,
ira.weiny@...el.com, isaku.yamahata@...el.com, jack@...e.cz,
james.morse@....com, jarkko@...nel.org, jgg@...pe.ca, jgowans@...zon.com,
jhubbard@...dia.com, jroedel@...e.de, jthoughton@...gle.com,
jun.miao@...el.com, kai.huang@...el.com, keirf@...gle.com,
kent.overstreet@...ux.dev, kirill.shutemov@...el.com, liam.merwick@...cle.com,
maciej.wieczor-retman@...el.com, mail@...iej.szmigiero.name, maz@...nel.org,
mic@...ikod.net, michael.roth@....com, mpe@...erman.id.au,
muchun.song@...ux.dev, nikunj@....com, nsaenz@...zon.es,
oliver.upton@...ux.dev, palmer@...belt.com, pankaj.gupta@....com,
paul.walmsley@...ive.com, pbonzini@...hat.com, pdurrant@...zon.co.uk,
peterx@...hat.com, pgonda@...gle.com, pvorel@...e.cz, qperret@...gle.com,
quic_cvanscha@...cinc.com, quic_eberman@...cinc.com,
quic_mnalajal@...cinc.com, quic_pderrin@...cinc.com, quic_pheragu@...cinc.com,
quic_svaddagi@...cinc.com, quic_tsoni@...cinc.com, richard.weiyang@...il.com,
rick.p.edgecombe@...el.com, rientjes@...gle.com, roypat@...zon.co.uk,
rppt@...nel.org, seanjc@...gle.com, shuah@...nel.org, steven.price@....com,
steven.sistare@...cle.com, suzuki.poulose@....com, tabba@...gle.com,
thomas.lendacky@....com, usama.arif@...edance.com, vannapurve@...gle.com,
vbabka@...e.cz, viro@...iv.linux.org.uk, vkuznets@...hat.com,
wei.w.wang@...el.com, will@...nel.org, willy@...radead.org,
xiaoyao.li@...el.com, yilun.xu@...el.com, yuzenghui@...wei.com,
zhiquan1.li@...el.com
Subject: Re: [RFC PATCH v2 39/51] KVM: guest_memfd: Merge and truncate on fallocate(PUNCH_HOLE)
Yan Zhao <yan.y.zhao@...el.com> writes:
> On Wed, May 28, 2025 at 09:39:35AM -0700, Ackerley Tng wrote:
>> Yan Zhao <yan.y.zhao@...el.com> writes:
>>
>> > On Wed, May 14, 2025 at 04:42:18PM -0700, Ackerley Tng wrote:
>> >> Merge and truncate on fallocate(PUNCH_HOLE), but if the file is being
>> >> closed, defer merging to folio_put() callback.
>> >>
>> >> Change-Id: Iae26987756e70c83f3b121edbc0ed0bc105eec0d
>> >> Signed-off-by: Ackerley Tng <ackerleytng@...gle.com>
>> >> ---
>> >> virt/kvm/guest_memfd.c | 76 +++++++++++++++++++++++++++++++++++++-----
>> >> 1 file changed, 68 insertions(+), 8 deletions(-)
>> >>
>> >> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
>> >> index cb426c1dfef8..04b1513c2998 100644
>> >> --- a/virt/kvm/guest_memfd.c
>> >> +++ b/virt/kvm/guest_memfd.c
>> >> @@ -859,6 +859,35 @@ static int kvm_gmem_restructure_folios_in_range(struct inode *inode,
>> >> return ret;
>> >> }
>> >>
>> >> +static long kvm_gmem_merge_truncate_indices(struct inode *inode, pgoff_t index,
>> >> + size_t nr_pages)
>> >> +{
>> >> + struct folio *f;
>> >> + pgoff_t unused;
>> >> + long num_freed;
>> >> +
>> >> + unmap_mapping_pages(inode->i_mapping, index, nr_pages, false);
>> >> +
>> >> + if (!kvm_gmem_has_safe_refcount(inode->i_mapping, index, nr_pages, &unused))
>>
>> Yan, thank you for your reviews!
>>
Thanks again for your reviews. I'd like to respond to this now that I'm
finally getting back to this part.
>> > Why is kvm_gmem_has_safe_refcount() checked here, but not in
>> > kvm_gmem_zero_range() within kvm_gmem_truncate_inode_range() in patch 33?
>> >
>>
>> The contract for guest_memfd with HugeTLB pages is that if holes are
>> punched in any ranges less than a full huge page, no pages are removed
>> from the filemap. Those ranges are only zeroed.
>>
>> In kvm_gmem_zero_range(), we never remove any folios, and so there is no
>> need to merge. If there's no need to merge, then we don't need to check
>> for a safe refcount, and can just proceed to zero.
> However, if there are still extra refcounts on a shared page, its content
> will be zeroed out.
>
I believe this topic has been overtaken by events. IIUC the current
upstream stance is that guest_memfd does not allow hole punching at
sub-hugepage granularity: once a HugeTLB-backed guest_memfd is requested,
punching a hole smaller than the requested HugeTLB page size will result
in -EINVAL being returned to userspace.
>>
>> kvm_gmem_merge_truncate_indices() is only used during hole punching and
>> not when the file is closed. Hole punch vs file closure is checked using
>> mapping_exiting(inode->i_mapping).
>>
>> During a hole punch, we will only allow truncation if there are no
>> unexpected refcounts on any subpages, hence this
>> kvm_gmem_has_safe_refcount() check.
> Hmm, I couldn't find a similar refcount check in hugetlbfs_punch_hole().
> Did I overlook it?
>
> So, why does guest_memfd require this check when punching a hole?
>
There's no equivalent check in HugeTLBfs.
For guest_memfd, we want to defer merging to the kernel worker as little
as possible, hence we want to merge before truncating. Checking for
elevated refcounts is a prerequisite for merging, not directly for hole
punching.
>> >> + return -EAGAIN;
>> >> +
>> >
>> > Rather than merging the folios, could we simply call kvm_gmem_truncate_indices()
>> > instead?
>> >
>> > num_freed = kvm_gmem_truncate_indices(inode->i_mapping, index, nr_pages);
>> > return num_freed;
>> >
>>
>> We could do this too, but then that would be deferring the huge page
>> merging to the folio_put() callback and eventually the kernel worker
>> thread.
> By deferring the huge page merging to folio_put(), a benefit is that
> __kvm_gmem_filemap_add_folio() can be skipped for the merged folio. This
> function can fail and is unnecessary for punch hole, as the folio will be
> removed immediately from the filemap in truncate_inode_folio().
>
>
That is a good point! Definitely sounds good to defer this to folio_put().
>> My goal here is to try not to defer merging and freeing as much as
>> possible so that most of the page/memory operations are
>> synchronous, because synchronous operations are more predictable.
>>
>> As an example of improving predictability, in one of the selftests, I do
>> a hole punch and then try to allocate again. Because the merging and
>> freeing of the HugeTLB page sometimes takes too long, the allocation
>> sometimes fails: the guest_memfd's subpool hadn't yet received the freed
>> page back. With a synchronous truncation, the truncation may take
>> longer, but the selftest predictably passes.
> Maybe check if guestmem_hugetlb_handle_folio_put() is invoked in the
> interrupt context, and, if not, invoke the guestmem_hugetlb_cleanup_folio()
> synchronously?
>
>
I think this is a good idea. I would like to pursue this in a future
revision of this patch series.
It seems the use of in_atomic() is strongly discouraged; do you have any
tips on how to determine whether folio_put() is being called from atomic
context?
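For what it's worth, a rough kernel-style sketch of Yan's suggestion (not compile-tested; the callback name and work item are invented stand-ins) might look like the following. in_task() distinguishes task context from hardirq/softirq context, but it cannot detect atomic sections such as spinlock-held regions, which is exactly why in_atomic() is unreliable without CONFIG_PREEMPT_COUNT:

```c
/*
 * Kernel-style sketch only. guestmem_hugetlb_folio_put() and cleanup_work
 * are hypothetical stand-ins for the real folio_put() callback and worker.
 */
static void guestmem_hugetlb_folio_put(struct folio *folio)
{
	if (!in_task()) {
		/* hardirq/softirq context: defer cleanup to the worker. */
		schedule_work(&cleanup_work);
		return;
	}

	/*
	 * Task context. Note this does not prove we are preemptible: a
	 * spinlock may be held by the caller, and without
	 * CONFIG_PREEMPT_COUNT neither preemptible() nor in_atomic() can
	 * reliably detect that.
	 */
	guestmem_hugetlb_cleanup_folio(folio);
}
```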
>>
>> [...snip...]
>>