Message-ID: <35e79cadf079622588ddb9fac0ccc985751dd81b.camel@intel.com>
Date: Thu, 18 Dec 2025 22:53:14 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "michael.roth@....com"
<michael.roth@....com>
CC: "david@...hat.com" <david@...hat.com>, "liam.merwick@...cle.com"
<liam.merwick@...cle.com>, "seanjc@...gle.com" <seanjc@...gle.com>,
"aik@....com" <aik@....com>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"Annapurve, Vishal" <vannapurve@...gle.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "thomas.lendacky@....com"
<thomas.lendacky@....com>, "vbabka@...e.cz" <vbabka@...e.cz>,
"ashish.kalra@....com" <ashish.kalra@....com>, "linux-coco@...ts.linux.dev"
<linux-coco@...ts.linux.dev>, "Weiny, Ira" <ira.weiny@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "ackerleytng@...gle.com"
<ackerleytng@...gle.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: Re: [PATCH v2 2/5] KVM: guest_memfd: Remove preparation tracking
> /*
> * Process @folio, which contains @gfn, so that the guest can use it.
> * The folio must be locked and the gfn must be contained in @slot.
> @@ -90,13 +85,7 @@ static inline void kvm_gmem_mark_prepared(struct folio *folio)
> static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> gfn_t gfn, struct folio *folio)
> {
> - unsigned long nr_pages, i;
> pgoff_t index;
> - int r;
> -
> - nr_pages = folio_nr_pages(folio);
> - for (i = 0; i < nr_pages; i++)
> - clear_highpage(folio_page(folio, i));
>
Here the entire folio is cleared, but ...
[...]
> - folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &is_prepared, max_order);
> + folio = __kvm_gmem_get_pfn(file, slot, index, pfn, max_order);
> if (IS_ERR(folio))
> return PTR_ERR(folio);
>
> - if (!is_prepared)
> - r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
> + if (!folio_test_uptodate(folio)) {
> + clear_highpage(folio_page(folio, 0));
> + folio_mark_uptodate(folio);
> + }
... here only the first page is cleared.
I understand no huge folio comes out of gmem currently, but since both
__kvm_gmem_get_pfn() and kvm_gmem_get_pfn() still have @max_order as an
output, it's kind of inconsistent here.
I also see that kvm_gmem_fault_user_mapping() only clears the first page,
but I think that already assumes the folio can never be huge currently?
Given this, and the fact that the first patch of this series introduced
WARN_ON_ONCE(folio_order(folio));
in kvm_gmem_get_folio(), I think it's fine to only clear the first page.
But for the sake of consistency, perhaps we should just remove @max_order
from __kvm_gmem_get_pfn() and kvm_gmem_get_pfn()?
Then we can handle the huge folio logic when it comes into play.
Btw:
I actually looked into the RFC v1 discussion, and the code there does a
loop to clear all pages in the folio. AFAICT the other discussions there
were more about issues with marking the entire folio uptodate while only
one page is processed in post_populate().
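
Just to illustrate what I mean (a rough sketch, not tested -- only showing
the clearing loop restored on top of the new uptodate check):

	if (!folio_test_uptodate(folio)) {
		unsigned long i, nr_pages = folio_nr_pages(folio);

		/* Clear every page, not just the first one. */
		for (i = 0; i < nr_pages; i++)
			clear_highpage(folio_page(folio, i));
		folio_mark_uptodate(folio);
	}

With the WARN_ON_ONCE(folio_order(folio)) in place this is equivalent to
clearing only folio_page(folio, 0) today, so this is purely about being
future-proof for huge folios.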
Btw2:
There was also discussion that clearing the page isn't required for TDX.
To that end, maybe we can remove page clearing from gmem common code and
move it to SEV code, e.g., as part of "folio preparation"?