Message-ID: <2906b4d3b789985917a063d095c4063ee6ab7b72.camel@intel.com>
Date: Thu, 15 Jan 2026 12:25:21 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "pbonzini@...hat.com" <pbonzini@...hat.com>, "seanjc@...gle.com"
<seanjc@...gle.com>, "Zhao, Yan Y" <yan.y.zhao@...el.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "Du, Fan" <fan.du@...el.com>,
"Li, Xiaoyao" <xiaoyao.li@...el.com>, "Gao, Chao" <chao.gao@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>, "thomas.lendacky@....com"
<thomas.lendacky@....com>, "vbabka@...e.cz" <vbabka@...e.cz>,
"tabba@...gle.com" <tabba@...gle.com>, "david@...nel.org" <david@...nel.org>,
"kas@...nel.org" <kas@...nel.org>, "michael.roth@....com"
<michael.roth@....com>, "Weiny, Ira" <ira.weiny@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"binbin.wu@...ux.intel.com" <binbin.wu@...ux.intel.com>,
"ackerleytng@...gle.com" <ackerleytng@...gle.com>, "nik.borisov@...e.com"
<nik.borisov@...e.com>, "Yamahata, Isaku" <isaku.yamahata@...el.com>, "Peng,
Chao P" <chao.p.peng@...el.com>, "francescolavra.fl@...il.com"
<francescolavra.fl@...il.com>, "sagis@...gle.com" <sagis@...gle.com>,
"Annapurve, Vishal" <vannapurve@...gle.com>, "Edgecombe, Rick P"
<rick.p.edgecombe@...el.com>, "Miao, Jun" <jun.miao@...el.com>,
"jgross@...e.com" <jgross@...e.com>, "pgonda@...gle.com" <pgonda@...gle.com>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH v3 11/24] KVM: x86/mmu: Introduce
kvm_split_cross_boundary_leafs()
On Tue, 2026-01-06 at 18:21 +0800, Yan Zhao wrote:
> @@ -1692,12 +1707,35 @@ void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm,
>
> kvm_lockdep_assert_mmu_lock_held(kvm, shared);
> for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id) {
> - r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level, shared);
> + r = tdp_mmu_split_huge_pages_root(kvm, root, start, end, target_level,
> + shared, false);
> + if (r) {
> + kvm_tdp_mmu_put_root(kvm, root);
> + break;
> + }
> + }
> +}
> +
> +int kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs(struct kvm *kvm,
> + struct kvm_gfn_range *range,
> + bool shared)
> +{
> + enum kvm_tdp_mmu_root_types types;
> + struct kvm_mmu_page *root;
> + int r = 0;
> +
> + kvm_lockdep_assert_mmu_lock_held(kvm, shared);
> + types = kvm_gfn_range_filter_to_root_types(kvm, range->attr_filter);
> +
> + __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, types) {
> + r = tdp_mmu_split_huge_pages_root(kvm, root, range->start, range->end,
> + PG_LEVEL_4K, shared, true);
> if (r) {
> kvm_tdp_mmu_put_root(kvm, root);
> break;
> }
> }
> + return r;
> }
>
It seems the two functions -- kvm_tdp_mmu_try_split_huge_pages() and
kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs() -- are almost
identical. Would it be better to introduce a common helper and make the
two functions thin wrappers around it?
E.g.,
static int __kvm_tdp_mmu_split_huge_pages(struct kvm *kvm,
					  struct kvm_gfn_range *range,
					  int target_level, bool shared,
					  bool cross_boundary_only)
{
	...
}
And while writing this helper, I found the names of the two wrapper
functions are not ideal:

kvm_tdp_mmu_try_split_huge_pages() is only used for dirty logging, and
it should not be reachable for a TD (a VM with mirrored PT). But
currently it uses KVM_VALID_ROOTS as the root filter, so the mirrored PT
is also included. I think it's better to rename it, e.g., with at least
"log_dirty" in the name so it's clearer that this function only deals
with dirty logging (at least currently). We could also add a WARN() if
it's called for a VM with mirrored PT, but that's a different topic.
kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs() doesn't have
"huge_pages" in it, which is inconsistent with the other one, and it is
a bit long. If we don't have "gfn_range" in
__kvm_tdp_mmu_split_huge_pages(), then I think we can drop "gfn_range"
from kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs() too to make it
shorter.
So how about:
Rename kvm_tdp_mmu_try_split_huge_pages() to
kvm_tdp_mmu_split_huge_pages_log_dirty(), and rename
kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs() to
kvm_tdp_mmu_split_huge_pages_cross_boundary()
?
E.g.,:
int kvm_tdp_mmu_split_huge_pages_log_dirty(struct kvm *kvm,
					   const struct kvm_memory_slot *slot,
					   gfn_t start, gfn_t end,
					   int target_level, bool shared)
{
	struct kvm_gfn_range range = {
		.slot = slot,
		.start = start,
		.end = end,
		.attr_filter = 0,	/* doesn't matter */
		.may_block = true,
	};

	if (WARN_ON_ONCE(kvm_has_mirrored_tdp(kvm)))
		return -EINVAL;

	return __kvm_tdp_mmu_split_huge_pages(kvm, &range, target_level,
					      shared, false);
}
int kvm_tdp_mmu_split_huge_pages_cross_boundary(struct kvm *kvm,
						struct kvm_gfn_range *range,
						int target_level, bool shared)
{
	return __kvm_tdp_mmu_split_huge_pages(kvm, range, target_level,
					      shared, true);
}
Anything I missed?
And one more minor thing:

With that, I think you can also move the range->may_block check from
kvm_split_cross_boundary_leafs() into the common
__kvm_tdp_mmu_split_huge_pages() helper:
if (!range->may_block)
return -EOPNOTSUPP;
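
E.g., putting it together, the common helper could look like below.
Just a sketch: the body is basically your
kvm_tdp_mmu_gfn_range_split_cross_boundary_leafs() above, and I'm
assuming tdp_mmu_split_huge_pages_root() takes the cross-boundary flag
as the last argument like in your patch. How attr_filter == 0 (the
log-dirty path) should map to root types is hand-waved here -- it would
presumably need to fall back to KVM_VALID_ROOTS:

static int __kvm_tdp_mmu_split_huge_pages(struct kvm *kvm,
					  struct kvm_gfn_range *range,
					  int target_level, bool shared,
					  bool cross_boundary_only)
{
	enum kvm_tdp_mmu_root_types types;
	struct kvm_mmu_page *root;
	int r = 0;

	if (!range->may_block)
		return -EOPNOTSUPP;

	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
	types = kvm_gfn_range_filter_to_root_types(kvm, range->attr_filter);

	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, types) {
		r = tdp_mmu_split_huge_pages_root(kvm, root, range->start,
						  range->end, target_level,
						  shared, cross_boundary_only);
		if (r) {
			/* Drop the root reference the iterator took before bailing out. */
			kvm_tdp_mmu_put_root(kvm, root);
			break;
		}
	}

	return r;
}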