Message-ID: <aYG2XxIylgaL+MJ9@yzhao56-desk.sh.intel.com>
Date: Tue, 3 Feb 2026 16:48:31 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Thomas Gleixner <tglx@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
<x86@...nel.org>, Kiryl Shutsemau <kas@...nel.org>, Paolo Bonzini
<pbonzini@...hat.com>, <linux-kernel@...r.kernel.org>,
<linux-coco@...ts.linux.dev>, <kvm@...r.kernel.org>, Kai Huang
<kai.huang@...el.com>, Rick Edgecombe <rick.p.edgecombe@...el.com>, "Vishal
Annapurve" <vannapurve@...gle.com>, Ackerley Tng <ackerleytng@...gle.com>,
Sagi Shahar <sagis@...gle.com>, Binbin Wu <binbin.wu@...ux.intel.com>,
Xiaoyao Li <xiaoyao.li@...el.com>, Isaku Yamahata <isaku.yamahata@...el.com>
Subject: Re: [RFC PATCH v5 02/45] KVM: x86/mmu: Update iter->old_spte if
cmpxchg64 on mirror SPTE "fails"
On Wed, Jan 28, 2026 at 05:14:34PM -0800, Sean Christopherson wrote:
> Pass a pointer to iter->old_spte, not simply its value, when setting an
> external SPTE in __tdp_mmu_set_spte_atomic(), so that the iterator's value
> will be updated if the cmpxchg64 to freeze the mirror SPTE fails. The bug
> is currently benign as TDX is mutually exclusive with all paths that do
> "local retry", e.g. clear_dirty_gfn_range() and wrprot_gfn_range().
>
> Fixes: 77ac7079e66d ("KVM: x86/tdp_mmu: Propagate building mirror page tables")
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/mmu/tdp_mmu.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 9c26038f6b77..0feda295859a 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -509,10 +509,10 @@ static void *get_external_spt(gfn_t gfn, u64 new_spte, int level)
> }
>
> static int __must_check set_external_spte_present(struct kvm *kvm, tdp_ptep_t sptep,
> - gfn_t gfn, u64 old_spte,
> + gfn_t gfn, u64 *old_spte,
> u64 new_spte, int level)
> {
> - bool was_present = is_shadow_present_pte(old_spte);
> + bool was_present = is_shadow_present_pte(*old_spte);
> bool is_present = is_shadow_present_pte(new_spte);
> bool is_leaf = is_present && is_last_spte(new_spte, level);
> int ret = 0;
> @@ -525,7 +525,7 @@ static int __must_check set_external_spte_present(struct kvm *kvm, tdp_ptep_t sp
> * page table has been modified. Use FROZEN_SPTE similar to
> * the zapping case.
> */
> - if (!try_cmpxchg64(rcu_dereference(sptep), &old_spte, FROZEN_SPTE))
> + if (!try_cmpxchg64(rcu_dereference(sptep), old_spte, FROZEN_SPTE))
> return -EBUSY;
>
> /*
> @@ -541,7 +541,7 @@ static int __must_check set_external_spte_present(struct kvm *kvm, tdp_ptep_t sp
> ret = kvm_x86_call(link_external_spt)(kvm, gfn, level, external_spt);
> }
> if (ret)
> - __kvm_tdp_mmu_write_spte(sptep, old_spte);
> + __kvm_tdp_mmu_write_spte(sptep, *old_spte);
Do we need to add a comment explaining that when the above try_cmpxchg64()
succeeds, the value of *old_spte is unmodified?
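Something like the following, for example (just a sketch of possible wording,
not a requirement):

	/*
	 * On success, try_cmpxchg64() leaves *old_spte untouched, i.e. it
	 * still holds the pre-FROZEN_SPTE value and can be used to restore
	 * the SPTE if updating the external page tables fails.
	 */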
> else
> __kvm_tdp_mmu_write_spte(sptep, new_spte);
> return ret;
> @@ -670,7 +670,7 @@ static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
> return -EBUSY;
>
> ret = set_external_spte_present(kvm, iter->sptep, iter->gfn,
> - iter->old_spte, new_spte, iter->level);
> + &iter->old_spte, new_spte, iter->level);
> if (ret)
> return ret;
> } else {
> --
> 2.53.0.rc1.217.geba53bf80e-goog
>