Message-ID: <20241104084229.29882-1-yan.y.zhao@intel.com>
Date: Mon, 4 Nov 2024 16:42:29 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: pbonzini@...hat.com,
seanjc@...gle.com
Cc: linux-kernel@...r.kernel.org,
kvm@...r.kernel.org,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
kernel test robot <lkp@...el.com>,
Yan Zhao <yan.y.zhao@...el.com>
Subject: [PATCH 1/2] KVM: x86/tdp_mmu: Use rcu_dereference() to protect sptep for dereferencing

From: Rick Edgecombe <rick.p.edgecombe@...el.com>

Use rcu_dereference() to copy the RCU-protected pointer sptep into a local
variable before dereferencing it. rcu_dereference() also checks, when
CONFIG_PROVE_RCU is enabled, that it is invoked from within an RCU
read-side critical section.

Change is_mirror_sptep()'s parameter type from "u64 *" to "tdp_ptep_t"
(a typedef for "u64 __rcu *") so that the rcu_dereference() call is
centralized in the helper itself.

Opportunistically, since the try_cmpxchg64() call is now the only place in
__tdp_mmu_set_spte_atomic() that dereferences the local variable, move
the rcu_dereference() call closer to its point of use.
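
As a minimal, self-contained sketch of the pattern being applied here (the
struct and helper names below are hypothetical, not the KVM code touched by
this patch):

#include <linux/types.h>
#include <linux/rcupdate.h>

/* Hypothetical holder of an RCU-protected SPTE pointer, akin to tdp_ptep_t. */
struct example {
	u64 __rcu *sptep;
};

static u64 example_read_spte(struct example *e)
{
	u64 *sptep;
	u64 spte;

	rcu_read_lock();
	/* Copy the __rcu pointer into a plain local before dereferencing it. */
	sptep = rcu_dereference(e->sptep);
	spte = READ_ONCE(*sptep);
	rcu_read_unlock();

	return spte;
}
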
Reported-by: kernel test robot <lkp@...el.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202410121644.Eq7zRGPO-lkp@intel.com
Co-developed-by: Yan Zhao <yan.y.zhao@...el.com>
Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
 arch/x86/kvm/mmu/spte.h    | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 8496a2abbde2..ef322f972948 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -267,9 +267,9 @@ static inline struct kvm_mmu_page *root_to_sp(hpa_t root)
 	return spte_to_child_sp(root);
 }
 
-static inline bool is_mirror_sptep(u64 *sptep)
+static inline bool is_mirror_sptep(tdp_ptep_t sptep)
 {
-	return is_mirror_sp(sptep_to_sp(sptep));
+	return is_mirror_sp(sptep_to_sp(rcu_dereference(sptep)));
 }
 
 static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b0e1c4cb3004..2741b6587ec9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,7 +511,7 @@ static int __must_check set_external_spte_present(struct kvm *kvm, tdp_ptep_t sp
 	 * page table has been modified. Use FROZEN_SPTE similar to
 	 * the zapping case.
 	 */
-	if (!try_cmpxchg64(sptep, &old_spte, FROZEN_SPTE))
+	if (!try_cmpxchg64(rcu_dereference(sptep), &old_spte, FROZEN_SPTE))
 		return -EBUSY;
 
 	/*
@@ -637,8 +637,6 @@ static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
 						  struct tdp_iter *iter,
 						  u64 new_spte)
 {
-	u64 *sptep = rcu_dereference(iter->sptep);
-
 	/*
 	 * The caller is responsible for ensuring the old SPTE is not a FROZEN
 	 * SPTE. KVM should never attempt to zap or manipulate a FROZEN SPTE,
@@ -662,6 +660,8 @@ static inline int __must_check __tdp_mmu_set_spte_atomic(struct kvm *kvm,
 		if (ret)
 			return ret;
 	} else {
+		u64 *sptep = rcu_dereference(iter->sptep);
+
 		/*
 		 * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs
 		 * and does not hold the mmu_lock. On failure, i.e. if a
--
2.43.2