Message-ID: <20250827000522.4022426-12-seanjc@google.com>
Date: Tue, 26 Aug 2025 17:05:21 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>, Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Michael Roth <michael.roth@....com>, Yan Zhao <yan.y.zhao@...el.com>,
Ira Weiny <ira.weiny@...el.com>, Vishal Annapurve <vannapurve@...gle.com>,
Rick Edgecombe <rick.p.edgecombe@...el.com>
Subject: [RFC PATCH 11/12] KVM: TDX: Track nr_premapped as an "unsigned long",
not an "atomic64_t"

Track the number of premapped pfns as a plain, non-atomic variable, as all
usage is guarded by slots_lock, and KVM now asserts as much.  Note,
slots_lock has always effectively guarded nr_premapped since TDX support
landed; the use of an atomic64_t was likely a leftover from development
that was never cleaned up.

Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/vmx/tdx.c | 8 ++++----
arch/x86/kvm/vmx/tdx.h | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
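
Note for readers (not part of the patch): the conversion leans on the
standard "plain variable guarded by a lock" pattern, with
lockdep_assert_held(&kvm->slots_lock) documenting the requirement at the
decrement site.  A rough, purely illustrative userspace analogue of that
pattern (hypothetical names; a pthread mutex standing in for slots_lock
and assert() standing in for lockdep) might look like:

	#include <assert.h>
	#include <errno.h>
	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t slots_lock = PTHREAD_MUTEX_INITIALIZER;
	static unsigned long nr_premapped;	/* guarded by slots_lock */

	/*
	 * Rough stand-in for lockdep_assert_held(): only verifies that the
	 * mutex is held by someone, which is weaker than lockdep's check
	 * that the *current* thread holds it.
	 */
	static void assert_slots_lock_held(void)
	{
		assert(pthread_mutex_trylock(&slots_lock) == EBUSY);
	}

	static void premap_record(void)
	{
		assert_slots_lock_held();
		nr_premapped++;		/* no atomics needed under the lock */
	}

	static void premap_consume(void)
	{
		assert_slots_lock_held();
		assert(nr_premapped > 0);
		nr_premapped--;
	}

	int main(void)
	{
		pthread_mutex_lock(&slots_lock);
		premap_record();
		premap_consume();
		pthread_mutex_unlock(&slots_lock);
		printf("nr_premapped = %lu\n", nr_premapped);
		return 0;
	}

Compile with "cc -pthread"; the point is simply that a non-atomic counter
is correct so long as every increment, decrement, and read happens with
the same lock held.
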
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 27941defb62e..5d2bb27f22da 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1639,7 +1639,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
return -EIO;
- atomic64_inc(&kvm_tdx->nr_premapped);
+ kvm_tdx->nr_premapped++;
return 0;
}
@@ -1771,7 +1771,7 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level)) {
lockdep_assert_held(&kvm->slots_lock);
- if (KVM_BUG_ON(atomic64_dec_return(&kvm_tdx->nr_premapped) < 0, kvm))
+ if (KVM_BUG_ON(!kvm_tdx->nr_premapped--, kvm))
return -EIO;
return 0;
@@ -2846,7 +2846,7 @@ static int tdx_td_finalize(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
* Pages are pending for KVM_TDX_INIT_MEM_REGION to issue
* TDH.MEM.PAGE.ADD().
*/
- if (atomic64_read(&kvm_tdx->nr_premapped))
+ if (kvm_tdx->nr_premapped)
return -EINVAL;
cmd->hw_error = tdh_mr_finalize(&kvm_tdx->td);
@@ -3160,7 +3160,7 @@ static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
goto out;
}
- KVM_BUG_ON(atomic64_dec_return(&kvm_tdx->nr_premapped) < 0, kvm);
+ KVM_BUG_ON(!kvm_tdx->nr_premapped--, kvm);
if (arg->flags & KVM_TDX_MEASURE_MEMORY_REGION) {
for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index ca39a9391db1..04ba9ea3e0ba 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -37,7 +37,7 @@ struct kvm_tdx {
struct tdx_td td;
/* For KVM_TDX_INIT_MEM_REGION. */
- atomic64_t nr_premapped;
+ unsigned long nr_premapped;
/*
* Prevent vCPUs from TD entry to ensure SEPT zap related SEAMCALLs do
--
2.51.0.268.g9569e192d0-goog