Message-ID: <20230815213533.548732-3-seanjc@google.com>
Date: Tue, 15 Aug 2023 14:35:25 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Joerg Roedel <joro@...tes.org>
Cc: kvm@...r.kernel.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org, Maxim Levitsky <mlevitsk@...hat.com>
Subject: [PATCH 02/10] KVM: SVM: Use AVIC_HPA_MASK when initializing vCPU's
Physical ID entry
Use AVIC_HPA_MASK instead of AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK when
initializing a vCPU's Physical ID table entry; the two masks are identical.
Keep both #defines for now, along with a few new static asserts. A future
change will clean up the entire mess (spoiler alert, the masks are
pointless).
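
As a quick illustration for reviewers (not part of the patch): both mask
expressions reduce to the backing page address bits, 51:12. A minimal
userspace sketch, with the shift-based BACKING_PAGE_MASK below assumed to
mirror the kernel's definition:

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed to mirror AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK. */
  #define BACKING_PAGE_MASK  (0xFFFFFFFFFFULL << 12)
  /* Same expression as AVIC_HPA_MASK. */
  #define HPA_MASK           (~((0xFFFULL << 52) | 0xFFF))

  int main(void)
  {
      printf("BACKING_PAGE_MASK = %#llx\n", (unsigned long long)BACKING_PAGE_MASK);
      printf("HPA_MASK          = %#llx\n", (unsigned long long)HPA_MASK);
      /* Both print 0xffffffffff000, i.e. GENMASK_ULL(51, 12). */
      return 0;
  }
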
Opportunistically move the bitwise-OR of AVIC_PHYSICAL_ID_ENTRY_VALID_MASK
outside of the call to __sme_set(), again to pave the way for code
deduplication. __sme_set() is purely additive, i.e. ORing in the valid
bit before or after the C-bit does not change the end result.
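
To make that concrete, a standalone sketch (sme_me_mask, the C-bit position,
the valid bit, and the address below are illustrative stand-ins; __sme_set()
is modeled on its upstream form, a plain bitwise OR of sme_me_mask):

  #include <assert.h>
  #include <stdint.h>

  /* Modeled on the kernel's __sme_set(): purely additive, a single OR. */
  static uint64_t sme_me_mask = 1ULL << 47;   /* example C-bit */
  #define __sme_set(x)       ((x) | sme_me_mask)

  #define BACKING_PAGE_MASK  (0xFFFFFFFFFFULL << 12)
  #define VALID_MASK         (1ULL << 63)     /* stand-in for the VALID bit */

  int main(void)
  {
      uint64_t hpa = 0x123456000ULL;          /* arbitrary page-aligned HPA */

      /* ORing in the valid bit before vs. after __sme_set(): same result. */
      uint64_t before = __sme_set((hpa & BACKING_PAGE_MASK) | VALID_MASK);
      uint64_t after  = __sme_set(hpa & BACKING_PAGE_MASK) | VALID_MASK;

      assert(before == after);
      return 0;
  }
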
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/include/asm/svm.h | 2 ++
arch/x86/kvm/svm/avic.c | 5 ++---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 1e70600e84f7..609c9b596399 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -285,6 +285,8 @@ static_assert((AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == AVIC_MAX_
static_assert((X2AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AVIC_MAX_PHYSICAL_ID);
#define AVIC_HPA_MASK ~((0xFFFULL << 52) | 0xFFF)
+static_assert(AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK == AVIC_HPA_MASK);
+static_assert(AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK == GENMASK_ULL(51, 12));
#define SVM_SEV_FEAT_DEBUG_SWAP BIT(5)
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 7062164e4041..442c58ef8158 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -308,9 +308,8 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
if (!entry)
return -EINVAL;
- new_entry = __sme_set((page_to_phys(svm->avic_backing_page) &
- AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
- AVIC_PHYSICAL_ID_ENTRY_VALID_MASK);
+ new_entry = __sme_set(page_to_phys(svm->avic_backing_page) & AVIC_HPA_MASK) |
+ AVIC_PHYSICAL_ID_ENTRY_VALID_MASK;
WRITE_ONCE(*entry, new_entry);
svm->avic_physical_id_cache = entry;
--
2.41.0.694.ge786442a9b-goog