Message-Id: <20211207095039.53166-5-jiangshanlai@gmail.com>
Date: Tue, 7 Dec 2021 17:50:39 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Lai Jiangshan <laijs@...ux.alibaba.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>
Subject: [PATCH 4/4] KVM: X86: Only get rflags when needed in permission_fault()
From: Lai Jiangshan <laijs@...ux.alibaba.com>
In some cases, permission_fault() doesn't need to read rflags for the
SMAP check.  For example: the access is a user-mode access, the access
already triggers another permission fault, SMAP is not enabled, the
access is an implicit supervisor access, or the page table is a nested
TDP page table.
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
arch/x86/kvm/mmu.h | 34 ++++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 0cb2c52377c8..70ab6e392f18 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -252,8 +252,6 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
unsigned pte_access, unsigned pte_pkey,
unsigned pfec)
{
- unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
-
/*
* If explicit supervisor accesses, SMAP is disabled
* if EFLAGS.AC = 1.
@@ -261,22 +259,34 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
* If implicit supervisor accesses, SMAP can not be disabled
* regardless of the value EFLAGS.AC.
*
- * SMAP works on supervisor accesses only, and not_smap can
+ * SMAP works on supervisor accesses only, and SMAP checking bit can
* be set or not set when user access with neither has any bearing
* on the result.
*
- * This computes explicit_access && (rflags & X86_EFLAGS_AC), leaving
- * the result in X86_EFLAGS_AC. We then insert it in place of
- * the PFERR_RSVD_MASK bit; this bit will always be zero in pfec,
- * but it will be one in index if SMAP checks are being overridden.
- * It is important to keep this branchless.
+ * We put the SMAP checking bit in place of the PFERR_RSVD_MASK bit;
+ * this bit will always be zero in pfec, but it will be one in index
+ * if SMAP checks are being disabled.
*/
- u32 not_smap = (rflags & X86_EFLAGS_AC) & vcpu->arch.explicit_access;
- int index = (pfec >> 1) +
- (not_smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
- bool fault = (mmu->permissions[index] >> pte_access) & 1;
+ u32 fault = (mmu->permissions[pfec >> 1] >> pte_access) & 1;
+ int index = (pfec + PFERR_RSVD_MASK) >> 1;
+ u32 fault_not_smap = (mmu->permissions[index] >> pte_access) & 1;
u32 errcode = PFERR_PRESENT_MASK;
+ /*
+ * fault fault_not_smap
+ * 0 0 not fault here
+ * 0 1 impossible combination
+ * 1 0 check if implicit access or EFLAGS.AC
+ * 1 1 fault with non-SMAP permission fault
+ *
+ * It is common fault == fault_not_smap, and they are always
+ * equivalent when SMAP is not enabled.
+ */
+ if (unlikely(fault & ~fault_not_smap & vcpu->arch.explicit_access)) {
+ unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
+ fault = (rflags ^ X86_EFLAGS_AC) >> X86_EFLAGS_AC_BIT;
+ }
+
WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
if (unlikely(mmu->pkru_mask)) {
u32 pkru_bits, offset;
--
2.19.1.6.gb485710b