Message-Id: <20201014021157.18022-5-chenyi.qiang@intel.com>
Date: Wed, 14 Oct 2020 10:11:53 +0800
From: Chenyi Qiang <chenyi.qiang@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Xiaoyao Li <xiaoyao.li@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [RFC v2 4/7] KVM: MMU: Refactor pkr_mask to cache condition
The pkr_mask bitmap currently indicates whether protection key checks are
needed for user pages. It is indexed by page fault error code bits [4:1],
with PFEC.RSVD replaced by ACC_USER_MASK from the page tables. Refactor it
to go back to indexing by the real PFEC.RSVD bit, so that PKS and PKU can
share the same bitmap.
Signed-off-by: Chenyi Qiang <chenyi.qiang@...el.com>
---
arch/x86/kvm/mmu.h | 10 ++++++----
arch/x86/kvm/mmu/mmu.c | 16 ++++++++++------
2 files changed, 16 insertions(+), 10 deletions(-)
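The indexing change can be illustrated with a minimal stand-alone sketch.
It assumes the usual x86 PFERR_*/ACC_USER_MASK bit positions, redefines the
constants locally so it compiles on its own, and uses made-up helper names
(build_pkr_mask(), pk_fault()) purely for illustration; it is not the kernel
code itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PFERR_WRITE_MASK (1u << 1)
#define PFERR_USER_MASK  (1u << 2)
#define PFERR_RSVD_MASK  (1u << 3)
#define PFERR_FETCH_MASK (1u << 4)
#define PT_USER_MASK     (1u << 2)   /* stands in for ACC_USER_MASK */

/*
 * Mirror of the refactored update_pkr_bitmask() logic: one 2-bit entry
 * per error-code combination, indexed by PFEC bits [4:1] as-is.
 */
static uint32_t build_pkr_mask(bool cr0_wp)
{
	uint32_t mask = 0;
	unsigned bit;

	for (bit = 0; bit < 16; ++bit) {
		unsigned pfec = bit << 1, pkey_bits;
		bool ff = pfec & PFERR_FETCH_MASK;
		bool uf = pfec & PFERR_USER_MASK;
		bool wf = pfec & PFERR_WRITE_MASK;
		bool rsvdf = pfec & PFERR_RSVD_MASK;

		/* No PK checks for instruction fetches or rsvd faults. */
		bool check_pkey = !ff && !rsvdf;
		/* Writes are PK-controlled for user accesses or CR0.WP=1. */
		bool check_write = check_pkey && wf && (uf || cr0_wp);

		pkey_bits = !!check_pkey;          /* AD blocks any access */
		pkey_bits |= (!!check_write) << 1; /* WD blocks writes     */

		mask |= (pkey_bits & 3) << pfec;
	}
	return mask;
}

/*
 * Mirror of the refactored permission_fault() lookup: the raw error code
 * (minus the present bit) is the index; whether the protection-key
 * register applies at all is decided by pte_access separately.
 */
static bool pk_fault(uint32_t pkr_mask, uint32_t pkru, unsigned pte_pkey,
		     unsigned pte_access, unsigned pfec)
{
	unsigned pkr_bits = (pte_access & PT_USER_MASK) ?
			    (pkru >> (pte_pkey * 2)) & 3 : 0;
	unsigned offset = pfec & ~1u;   /* clear present bit */

	return (pkr_bits & (pkr_mask >> offset)) != 0;
}

int main(void)
{
	uint32_t mask = build_pkr_mask(true);
	/* Write to a user page in pkey 1 with PKRU.WD[1] set -> PK fault. */
	unsigned pfec = PFERR_WRITE_MASK | PFERR_USER_MASK | 1; /* P|W|U */
	uint32_t pkru = 2u << (1 * 2);   /* WD set for protection key 1 */

	printf("PK fault: %d\n", pk_fault(mask, pkru, 1, PT_USER_MASK, pfec));
	return 0;
}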
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 306608248594..597b9159c10b 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -204,11 +204,13 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
* index of the protection domain, so pte_pkey * 2 is
* the index of the first bit for the domain.
*/
- pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+ if (pte_access & PT_USER_MASK)
+ pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+ else
+ pkr_bits = 0;
- /* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
- offset = (pfec & ~1) +
- ((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
+ /* clear present bit */
+ offset = (pfec & ~1);
pkr_bits &= mmu->pkr_mask >> offset;
errcode |= -pkr_bits & PFERR_PK_MASK;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 834a95cf49fa..f9814ab0596d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4716,21 +4716,25 @@ static void update_pkr_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
for (bit = 0; bit < ARRAY_SIZE(mmu->permissions); ++bit) {
unsigned pfec, pkey_bits;
- bool check_pkey, check_write, ff, uf, wf, pte_user;
+ bool check_pkey, check_write, ff, uf, wf, rsvdf;
pfec = bit << 1;
ff = pfec & PFERR_FETCH_MASK;
uf = pfec & PFERR_USER_MASK;
wf = pfec & PFERR_WRITE_MASK;
- /* PFEC.RSVD is replaced by ACC_USER_MASK. */
- pte_user = pfec & PFERR_RSVD_MASK;
+ /*
+ * PFERR_RSVD_MASK bit is not set if the
+ * access is subject to PK restrictions.
+ */
+ rsvdf = pfec & PFERR_RSVD_MASK;
/*
- * Only need to check the access which is not an
- * instruction fetch and is to a user page.
+ * Only need to check the access which is not an
+ * instruction fetch and is not a rsvd fault.
*/
- check_pkey = (!ff && pte_user);
+ check_pkey = (!ff && !rsvdf);
+
/*
* write access is controlled by PKRU if it is a
* user access or CR0.WP = 1.
--
2.17.1