Message-Id: <20200807084841.7112-5-chenyi.qiang@intel.com>
Date: Fri, 7 Aug 2020 16:48:38 +0800
From: Chenyi Qiang <chenyi.qiang@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Xiaoyao Li <xiaoyao.li@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [RFC 4/7] KVM: MMU: Refactor pkr_mask to cache condition
The pkr_mask bitmap currently indicates whether protection key checks are
needed for accesses to user pages. It is indexed by page fault error code
bits [4:1], with PFEC.RSVD replaced by ACC_USER_MASK from the page tables.
Refactor it to use PFEC.RSVD directly; after that, PKS and PKU can share
the same bitmap.
Signed-off-by: Chenyi Qiang <chenyi.qiang@...el.com>
---
arch/x86/kvm/mmu.h | 10 ++++++----
arch/x86/kvm/mmu/mmu.c | 16 ++++++++++------
2 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 0c2fdf0abf22..7fb4c63d5704 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -202,11 +202,13 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
* index of the protection domain, so pte_pkey * 2 is
* the index of the first bit for the domain.
*/
- pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+ if (pte_access & PT_USER_MASK)
+ pkr_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;
+ else
+ pkr_bits = 0;
- /* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */
- offset = (pfec & ~1) +
- ((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT - PT_USER_SHIFT));
+ /* clear present bit */
+ offset = (pfec & ~1);
pkr_bits &= mmu->pkr_mask >> offset;
errcode |= -pkr_bits & PFERR_PK_MASK;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 481442f5e27a..333b4da739f8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4737,21 +4737,25 @@ static void update_pkr_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
for (bit = 0; bit < ARRAY_SIZE(mmu->permissions); ++bit) {
unsigned pfec, pkey_bits;
- bool check_pkey, check_write, ff, uf, wf, pte_user;
+ bool check_pkey, check_write, ff, uf, wf, rsvdf;
pfec = bit << 1;
ff = pfec & PFERR_FETCH_MASK;
uf = pfec & PFERR_USER_MASK;
wf = pfec & PFERR_WRITE_MASK;
- /* PFEC.RSVD is replaced by ACC_USER_MASK. */
- pte_user = pfec & PFERR_RSVD_MASK;
+ /*
+ * PFERR_RSVD_MASK bit is not set if the
+ * access is subject to PK restrictions.
+ */
+ rsvdf = pfec & PFERR_RSVD_MASK;
/*
- * Only need to check the access which is not an
- * instruction fetch and is to a user page.
+ * Only need to check the access which is not an
+ * instruction fetch and is not an RSVD fault.
*/
- check_pkey = (!ff && pte_user);
+ check_pkey = (!ff && !rsvdf);
+
/*
* write access is controlled by PKRU if it is a
* user access or CR0.WP = 1.
--
2.17.1