Message-ID: <Yj0NOQOYEAG+Dz7+@google.com>
Date: Fri, 25 Mar 2022 00:30:49 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: SU Hang <darcy.sh@...group.com>
Cc: kvm@...r.kernel.org,
赖江山 <jiangshan.ljs@...group.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] KVM: X86: use EPT_VIOLATION_* instead of 0x7
On Mon, Mar 21, 2022, SU Hang wrote:
> Use symbolic value, EPT_VIOLATION_*, instead of 0x7
> in FNAME(walk_addr_generic)().
>
> Signed-off-by: SU Hang <darcy.sh@...group.com>
> ---
> arch/x86/kvm/mmu/paging_tmpl.h | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 7853c7ef13a1..2e2b1f7ccaca 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -531,7 +531,12 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
> vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_READ;
> if (fetch_fault)
> vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
> - vcpu->arch.exit_qualification |= (pte_access & 0x7) << 3;
Oof. I suspect this was done to avoid conditional branches, but more importantly...
> + if (pte_access & ACC_USER_MASK)
> + vcpu->arch.exit_qualification |= EPT_VIOLATION_READABLE;
> + if (pte_access & ACC_WRITE_MASK)
> + vcpu->arch.exit_qualification |= EPT_VIOLATION_WRITABLE;
> + if (pte_access & ACC_EXEC_MASK)
> + vcpu->arch.exit_qualification |= EPT_VIOLATION_EXECUTABLE;
This is subtly wrong. pte_access holds the raw RWX bits out of the EPTE;
walker->pte_access is the version that holds the ACC_*_MASK flags, after
running pte_access through FNAME(gpte_access).
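
To illustrate the mismatch, here's a standalone sketch (not kernel code; the
ACC_* values are from my reading of mmu.h, and ept_gpte_access() is just a
stand-in name for the EPT flavor of FNAME(gpte_access)):

/*
 * Standalone sketch, not kernel code: ACC_* values per my reading of mmu.h,
 * ept_gpte_access() is a stand-in for the EPT flavor of FNAME(gpte_access).
 */
#define EPTE_R		0x1	/* VMX_EPT_READABLE_MASK */
#define EPTE_W		0x2	/* VMX_EPT_WRITABLE_MASK */
#define EPTE_X		0x4	/* VMX_EPT_EXECUTABLE_MASK */

#define ACC_EXEC_MASK	0x1	/* != EPTE_X */
#define ACC_WRITE_MASK	0x2
#define ACC_USER_MASK	0x4	/* != EPTE_R */

static inline unsigned int ept_gpte_access(unsigned long long gpte)
{
	return ((gpte & EPTE_W) ? ACC_WRITE_MASK : 0) |
	       ((gpte & EPTE_X) ? ACC_EXEC_MASK : 0) |
	       ((gpte & EPTE_R) ? ACC_USER_MASK : 0);
}

/*
 * I.e. checking the raw bits against ACC_USER_MASK actually tests the EPTE's
 * executable bit, and ACC_EXEC_MASK tests its readable bit; only
 * walker->pte_access holds ACC_* flags.
 */
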
I'm definitely in favor of a cleanup. What about formalizing the shift and
using VMX_EPT_RWX_MASK both here and when generating the mask used in KVM's
EPT violation handler?
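
To spell out the bit layout the patch below relies on (again a standalone
sketch; the VMX_EPT_* values match my reading of arch/x86/include/asm/vmx.h):

/* Standalone sketch, not kernel code; VMX_EPT_* values per my reading of vmx.h. */
#define VMX_EPT_READABLE_MASK	0x1ull	/* EPTE bit 0 */
#define VMX_EPT_WRITABLE_MASK	0x2ull	/* EPTE bit 1 */
#define VMX_EPT_EXECUTABLE_MASK	0x4ull	/* EPTE bit 2 */
#define VMX_EPT_RWX_MASK	(VMX_EPT_READABLE_MASK | \
				 VMX_EPT_WRITABLE_MASK | \
				 VMX_EPT_EXECUTABLE_MASK)

#define EPT_VIOLATION_RWX_SHIFT	3
#define EPT_VIOLATION_RWX_MASK	(VMX_EPT_RWX_MASK << EPT_VIOLATION_RWX_SHIFT)

/* Same layout, shifted by three: EPTE bits 2:0 land in exit qual bits 5:3. */
_Static_assert(EPT_VIOLATION_RWX_MASK == (0x7ull << 3), "layout mismatch");

I.e. the same three bits, just reported at exit qualification bits 5:3, which
is all the open coded "(pte_access & 0x7) << 3" was doing.
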
From: Sean Christopherson <seanjc@...gle.com>
Date: Thu, 24 Mar 2022 17:23:18 -0700
Subject: [PATCH] KVM: x86/mmu: Derive EPT violation RWX bits from EPTE RWX
bits
Derive the mask of RWX bits reported on EPT violations from the mask of
RWX bits that are shoved into EPT entries; the layout is the same, just
shifted left by three. Use the new shift, and a near copy-paste of the
mask derivation, instead of fully open coding the conversion between the
EPT entry bits and the exit qualification when synthesizing a nested EPT
violation.
No functional change intended.
Cc: SU Hang <darcy.sh@...group.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/include/asm/vmx.h | 8 ++------
arch/x86/kvm/mmu/paging_tmpl.h | 8 +++++++-
arch/x86/kvm/vmx/vmx.c | 4 +---
3 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index a6789fe9b56e..6c23905d5465 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -543,17 +543,13 @@ enum vm_entry_failure_code {
#define EPT_VIOLATION_ACC_READ_BIT 0
#define EPT_VIOLATION_ACC_WRITE_BIT 1
#define EPT_VIOLATION_ACC_INSTR_BIT 2
-#define EPT_VIOLATION_READABLE_BIT 3
-#define EPT_VIOLATION_WRITABLE_BIT 4
-#define EPT_VIOLATION_EXECUTABLE_BIT 5
+#define EPT_VIOLATION_RWX_SHIFT 3
#define EPT_VIOLATION_GVA_VALIDATION_BIT 7
#define EPT_VIOLATION_GVA_TRANSLATED_BIT 8
#define EPT_VIOLATION_ACC_READ (1 << EPT_VIOLATION_ACC_READ_BIT)
#define EPT_VIOLATION_ACC_WRITE (1 << EPT_VIOLATION_ACC_WRITE_BIT)
#define EPT_VIOLATION_ACC_INSTR (1 << EPT_VIOLATION_ACC_INSTR_BIT)
-#define EPT_VIOLATION_READABLE (1 << EPT_VIOLATION_READABLE_BIT)
-#define EPT_VIOLATION_WRITABLE (1 << EPT_VIOLATION_WRITABLE_BIT)
-#define EPT_VIOLATION_EXECUTABLE (1 << EPT_VIOLATION_EXECUTABLE_BIT)
+#define EPT_VIOLATION_RWX_MASK (VMX_EPT_RWX_MASK << EPT_VIOLATION_RWX_SHIFT)
#define EPT_VIOLATION_GVA_VALIDATION (1 << EPT_VIOLATION_GVA_VALIDATION_BIT)
#define EPT_VIOLATION_GVA_TRANSLATED (1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 3c7f2d12b883..90a8e2bb1a3a 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -529,7 +529,13 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_READ;
if (fetch_fault)
vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
- vcpu->arch.exit_qualification |= (pte_access & 0x7) << 3;
+
+ /*
+ * Note, pte_access holds the raw RWX bits from the EPTE, not
+ * ACC_*_MASK flags!
+ */
+ vcpu->arch.exit_qualification |= (pte_access & VMX_EPT_RWX_MASK) <<
+ EPT_VIOLATION_RWX_SHIFT;
}
#endif
walker->fault.address = addr;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 84a7500cd80c..6c554d2a2038 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5410,9 +5410,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
/* ept page table entry is present? */
- error_code |= (exit_qualification &
- (EPT_VIOLATION_READABLE | EPT_VIOLATION_WRITABLE |
- EPT_VIOLATION_EXECUTABLE))
+ error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK)
? PFERR_PRESENT_MASK : 0;
error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
base-commit: 7cd469b5705bcfa65c3055f899c9e3e751053051
--