Open Source and information security mailing list archives
Date: Sun, 4 Mar 2018 20:07:20 +0800
From: Luwei Kang <luwei.kang@...el.com>
To: kvm@...r.kernel.org
Cc: tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, x86@...nel.org,
	pbonzini@...hat.com, rkrcmar@...hat.com, linux-kernel@...r.kernel.org,
	joro@...tes.org, Chao Peng <chao.p.peng@...ux.intel.com>,
	Luwei Kang <luwei.kang@...el.com>
Subject: [PATCH v5 10/11] KVM: x86: Set intercept for Intel PT MSRs

From: Chao Peng <chao.p.peng@...ux.intel.com>

Disable interception of the Intel PT MSRs only when Intel PT is enabled
in the guest. MSR_IA32_RTIT_CTL, however, will always be intercepted.

Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@...el.com>
---
 arch/x86/kvm/vmx.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 16943a8..0a55772 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -935,6 +935,7 @@ static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
 static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
 							  u32 msr, int type);
 static bool vmx_pt_supported(void);
+static void pt_set_intercept_for_msr(struct vcpu_vmx *vmx, bool flag);
 
 static DEFINE_PER_CPU(struct vmcs *, vmxarea);
 static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
@@ -3713,6 +3714,7 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, data);
 		vmx->pt_desc.guest.ctl = data;
+		pt_set_intercept_for_msr(vmx, !(data & RTIT_CTL_TRACEEN));
 		break;
 	case MSR_IA32_RTIT_STATUS:
 		if (!vmx_pt_supported() ||
@@ -5496,6 +5498,24 @@ static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu)
 	vmx->msr_bitmap_mode = mode;
 }
 
+static void pt_set_intercept_for_msr(struct vcpu_vmx *vmx, bool flag)
+{
+	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
+	u32 i;
+
+	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_STATUS,
+					MSR_TYPE_RW, flag);
+	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_OUTPUT_BASE,
+					MSR_TYPE_RW, flag);
+	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_OUTPUT_MASK,
+					MSR_TYPE_RW, flag);
+	vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_CR3_MATCH,
+					MSR_TYPE_RW, flag);
+	for (i = 0; i < vmx->pt_desc.range_cnt; i++)
+		vmx_set_intercept_for_msr(msr_bitmap, MSR_IA32_RTIT_ADDR0_A + i,
+					MSR_TYPE_RW, flag);
+}
+
 static bool vmx_get_enable_apicv(struct kvm_vcpu *vcpu)
 {
 	return enable_apicv;
-- 
1.8.3.1