Date:   Thu, 19 Mar 2020 06:13:11 +0000
From:   "Kang, Luwei" <luwei.kang@...el.com>
To:     "Christopherson, Sean J" <sean.j.christopherson@...el.com>
CC:     "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "vkuznets@...hat.com" <vkuznets@...hat.com>,
        "wanpengli@...cent.com" <wanpengli@...cent.com>,
        "jmattson@...gle.com" <jmattson@...gle.com>,
        "joro@...tes.org" <joro@...tes.org>
Subject: RE: [PATCH] KVM: VMX: Disable Intel PT before VM-entry

> > If the logical processor is operating with Intel PT enabled
> > (IA32_RTIT_CTL.TraceEn = 1) at the time of VM entry, the “load
> > IA32_RTIT_CTL” VM-entry control must be 0 (SDM 26.2.1.1).
> >
> > Disabling the host Intel PT first (clearing TraceEn) forces all the
> > buffered packets to be flushed out of the processor, which may raise
> > an Intel PT PMI. The host Intel PT will then be re-enabled in the
> > host Intel PT PMI handler.
> >
> > handle_pmi_common()
> >     -> intel_pt_interrupt()
> >             -> pt_config_start()
> 
> IIUC, this is only possible when PT "plays nice" with VMX, correct?
> Otherwise pt->vmx_on will be true and pt_config_start() would skip the
> WRMSR.

Yes. pt_config_start() will skip the WRMSR if the hardware doesn't support keeping PT enabled across VMX operation.
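
For reference, the interlock on the host side looks roughly like the sketch below. It is simplified and illustrative only, not the exact upstream body in arch/x86/events/intel/pt.c: as I understand it, KVM calls intel_pt_handle_vmx() when it turns VMX operation on or off on a CPU, and the PT driver records that in the per-CPU vmx_on flag that pt_config_start() checks (as quoted in the diff below).

/*
 * Simplified sketch of the perf/KVM interlock, for illustration only.
 */
void intel_pt_handle_vmx(int on)
{
	struct pt *pt = this_cpu_ptr(&pt_ctx);
	unsigned long flags;

	/* Hardware that keeps PT usable across VMX needs no interlock. */
	if (pt_pmu.vmx)
		return;

	/*
	 * Record that this CPU is entering/leaving VMX operation, so that
	 * pt_config_start() knows not to touch MSR_IA32_RTIT_CTL and
	 * instead flags the trace buffer as PERF_AUX_FLAG_PARTIAL.
	 */
	local_irq_save(flags);
	WRITE_ONCE(pt->vmx_on, on);
	local_irq_restore(flags);
}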

> 
> And IPT PMI must be delivered via NMI (though maybe they're always
> delivered via NMI?).

Yes, IPT PMI is an NMI in the host.

> 
> In any case, redoing WRMSR doesn't seem safe, and it certainly isn't
> performant, e.g. what prevents the second WRMSR from triggering a second
> IPT PMI?

Twice is enough. There is probably a lot of PT trace data not yet flushed out, so a PMI may fire right after the first WRMSR. Very little trace data is generated between the end of the PMI handler and the second WRMSR, so the second write should not trigger another PMI.
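
Spelled out with the race annotated (a hypothetical helper purely for illustration; the patch below open-codes the same sequence in pt_guest_enter()):

static void pt_disable_host_trace(void)
{
	u64 rtit_ctl;

	/*
	 * Clearing TraceEn flushes the buffered packets out of the
	 * processor.  The flush may raise an Intel PT PMI, which is
	 * delivered as an NMI and can land right here, re-enabling
	 * TraceEn via pt_config_start().
	 */
	wrmsrl(MSR_IA32_RTIT_CTL, 0);

	/* If the PMI handler re-armed tracing, the read-back is non-zero. */
	rdmsrl(MSR_IA32_RTIT_CTL, rtit_ctl);

	/*
	 * Clear TraceEn once more.  Almost no trace data accumulates
	 * between the end of the PMI handler and this write, so a second
	 * PMI is not expected.
	 */
	if (rtit_ctl)
		wrmsrl(MSR_IA32_RTIT_CTL, 0);
}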

> 
> pt_guest_enter() is called after the switch to the vCPU has already been
> recorded, can't this be handled in the IPT code, e.g. something like this?
> 
> diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
> index 1db7a51d9792..e38ddae9f0d1 100644
> --- a/arch/x86/events/intel/pt.c
> +++ b/arch/x86/events/intel/pt.c
> @@ -405,7 +405,7 @@ static void pt_config_start(struct perf_event *event)
>         ctl |= RTIT_CTL_TRACEEN;
>         if (READ_ONCE(pt->vmx_on))
>                 perf_aux_output_flag(&pt->handle, PERF_AUX_FLAG_PARTIAL);
> -       else
> +       else if (!(current->flags & PF_VCPU))
>                 wrmsrl(MSR_IA32_RTIT_CTL, ctl);
> 
>         WRITE_ONCE(event->hw.config, ctl);

Thanks. But I am afraid the host perf side won't like any virtualization-specific changes. I will try this.
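
Applied on top of pt_config_start(), the suggestion would read roughly like this (untested sketch only; as I understand it, PF_VCPU is already set on the guest-enter path before vmx_vcpu_run(), which is why pt_guest_enter() runs after the switch to the vCPU has been recorded):

static void pt_config_start(struct perf_event *event)
{
	struct pt *pt = this_cpu_ptr(&pt_ctx);
	u64 ctl = event->hw.config;

	ctl |= RTIT_CTL_TRACEEN;
	if (READ_ONCE(pt->vmx_on))
		perf_aux_output_flag(&pt->handle, PERF_AUX_FLAG_PARTIAL);
	else if (!(current->flags & PF_VCPU))
		wrmsrl(MSR_IA32_RTIT_CTL, ctl); /* don't re-arm the host trace before VM-entry */

	WRITE_ONCE(event->hw.config, ctl);
}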

Luwei Kang

> 
> > This patch disables Intel PT twice to make sure Intel PT is
> > disabled before VM-entry.
> >
> > Signed-off-by: Luwei Kang <luwei.kang@...el.com>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 26f8f31..d936a91 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -1095,6 +1095,8 @@ static inline void pt_save_msr(struct pt_ctx *ctx, u32 addr_range)
> >
> >  static void pt_guest_enter(struct vcpu_vmx *vmx)
> >  {
> > +	u64 rtit_ctl;
> > +
> >  	if (pt_mode == PT_MODE_SYSTEM)
> >  		return;
> >
> > @@ -1103,8 +1105,14 @@ static void pt_guest_enter(struct vcpu_vmx *vmx)
> >  	 * Save host state before VM entry.
> >  	 */
> >  	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host.ctl);
> > -	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
> > +	if (vmx->pt_desc.host.ctl & RTIT_CTL_TRACEEN) {
> >  		wrmsrl(MSR_IA32_RTIT_CTL, 0);
> > +		rdmsrl(MSR_IA32_RTIT_CTL, rtit_ctl);
> > +		if (rtit_ctl)
> > +			wrmsrl(MSR_IA32_RTIT_CTL, 0);
> > +	}
> > +
> > +	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
> >  		pt_save_msr(&vmx->pt_desc.host, vmx->pt_desc.addr_range);
> >  		pt_load_msr(&vmx->pt_desc.guest, vmx->pt_desc.addr_range);
> >  	}
> > --
> > 1.8.3.1
> >
