Date:   Tue, 16 Jul 2019 13:03:58 -0300
From:   Eduardo Habkost <ehabkost@...hat.com>
To:     Tao Xu <tao3.xu@...el.com>
Cc:     pbonzini@...hat.com, rkrcmar@...hat.com, corbet@....net,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
        sean.j.christopherson@...el.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, fenghua.yu@...el.com,
        xiaoyao.li@...ux.intel.com, jingqi.liu@...el.com
Subject: Re: [PATCH v7 2/3] KVM: vmx: Emulate MSR IA32_UMWAIT_CONTROL

On Fri, Jul 12, 2019 at 04:29:06PM +0800, Tao Xu wrote:
> The UMWAIT and TPAUSE instructions use IA32_UMWAIT_CONTROL at MSR index
> E1H to determine the maximum time, in TSC quanta, that the processor can
> reside in either C0.1 or C0.2.
> 
> This patch emulates MSR IA32_UMWAIT_CONTROL in the guest and keeps the
> guest and host values of IA32_UMWAIT_CONTROL separate. The variable
> umwait_control_cached in arch/x86/power/umwait.c caches the host MSR
> value, so this patch uses it to avoid frequent rdmsr of
> IA32_UMWAIT_CONTROL.
> 
> Co-developed-by: Jingqi Liu <jingqi.liu@...el.com>
> Signed-off-by: Jingqi Liu <jingqi.liu@...el.com>
> Signed-off-by: Tao Xu <tao3.xu@...el.com>
> ---
[...]
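
For context (the actual vmx_get_msr()/vmx_set_msr() hunks are trimmed
above), the guest-side emulation described in the commit message boils
down to something like the sketch below.  This is illustrative only, not
the patch itself; it reuses the vmx->msr_ia32_umwait_control field from
the hunk quoted further down and otherwise assumes the usual KVM MSR
switch-case pattern.

	/*
	 * Illustrative sketch only -- the patch's real get/set hunks are
	 * elided above.  Reads return the per-vCPU shadow value, writes
	 * validate and store it; the host value stays in
	 * umwait_control_cached and is restored via the atomic-switch list.
	 */

	/* In vmx_get_msr(): report the guest's own value, not the host MSR. */
	case MSR_IA32_UMWAIT_CONTROL:
		if (!msr_info->host_initiated && !vmx_has_waitpkg(vmx))
			return 1;
		msr_info->data = vmx->msr_ia32_umwait_control;
		break;

	/* In vmx_set_msr(): reject writes if WAITPKG is not exposed or a
	 * reserved bit is set, otherwise update the per-vCPU shadow value. */
	case MSR_IA32_UMWAIT_CONTROL:
		if (!msr_info->host_initiated && !vmx_has_waitpkg(vmx))
			return 1;
		if (data & BIT_ULL(1))	/* bit 1 is reserved */
			return 1;
		vmx->msr_ia32_umwait_control = data;
		break;
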
> +static void atomic_switch_umwait_control_msr(struct vcpu_vmx *vmx)
> +{
> +	if (!vmx_has_waitpkg(vmx))
> +		return;
> +
> +	if (vmx->msr_ia32_umwait_control != umwait_control_cached)
> +		add_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL,
> +			vmx->msr_ia32_umwait_control,
> +			umwait_control_cached, false);

How exactly do we ensure NR_AUTOLOAD_MSRS (8) is still large enough?

I see 3 existing add_atomic_switch_msr() calls, but the one at
atomic_switch_perf_msrs() is in a loop.  Are we absolutely sure
that perf_guest_get_msrs() will never return more than 5 MSRs?
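
(Not something this patch has to fix, but for illustration: a guard along
these lines inside add_atomic_switch_msr() would at least make an overflow
of the autoload area impossible to miss.  Just a sketch -- the helper name
is made up, and the msr_autoload bookkeeping fields are assumed from the
existing vmx.c code.)

	/*
	 * Sketch only, not part of the patch under review: warn instead of
	 * silently running past the fixed-size autoload area if yet another
	 * caller (or a larger perf_guest_get_msrs() list) ever pushes the
	 * entry count up to NR_AUTOLOAD_MSRS.
	 */
	static bool vmx_autoload_slot_available(struct vcpu_vmx *vmx)
	{
		struct msr_autoload *m = &vmx->msr_autoload;

		return !WARN_ON_ONCE(m->guest.nr >= NR_AUTOLOAD_MSRS ||
				     m->host.nr >= NR_AUTOLOAD_MSRS);
	}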


> +	else
> +		clear_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL);
> +}
> +
>  static void vmx_arm_hv_timer(struct vcpu_vmx *vmx, u32 val)
>  {
>  	vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, val);
[...]


-- 
Eduardo
