Message-ID: <alpine.DEB.2.21.1904051400230.1802@nanos.tec.linutronix.de>
Date:   Fri, 5 Apr 2019 14:30:56 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Fenghua Yu <fenghua.yu@...el.com>
cc:     Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        H Peter Anvin <hpa@...or.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Ashok Raj <ashok.raj@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Kalle Valo <kvalo@...eaurora.org>,
        Xiaoyao Li <xiaoyao.li@...el.com>,
        Michael Chan <michael.chan@...adcom.com>,
        Ravi V Shankar <ravi.v.shankar@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        x86 <x86@...nel.org>, linux-wireless@...r.kernel.org,
        netdev@...r.kernel.org, kvm@...r.kernel.org,
        Xiaoyao Li <xiaoyao.li@...ux.intel.com>
Subject: Re: [PATCH v6 12/20] kvm/vmx: Emulate MSR TEST_CTL

On Wed, 3 Apr 2019, Fenghua Yu wrote:
> @@ -1663,6 +1663,12 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	u32 index;
>  
>  	switch (msr_info->index) {
> +	case MSR_TEST_CTL:
> +		if (!msr_info->host_initiated &&
> +		    !(vcpu->arch.core_capability & CORE_CAP_SPLIT_LOCK_DETECT))

Errm? If the MSR_TEST_CTL is exposed to the guest via CPUID then the
rdmsr() in the guest is not supposed to fail just because
CORE_CAP_SPLIT_LOCK_DETECT is not set.

vmx->msr_test_ctl holds the proper value, which is either 0 or
TEST_CTL_ENABLE_SPLIT_LOCK_DETECT until more bits are supported.

So the check needs to be guest_cpuid_has(X86_FEATURE_CORE_CAPABILITY).
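
Something like this (untested, assuming the usual
guest_cpuid_has(vcpu, ...) form):

	case MSR_TEST_CTL:
		if (!msr_info->host_initiated &&
		    !guest_cpuid_has(vcpu, X86_FEATURE_CORE_CAPABILITY))
			return 1;
		msr_info->data = vmx->msr_test_ctl;
		break;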

> +			return 1;
> +		msr_info->data = vmx->msr_test_ctl;
> +		break;
>  #ifdef CONFIG_X86_64
>  	case MSR_FS_BASE:
>  		msr_info->data = vmcs_readl(GUEST_FS_BASE);
> @@ -1797,6 +1803,14 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	u32 index;
>  
>  	switch (msr_index) {
> +	case MSR_TEST_CTL:
> +		if (!(vcpu->arch.core_capability & CORE_CAP_SPLIT_LOCK_DETECT))
> +			return 1;

Again, this wants to be a check for the availability of the MSR in the
guest cpuid, not for the CORE_CAP_SPLIT_LOCK_DETECT magic bit.

> +
> +		if (data & ~TEST_CTL_ENABLE_SPLIT_LOCK_DETECT)
> +			return 1;

and this one wants to be:

		if (data & ~vmx->msr_test_ctl_mask)

so additional bits can be enabled later on at exactly one point in the
code, i.e. during vm init.
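
i.e. roughly (untested; assuming msr_test_ctl_mask holds the valid bits
and is set up once at vm init, and keeping the host_initiated exemption
like on the read side):

	case MSR_TEST_CTL:
		if (!msr_info->host_initiated &&
		    !guest_cpuid_has(vcpu, X86_FEATURE_CORE_CAPABILITY))
			return 1;
		if (data & ~vmx->msr_test_ctl_mask)
			return 1;
		vmx->msr_test_ctl = data;
		break;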

> +		vmx->msr_test_ctl = data;
> +		break;
>  
> +static void atomic_switch_msr_test_ctl(struct vcpu_vmx *vmx)
> +{
> +	u64 host_msr_test_ctl;
> +
> +	/* if TEST_CTL MSR doesn't exist on the hardware, do nothing */
> +	if (rdmsrl_safe(MSR_TEST_CTL, &host_msr_test_ctl))
> +		return;

Yikes. So on hosts which do not have MSR_TEST_CTL this takes a fault on
every VM enter. The host knows whether it has this MSR or not.

Even if it exists there is no point in doing the rdmsr on every vmenter. The
value should be cached per cpu. It only changes when:

      1) #AC triggers in kernel space

      2) The sysfs knob is flipped

#1 It happens either _BEFORE_ or _AFTER_ atomic_switch_msr_test_ctl().  In
   both cases the cached value is correct in atomic_switch_msr_test_ctl().
   
   If it happens _AFTER_ atomic_switch_msr_test_ctl() then the VMEXIT will
   restore the enabled bit, but there is nothing you can do about that.

#2 CANNOT happen in the context of vcpu_run().

So this wants to be:

   host_msr_test_ctl = this_cpu_read(msr_test_ctl_cache);

> +	if (host_msr_test_ctl == vmx->msr_test_ctl)
> +		clear_atomic_switch_msr(vmx, MSR_TEST_CTL);
> +	else
> +		add_atomic_switch_msr(vmx, MSR_TEST_CTL, vmx->msr_test_ctl,
> +				      host_msr_test_ctl, false);
> +}
> +
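
IOW, something like the below (sketch only; the per-cpu cache presumably
wants to live in the core split lock code with an accessor, and has to be
updated from the #AC handler and the sysfs knob, i.e. wherever the host
writes the MSR):

	static DEFINE_PER_CPU(u64, msr_test_ctl_cache);

	static void atomic_switch_msr_test_ctl(struct vcpu_vmx *vmx)
	{
		u64 host_msr_test_ctl = this_cpu_read(msr_test_ctl_cache);

		if (host_msr_test_ctl == vmx->msr_test_ctl)
			clear_atomic_switch_msr(vmx, MSR_TEST_CTL);
		else
			add_atomic_switch_msr(vmx, MSR_TEST_CTL, vmx->msr_test_ctl,
					      host_msr_test_ctl, false);
	}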

>  u64 kvm_get_core_capability(void)
>  {
> -	return 0;
> +	u64 data;
> +
> +	rdmsrl_safe(MSR_IA32_CORE_CAPABILITY, &data);

	if (!boot_cpu_has(X86_FEATURE_CORE_CAPABILITY))
		return 0;

> +	/* mask non-virtualizable functions */
> +	data &= CORE_CAP_SPLIT_LOCK_DETECT;

Why?

> +	/*
> +	 * There will be a list of FMS values that have split lock detection
> +	 * but lack the CORE CAPABILITY MSR. In this case, set
> +	 * CORE_CAP_SPLIT_LOCK_DETECT since we emulate MSR CORE_CAPABILITY.

That's completely wrong. X86_FEATURE_SPLIT_LOCK_DETECT is set when the
capability is enumerated in CPUID or it's set via the FMS quirk.

So:
	data = 0;
> +	if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
> +		data |= CORE_CAP_SPLIT_LOCK_DETECT;
> +
> +	return data;

is absolutely sufficient.
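
i.e. the whole function reduces to something like:

	u64 kvm_get_core_capability(void)
	{
		u64 data = 0;

		if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
			data |= CORE_CAP_SPLIT_LOCK_DETECT;

		return data;
	}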

Thanks,

	tglx
