Date:   Mon, 23 Mar 2020 18:02:16 +0100
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Xiaoyao Li <xiaoyao.li@...el.com>, Ingo Molnar <mingo@...hat.com>,
        Borislav Petkov <bp@...en8.de>, hpa@...or.com,
        Paolo Bonzini <pbonzini@...hat.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        kvm@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org
Cc:     Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Arvind Sankar <nivedita@...m.mit.edu>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Jim Mattson <jmattson@...gle.com>,
        Xiaoyao Li <xiaoyao.li@...el.com>
Subject: Re: [PATCH v5 1/9] x86/split_lock: Rework the initialization flow of split lock detection

Xiaoyao Li <xiaoyao.li@...el.com> writes:

> The current initialization flow of split lock detection has the following issues:
> 1. It assumes the initial value of MSR_TEST_CTRL.SPLIT_LOCK_DETECT to be
>    zero. However, it's possible that BIOS/firmware has set it.

Ok.

> 2. The X86_FEATURE_SPLIT_LOCK_DETECT flag is unconditionally set even if
>    there is a virtualization flaw where FMS indicates the feature exists
>    while it actually isn't supported.
>
> 3. Because of #2, KVM cannot rely on the X86_FEATURE_SPLIT_LOCK_DETECT
>    flag to verify whether the feature really exists, so it cannot expose
>    it to the guest.

Sorry, this does not make any sense. KVM is the hypervisor, so it can
very well rely on the detect flag. Unless you are talking about nested
virt and a broken L1 hypervisor.

> To solve these issues, introduce a new sld_state, "sld_not_exist", as

The usual naming convention is sld_not_supported.

> the default value. It will be switched to another value if CORE_CAPABILITIES
> or FMS enumerates split lock detection.
>
> Only when sld_state != sld_not_exist does it go to the initialization flow.
>
> In the initialization flow, it explicitly accesses MSR_TEST_CTRL and the
> SPLIT_LOCK_DETECT bit to ensure there is no virtualization flaw, i.e., that
> split lock detection is really supported. In detail,
>  1. sld_off,   verify SPLIT_LOCK_DETECT bit can be cleared, and clear it;

That's not what the patch does. It writes with the bit cleared and the
only thing it checks is whether the wrmsrl fails or not. Verification is
something completely different.
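
Verifying would mean writing the bit and then reading it back to see that
it stuck. Roughly like this (untested sketch; sld_verify_bit() is just an
illustrative name, not something in the patch):

	/*
	 * Untested sketch: "verify" means write the bit and read it back,
	 * not just check whether the write faults.
	 */
	static bool sld_verify_bit(bool on)
	{
		u64 val;

		if (rdmsrl_safe(MSR_TEST_CTRL, &val))
			return false;

		if (on)
			val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
		else
			val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

		if (wrmsrl_safe(MSR_TEST_CTRL, val))
			return false;

		if (rdmsrl_safe(MSR_TEST_CTRL, &val))
			return false;

		return !!(val & MSR_TEST_CTRL_SPLIT_LOCK_DETECT) == on;
	}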

>  2. sld_warn,  verify SPLIT_LOCK_DETECT bit can be cleared and set,
>                and set it;
>  3. sld_fatal, verify SPLIT_LOCK_DETECT bit can be set, and set it;
>
> Only when there is no MSR access failure can X86_FEATURE_SPLIT_LOCK_DETECT
> be set. So KVM can use X86_FEATURE_SPLIT_LOCK_DETECT to check the existence
> of the feature.

Again, this has nothing to do with KVM. 

>   * Processors which have self-snooping capability can handle conflicting
> @@ -585,7 +585,7 @@ static void init_intel_misc_features(struct cpuinfo_x86 *c)
>  	wrmsrl(MSR_MISC_FEATURES_ENABLES, msr);
>  }
>  
> -static void split_lock_init(void);
> +static void split_lock_init(struct cpuinfo_x86 *c);
>  
>  static void init_intel(struct cpuinfo_x86 *c)
>  {
> @@ -702,7 +702,8 @@ static void init_intel(struct cpuinfo_x86 *c)
>  	if (tsx_ctrl_state == TSX_CTRL_DISABLE)
>  		tsx_disable();
>  
> -	split_lock_init();
> +	if (sld_state != sld_not_exist)
> +		split_lock_init(c);

That conditional wants to be in split_lock_init() where it used to be.
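
I.e. keep the call site as a plain split_lock_init(c) and do the check
inside, roughly (untested):

	static void split_lock_init(struct cpuinfo_x86 *c)
	{
		if (sld_state == sld_not_exist)
			return;

		/* ... probe/enable logic from the patch goes here ... */
	}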

> +/*
> + * Use the "safe" versions of rdmsr/wrmsr here because although code
> + * checks CPUID and MSR bits to make sure the TEST_CTRL MSR should
> + * exist, there may be glitches in virtualization that leave a guest
> + * with an incorrect view of real h/w capabilities.
> + * If not msr_broken, then it needn't use "safe" version at runtime.
> + */
> +static void split_lock_init(struct cpuinfo_x86 *c)
>  {
> -	if (sld_state == sld_off)
> -		return;
> +	u64 test_ctrl_val;
>  
> -	if (__sld_msr_set(true))
> -		return;
> +	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
> +		goto msr_broken;
> +
> +	switch (sld_state) {
> +	case sld_off:
> +		if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val & ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
> +			goto msr_broken;
> +		break;
> +	case sld_warn:
> +		if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val & ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
> +			goto msr_broken;
> +		fallthrough;
> +	case sld_fatal:
> +		if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val | MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
> +			goto msr_broken;
> +		break;

This does not make any sense either. Why do it differently for warn
and fatal?
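
Computing the value once and doing a single write covers all three cases,
something like (untested):

	/* Untested sketch: one write handles off, warn and fatal alike */
	if (sld_state == sld_off)
		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

	if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val))
		goto msr_broken;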

> +	default:
> +		break;

If there is ever a state added, then default will just fall through and
possibly nobody notices because the compiler does not complain.
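
If the switch is kept, then the switch is over the enum, so dropping the
default and handling sld_not_exist explicitly lets -Wswitch catch any
newly added state. Untested sketch of the shape:

	switch (sld_state) {
	case sld_not_exist:
		/* Filtered out by the caller, nothing to do */
		break;
	case sld_off:
		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
		break;
	case sld_warn:
	case sld_fatal:
		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
		break;
	}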

> +	}
> +
> +	set_cpu_cap(c, X86_FEATURE_SPLIT_LOCK_DETECT);
> +	return;
>  
> +msr_broken:
>  	/*
>  	 * If this is anything other than the boot-cpu, you've done
>  	 * funny things and you get to keep whatever pieces.
>  	 */
> -	pr_warn("MSR fail -- disabled\n");
> +	pr_warn_once("MSR fail -- disabled\n");
>  	sld_state = sld_off;

So you run this on every CPU. What's the point? If the hypervisor is so
broken that the MSR works on CPU0 but not on CPU1 then this is probably
the least of your worries.
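
If the probe is wanted at all, doing it once on the boot CPU and merely
applying the cached result on the secondary CPUs would be sufficient.
Roughly (untested, reusing the existing __sld_msr_set() helper or an
equivalent):

	/* Untested sketch: probe once, APs just apply the cached state */
	static void split_lock_init(struct cpuinfo_x86 *c)
	{
		if (sld_state == sld_not_exist)
			return;

		if (c != &boot_cpu_data) {
			__sld_msr_set(sld_state != sld_off);
			return;
		}

		/* Boot CPU: do the rdmsrl_safe()/wrmsrl_safe() probe here */
	}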

Thanks,

        tglx
