Message-ID: <20200328163412.GJ8104@linux.intel.com>
Date:   Sat, 28 Mar 2020 09:34:12 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Xiaoyao Li <xiaoyao.li@...el.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        hpa@...or.com, x86@...nel.org, linux-kernel@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>, luto@...nel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Arvind Sankar <nivedita@...m.mit.edu>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Tony Luck <tony.luck@...el.com>
Subject: Re: [PATCH v7 2/2] x86/split_lock: Avoid runtime reads of the TEST_CTRL MSR

On Wed, Mar 25, 2020 at 11:09:24AM +0800, Xiaoyao Li wrote:
> In a context switch from a task that is detecting split locks
> to one that is not (or vice versa), we need to update the TEST_CTRL
> MSR. Currently this is done with the common sequence:
> 	read the MSR
> 	flip the bit
> 	write the MSR
> in order to avoid changing the value of any reserved bits in the MSR.
> 
> Cache the unused and reserved bits of the TEST_CTRL MSR (with the
> SPLIT_LOCK_DETECT bit cleared) during initialization, so we can avoid
> an expensive RDMSR instruction at context switch.
> 
> Suggested-by: Sean Christopherson <sean.j.christopherson@...el.com>
> Originally-by: Tony Luck <tony.luck@...el.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
> ---
>  arch/x86/kernel/cpu/intel.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
> index deb5c42c2089..1f414578899c 100644
> --- a/arch/x86/kernel/cpu/intel.c
> +++ b/arch/x86/kernel/cpu/intel.c
> @@ -45,6 +45,7 @@ enum split_lock_detect_state {
>   * split lock detect, unless there is a command line override.
>   */
>  static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
> +static u64 msr_test_ctrl_cache __ro_after_init;

What about using "msr_test_ctrl_base_value", or something along those lines?
"cache" doesn't make it clear that SPLIT_LOCK_DETECT is guaranteed to be
zero in this variable.
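
For illustration only (the exact name is of course up for discussion),
the declaration could carry a comment spelling out the invariant the
name is meant to convey:

	/*
	 * Raw value of MSR_TEST_CTRL with SPLIT_LOCK_DETECT guaranteed
	 * clear, i.e. the "base" that sld_update_msr() ORs the bit into.
	 */
	static u64 msr_test_ctrl_base_value __ro_after_init;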

>  
>  /*
>   * Processors which have self-snooping capability can handle conflicting
> @@ -1037,6 +1038,8 @@ static void __init split_lock_setup(void)
>  		break;
>  	}
>  
> +	rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);

If we're going to bother skipping the RDMSR when state=sld_off on the
command line, then it also makes sense to skip it if enabling fails,
i.e. move this below split_lock_verify_msr(true).
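
Something like this (untested sketch; note SPLIT_LOCK_DETECT would then
need to be masked off, since split_lock_verify_msr(true) leaves the bit
set on success):

	if (!split_lock_verify_msr(true)) {
		pr_info("MSR access failed: Disabled\n");
		return;
	}

	/* Cache the base value now that the MSR is known to be writable. */
	rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
	msr_test_ctrl_cache &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;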

> +
>  	if (!split_lock_verify_msr(true)) {
>  		pr_info("MSR access failed: Disabled\n");
>  		return;
> @@ -1053,14 +1056,10 @@ static void __init split_lock_setup(void)
>   */
>  static void sld_update_msr(bool on)
>  {
> -	u64 test_ctrl_val;
> -
> -	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
> +	u64 test_ctrl_val = msr_test_ctrl_cache;
>  
>  	if (on)
>  		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> -	else
> -		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>  
>  	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
>  }
> -- 
> 2.20.1
> 
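
For reference, the context-switch hook whose RDMSR this eliminates looks
roughly like the below in the current series (paraphrasing, exact details
may differ):

	/* Called from __switch_to_xtra() when TIF_SLD changes. */
	void switch_to_sld(unsigned long tifn)
	{
		sld_update_msr(!(tifn & _TIF_SLD));
	}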
