Date:   Mon, 3 Feb 2020 12:41:55 -0800
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     "Luck, Tony" <tony.luck@...el.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Mark D Rustad <mrustad@...il.com>,
        Arvind Sankar <nivedita@...m.mit.edu>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        "Yu, Fenghua" <fenghua.yu@...el.com>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        H Peter Anvin <hpa@...or.com>,
        "Raj, Ashok" <ashok.raj@...el.com>,
        "Shankar, Ravi V" <ravi.v.shankar@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>
Subject: Re: [PATCH v17] x86/split_lock: Enable split lock detection by kernel

On Sun, Jan 26, 2020 at 12:05:35PM -0800, Luck, Tony wrote:
> +/*
> + * Locking is not required at the moment because only bit 29 of this
> + * MSR is implemented, and locking would not prevent the operation of
> + * one thread from being immediately undone by the sibling thread.
> + * Use the "safe" versions of rdmsr/wrmsr here because, although the
> + * code checks CPUID and MSR bits to make sure the TEST_CTRL MSR should
> + * exist, there may be glitches in virtualization that leave a guest
> + * with an incorrect view of the real h/w capabilities.
> + */
> +static bool __sld_msr_set(bool on)
> +{
> +	u64 test_ctrl_val;
> +
> +	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
> +		return false;

How about caching the MSR value on a per-{cpu/core} basis at boot to avoid
the RDMSR when switching to/from a misbehaving task?  E.g. to avoid
penalizing well-behaved tasks any more than necessary.
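
Rough sketch of what I have in mind (completely untested, and the percpu
variable and helper names below are made up for illustration, not from the
patch):

#include <linux/percpu.h>
#include <asm/msr.h>

static DEFINE_PER_CPU(u64, msr_test_ctrl_cache);

/*
 * Snapshot MSR_TEST_CTRL once per CPU at boot so the context switch
 * path can flip the split lock bit without doing a RDMSR.
 */
static void sld_cache_msr(void)
{
	u64 val;

	if (!rdmsrl_safe(MSR_TEST_CTRL, &val))
		this_cpu_write(msr_test_ctrl_cache, val);
}

static void sld_update_msr(bool on)
{
	u64 val = this_cpu_read(msr_test_ctrl_cache);

	if (on)
		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

	this_cpu_write(msr_test_ctrl_cache, val);
	wrmsrl(MSR_TEST_CTRL, val);
}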

We've likely got bigger issues if MSR_TEST_CTRL is being written by BIOS
at runtime, even if the writes were limited to synchronous calls from the
kernel.

Probably makes sense to split the MSR's init sequence from its runtime
sequence, e.g. so the runtime path can use the unsafe wrmsrl() and an
unexpected #GP generates a WARN.
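
Something like this (again untested, just to illustrate the split; the
function names are mine, not the patch's):

/*
 * One-time probe: tolerate a missing/unwritable MSR, e.g. a hypervisor
 * that hides it despite the CPUID/MSR bits saying it should exist.
 */
static bool sld_msr_probe(void)
{
	u64 val;

	if (rdmsrl_safe(MSR_TEST_CTRL, &val))
		return false;

	/* Verify the MSR is writable without changing its value. */
	return !wrmsrl_safe(MSR_TEST_CTRL, val);
}

/* Runtime path: the MSR is known to exist, so let an unexpected #GP WARN. */
static void sld_msr_set(u64 val, bool on)
{
	if (on)
		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

	/* wrmsrl() is the unchecked variant; a #GP here triggers a WARN. */
	wrmsrl(MSR_TEST_CTRL, val);
}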

> +
> +	if (on)
> +		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +	else
> +		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +
> +	return !wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val);
> +}
