Message-ID: <439db928-6e92-8492-31d3-cdbe2bc6b9d4@intel.com>
Date: Wed, 4 Mar 2020 10:20:20 +0800
From: Xiaoyao Li <xiaoyao.li@...el.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
hpa@...or.com, Paolo Bonzini <pbonzini@...hat.com>,
Andy Lutomirski <luto@...nel.org>, tony.luck@...el.com,
peterz@...radead.org, fenghua.yu@...el.com, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/8] x86/split_lock: Ensure
X86_FEATURE_SPLIT_LOCK_DETECT means the existence of feature
On 3/4/2020 2:55 AM, Sean Christopherson wrote:
> On Thu, Feb 06, 2020 at 03:04:06PM +0800, Xiaoyao Li wrote:
>> When flag X86_FEATURE_SPLIT_LOCK_DETECT is set, it should ensure the
>> existence of MSR_TEST_CTRL and MSR_TEST_CTRL.SPLIT_LOCK_DETECT bit.
>
> The changelog confused me a bit. "When flag X86_FEATURE_SPLIT_LOCK_DETECT
> is set" makes it sound like the logic is being applied after the feature
> bit is set. Maybe something like:
>
> ```
> Verify MSR_TEST_CTRL.SPLIT_LOCK_DETECT can be toggled via WRMSR prior to
> setting the SPLIT_LOCK_DETECT feature bit so that runtime consumers,
> e.g. KVM, don't need to worry about WRMSR failure.
> ```
>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
>> ---
>> arch/x86/kernel/cpu/intel.c | 41 +++++++++++++++++++++----------------
>> 1 file changed, 23 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
>> index 2b3874a96bd4..49535ed81c22 100644
>> --- a/arch/x86/kernel/cpu/intel.c
>> +++ b/arch/x86/kernel/cpu/intel.c
>> @@ -702,7 +702,8 @@ static void init_intel(struct cpuinfo_x86 *c)
>> if (tsx_ctrl_state == TSX_CTRL_DISABLE)
>> tsx_disable();
>>
>> - split_lock_init();
>> + if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
>> + split_lock_init();
>> }
>>
>> #ifdef CONFIG_X86_32
>> @@ -986,9 +987,26 @@ static inline bool match_option(const char *arg, int arglen, const char *opt)
>>
>> static void __init split_lock_setup(void)
>> {
>> + u64 test_ctrl_val;
>> char arg[20];
>> int i, ret;
>> + /*
>> + * Use the "safe" versions of rdmsr/wrmsr here to ensure MSR_TEST_CTRL
>> + * and MSR_TEST_CTRL.SPLIT_LOCK_DETECT bit do exist. Because there may
>> + * be glitches in virtualization that leave a guest with an incorrect
>> + * view of real h/w capabilities.
>> + */
>> + if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
>> + return;
>> +
>> + if (wrmsrl_safe(MSR_TEST_CTRL,
>> + test_ctrl_val | MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
>> + return;
>> +
>> + if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val))
>> +		return;
>
> Probing the MSR should be skipped if SLD is disabled in sld_options, i.e.
> move this code (and setup_force_cpu_cap() etc...) down below the
> match_option() logic. The above would temporarily enable SLD even if the
> admin has explicitly disabled it, e.g. makes the kernel param useless for
> turning off the feature due to bugs.
>
> And with that, IMO failing any of RDMSR/WRMSR here warrants a pr_err().
> The CPU says it supports split lock and the admin hasn't explicitly turned
> it off, so failure to enable should be logged.
This code is not enabling split lock detection here; it is parsing the
kernel boot parameter "split_lock_detect".

If probing the MSR or the MSR bit fails, the CPU doesn't have the
X86_FEATURE_SPLIT_LOCK_DETECT feature. In that case, don't set the
feature flag; there is no need to parse "split_lock_detect", just return.

Then, per the change at the beginning of this patch, split_lock_init()
is called only when the X86_FEATURE_SPLIT_LOCK_DETECT bit is set.
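
Something like the following untested sketch is the flow I have in mind,
using only the helpers already present in this patch; the pr_err() you
suggest for the CPUID-says-yes-but-MSR-says-no case could be added at
the early returns:

```
/* Untested sketch of the intended probe-then-parse flow. */
static void __init split_lock_setup(void)
{
	u64 test_ctrl_val;

	/*
	 * Feature detection, not enablement: if reading the MSR or
	 * toggling the bit fails, the CPU (or a buggy hypervisor)
	 * doesn't really implement it. Leave the feature flag clear
	 * and skip parsing the parameter entirely. The third access
	 * restores the MSR to its original value.
	 */
	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val) ||
	    wrmsrl_safe(MSR_TEST_CTRL,
			test_ctrl_val | MSR_TEST_CTRL_SPLIT_LOCK_DETECT) ||
	    wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val))
		return;

	setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
	sld_state = sld_warn;

	/* ... parse "split_lock_detect" and adjust sld_state ... */
}
```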
>> +
>> setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
>> sld_state = sld_warn;
>>
>> @@ -1022,24 +1040,19 @@ static void __init split_lock_setup(void)
>> * Locking is not required at the moment because only bit 29 of this
>> * MSR is implemented and locking would not prevent that the operation
>> * of one thread is immediately undone by the sibling thread.
>> - * Use the "safe" versions of rdmsr/wrmsr here because although code
>> - * checks CPUID and MSR bits to make sure the TEST_CTRL MSR should
>> - * exist, there may be glitches in virtualization that leave a guest
>> - * with an incorrect view of real h/w capabilities.
>> */
>> -static bool __sld_msr_set(bool on)
>> +static void __sld_msr_set(bool on)
>> {
>> u64 test_ctrl_val;
>>
>> - if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
>> - return false;
>> + rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
>>
>> if (on)
>> test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>> else
>> test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
>>
>> - return !wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val);
>> + wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
>> }
>>
>> static void split_lock_init(void)
>> @@ -1047,15 +1060,7 @@ static void split_lock_init(void)
>> if (sld_state == sld_off)
>> return;
>>
>> - if (__sld_msr_set(true))
>> - return;
>> -
>> - /*
>> - * If this is anything other than the boot-cpu, you've done
>> - * funny things and you get to keep whatever pieces.
>> - */
>> - pr_warn("MSR fail -- disabled\n");
>> - sld_state = sld_off;
>> + __sld_msr_set(true);
>> }
>>
>> bool handle_user_split_lock(unsigned long ip)
>> --
>> 2.23.0
>>