Message-ID: <87jzmgvd04.ffs@tglx>
Date: Tue, 05 Mar 2024 17:50:51 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Dave Hansen <dave.hansen@...el.com>, Tetsuo Handa
<penguin-kernel@...ove.SAKURA.ne.jp>, LKML <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>
Cc: Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, Dave
Hansen <dave.hansen@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2] x86: disable non-instrumented version of copy_mc
when KMSAN is enabled

On Tue, Mar 05 2024 at 07:21, Dave Hansen wrote:
> On 3/1/24 14:52, Tetsuo Handa wrote:
>> - if (static_cpu_has(X86_FEATURE_ERMS)) {
>> + if (!IS_ENABLED(CONFIG_KMSAN) && static_cpu_has(X86_FEATURE_ERMS)) {
>> __uaccess_begin();
>> ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
>> __uaccess_end();
>
> Where does the false positive _come_ from? Can we fix copy_mc_fragile()
> and copy_mc_enhanced_fast_string() instead of just not using them?

All it takes is a variant of __msan_memcpy() which uses a variant of
copy_mc_to_kernel() instead of __memcpy(). It's not rocket science.
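[For illustration only, here is one way to read that suggestion as code.
This is a hypothetical sketch, not the fix being asked for in this
thread: kmsan_unpoison_memory() from <linux/kmsan-checks.h> is existing
KMSAN API, but whether it is the right hook here, and how a
shadow-copying __msan_memcpy() variant would really be wired up, is
assumed rather than taken from this discussion.]

#include <linux/kmsan-checks.h>

/*
 * Hypothetical sketch: keep the MC-safe copy routines in use and tell
 * KMSAN what they did, instead of bypassing them under CONFIG_KMSAN.
 */
unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
{
	unsigned long ret;

	if (copy_mc_fragile_enabled) {
		ret = copy_mc_fragile(dst, src, len);
		/*
		 * The asm routine is invisible to KMSAN, so mark the
		 * bytes that were actually copied as initialized.  A
		 * real __msan_memcpy() style variant would copy the
		 * shadow of @src instead of blindly unpoisoning @dst.
		 */
		kmsan_unpoison_memory(dst, len - ret);
		return ret;
	}

	if (static_cpu_has(X86_FEATURE_ERMS)) {
		ret = copy_mc_enhanced_fast_string(dst, src, len);
		kmsan_unpoison_memory(dst, len - ret);
		return ret;
	}

	memcpy(dst, src, len);
	return 0;
}

[Whatever shape the real fix takes, the point is that the
instrumentation gap can be closed without disabling the MC-safe
routines.]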

Aside from that, this:
@@ -74,14 +74,14 @@ unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, un
{
unsigned long ret;
- if (copy_mc_fragile_enabled) {
+ if (!IS_ENABLED(CONFIG_KMSAN) && copy_mc_fragile_enabled) {
__uaccess_begin();

is completely bogus. copy_user_generic() is not at all covered by
KMSAN. So why fiddle with it in the first place? Just because it has
the same pattern as copy_mc_to_kernel()?
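[For reference, a paraphrase of the surrounding function, reconstructed
from the hunks quoted above plus the copy_user_generic() fallback
mentioned here (the exact code in the tree under discussion may
differ), makes that point visible: every path ends in a copy routine
KMSAN does not instrument, so special-casing individual branches with
IS_ENABLED(CONFIG_KMSAN) buys nothing.]

unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, unsigned len)
{
	unsigned long ret;

	if (copy_mc_fragile_enabled) {		/* the branch guarded in the hunk above */
		__uaccess_begin();
		ret = copy_mc_fragile((__force void *)dst, src, len);
		__uaccess_end();
		return ret;
	}

	if (static_cpu_has(X86_FEATURE_ERMS)) {
		__uaccess_begin();
		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
		__uaccess_end();
		return ret;
	}

	/* Fallback: equally invisible to KMSAN. */
	return copy_user_generic((__force void *)dst, src, len);
}
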
> The three enable_copy_mc_fragile() are presumably doing so for a
> reason.

Very much so. It's for MCE recovery purposes.

And yes, the changelog and the (currently non-existent) comments should
explain why this is "correct" when KMSAN is enabled. Hint: it is NOT.

Thanks,
tglx