Message-ID: <032386fc-fffb-1f17-8cfd-94b35b6947ee@I-love.SAKURA.ne.jp>
Date: Thu, 5 Jan 2023 22:17:24 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: Alexander Potapenko <glider@...gle.com>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
kasan-dev <kasan-dev@...glegroups.com>,
Helge Deller <deller@....de>,
Linux Fbdev development list <linux-fbdev@...r.kernel.org>,
DRI <dri-devel@...ts.freedesktop.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kees Cook <keescook@...omium.org>
Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()
On 2023/01/05 20:54, Daniel Vetter wrote:
>>> . Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset(),
>>> but memsetXX() are not redirected to __msan_memsetXX(). That is, memory initialization
>>> via memsetXX() does not update KMSAN's shadow memory.
>>>
>>> KMSAN folks, how should we fix this problem?
>>> Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?
>>>
>>
>> I think the easiest way to fix it would be to disable the memsetXX asm
>> implementations with something like:
>>
>> -------------------------------------------------------------------------------------------------
>> diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
>> index 888731ccf1f67..5fb330150a7d1 100644
>> --- a/arch/x86/include/asm/string_64.h
>> +++ b/arch/x86/include/asm/string_64.h
>> @@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
>> #endif
>> void *__memset(void *s, int c, size_t n);
>>
>> +#if !defined(__SANITIZE_MEMORY__)
>> #define __HAVE_ARCH_MEMSET16
>> static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
>> {
>> @@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
>> : "memory");
>> return s;
>> }
>> +#endif
>
> So ... what should I do here? Can someone please send me a revert or a patch
> to apply? I don't think I should do this, since I already tossed my credit
> for not looking at stuff carefully enough into the wind :-)
> -Daniel
>
>>
>> #define __HAVE_ARCH_MEMMOVE
>> #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
>> -------------------------------------------------------------------------------------------------
>>
>> This way we'll just pick the existing C implementations instead of
>> reinventing them.
>>
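
For context: as noted above, only the plain string ops are redirected today;
arch/x86/include/asm/string_64.h contains (roughly, quoting from memory) a block like

#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
#undef memset
#define memset __msan_memset
/* memcpy()/memmove() are handled the same way */
#endif

while memset16()/memset32()/memset64() keep their inline-asm bodies, whose stores
KMSAN cannot see. With the #if !defined(__SANITIZE_MEMORY__) guard above, the
__HAVE_ARCH_MEMSET16/32/64 definitions go away and the generic fallbacks in
lib/string.c should get picked up instead; they look roughly like this (sketch):

void *memset16(uint16_t *s, uint16_t v, size_t count)
{
	uint16_t *xs = s;

	while (count--)
		*xs++ = v;
	return s;
}

Since that is plain C, KMSAN instruments the stores and the shadow memory gets
updated as expected.
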
I'd like to avoid touching the per-arch asm/string.h files if possible.
Can't we do something like below instead (i.e. keep the asm implementations
as-is, but automatically redirect them to __msan_memset())? If so, we could
move all of the __msan_*() redirection from the per-arch asm/string.h files
into the common linux/string.h file.

diff --git a/include/linux/string.h b/include/linux/string.h
index c062c581a98b..403813b04e00 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -360,4 +360,15 @@ static __always_inline size_t str_has_prefix(const char *str, const char *prefix)
 	return strncmp(str, prefix, len) == 0 ? len : 0;
 }
 
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
+#undef memset
+#define memset(dest, src, count) __msan_memset((dest), (src), (count))
+#undef memset16
+#define memset16(dest, src, count) __msan_memset((dest), (src), (count) << 1)
+#undef memset32
+#define memset32(dest, src, count) __msan_memset((dest), (src), (count) << 2)
+#undef memset64
+#define memset64(dest, src, count) __msan_memset((dest), (src), (count) << 3)
+#endif
+
#endif /* _LINUX_STRING_H_ */
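
To make the effect of the macros above concrete, here is a standalone userspace
sketch (not kernel code; __msan_memset() is stubbed out just to show what the
callee would receive):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for KMSAN's instrumented memset. */
static void *__msan_memset(void *s, int c, size_t n)
{
	printf("__msan_memset(%p, %d, %zu)\n", s, c, n);
	return memset(s, c, n);
}

/* Same shape as the proposed linux/string.h macro. */
#define memset16(dest, src, count) __msan_memset((dest), (src), (count) << 1)

int main(void)
{
	uint16_t buf[8];

	/* Expands to __msan_memset(buf, 0, 16): the element count becomes a
	 * byte count, so the whole 16-byte range is covered by the shadow
	 * update. */
	memset16(buf, 0, 8);
	return 0;
}

memset32()/memset64() would follow the same pattern with << 2 and << 3.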