Open Source and information security mailing list archives
Date: Fri, 9 Sep 2022 10:38:04 +0200
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Marco Elver <elver@...gle.com>
Cc: "Paul E. McKenney" <paulmck@...nel.org>, Mark Rutland <mark.rutland@....com>,
	Alexander Potapenko <glider@...gle.com>, Boqun Feng <boqun.feng@...il.com>,
	kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org,
	Nathan Chancellor <nathan@...nel.org>, Nick Desaulniers <ndesaulniers@...gle.com>,
	llvm@...ts.linux.dev, Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>, Alexander Gordeev <agordeev@...ux.ibm.com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>, Sven Schnelle <svens@...ux.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>, linux-s390@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH v2 2/3] kcsan: Instrument memcpy/memset/memmove with newer Clang

On Fri, 9 Sept 2022 at 09:38, Marco Elver <elver@...gle.com> wrote:
>
> With Clang version 16+, -fsanitize=thread will turn
> memcpy/memset/memmove calls in instrumented functions into
> __tsan_memcpy/__tsan_memset/__tsan_memmove calls respectively.
>
> Add these functions to the core KCSAN runtime, so that we (a) catch data
> races with mem* functions, and (b) won't run into linker errors with
> such newer compilers.
>
> Cc: stable@...r.kernel.org # v5.10+
> Signed-off-by: Marco Elver <elver@...gle.com>
> ---
> v2:
> * Fix for architectures which do not provide their own
>   memcpy/memset/memmove and instead use the generic versions in
>   lib/string. In this case we'll just alias the __tsan_ variants.
> ---
>  kernel/kcsan/core.c | 39 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
>
> diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
> index fe12dfe254ec..4015f2a3e7f6 100644
> --- a/kernel/kcsan/core.c
> +++ b/kernel/kcsan/core.c
> @@ -18,6 +18,7 @@
>  #include <linux/percpu.h>
>  #include <linux/preempt.h>
>  #include <linux/sched.h>
> +#include <linux/string.h>
>  #include <linux/uaccess.h>
>
>  #include "encoding.h"
> @@ -1308,3 +1309,41 @@ noinline void __tsan_atomic_signal_fence(int memorder)
>  	}
>  }
>  EXPORT_SYMBOL(__tsan_atomic_signal_fence);
> +
> +#ifdef __HAVE_ARCH_MEMSET
> +void *__tsan_memset(void *s, int c, size_t count);
> +noinline void *__tsan_memset(void *s, int c, size_t count)
> +{
> +	check_access(s, count, KCSAN_ACCESS_WRITE, _RET_IP_);

These can use large sizes, does it make sense to truncate it to
MAX_ENCODABLE_SIZE?

> +	return __memset(s, c, count);
> +}
> +#else
> +void *__tsan_memset(void *s, int c, size_t count) __alias(memset);
> +#endif
> +EXPORT_SYMBOL(__tsan_memset);
> +
> +#ifdef __HAVE_ARCH_MEMMOVE
> +void *__tsan_memmove(void *dst, const void *src, size_t len);
> +noinline void *__tsan_memmove(void *dst, const void *src, size_t len)
> +{
> +	check_access(dst, len, KCSAN_ACCESS_WRITE, _RET_IP_);
> +	check_access(src, len, 0, _RET_IP_);
> +	return __memmove(dst, src, len);
> +}
> +#else
> +void *__tsan_memmove(void *dst, const void *src, size_t len) __alias(memmove);
> +#endif
> +EXPORT_SYMBOL(__tsan_memmove);
> +
> +#ifdef __HAVE_ARCH_MEMCPY
> +void *__tsan_memcpy(void *dst, const void *src, size_t len);
> +noinline void *__tsan_memcpy(void *dst, const void *src, size_t len)
> +{
> +	check_access(dst, len, KCSAN_ACCESS_WRITE, _RET_IP_);
> +	check_access(src, len, 0, _RET_IP_);
> +	return __memcpy(dst, src, len);
> +}
> +#else
> +void *__tsan_memcpy(void *dst, const void *src, size_t len) __alias(memcpy);
> +#endif
> +EXPORT_SYMBOL(__tsan_memcpy);
> --
> 2.37.2.789.g6183377224-goog
>