Message-ID: <8247e3bd-13c2-e28c-87d8-5fd1bfed7104@linux-m68k.org>
Date: Mon, 15 Sep 2025 20:37:34 +1000 (AEST)
From: Finn Thain <fthain@...ux-m68k.org>
To: Peter Zijlstra <peterz@...radead.org>
cc: Will Deacon <will@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>, 
    Boqun Feng <boqun.feng@...il.com>, Jonathan Corbet <corbet@....net>, 
    Mark Rutland <mark.rutland@....com>, Arnd Bergmann <arnd@...db.de>, 
    linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org, 
    Geert Uytterhoeven <geert@...ux-m68k.org>, linux-m68k@...r.kernel.org
Subject: Re: [RFC v2 3/3] atomic: Add alignment check to instrumented atomic
 operations


On Mon, 15 Sep 2025, Peter Zijlstra wrote:

> On Mon, Sep 15, 2025 at 07:38:52PM +1000, Finn Thain wrote:
> > 
> > On Mon, 15 Sep 2025, Peter Zijlstra wrote:
> > 
> > > On Sun, Sep 14, 2025 at 10:45:29AM +1000, Finn Thain wrote:
> > > > From: Peter Zijlstra <peterz@...radead.org>
> > > > 
> > > > Add a Kconfig option for debug builds which logs a warning when an
> > > > instrumented atomic operation takes place at an address that isn't on
> > > > a long word boundary. Some platforms don't trap for this.
> > > > 
> > > > Link: https://lore.kernel.org/lkml/20250901093600.GF4067720@noisy.programming.kicks-ass.net/
> > > > ---
> > > > This patch differs slightly from Peter's code which checked for natural
> > > > alignment.
> > > > ---
> > > >  include/linux/instrumented.h |  4 ++++
> > > >  lib/Kconfig.debug            | 10 ++++++++++
> > > >  2 files changed, 14 insertions(+)
> > > > 
> > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > index 711a1f0d1a73..55f5685971a1 100644
> > > > --- a/include/linux/instrumented.h
> > > > +++ b/include/linux/instrumented.h
> > > > @@ -7,6 +7,7 @@
> > > >  #ifndef _LINUX_INSTRUMENTED_H
> > > >  #define _LINUX_INSTRUMENTED_H
> > > >  
> > > > +#include <linux/bug.h>
> > > >  #include <linux/compiler.h>
> > > >  #include <linux/kasan-checks.h>
> > > >  #include <linux/kcsan-checks.h>
> > > > @@ -67,6 +68,7 @@ static __always_inline void instrument_atomic_read(const volatile void *v, size_
> > > >  {
> > > >  	kasan_check_read(v, size);
> > > >  	kcsan_check_atomic_read(v, size);
> > > > +	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (sizeof(long) - 1)));
> > > >  }
> > > >  
> > > >  /**
> > > > @@ -81,6 +83,7 @@ static __always_inline void instrument_atomic_write(const volatile void *v, size
> > > >  {
> > > >  	kasan_check_write(v, size);
> > > >  	kcsan_check_atomic_write(v, size);
> > > > +	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (sizeof(long) - 1)));
> > > >  }
> > > >  
> > > >  /**
> > > > @@ -95,6 +98,7 @@ static __always_inline void instrument_atomic_read_write(const volatile void *v,
> > > >  {
> > > >  	kasan_check_write(v, size);
> > > >  	kcsan_check_atomic_read_write(v, size);
> > > > +	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (sizeof(long) - 1)));
> > > >  }
> > > 
> > > Right, so why aren't we trusting the size argument? And instead
> > > mandating a possibly larger alignment?
> > > 
> > 
> > It wasn't supposed to mandate a larger alignment in practice. I considered 
> > doing something like (unsigned long)v & (size - 1) & (sizeof(long) - 1) 
> > but decided that the extra overhead probably wouldn't be worthwhile if, in 
> > practice, no one is doing atomic ops on shorts or chars. I will revisit 
> > this.
> 
> atomic_t is aligned at 4 bytes, you're now mandating it is aligned at 8
> bytes (on LP64), this cannot be right.
> 
> kernel/locking/qspinlock.c:xchg_tail() does xchg_relaxed(&lock->tail,
> ...) which is u16. Again, you cannot mandate 8 bytes here.
> 

OK. I will change it back to your code (i.e. mandate natural alignment).
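
For reference, the check would then look something like this (just a 
sketch of the natural-alignment variant that trusts the size argument, 
untested):

	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) &&
		     ((unsigned long)v & (size - 1)));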

> > When you do atomic operations on atomic_t or atomic64_t, (sizeof(long)
> > - 1) probably doesn't make much sense. But atomic operations get used on 
> > scalar types (aside from atomic_t and atomic64_t) that don't have natural 
> > alignment. Please refer to the other thread about this: 
> > https://lore.kernel.org/all/ed1e0896-fd85-5101-e136-e4a5a37ca5ff@linux-m68k.org/
> 
> Perhaps set ARCH_SLAB_MINALIGN ?
> 

That's not going to help much. The 850-byte offset of task_works and the 
418-byte offset of exit_state within struct task_struct are already 
misaligned.
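
To illustrate, here is a quick userspace sketch (with hypothetical 
stand-in sizes rather than the real task_struct layout; it assumes 
task_works is a pointer-sized member and exit_state is a 4-byte int). 
A natural-alignment check on those offsets already fires:

	#include <stdio.h>

	/* natural alignment: the offset must be a multiple of the size */
	static int misaligned(unsigned long offset, unsigned long size)
	{
		return (offset & (size - 1)) != 0;
	}

	int main(void)
	{
		/* offsets quoted above */
		printf("task_works @ 850: %d\n", misaligned(850, sizeof(void *)));
		printf("exit_state @ 418: %d\n", misaligned(418, sizeof(int)));
		return 0;
	}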

But that's all moot if you intended CONFIG_DEBUG_ATOMIC to complain about 
any deviation from natural alignment. I still don't have any performance 
measurements, but I'm willing to assume there's a penalty for such 
deviation.
