Message-ID: <ec2982e3-2996-918e-f406-32f67a0decfe@linux-m68k.org>
Date: Tue, 30 Sep 2025 12:18:21 +1000 (AEST)
From: Finn Thain <fthain@...ux-m68k.org>
To: Arnd Bergmann <arnd@...db.de>
cc: Geert Uytterhoeven <geert@...ux-m68k.org>,
Peter Zijlstra <peterz@...radead.org>, Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Boqun Feng <boqun.feng@...il.com>, Jonathan Corbet <corbet@....net>,
Mark Rutland <mark.rutland@....com>, linux-kernel@...r.kernel.org,
Linux-Arch <linux-arch@...r.kernel.org>, linux-m68k@...r.kernel.org,
Lance Yang <lance.yang@...ux.dev>
Subject: Re: [RFC v2 2/3] atomic: Specify alignment for atomic_t and
atomic64_t
On Tue, 23 Sep 2025, I wrote:
>
> ... there's still some kmem cache or other allocator somewhere that has
> produced some misaligned path and dentry structures. So we get
> misaligned atomics somewhere in the VFS and TTY layers. I was unable to
> find those allocations.
>
It turned out that the problem wasn't dynamic allocations; it was a local
variable in the core locking code (kernel/locking/rwsem.c): a misaligned
long used with an atomic operation (cmpxchg). To get natural alignment for
64-bit quantities, I also had to align other local variables, such as the
one in ktime_get_real_ts64_mg() that's used with atomic64_try_cmpxchg().
The atomic_t branch in my github repo has the patches I wrote for that.
To silence the misalignment WARN from CONFIG_DEBUG_ATOMIC for 64-bit
atomic operations with my small m68k .config, it was also necessary to
increase ARCH_SLAB_MINALIGN to 8. However, I'm not advocating an
ARCH_SLAB_MINALIGN increase, as that wastes memory. I think it might be
more useful to relax the alignment test under CONFIG_DEBUG_ATOMIC, as
follows.
diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 402a999a0d6b..cd569a87c0a8 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -68,7 +68,7 @@ static __always_inline void instrument_atomic_read(const volatile void *v, size_
{
kasan_check_read(v, size);
kcsan_check_atomic_read(v, size);
- WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1)));
+ WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1) & 3));
}
/**
@@ -83,7 +83,7 @@ static __always_inline void instrument_atomic_write(const volatile void *v, size
{
kasan_check_write(v, size);
kcsan_check_atomic_write(v, size);
- WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1)));
+ WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1) & 3));
}
/**
@@ -98,7 +98,7 @@ static __always_inline void instrument_atomic_read_write(const volatile void *v,
{
kasan_check_write(v, size);
kcsan_check_atomic_read_write(v, size);
- WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1)));
+ WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ATOMIC) && ((unsigned long)v & (size - 1) & 3));
}
/**