Message-ID: <0ca5d46e292e5074c119c7c58e6ec9901fb0ed73.1761763681.git.m.wieczorretman@pm.me>
Date: Wed, 29 Oct 2025 20:09:28 +0000
From: Maciej Wieczor-Retman <m.wieczorretman@...me>
To: xin@...or.com, peterz@...radead.org, kaleshsingh@...gle.com, kbingham@...nel.org, akpm@...ux-foundation.org, nathan@...nel.org, ryabinin.a.a@...il.com, dave.hansen@...ux.intel.com, bp@...en8.de, morbo@...gle.com, jeremy.linton@....com, smostafa@...gle.com, kees@...nel.org, baohua@...nel.org, vbabka@...e.cz, justinstitt@...gle.com, wangkefeng.wang@...wei.com, leitao@...ian.org, jan.kiszka@...mens.com, fujita.tomonori@...il.com, hpa@...or.com, urezki@...il.com, ubizjak@...il.com, ada.coupriediaz@....com, nick.desaulniers+lkml@...il.com, ojeda@...nel.org, brgerst@...il.com, elver@...gle.com, pankaj.gupta@....com, glider@...gle.com, mark.rutland@....com, trintaeoitogc@...il.com, jpoimboe@...nel.org, thuth@...hat.com, pasha.tatashin@...een.com, dvyukov@...gle.com, jhubbard@...dia.com, catalin.marinas@....com, yeoreum.yun@....com, mhocko@...e.com, lorenzo.stoakes@...cle.com, samuel.holland@...ive.com, vincenzo.frascino@....com, bigeasy@...utronix.de, surenb@...gle.com,
	ardb@...nel.org, Liam.Howlett@...cle.com, nicolas.schier@...ux.dev, ziy@...dia.com, kas@...nel.org, tglx@...utronix.de, mingo@...hat.com, broonie@...nel.org, corbet@....net, andreyknvl@...il.com, maciej.wieczor-retman@...el.com, david@...hat.com, maz@...nel.org, rppt@...nel.org, will@...nel.org, luto@...nel.org
Cc: kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org, x86@...nel.org, linux-kbuild@...r.kernel.org, linux-mm@...ck.org, llvm@...ts.linux.dev, linux-doc@...r.kernel.org, m.wieczorretman@...me
Subject: [PATCH v6 14/18] x86: Minimal SLAB alignment
From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
The 8-byte minimal SLAB alignment interferes with KASAN's 16-byte
granularity and causes a lot of out-of-bounds reports for unaligned
8-byte allocations.
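To illustrate the conflict, below is a minimal userspace sketch (not
kernel code; KASAN_GRANULE is an assumed stand-in for
1 << KASAN_SHADOW_SCALE_SHIFT, taken to be 16 here): two adjacent
8-byte objects land in the same 16-byte granule and so cannot carry
distinct tags, while a 16-byte minimum alignment gives each object its
own granule.

#include <stdio.h>

#define KASAN_GRANULE 16	/* assumed: 1 << KASAN_SHADOW_SCALE_SHIFT */

int main(void)
{
	unsigned long obj_a = 0x1000;		/* 8-byte object, 8-byte aligned */
	unsigned long obj_b = obj_a + 8;	/* the next 8-byte object */

	/* Both objects map to the same shadow granule index. */
	printf("granule of A: %lu\n", obj_a / KASAN_GRANULE);
	printf("granule of B: %lu\n", obj_b / KASAN_GRANULE);

	/*
	 * With a 16-byte minimal alignment, obj_b would start at
	 * obj_a + 16 and get its own granule, so each object could be
	 * tracked with its own tag.
	 */
	return 0;
}
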
Compared to a kernel with KASAN disabled, the memory footprint
increases because all kmalloc-8 allocations are now served from
kmalloc-16, which has twice the object size. More meaningfully,
compared to a kernel with generic KASAN enabled, there is no
difference: because of generic KASAN's redzones, the object size of
kmalloc-8 and kmalloc-16 is the same (48 bytes). So changing the
minimal SLAB alignment for the tag-based mode has no negative impact
relative to the other software KASAN mode.
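As a rough illustration of the size rounding described above, here is
a small userspace sketch (TAG_MINALIGN is an assumed stand-in for the
new 16-byte ARCH_SLAB_MINALIGN; the 48-byte generic KASAN figure is
quoted from this message, not derived here):

#include <stdio.h>

#define TAG_MINALIGN 16	/* assumed ARCH_SLAB_MINALIGN with CONFIG_KASAN_SW_TAGS */

/* Round a size up to a power-of-two alignment. */
static unsigned long round_up_pow2(unsigned long size, unsigned long align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	/* An 8-byte request is now served from the 16-byte bucket... */
	printf("kmalloc(8) with SW tags: %lu-byte object\n",
	       round_up_pow2(8, TAG_MINALIGN));

	/*
	 * ...while with generic KASAN both kmalloc-8 and kmalloc-16
	 * objects are already 48 bytes, so the tag-based mode does not
	 * regress relative to it.
	 */
	return 0;
}
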
Adjust the x86 minimal SLAB alignment to match the KASAN granule size.
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
Reviewed-by: Andrey Konovalov <andreyknvl@...il.com>
---
Changelog v6:
- Add Andrey's Reviewed-by tag.
Changelog v4:
- Extend the patch message with some more context and impact
  information.
Changelog v3:
- Fix typo in patch message 4 -> 16.
- Change define location to arch/x86/include/asm/cache.h.
 arch/x86/include/asm/cache.h | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 69404eae9983..3232583b5487 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -21,4 +21,8 @@
 #endif
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 #endif /* _ASM_X86_CACHE_H */
-- 
2.51.0