Message-ID: <20250602180544.3626909-2-zecheng@google.com>
Date: Mon,  2 Jun 2025 18:05:41 +0000
From: Zecheng Li <zecheng@...gle.com>
To: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, 
	Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, 
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, 
	Valentin Schneider <vschneid@...hat.com>, Xu Liu <xliuprof@...gle.com>, 
	Blake Jones <blakejones@...gle.com>, Josh Don <joshdon@...gle.com>, 
	Madadi Vineeth Reddy <vineethr@...ux.ibm.com>, linux-kernel@...r.kernel.org, 
	Zecheng Li <zecheng@...gle.com>
Subject: [RFC PATCH v2 1/3] cache: conditionally align cache groups

Introduce a pair of macros, `__cacheline_group_begin_aligned_cond` and
`__cacheline_group_end_aligned_cond`, to provide conditional cacheline
alignment for cache groups. The alignment behavior is as follows:

If the `COND` parameter is equal to `SMP_CACHE_BYTES`, the cache group
is aligned to `SMP_CACHE_BYTES`.

If `COND` is not equal to `SMP_CACHE_BYTES`, no additional cacheline
alignment is enforced; the markers use `__aligned(1)`.

This mechanism allows more precise control over cacheline alignment,
ensuring that layout optimizations tuned for one cache-line size do not
inadvertently degrade efficiency or introduce holes on systems with a
different cache-line size.

Signed-off-by: Zecheng Li <zecheng@...gle.com>
---
 include/linux/cache.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/linux/cache.h b/include/linux/cache.h
index e69768f50d53..8b5dadf1c487 100644
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -147,6 +147,34 @@
 	struct { } __cacheline_group_pad__##GROUP		\
 	__aligned((__VA_ARGS__ + 0) ? : SMP_CACHE_BYTES)
 
+/**
+ * __cacheline_group_begin_aligned_cond - conditionally align a cache group
+ * @GROUP: name of the group
+ * @COND: a size; if it equals SMP_CACHE_BYTES, the group is aligned
+ *	to SMP_CACHE_BYTES; otherwise no cacheline alignment is enforced
+ *
+ * Pairs with __cacheline_group_end_aligned_cond.
+ */
+#define __cacheline_group_begin_aligned_cond(GROUP, COND)	\
+	__cacheline_group_begin(GROUP)				\
+	__aligned(((COND) == SMP_CACHE_BYTES) ? SMP_CACHE_BYTES : 1)
+
+/**
+ * __cacheline_group_end_aligned_cond - declare a conditionally aligned group end
+ * @GROUP: name of the group
+ * @COND: condition (size); if it equals SMP_CACHE_BYTES, padding will
+ * be aligned to SMP_CACHE_BYTES. Otherwise, no alignment.
+ *
+ * This complements __cacheline_group_begin_aligned_cond.
+ * The end marker itself is aligned to sizeof(long); the trailing
+ * padding, which keeps the next field from sharing the group's last
+ * cacheline, is applied conditionally based on COND.
+ */
+#define __cacheline_group_end_aligned_cond(GROUP, COND)		\
+	__cacheline_group_end(GROUP) __aligned(sizeof(long));		\
+	struct { } __cacheline_group_pad__##GROUP			\
+	__aligned(((COND) == SMP_CACHE_BYTES) ? SMP_CACHE_BYTES : 1)
+
 #ifndef CACHELINE_ASSERT_GROUP_MEMBER
 #define CACHELINE_ASSERT_GROUP_MEMBER(TYPE, GROUP, MEMBER) \
 	BUILD_BUG_ON(!(offsetof(TYPE, MEMBER) >= \
-- 
2.49.0

