Message-ID: <z7d52kjvlzxohbly42flhtebqc7knfvilierrjr4r5776rxhgy@lcqmkcpzklse>
Date: Wed, 24 Dec 2025 20:51:14 +0800
From: Hao Li <hao.li@...ux.dev>
To: Harry Yoo <harry.yoo@...cle.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz, andreyknvl@...il.com,
cl@...two.org, dvyukov@...gle.com, glider@...gle.com, hannes@...xchg.org,
linux-mm@...ck.org, mhocko@...nel.org, muchun.song@...ux.dev, rientjes@...gle.com,
roman.gushchin@...ux.dev, ryabinin.a.a@...il.com, shakeel.butt@...ux.dev,
surenb@...gle.com, vincenzo.frascino@....com, yeoreum.yun@....com, tytso@....edu,
adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org
Subject: [PATCH] slub: clarify object field layout comments
The comments above check_pad_bytes() document the field layout of a
single object. Rewrite them to improve clarity and precision.
Also update an outdated comment in calculate_sizes().
Suggested-by: Harry Yoo <harry.yoo@...cle.com>
Signed-off-by: Hao Li <hao.li@...ux.dev>
---
Hi Harry, this patch adds more detailed object layout documentation. Let
me know if you have any comments.
mm/slub.c | 92 ++++++++++++++++++++++++++++++++-----------------------
1 file changed, 53 insertions(+), 39 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index a94c64f56504..138e9d13540d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1211,44 +1211,58 @@ check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
}
/*
- * Object layout:
- *
- * object address
- * Bytes of the object to be managed.
- * If the freepointer may overlay the object then the free
- * pointer is at the middle of the object.
- *
- * Poisoning uses 0x6b (POISON_FREE) and the last byte is
- * 0xa5 (POISON_END)
- *
- * object + s->object_size
- * Padding to reach word boundary. This is also used for Redzoning.
- * Padding is extended by another word if Redzoning is enabled and
- * object_size == inuse.
- *
- * We fill with 0xbb (SLUB_RED_INACTIVE) for inactive objects and with
- * 0xcc (SLUB_RED_ACTIVE) for objects in use.
- *
- * object + s->inuse
- * Meta data starts here.
- *
- * A. Free pointer (if we cannot overwrite object on free)
- * B. Tracking data for SLAB_STORE_USER
- * C. Original request size for kmalloc object (SLAB_STORE_USER enabled)
- * D. Padding to reach required alignment boundary or at minimum
- * one word if debugging is on to be able to detect writes
- * before the word boundary.
- *
- * Padding is done using 0x5a (POISON_INUSE)
- *
- * object + s->size
- * Nothing is used beyond s->size.
- *
- * If slabcaches are merged then the object_size and inuse boundaries are mostly
- * ignored. And therefore no slab options that rely on these boundaries
+ * Object field layout:
+ *
+ * [Left redzone padding] (if SLAB_RED_ZONE)
+ * - Field size: s->red_left_pad
+ * - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
+ * 0xcc (SLUB_RED_ACTIVE) for objects in use.
+ *
+ * [Object bytes]
+ * - Field size: s->object_size
+ * - Object payload bytes.
+ * - If the freepointer may overlap the object, it is stored inside
+ * the object (typically near the middle).
+ * - Poisoning uses 0x6b (POISON_FREE) and the last byte is
+ * 0xa5 (POISON_END) when __OBJECT_POISON is enabled.
+ *
+ * [Word-align padding] (right redzone when SLAB_RED_ZONE is set)
+ * - Field size: s->inuse - s->object_size
+ * - If redzoning is enabled and ALIGN(size, sizeof(void *)) adds no
+ * padding, explicitly extend by one word so the right redzone is
+ * non-empty.
+ * - Filled with 0xbb (SLUB_RED_INACTIVE) for inactive objects and
+ * 0xcc (SLUB_RED_ACTIVE) for objects in use when SLAB_RED_ZONE
+ * is set.
+ *
+ * [Metadata starts at object + s->inuse]
+ * - A. freelist pointer (if freeptr_outside_object)
+ * - B. alloc tracking (SLAB_STORE_USER)
+ * - C. free tracking (SLAB_STORE_USER)
+ * - D. original request size (SLAB_KMALLOC && SLAB_STORE_USER)
+ * - E. KASAN metadata (if enabled)
+ *
+ * [Mandatory padding] (if CONFIG_SLUB_DEBUG && SLAB_RED_ZONE)
+ * - One mandatory debug word to guarantee a minimum poisoned gap
+ * between metadata and the next object, independent of alignment.
+ * - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
+ *
+ * [Final alignment padding]
+ * - Any bytes added by ALIGN(size, s->align) to reach s->size.
+ * - Filled with 0x5a (POISON_INUSE) when SLAB_POISON is set.
+ *
+ * Notes:
+ * - Redzones are filled by init_object() with SLUB_RED_ACTIVE/INACTIVE.
+ * - Object contents are poisoned with POISON_FREE/END when __OBJECT_POISON.
+ * - The trailing padding is pre-filled with POISON_INUSE by
+ * setup_slab_debug() when SLAB_POISON is set, and is validated by
+ * check_pad_bytes().
+ * - The first object pointer is slab_address(slab) +
+ * (s->red_left_pad if redzoning); subsequent objects are reached by
+ * adding s->size each time.
+ *
+ * If slabcaches are merged then the object_size and inuse boundaries are
+ * mostly ignored. Therefore no slab options that rely on these boundaries
* may be used with merged slabcaches.
*/
-
static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
{
unsigned long off = get_info_end(s); /* The end of info */
@@ -7103,9 +7117,9 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
/*
- * If we are Redzoning then check if there is some space between the
- * end of the object and the free pointer. If not then add an
- * additional word to have some bytes to store Redzone information.
+ * If we are Redzoning and there is no space between the end of the
+ * object and the following fields, add one word so the right Redzone
+ * is non-empty.
*/
if ((flags & SLAB_RED_ZONE) && size == s->object_size)
size += sizeof(void *);
--
2.50.1