Message-ID: <aWBfZ4ga9HQ8L8KM@hyeyoo>
Date: Fri, 9 Jan 2026 10:52:39 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Alexander Potapenko <glider@...gle.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz, andreyknvl@...il.com,
cl@...two.org, dvyukov@...gle.com, hannes@...xchg.org,
linux-mm@...ck.org, mhocko@...nel.org, muchun.song@...ux.dev,
rientjes@...gle.com, roman.gushchin@...ux.dev, ryabinin.a.a@...il.com,
shakeel.butt@...ux.dev, surenb@...gle.com, vincenzo.frascino@....com,
yeoreum.yun@....com, tytso@....edu, adilger.kernel@...ger.ca,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, hao.li@...ux.dev, stable@...r.kernel.org
Subject: Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to
ensure proper metadata alignment
On Thu, Jan 08, 2026 at 12:39:22PM +0100, Alexander Potapenko wrote:
> On Mon, Jan 5, 2026 at 9:02 AM Harry Yoo <harry.yoo@...cle.com> wrote:
> >
> > When both KASAN and SLAB_STORE_USER are enabled, accesses to
> > struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> > This occurs because orig_size is currently defined as unsigned int,
> > which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> > placed after orig_size, it may end up at a 4-byte boundary rather than
> > the required 8-byte boundary on 64-bit systems.
> >
> > Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> > are assumed to require 64-bit accesses to be 64-bit aligned.
> > See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> > "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> >
> > Change orig_size from unsigned int to unsigned long to ensure proper
> > alignment for any subsequent metadata. This should not waste additional
> > memory because kmalloc objects are already aligned to at least
> > ARCH_KMALLOC_MINALIGN.
> >
> > Suggested-by: Andrey Ryabinin <ryabinin.a.a@...il.com>
> > Cc: stable@...r.kernel.org
> > Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> > Signed-off-by: Harry Yoo <harry.yoo@...cle.com>
> > ---
> > mm/slub.c | 14 +++++++-------
> > 1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ad71f01571f0..1c747435a6ab 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -857,7 +857,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
> > * request size in the meta data area, for better debug and sanity check.
> > */
> > static inline void set_orig_size(struct kmem_cache *s,
> > - void *object, unsigned int orig_size)
> > + void *object, unsigned long orig_size)
> > {
> > void *p = kasan_reset_tag(object);
> >
> > @@ -867,10 +867,10 @@ static inline void set_orig_size(struct kmem_cache *s,
> > p += get_info_end(s);
> > p += sizeof(struct track) * 2;
> >
> > - *(unsigned int *)p = orig_size;
> > + *(unsigned long *)p = orig_size;
>
> Instead of calculating the offset of the original size in several
> places, should we maybe introduce a function that returns a pointer to
> it?
Good point.
The calculation of the various metadata offsets (including the original
size) is repeated in several places, and it's perhaps worth cleaning up
with something like this:
enum {
	FREE_POINTER_OFFSET,
	ALLOC_TRACK_OFFSET,
	FREE_TRACK_OFFSET,
	ORIG_SIZE_OFFSET,
	KASAN_ALLOC_META_OFFSET,
	OBJ_EXT_OFFSET,
	FINAL_ALIGNMENT_PADDING_OFFSET,
	...
};

orig_size = *(unsigned long *)get_metadata_ptr(p, ORIG_SIZE_OFFSET);
... of course, perhaps as a follow-up rather than
as part of this series.
--
Cheers,
Harry / Hyeonggon