Message-ID: <20251222110843.980347-1-harry.yoo@oracle.com>
Date: Mon, 22 Dec 2025 20:08:35 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: akpm@...ux-foundation.org, vbabka@...e.cz
Cc: andreyknvl@...il.com, cl@...two.org, dvyukov@...gle.com, glider@...gle.com,
hannes@...xchg.org, linux-mm@...ck.org, mhocko@...nel.org,
muchun.song@...ux.dev, rientjes@...gle.com, roman.gushchin@...ux.dev,
ryabinin.a.a@...il.com, shakeel.butt@...ux.dev, surenb@...gle.com,
vincenzo.frascino@....com, yeoreum.yun@....com, harry.yoo@...cle.com,
tytso@....edu, adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
hao.li@...ux.dev
Subject: [PATCH V4 0/8] mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space
RFC V3: https://lore.kernel.org/linux-mm/20251027122847.320924-1-harry.yoo@oracle.com
I believe I addressed all comments in RFC V3 (except handling lazy
allocation of slabobj_exts, which I would prefer to do as future work).
Please let me know if I missed your comments.
If there are no major drawbacks or concerns, I would like to push
this forward for the 7.0 merge window after some review & testing.
Have a wonderful end of the year!
RFC V3 -> V4:
- Rebased onto the latest slab/for-next, dropped RFC
- The metadata alignment (after orig_size) fix is now included as patch 1
of this series
- Patch 2: Document that use_freeptr_offset can be used for caches with
constructor (Suren, Vlastimil)
- Patch 6: use get/put_slab_obj_exts() instead of
metadata_access_enable/disable (Suren)
- Patch 7: Change !mem_cgroup_disabled() check to memcg_kmem_online()
(Andrey Ryabinin)
- Added Reviewed-by, Suggested-by tags, thanks!
When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
the kernel allocates two pointers per object: one for the memory cgroup
(obj_cgroup) to which it belongs, and another for the code location
that requested the allocation.
In two special cases, this overhead can be eliminated by allocating
slabobj_ext metadata from unused space within a slab:
Case 1. The "leftover" space after the last slab object is larger than
the size of an array of slabobj_ext.
Case 2. The per-object alignment padding is larger than
sizeof(struct slabobj_ext).
For these two cases, one or two pointers can be saved per slab object.
Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
of the total inode cache size.
Implementing case 2 is not straightforward, because the existing code
assumes that slab->obj_exts is a contiguous array of slabobj_ext, and
case 2 breaks that assumption.
As suggested by Vlastimil, abstract access to individual slabobj_ext
metadata via a new helper named slab_obj_ext():
static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
					       unsigned long obj_exts,
					       unsigned int index)
{
	return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
}
In the normal case (including case 1), slab->obj_exts points to an array
of slabobj_ext, and the stride is sizeof(struct slabobj_ext).
In case 2, the stride is s->size and
slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)
With this approach, the memcg charging fastpath doesn't need to care
about how the slabobj_ext metadata is stored.
Harry Yoo (8):
mm/slab: use unsigned long for orig_size to ensure proper metadata
align
mm/slab: allow specifying free pointer offset when using constructor
ext4: specify the free pointer offset for ext4_inode_cache
mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
mm/slab: use stride to access slabobj_ext
mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
mm/slab: save memory by allocating slabobj_ext array from leftover
mm/slab: place slabobj_ext metadata in unused space within s->size
fs/ext4/super.c | 20 ++-
include/linux/slab.h | 39 +++--
mm/memcontrol.c | 31 +++-
mm/slab.h | 120 ++++++++++++++-
mm/slab_common.c | 8 +-
mm/slub.c | 345 +++++++++++++++++++++++++++++++++++--------
6 files changed, 466 insertions(+), 97 deletions(-)
--
2.43.0