Message-ID: <20260113061845.159790-1-harry.yoo@oracle.com>
Date: Tue, 13 Jan 2026 15:18:36 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: akpm@...ux-foundation.org, vbabka@...e.cz
Cc: andreyknvl@...il.com, cl@...two.org, dvyukov@...gle.com, glider@...gle.com,
        hannes@...xchg.org, linux-mm@...ck.org, mhocko@...nel.org,
        muchun.song@...ux.dev, rientjes@...gle.com, roman.gushchin@...ux.dev,
        ryabinin.a.a@...il.com, shakeel.butt@...ux.dev, surenb@...gle.com,
        vincenzo.frascino@....com, yeoreum.yun@....com, harry.yoo@...cle.com,
        tytso@....edu, adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        hao.li@...ux.dev
Subject: [PATCH V6 0/9] mm/slab: reduce slab accounting memory overhead by allocating slabobj_ext metadata within unused slab space

V5: https://lore.kernel.org/linux-mm/20260105080230.13171-1-harry.yoo@oracle.com
V5 -> V6:

- Patch 1: Added Closes: tag for related discussion (Vlastimil)
  https://lore.kernel.org/linux-mm/1372138e-5837-4634-81de-447a1ef0a5ad@suse.cz

- Patch 3: Addressed Vlastimil's comments
  https://lore.kernel.org/linux-mm/e28c08e4-5048-429b-97a0-8d51e494efcd@suse.cz

- Patch 4: Fixed incorrect function prototype of slab_obj_ext() on
  !CONFIG_SLAB_OBJ_EXT builds and kept pointer type in
  free_slab_obj_exts() (Hao, Vlastimil)
  https://lore.kernel.org/linux-mm/n6kyluk3nahdxytwek4ijzy4en6mc6ps7fjjgftww4ith7llom@cijm4who24w2
  https://lore.kernel.org/linux-mm/473d479c-4eae-4589-b8c2-e2a29e8e6bc1@suse.cz

- Patch 7, 9: Rewrote obj_exts_in_slab() to check whether the pointer falls
  within the slab's range, and to distinguish the two layouts by stride
  (Vlastimil)
  https://lore.kernel.org/linux-mm/644e163d-edd9-4128-9516-0f70a25526df@suse.cz

- Patch 9: Fixed potential memory leak due to incorrect implementation of
  obj_exts_in_object() (Vlastimil)
  https://lore.kernel.org/linux-mm/8c67dcbe-f393-4da6-8d24-f9da79c246c4@suse.cz/

- Patch 9: Fixed incorrect ksize() implementation (Hao)
  https://lore.kernel.org/linux-mm/fgx3lapibabra4x7tewx55nuvxz235ruvm3agpprjbdcmt3rc6@h54ln5tfdssz

When CONFIG_MEMCG and CONFIG_MEM_ALLOC_PROFILING are enabled,
the kernel allocates two pointers per object: one for the memory cgroup
(actually, obj_cgroup) to which it belongs, and another for the code
location that requested the allocation.

In two special cases, this overhead can be eliminated by allocating
slabobj_ext metadata from unused space within a slab:

  Case 1. The "leftover" space after the last slab object is larger than
          the size of an array of slabobj_ext.

  Case 2. The per-object alignment padding is larger than
          sizeof(struct slabobj_ext).

For these two cases, one or two pointers can be saved per slab object.
Examples: ext4 inode cache (case 1) and xfs inode cache (case 2).
That's approximately 0.7-0.8% (memcg) or 1.5-1.6% (memcg + mem profiling)
of the total inode cache size.
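
To make the two conditions concrete, here is a small userspace sketch
(not kernel code; struct slabobj_ext_demo and all of the numbers are
made-up stand-ins for illustration only) that checks whether a
hypothetical cache geometry would qualify for case 1 or case 2:

#include <stdio.h>

/* Stand-in for struct slabobj_ext: one pointer for the obj_cgroup and
 * one for the allocation tag (illustrative layout, not the real one). */
struct slabobj_ext_demo {
        void *objcg;
        void *alloc_tag;
};

int main(void)
{
        /* Hypothetical cache geometry; real values come from the kmem_cache. */
        unsigned int slab_bytes = 4 * 4096;     /* order-2 slab                   */
        unsigned int obj_size   = 1184;         /* per-object stride (s->size)    */
        unsigned int payload    = 1160;         /* bytes actually used per object */

        unsigned int objects  = slab_bytes / obj_size;
        unsigned int leftover = slab_bytes - objects * obj_size;
        unsigned int padding  = obj_size - payload;

        /* Case 1: the leftover after the last object holds the whole array. */
        if (leftover >= objects * sizeof(struct slabobj_ext_demo))
                printf("case 1: array fits in %u leftover bytes\n", leftover);

        /* Case 2: the per-object alignment padding holds one slabobj_ext. */
        if (padding >= sizeof(struct slabobj_ext_demo))
                printf("case 2: slabobj_ext fits in %u padding bytes\n", padding);

        return 0;
}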

Implementing case 2 is not straightforward, because the existing code
assumes that slab->obj_exts always points to an array of slabobj_ext,
and case 2 breaks that assumption.

As suggested by Vlastimil, abstract access to individual slabobj_ext
metadata via a new helper named slab_obj_ext():

static inline struct slabobj_ext *slab_obj_ext(struct slab *slab,
                                               unsigned long obj_exts,
                                               unsigned int index)
{
        return (struct slabobj_ext *)(obj_exts + slab_get_stride(slab) * index);
} 

In the normal case (including case 1), slab->obj_exts points to an array
of slabobj_ext, and the stride is sizeof(struct slabobj_ext).

In case 2, the stride is s->size and
slab->obj_exts = slab_address(slab) + s->red_left_pad + (offset of slabobj_ext)

With this approach, the memcg charging fastpath doesn't need to care
about how slabobj_ext is stored.
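
As a rough illustration of why one stride-based helper covers both
layouts, the userspace sketch below computes metadata addresses the
same way slab_obj_ext() does. obj_ext_at(), struct slabobj_ext_demo
and the geometry values are hypothetical stand-ins, not the actual
kernel implementation:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct slabobj_ext (illustrative layout). */
struct slabobj_ext_demo {
        void *objcg;
        void *alloc_tag;
};

/* Mirrors the idea of slab_obj_ext(): base address plus stride * index.
 * 'base' plays the role of slab->obj_exts; 'stride' is either
 * sizeof(struct slabobj_ext) (dense array) or s->size (case 2). */
static struct slabobj_ext_demo *obj_ext_at(unsigned long base,
                                           unsigned long stride,
                                           unsigned int index)
{
        return (struct slabobj_ext_demo *)(base + stride * index);
}

int main(void)
{
        unsigned int objects = 4;
        unsigned long obj_size = 256;           /* hypothetical s->size */

        /* Dense array layout: stride == sizeof(struct slabobj_ext). */
        struct slabobj_ext_demo *arr = calloc(objects, sizeof(*arr));

        /* Case 2 layout: metadata embedded every obj_size bytes,
         * at a fixed (made-up) offset of 128 inside each object. */
        unsigned char *slab_mem = calloc(objects, obj_size);
        unsigned long case2_base = (unsigned long)slab_mem + 128;

        for (unsigned int i = 0; i < objects; i++) {
                struct slabobj_ext_demo *a = obj_ext_at((unsigned long)arr,
                                                        sizeof(*arr), i);
                struct slabobj_ext_demo *b = obj_ext_at(case2_base,
                                                        obj_size, i);
                printf("index %u: dense %p, embedded %p\n", i,
                       (void *)a, (void *)b);
        }

        free(arr);
        free(slab_mem);
        return 0;
}

Switching between the dense array and the embedded layout only changes
the base address and the stride, which is exactly what slab->obj_exts
and slab_get_stride() encode in the series.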

Harry Yoo (9):
  mm/slab: use unsigned long for orig_size to ensure proper metadata
    align
  mm/slab: allow specifying free pointer offset when using constructor
  ext4: specify the free pointer offset for ext4_inode_cache
  mm/slab: abstract slabobj_ext access via new slab_obj_ext() helper
  mm/slab: use stride to access slabobj_ext
  mm/memcontrol,alloc_tag: handle slabobj_ext access under KASAN poison
  mm/slab: save memory by allocating slabobj_ext array from leftover
  mm/slab: move [__]ksize and slab_ksize() to mm/slub.c
  mm/slab: place slabobj_ext metadata in unused space within s->size

 fs/ext4/super.c      |  20 +-
 include/linux/slab.h |  39 ++--
 mm/memcontrol.c      |  31 +++-
 mm/slab.h            | 145 +++++++++++----
 mm/slab_common.c     |  69 +------
 mm/slub.c            | 429 +++++++++++++++++++++++++++++++++++++------
 6 files changed, 552 insertions(+), 181 deletions(-)

-- 
2.43.0

