Message-ID: <20260115130642.3419324-1-edumazet@google.com>
Date: Thu, 15 Jan 2026 13:06:42 +0000
From: Eric Dumazet <edumazet@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>, Eric Dumazet <eric.dumazet@...il.com>,
Eric Dumazet <edumazet@...gle.com>
Subject: [PATCH] slub: Make sure cache_from_obj() is inlined
clang ignores the inline attribute because it thinks cache_from_obj()
is too big.

Move the slow path into a separate function (__cache_from_obj())
and use __fastpath_inline to please both clang and CONFIG_SLUB_TINY
configs.

This makes kmem_cache_free() and build_detached_freelist()
slightly faster.
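For reference, __fastpath_inline is defined near the top of mm/slub.c
roughly as below (paraphrased from memory, check the tree for the exact
definition). Unlike plain inline, __always_inline overrides the
compiler's size heuristic, and with CONFIG_SLUB_TINY it expands to
nothing, so the wrapper stays out of line where size matters most:

    #ifndef CONFIG_SLUB_TINY
    #define __fastpath_inline __always_inline
    #else
    #define __fastpath_inline
    #endif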
$ size mm/slub.clang.before.o mm/slub.clang.after.o
   text	   data	    bss	    dec	    hex	filename
  77716	   7657	   4208	  89581	  15ded	mm/slub.clang.before.o
  77766	   7673	   4208	  89647	  15e2f	mm/slub.clang.after.o

$ scripts/bloat-o-meter -t mm/slub.clang.before.o mm/slub.clang.after.o
Function                                     old     new   delta
__cache_from_obj                               -     211    +211
build_detached_freelist                      542     569     +27
kmem_cache_free                              896     919     +23
cache_from_obj                               229       -    -229
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
---
mm/slub.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 861592ac54257b9d148ff921e6d8f62aced607b3..88a842411c5c3d770ff0070b592f745832d13d1a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6748,15 +6748,10 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 	return slab->slab_cache;
 }
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
+static struct kmem_cache *__cache_from_obj(struct kmem_cache *s, void *x)
 {
-	struct kmem_cache *cachep;
+	struct kmem_cache *cachep = virt_to_cache(x);
 
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
-
-	cachep = virt_to_cache(x);
 	if (WARN(cachep && cachep != s,
 		 "%s: Wrong slab cache. %s but object is from %s\n",
 		 __func__, s->name, cachep->name))
@@ -6764,6 +6759,15 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	return cachep;
 }
 
+static __fastpath_inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
+{
+	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
+	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
+		return s;
+
+	return __cache_from_obj(s, x);
+}
+
 /**
  * kmem_cache_free - Deallocate an object
  * @s: The cache the allocation was from.
base-commit: 944aacb68baf7624ab8d277d0ebf07f025ca137c
--
2.52.0.457.g6b5491de43-goog