Message-Id: <20230905141348.32946-4-feng.tang@intel.com>
Date: Tue, 5 Sep 2023 22:13:48 +0800
From: Feng Tang <feng.tang@...el.com>
To: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Feng Tang <feng.tang@...el.com>
Subject: [RFC Patch 3/3] mm/slub: set up maximum per-node partial according to cpu numbers
Currently most slabs' min_partial is set to 5 (as MIN_PARTIAL is 5).
This is fine for older or small systems, but could be too small for a
large system with hundreds of CPUs, where the per-node 'list_lock' is
contended when allocating from and freeing to the per-node partial
list.

So enlarge it based on the number of CPUs per node.
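As a rough illustration (the numbers are hypothetical, not measured):
on a 2-socket machine with 112 CPUs per node, rounddown_pow_of_two(112)
is 64, so min_partial would grow from 5 to 64 and many more slabs could
stay on the per-node partial list before being handed back to the page
allocator. A standalone sketch of the full computation follows the diff
below.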
Signed-off-by: Feng Tang <feng.tang@...el.com>
---
include/linux/nodemask.h | 1 +
mm/slub.c                | 9 +++++++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 8d07116caaf1..6e22caab186d 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -530,6 +530,7 @@ static inline int node_random(const nodemask_t *maskp)
 
 #define num_online_nodes()	num_node_state(N_ONLINE)
 #define num_possible_nodes()	num_node_state(N_POSSIBLE)
+#define num_cpu_nodes()		num_node_state(N_CPU)
 #define node_online(node)	node_state((node), N_ONLINE)
 #define node_possible(node)	node_state((node), N_POSSIBLE)
 
diff --git a/mm/slub.c b/mm/slub.c
index 09ae1ed642b7..984e012d7bbc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4533,6 +4533,7 @@ static int calculate_sizes(struct kmem_cache *s)
 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 {
+	unsigned long min_partial;
 	s->flags = kmem_cache_flags(s->size, flags, s->name);
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
 	s->random = get_random_long();
 #endif
@@ -4564,8 +4565,12 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	/*
 	 * The larger the object size is, the more slabs we want on the partial
 	 * list to avoid pounding the page allocator excessively.
 	 */
-	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
-	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
+
+	min_partial = rounddown_pow_of_two(num_online_cpus() / num_cpu_nodes());
+	min_partial = max_t(unsigned long, MIN_PARTIAL, min_partial);
+
+	s->min_partial = min_t(unsigned long, min_partial * 2, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, min_partial, s->min_partial);
 
 	set_cpu_partial(s);
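
Illustration only, not part of the patch: the userspace sketch below
mirrors the proposed computation, assuming a hypothetical machine with
2 NUMA nodes, 224 online CPUs and a 4KB object size; the helpers are
stand-ins for the kernel's rounddown_pow_of_two()/ilog2(). It prints
min_partial = 64, versus 5 today.

/*
 * Illustration only (not from the patch): userspace mirror of the
 * proposed min_partial scaling. CPU/node/object-size numbers are
 * hypothetical; rounddown_pow_of_two() and ilog2() are stand-ins
 * for the kernel helpers of the same names.
 */
#include <stdio.h>

#define MIN_PARTIAL	5

static unsigned long rounddown_pow_of_two(unsigned long n)
{
	while (n & (n - 1))
		n &= n - 1;		/* clear the lowest set bit */
	return n;
}

static unsigned long ilog2(unsigned long n)
{
	unsigned long log = 0;

	while (n >>= 1)
		log++;
	return log;
}

int main(void)
{
	unsigned long cpus = 224, nodes = 2, size = 4096;
	unsigned long min_partial, s_min_partial;

	/* per-node CPU count, rounded down to a power of two: 64 */
	min_partial = rounddown_pow_of_two(cpus / nodes);
	if (min_partial < MIN_PARTIAL)
		min_partial = MIN_PARTIAL;

	/* ilog2(4096) / 2 == 6, capped at min_partial * 2 ... */
	s_min_partial = ilog2(size) / 2;
	if (s_min_partial > min_partial * 2)
		s_min_partial = min_partial * 2;
	/* ... then floored at min_partial, giving 64 instead of 5 */
	if (s_min_partial < min_partial)
		s_min_partial = min_partial;

	printf("min_partial = %lu\n", s_min_partial);
	return 0;
}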
--
2.27.0