Message-Id: <20200222092428.99488-1-wenyang@linux.alibaba.com>
Date: Sat, 22 Feb 2020 17:24:28 +0800
From: Wen Yang <wenyang@...ux.alibaba.com>
To: Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Wen Yang <wenyang@...ux.alibaba.com>, Roman Gushchin <guro@...com>,
Xunlei Pang <xlpang@...ux.alibaba.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm/slub: improve count_partial() for CONFIG_SLUB_CPU_PARTIAL

In cloud server scenarios, reading "/proc/slabinfo" can block slab
allocation on another CPU for a while, 200ms in extreme cases. When the
affected slab objects carry network packets destined for a far-end disk
array, that delay shows up as block IO jitter.

This is because the list_lock, which protects the node partial list, is
taken while counting the free objects resident in that list. This
introduces lock contention whenever pages are moved between the CPU and
node partial lists by the allocation path on another CPU.
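
For illustration, a condensed sketch of the two paths that contend on
that lock (abridged from mm/slub.c; the actual acquire logic and error
handling are elided):

	/* Reader side, get_slabinfo() -> count_partial(): holds
	 * n->list_lock across the whole walk of the node partial list. */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);

	/* Allocation side, get_partial_node(): needs the same lock to
	 * detach a page from the node partial list, so it spins until
	 * the walk above finishes. */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
		/* ... try to acquire the slab for this cpu ... */
	}
	spin_unlock_irqrestore(&n->list_lock, flags);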

We also observed that in this scenario CONFIG_SLUB_CPU_PARTIAL is
enabled by default, and that count_partial() is useless there because
the number it returns is far from reality anyway.
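
That is because with CONFIG_SLUB_CPU_PARTIAL, partially free slabs are
also cached on per-CPU lists that the node partial list walk never
visits (abridged from struct kmem_cache_cpu in
include/linux/slub_def.h):

struct kmem_cache_cpu {
	void **freelist;	/* Pointer to next available object */
	unsigned long tid;	/* Globally unique transaction id */
	struct page *page;	/* The slab from which we are allocating */
#ifdef CONFIG_SLUB_CPU_PARTIAL
	struct page *partial;	/* Partially allocated frozen slabs */
#endif
	/* ... */
};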

Therefore, we can simply return 0: nr_free is then also 0, and
eventually active_objects == total_objects. This introduces no
regression, and it is preferable to show a uniform, if unrealistic,
100% slab utilization rather than a very high but incorrect value.
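
For reference, this is roughly how the result feeds /proc/slabinfo
(condensed from get_slabinfo() in mm/slub.c): with this patch and
CONFIG_SLUB_CPU_PARTIAL=y, nr_free stays 0, so active_objs ends up
equal to num_objs:

	for_each_kmem_cache_node(s, node, n) {
		nr_slabs += node_nr_slabs(n);
		nr_objs += node_nr_objs(n);
		nr_free += count_partial(n, count_free); /* now always 0 */
	}

	sinfo->active_objs = nr_objs - nr_free;	/* == nr_objs */
	sinfo->num_objs = nr_objs;
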
Co-developed-by: Roman Gushchin <guro@...com>
Signed-off-by: Roman Gushchin <guro@...com>
Signed-off-by: Wen Yang <wenyang@...ux.alibaba.com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Xunlei Pang <xlpang@...ux.alibaba.com>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 17dc00e..d5b7230 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2411,14 +2411,16 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 static unsigned long count_partial(struct kmem_cache_node *n,
 					int (*get_count)(struct page *))
 {
-	unsigned long flags;
 	unsigned long x = 0;
+#ifndef CONFIG_SLUB_CPU_PARTIAL
+	unsigned long flags;
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
+#endif
 	return x;
 }
 #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
--
1.8.3.1