Message-ID: <20240819070204.753179-1-liuyongqiang13@huawei.com>
Date: Mon, 19 Aug 2024 15:02:04 +0800
From: Yongqiang Liu <liuyongqiang13@...wei.com>
To: <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>, <zhangxiaoxu5@...wei.com>, <cl@...ux.com>,
<wangkefeng.wang@...wei.com>, <penberg@...nel.org>, <rientjes@...gle.com>,
<iamjoonsoo.kim@....com>, <akpm@...ux-foundation.org>, <vbabka@...e.cz>,
<roman.gushchin@...ux.dev>, <42.hyeyoo@...il.com>
Subject: [PATCH] mm, slub: prefetch freelist in ___slab_alloc()

Commit 0ad9500e16fe ("slub: prefetch next freelist pointer in
slab_alloc()") introduced prefetch_freepointer() for the allocation
fastpath. Using it when the freelist is first loaded in
___slab_alloc() can also give a small improvement in some workloads.
Here are hackbench results on an arm64 machine (about 3.8%
improvement):

Before:
average time cost of 'hackbench -g 100 -l 1000': 17.068

After:
average time cost of 'hackbench -g 100 -l 1000': 16.416

There is also about a 5% improvement on an x86_64 machine for
hackbench.
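
To illustrate the idea outside of mm/slub.c, here is a minimal
standalone userspace sketch (all names below are illustrative, not
the kernel's; __builtin_prefetch stands in for the kernel's
prefetchw()): when popping the head of a linked free list, issue a
write prefetch for the new head so its cache line is likely warm by
the time the next allocation dereferences it.

/*
 * Standalone sketch of the prefetch-next-freelist-pointer idea.
 * Illustrative only; this is not the kernel implementation.
 */
#include <stdio.h>
#include <stdlib.h>

struct free_obj {
	struct free_obj *next;	/* freelist pointer kept inside the object */
};

static struct free_obj *freelist;

static void *freelist_pop(void)
{
	struct free_obj *obj = freelist;

	if (!obj)
		return NULL;
	freelist = obj->next;
	if (freelist)
		/* Warm the new head's cache line for the next allocation. */
		__builtin_prefetch(freelist, 1, 3);
	return obj;
}

int main(void)
{
	int i;

	/* Build a small free list out of heap objects. */
	for (i = 0; i < 4; i++) {
		struct free_obj *obj = malloc(sizeof(*obj));

		obj->next = freelist;
		freelist = obj;
	}
	while (freelist)
		printf("allocated %p\n", freelist_pop());
	return 0;
}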
Signed-off-by: Yongqiang Liu <liuyongqiang13@...wei.com>
---
mm/slub.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..f9daaff10c6a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3630,6 +3630,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	VM_BUG_ON(!c->slab->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
+	prefetch_freepointer(s, c->freelist);
 	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	return freelist;
--
2.25.1
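
For reference, prefetch_freepointer() itself is a thin helper in
mm/slub.c; in kernels around the time of this patch it is roughly the
following (paraphrased from the source, so details may differ across
versions):

static void prefetch_freepointer(const struct kmem_cache *s, void *object)
{
	prefetchw(object + s->offset);
}

That is, it issues a write prefetch on the location of the freelist
pointer inside the next free object, which is exactly the word the
subsequent allocation will load and update.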