Message-Id: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>
Date: Mon, 10 Aug 2020 20:17:49 +0800
From: Xunlei Pang <xlpang@...ux.alibaba.com>
To: Vlastimil Babka <vbabka@...e.cz>, Christoph Lameter <cl@...ux.com>,
Wen Yang <wenyang@...ux.alibaba.com>,
Roman Gushchin <guro@...com>, Pekka Enberg <penberg@...il.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
David Rientjes <rientjes@...gle.com>,
Xunlei Pang <xlpang@...ux.alibaba.com>
Cc: linux-kernel@...r.kernel.org,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: [PATCH v2 0/3] mm/slub: Fix count_partial() problem
v1->v2:
- Improved changelog and variable naming for PATCH 1~2.
- PATCH 3 adds a per-cpu counter to avoid a performance regression
  in concurrent __slab_free() (see the sketch after the test results).
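
For context, the count_partial() problem this series fixes: count_partial()
walks the whole per-node partial list under n->list_lock with IRQs disabled,
so a read of /proc/slabinfo can hold the lock for a long time when the
partial list is huge. Roughly, the unpatched code looks like this
(simplified from mm/slub.c):

    static unsigned long count_partial(struct kmem_cache_node *n,
                                       int (*get_count)(struct page *))
    {
            unsigned long flags;
            unsigned long x = 0;
            struct page *page;

            /* Held across the whole walk: O(partial list length). */
            spin_lock_irqsave(&n->list_lock, flags);
            list_for_each_entry(page, &n->partial, slab_list)
                    x += get_count(page);
            spin_unlock_irqrestore(&n->list_lock, flags);
            return x;
    }

PATCH 1~2 replace this walk with counters maintained as slabs move on and
off the partial lists, making the read O(1).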
[Testing]
On my 32-cpu 2-socket physical machine:
Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
perf stat --null --repeat 10 -- hackbench 20 thread 20000
== original, unpatched
19.211637055 seconds time elapsed ( +- 0.57% )
== patched with patch1~2
Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
21.731833146 seconds time elapsed ( +- 0.17% )
== patched with patch1~3
Performance counter stats for 'hackbench 20 thread 20000' (10 runs):
19.112106847 seconds time elapsed ( +- 0.64% )
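
The pattern in the numbers matches the design: patch1~2 maintain the new
counters on hot paths, and the shared counter updates contend in concurrent
__slab_free() (21.73s vs. 19.21s); patch3 moves the free-side count to a
per-cpu counter, recovering the loss (19.11s). A minimal sketch of the
per-cpu idea, with illustrative names only (the field and helper names
below are assumptions, not the patch's actual symbols; percpu allocation
and teardown are omitted):

    struct kmem_cache_node {
            /* ... existing fields ... */
            atomic_long_t total_partial_objs;  /* objects in partial slabs */
            unsigned long __percpu *pfree;     /* objects freed, per cpu   */
    };

    /* Free fast path: no shared cacheline is written, so no contention. */
    static inline void partial_free_inc(struct kmem_cache_node *n)
    {
            this_cpu_inc(*n->pfree);
    }

    /* Slow read path (e.g. /proc/slabinfo): sum the per-cpu values. */
    static unsigned long partial_free_approx(struct kmem_cache_node *n)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += *per_cpu_ptr(n->pfree, cpu);
            return sum;
    }

The summed value is only approximate while frees are in flight, which is
acceptable for statistics-style readers such as /proc/slabinfo.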
Xunlei Pang (3):
mm/slub: Introduce two counters for partial objects
mm/slub: Get rid of count_partial()
mm/slub: Use percpu partial free counter
mm/slab.h | 2 +
mm/slub.c | 124 +++++++++++++++++++++++++++++++++++++++++++-------------------
2 files changed, 89 insertions(+), 37 deletions(-)
--
1.8.3.1