Message-Id: <20230926060911.266511-1-ying.huang@intel.com>
Date: Tue, 26 Sep 2023 14:09:01 +0800
From: Huang Ying <ying.huang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Arjan Van De Ven <arjan@...ux.intel.com>,
Huang Ying <ying.huang@...el.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>,
Christoph Lameter <cl@...ux.com>
Subject: [PATCH -V2 00/10] mm: PCP high auto-tuning
Different workloads often have different page allocation performance
requirements. So, we need to tune the PCP (Per-CPU Pageset) high on
each CPU automatically to optimize the page allocation performance.
The list of patches in the series is as follows:
1 mm, pcp: avoid to drain PCP when process exit
2 cacheinfo: calculate per-CPU data cache size
3 mm, pcp: reduce lock contention for draining high-order pages
4 mm: restrict the pcp batch scale factor to avoid too long latency
5 mm, page_alloc: scale the number of pages that are batch allocated
6 mm: add framework for PCP high auto-tuning
7 mm: tune PCP high automatically
8 mm, pcp: decrease PCP high if free pages < high watermark
9 mm, pcp: avoid to reduce PCP high unnecessarily
10 mm, pcp: reduce detecting time of consecutive high order page freeing
Patches 1/2/3 optimize PCP draining for consecutive high-order page
freeing.
Patches 4/5 optimize batch freeing and allocation.
Patches 6/7/8/9 implement and optimize a PCP high auto-tuning method.
Patch 10 optimizes PCP draining for consecutive high-order page
freeing based on the PCP high auto-tuning.
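
To make the tuning idea concrete, below is a minimal user-space model
of the grow/decay behavior.  It is only an illustrative sketch: the
struct, the function names, and the thresholds are assumptions chosen
for the example, not the kernel code.  The idea it shows is that the
PCP high grows toward high_max while a CPU keeps freeing pages, and
decays back toward high_min once the freeing activity dies down, so
pages are only cached in the PCP while caching them is useful.

/*
 * Illustrative user-space model of PCP high auto-tuning (not kernel code).
 */
#include <stdio.h>

struct pcp_model {
        int count;      /* pages currently cached on this CPU */
        int high;       /* current, auto-tuned high mark */
        int high_min;   /* lower bound (e.g. the static default) */
        int high_max;   /* upper bound (e.g. derived from cache size) */
        int free_count; /* recent freeing activity */
};

/* Free path: recent freeing pushes "high" up, excess pages are drained. */
static void model_free_pages(struct pcp_model *pcp, int nr)
{
        pcp->free_count += nr;
        pcp->high += nr;
        if (pcp->high > pcp->high_max)
                pcp->high = pcp->high_max;

        pcp->count += nr;
        if (pcp->count > pcp->high)
                pcp->count = pcp->high; /* drain the excess to the zone */
}

/* Periodic path: decay "high" toward high_min when freeing stops. */
static void model_decay_high(struct pcp_model *pcp)
{
        pcp->free_count /= 2;
        if (pcp->free_count < pcp->high_min && pcp->high > pcp->high_min)
                pcp->high -= (pcp->high - pcp->high_min) / 2;
}

int main(void)
{
        struct pcp_model pcp = { .high = 64, .high_min = 64, .high_max = 640 };
        int i;

        for (i = 0; i < 8; i++)
                model_free_pages(&pcp, 128);
        printf("after a freeing burst: high=%d count=%d\n", pcp.high, pcp.count);

        for (i = 0; i < 8; i++)
                model_decay_high(&pcp);
        printf("after idle decay:      high=%d\n", pcp.high);
        return 0;
}

In the actual series the bounds and the decay are driven by the
kernel's own heuristics (for example, the per-CPU data cache size
calculated in patch 2); the sketch only illustrates the grow/decay
shape of the tuning.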
The test results for the patches with performance impact are as follows:
kbuild
======
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild servers used by the 0-Day kbuild service.
          build time  lock contend%  free_high  alloc_zone
          ----------  -------------  ---------  ----------
base           100.0           13.5      100.0       100.0
patch1          99.2           10.6       19.2        95.6
patch3          99.2           11.7        7.1        95.6
patch5          98.4           10.0        8.2        97.1
patch7          94.9            0.7        3.0        19.0
patch9          94.9            0.6        2.7        15.0
patch10         94.9            0.9        8.8        18.6
The PCP draining optimization (patches 1/3) and the PCP batch
allocation optimization (patch 5) reduce zone lock contention a
little. The PCP high auto-tuning (patches 7/9/10) reduces the build
time visibly. With the tuning, the number of pages allocated from the
zone (the tuning target) is reduced greatly, so the zone lock
contention cycles% is reduced greatly.
With the PCP tuning patches (patches 7/9/10), the average memory usage
during the test increases by up to 21.0% because more pages are cached
in the PCP. But at the end of the test, the used memory decreases to
the same level as that of the base kernel. That is, the pages cached
in the PCP are released back to the zone after they are no longer used
actively.
netperf SCTP_STREAM_MANY
========================
On a 2-socket Intel server with 128 logical CPUs, we tested the
SCTP_STREAM_MANY test case of the netperf test suite with 64-pair
processes.
          score  lock contend%  free_high  alloc_zone  cache miss rate%
          -----  -------------  ---------  ----------  ----------------
base      100.0            2.0      100.0       100.0               1.3
patch1     99.7            2.0       99.7        99.7               1.3
patch3    105.5            1.2       13.2       105.4               1.2
patch5    106.9            1.2       13.4       106.9               1.3
patch7    103.5            1.8        6.8        90.8               7.6
patch9    103.7            1.8        6.6        89.8               7.7
patch10   106.9            1.2       13.5       106.9               1.2
The PCP draining optimization (patches 1/3) improves performance. The
PCP high auto-tuning (patches 7/9) reduces performance a little
because PCP draining sometimes cannot be triggered in time, so the
cache miss rate% increases. The further PCP draining optimization
(patch 10) on top of the PCP tuning restores the performance.
lmbench3 UNIX (AF_UNIX)
=======================
On a 2-socket Intel server with 128 logical CPUs, we tested the UNIX
(AF_UNIX socket) test case of the lmbench3 test suite with 16-pair
processes.
          score  lock contend%  free_high  alloc_zone  cache miss rate%
          -----  -------------  ---------  ----------  ----------------
base      100.0           50.0      100.0       100.0               0.3
patch1    117.1           45.8       72.6       108.9               0.2
patch3    201.6           21.2        7.4       111.5               0.2
patch5    201.9           20.9        7.5       112.7               0.3
patch7    194.2           19.3        7.3       111.5               2.9
patch9    193.1           19.2        7.2       110.4               2.9
patch10   196.8           21.0        7.4       111.2               2.1
The PCP draining optimization (patches 1/3) improves performance
greatly. The PCP tuning (patches 7/9) reduces performance a little
because PCP draining sometimes cannot be triggered in time. The
further PCP draining optimization (patch 10) on top of the PCP tuning
restores the performance partly.
The patchset adds several fields to struct per_cpu_pages. The struct
layout before/after the patchset is as follows:
base
====
struct per_cpu_pages {
        spinlock_t                 lock;                 /*     0     4 */
        int                        count;                /*     4     4 */
        int                        high;                 /*     8     4 */
        int                        batch;                /*    12     4 */
        short int                  free_factor;          /*    16     2 */
        short int                  expire;               /*    18     2 */

        /* XXX 4 bytes hole, try to pack */

        struct list_head           lists[13];            /*    24   208 */

        /* size: 256, cachelines: 4, members: 7 */
        /* sum members: 228, holes: 1, sum holes: 4 */
        /* padding: 24 */
} __attribute__((__aligned__(64)));
patched
=======
struct per_cpu_pages {
        spinlock_t                 lock;                 /*     0     4 */
        int                        count;                /*     4     4 */
        int                        count_min;            /*     8     4 */
        int                        high;                 /*    12     4 */
        int                        high_min;             /*    16     4 */
        int                        high_max;             /*    20     4 */
        int                        batch;                /*    24     4 */
        u8                         flags;                /*    28     1 */
        u8                         alloc_factor;         /*    29     1 */
        u8                         expire;               /*    30     1 */

        /* XXX 1 byte hole, try to pack */

        short int                  free_count;           /*    32     2 */

        /* XXX 6 bytes hole, try to pack */

        struct list_head           lists[13];            /*    40   208 */

        /* size: 256, cachelines: 4, members: 12 */
        /* sum members: 241, holes: 2, sum holes: 7 */
        /* padding: 8 */
} __attribute__((__aligned__(64)));
The size of the struct is not changed by the patchset.
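From the pahole annotations above, both layouts add up to the same 4
cache lines:

        base:    228 (members) + 4 (hole)  + 24 (padding) = 256 bytes
        patched: 241 (members) + 7 (holes) +  8 (padding) = 256 bytes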
Changelog:
v2:
- Fix the kbuild test configuration and results. Thanks to Andrew for
  the reminder about the test results!
- Add documentation for the sysctl behavior extension in [06/10] per Andrew's comments.
Best Regards,
Huang, Ying