Message-ID: <20240111183321.19984-1-ryncsn@gmail.com>
Date: Fri, 12 Jan 2024 02:33:18 +0800
From: Kairui Song <ryncsn@...il.com>
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yu Zhao <yuzhao@...gle.com>,
Chris Li <chrisl@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
linux-kernel@...r.kernel.org,
Kairui Song <kasong@...cent.com>
Subject: [PATCH v2 0/3] mm, lru_gen: batch update pages when aging
From: Kairui Song <kasong@...cent.com>
Hi, this is an updated version of the previous series:
https://lore.kernel.org/linux-mm/20231222102255.56993-1-ryncsn@gmail.com/
Currently, when MGLRU ages, it moves pages one by one and updates the mm
counters page by page. This is correct, but the overhead can be reduced
by batching these operations.
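To illustrate the batching pattern, here is a minimal userspace sketch
(not the actual mm/vmscan.c change; gen_update_page() and
gen_update_flush() are made-up names for illustration):

/*
 * Accumulate per-generation deltas while walking pages, then apply them
 * to the counters once, instead of updating on every page.
 */
#include <stdio.h>

#define MAX_NR_GENS 4

struct lruvec_stats {
	long nr_pages[MAX_NR_GENS];	/* per-generation page counts */
};

struct gen_update {
	long delta[MAX_NR_GENS];	/* pending deltas, applied on flush */
};

/* Record one page moving from old_gen to new_gen; no counter touched yet. */
static void gen_update_page(struct gen_update *u, int old_gen, int new_gen)
{
	u->delta[old_gen]--;
	u->delta[new_gen]++;
}

/* Apply all accumulated deltas in one pass. */
static void gen_update_flush(struct lruvec_stats *stats, struct gen_update *u)
{
	for (int gen = 0; gen < MAX_NR_GENS; gen++) {
		stats->nr_pages[gen] += u->delta[gen];
		u->delta[gen] = 0;
	}
}

int main(void)
{
	struct lruvec_stats stats = { .nr_pages = { 100, 0, 0, 0 } };
	struct gen_update u = { { 0 } };

	/* "Age" 100 pages from gen 0 to gen 1 with a single counter flush. */
	for (int i = 0; i < 100; i++)
		gen_update_page(&u, 0, 1);
	gen_update_flush(&stats, &u);

	printf("gen0=%ld gen1=%ld\n", stats.nr_pages[0], stats.nr_pages[1]);
	return 0;
}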
In the previous series I only tested with memtier, which didn't show a
large enough improvement. Actually, in-memory fio benefits the most from
patch 3:
Ramdisk fio test in a 4G memcg on an EPYC 7K62 with:
fio -name=mglru --numjobs=16 --directory=/mnt --size=960m \
--buffered=1 --ioengine=io_uring --iodepth=128 \
--iodepth_batch_submit=32 --iodepth_batch_complete=32 \
--rw=randread --random_distribution=zipf:0.5 --norandommap \
--time_based --ramp_time=1m --runtime=5m --group_reporting
Before this series:
bw ( MiB/s): min= 7644, max= 9293, per=100.00%, avg=8777.77, stdev=16.59, samples=9568
iops : min=1956954, max=2379053, avg=2247108.51, stdev=4247.22, samples=9568
After this series (+7.5%):
bw ( MiB/s): min= 8462, max= 9902, per=100.00%, avg=9444.77, stdev=16.43, samples=9568
iops : min=2166433, max=2535135, avg=2417858.23, stdev=4205.15, samples=9568
However, the gain depends heavily on the actual timing and use case.
Besides, moving in batches also has a good effect on LRU ordering.
Currently, when MGLRU ages, it walks the LRU backward and the protected
pages are moved to the tail of the newer gen one by one, which reverses
their order in the LRU. Moving them in batches helps keep their order,
though only within a small scope due to the scan limit of MAX_LRU_BATCH
pages.
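To see the effect, here is a minimal userspace sketch (the list helpers
below only mimic include/linux/list.h; this is not the kernel code):

#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* Add n at the tail of the list headed by h. */
static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev; n->next = h;
	h->prev->next = n; h->prev = n;
}

/* Splice all of 'from' onto the tail of 'to' in one step, keeping order. */
static void list_splice_tail_init(struct list_head *from, struct list_head *to)
{
	if (from->next == from)
		return;
	from->next->prev = to->prev;
	to->prev->next = from->next;
	from->prev->next = to;
	to->prev = from->prev;
	list_init(from);
}

struct page { struct list_head lru; int nr; };	/* lru must stay first */

static void print_gen(const char *name, struct list_head *gen)
{
	printf("%s:", name);
	for (struct list_head *n = gen->next; n != gen; n = n->next)
		printf(" %d", ((struct page *)n)->nr);
	printf("\n");
}

int main(void)
{
	struct page p[3] = { { .nr = 1 }, { .nr = 2 }, { .nr = 3 } };
	struct list_head old_gen, new_gen, batch;

	list_init(&old_gen); list_init(&new_gen); list_init(&batch);
	for (int i = 0; i < 3; i++)
		list_add_tail(&p[i].lru, &old_gen);

	/* Walk backward, moving each page to the newer gen's tail: the
	 * pages come out as 3 2 1, i.e. reversed. */
	while (old_gen.prev != &old_gen) {
		struct list_head *n = old_gen.prev;
		list_del(n);
		list_add_tail(n, &new_gen);
	}
	print_gen("one by one", &new_gen);	/* 3 2 1 */

	/* Gather the same pages into a batch, then splice once: order kept. */
	list_init(&new_gen);
	for (int i = 0; i < 3; i++)
		list_add_tail(&p[i].lru, &batch);
	list_splice_tail_init(&batch, &new_gen);
	print_gen("spliced", &new_gen);		/* 1 2 3 */

	return 0;
}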
I noticed a higher performance gain when many pages are being protected,
but it's hard to reproduce reliably, so instead I tested with a simpler
and more generic benchmark, memtier. Aging is not the main overhead
there, but the result still looks good:
Average result of 18 test runs:
Before: 44017.78 Ops/sec
After patch 1-3: 44890.50 Ops/sec (+2.0%)
More test results can be found in the commit messages.
Kairui Song (3):
mm, lru_gen: batch update counters on aging
mm, lru_gen: move pages in bulk when aging
mm, lru_gen: try to prefetch next page when scanning LRU
mm/vmscan.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 124 insertions(+), 16 deletions(-)
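For reference, the core idea of patch 3 in sketch form (illustrative
only; it uses the GCC/Clang __builtin_prefetch rather than the kernel's
prefetch helpers):

struct list_head {
	struct list_head *prev, *next;
};

/* While handling the current entry, start fetching the next one so the
 * dependent pointer load overlaps with useful work. */
void scan_backward(struct list_head *lru)
{
	for (struct list_head *n = lru->prev; n != lru; n = n->prev) {
		__builtin_prefetch(n->prev);	/* GCC/Clang builtin */
		/* ... examine the page containing n ... */
	}
}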
--
2.43.0