Message-ID: <Z2sFzKmHPI0kI_fq@google.com>
Date: Tue, 24 Dec 2024 12:04:44 -0700
From: Yu Zhao <yuzhao@...gle.com>
To: kernel test robot <oliver.sang@...el.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com, Kairui Song <kasong@...cent.com>,
Kalesh Singh <kaleshsingh@...gle.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH mm-unstable v3 6/6] mm/mglru: rework workingset protection
On Mon, Dec 23, 2024 at 04:44:44PM +0800, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed a 5.7% regression of will-it-scale.per_process_ops on:
Thanks, Oliver!
> commit: 3b7734aa8458b62ecbfd785ca7918e831565006e ("[PATCH mm-unstable v3 6/6] mm/mglru: rework workingset protection")
> url: https://github.com/intel-lab-lkp/linux/commits/Yu-Zhao/mm-mglru-clean-up-workingset/20241208-061714
> base: v6.13-rc1
> patch link: https://lore.kernel.org/all/20241207221522.2250311-7-yuzhao@google.com/
> patch subject: [PATCH mm-unstable v3 6/6] mm/mglru: rework workingset protection
>
> testcase: will-it-scale
> config: x86_64-rhel-9.4
> compiler: gcc-12
> test machine: 104 threads 2 sockets (Skylake) with 192G memory
> parameters:
>
> nr_task: 100%
> mode: process
> test: pread2
> cpufreq_governor: performance
I think this is very likely caused by my change to folio_mark_accessed(),
which unnecessarily dirties cache lines shared between different cores.
Could you please try the following fix?
diff --git a/mm/swap.c b/mm/swap.c
index 062c8565b899..54bce14fef30 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -395,7 +395,8 @@ static void lru_gen_inc_refs(struct folio *folio)
 	do {
 		if ((old_flags & LRU_REFS_MASK) == LRU_REFS_MASK) {
-			folio_set_workingset(folio);
+			if (!folio_test_workingset(folio))
+				folio_set_workingset(folio);
 			return;
 		}
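
For reference, the fix above is an instance of the generic "test before
set" pattern for hot paths: a read-only check leaves the cache line in
shared state across cores, and the atomic RMW that dirties the line only
happens on the first transition of the flag. A minimal userspace C sketch
of the same idea (mark_hot and FLAG_HOT are hypothetical names for
illustration, not the kernel code):

#include <stdatomic.h>

#define FLAG_HOT (1UL << 0)

static void mark_hot(atomic_ulong *flags)
{
	/* Read-only check: no cache-line ownership transfer if already set. */
	if (atomic_load_explicit(flags, memory_order_relaxed) & FLAG_HOT)
		return;

	/* Dirty the shared cache line only the first time the flag is raised. */
	atomic_fetch_or_explicit(flags, FLAG_HOT, memory_order_relaxed);
}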