Message-Id: <20230321075632.28775-1-yang.yang29@zte.com.cn>
Date: Tue, 21 Mar 2023 07:56:32 +0000
From: Yang Yang <yang.yang29@....com.cn>
To: yujie.liu@...el.com, akpm@...ux-foundation.org, hannes@...xchg.org,
iamjoonsoo.kim@....com
Cc: bagasdotme@...il.com, feng.tang@...el.com, fengwei.yin@...el.com,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, lkp@...el.com, oe-lkp@...ts.linux.dev,
ran.xiaokai@....com.cn, willy@...radead.org,
yang.yang29@....com.cn, ying.huang@...el.com,
zhengjun.xing@...ux.intel.com
Subject: [linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression
> commit:
> 04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to folios")
> 5649d113ff ("swap_state: update shadow_nodes for anonymous page")
> 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 10026093 ± 3% -33.1% 6702748 ± 2% vm-scalability.throughput
> 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> ---------------- ---------------------------
> %stddev %change %stddev
> \ | \
> 553378 -11.1% 492012 ± 2% vm-scalability.median
I see the two results are quite different: one is -33.1%, the other is -11.1%.
So I tried several more times to reproduce this on my machine, and saw an 8%
regression in vm-scalability.throughput.
As this test adds/deletes/clears swap cache entries frequently, might the
impact of commit 5649d113ff be magnified?
Commit 5649d113ff tried to fix the problem that if swap space is huge and
apps use many shadow entries, shadow nodes may waste a lot of memory. So the
shadow nodes should be reclaimed when their number is large while memory is
under pressure.
I reviewed commit 5649d113ff carefully and didn't find any obvious problem.
If we want to correctly update shadow_nodes for anonymous pages, we have to
update them when adding/deleting/clearing swap cache entries.
Thanks.