Message-ID: <5f2401d3-dd4f-cbc6-8cb4-4e92fc64998c@linux.alibaba.com>
Date: Mon, 20 Jul 2020 11:01:10 +0800
From: Alex Shi <alex.shi@...ux.alibaba.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Alexander Duyck <alexander.duyck@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
kbuild test robot <lkp@...el.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org,
Shakeel Butt <shakeelb@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Wei Yang <richard.weiyang@...il.com>,
"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [PATCH v16 00/22] per memcg lru_lock
On 2020/7/19 at 11:23 PM, Hugh Dickins wrote:
> I noticed that 5.8-rc5, with lrulock v16 applied, took significantly
> longer to run loads than without it applied, when there should have been
> only slight differences in system time. Comparing /proc/vmstat, something
> that stood out was "pgrotated 0" for the patched kernels, which led here:
>
> If pagevec_lru_move_fn() is now to TestClearPageLRU (I have still not
> decided whether that's good or not, but assume here that it is good),
> then functions called through it must be changed not to expect PageLRU!
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
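
[For context: the series reshapes pagevec_lru_move_fn() so that each page is
isolated with TestClearPageLRU() before the move callback runs, which is why a
callback reached through it can no longer expect PageLRU. A minimal sketch of
that shape follows; helper names such as relock_page_lruvec_irqsave() and
unlock_page_lruvec_irqrestore() are taken from the v16 series and may differ
in detail:]

static void pagevec_lru_move_fn(struct pagevec *pvec,
	void (*move_fn)(struct page *page, struct lruvec *lruvec))
{
	int i;
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/* Skip pages already off the LRU; isolate the rest. */
		if (!TestClearPageLRU(page))
			continue;

		/* Take (or keep) the per-memcg lru_lock for this page. */
		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
		(*move_fn)(page, lruvec);

		SetPageLRU(page);
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);
	release_pages(pvec->pages, pvec->nr);
	pagevec_reinit(pvec);
}
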
Good catch!
Thanks a lot, Hugh!

Besides the 6 changes that should be applied, it looks like we need to add one
more in swap.c to stop further actions on !PageLRU pages, as in the hunk below.

Many thanks!
Alex

@@ -649,7 +647,7 @@ void deactivate_file_page(struct page *page)
 	 * In a workload with many unevictable page such as mprotect,
 	 * unevictable page deactivation for accelerating reclaim is pointless.
 	 */
-	if (PageUnevictable(page))
+	if (PageUnevictable(page) || !PageLRU(page))
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
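
[For reference, with that one-liner applied, deactivate_file_page() reads
roughly as below. This is a sketch against 5.8-era mm/swap.c with the series
applied (the series also drops pagevec_lru_move_fn()'s third argument), so
surrounding details may differ:]

void deactivate_file_page(struct page *page)
{
	/*
	 * In a workload with many unevictable pages such as mprotect,
	 * deactivating unevictable pages to accelerate reclaim is
	 * pointless. Likewise, once pagevec_lru_move_fn() isolates
	 * pages with TestClearPageLRU(), a page already off the LRU
	 * has nothing to deactivate, so skip it here as well.
	 */
	if (PageUnevictable(page) || !PageLRU(page))
		return;

	if (likely(get_page_unless_zero(page))) {
		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);

		if (pagevec_add(pvec, page) && !PageCompound(page))
			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn);
		put_cpu_var(lru_deactivate_file_pvecs);
	}
}

[The !PageLRU test here is only an early-out hint; the authoritative isolation
still happens via TestClearPageLRU() inside pagevec_lru_move_fn().]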