Message-ID: <alpine.LSU.2.11.2007190801490.3521@eggly.anvils>
Date: Sun, 19 Jul 2020 08:23:14 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Alex Shi <alex.shi@...ux.alibaba.com>
cc: Alexander Duyck <alexander.duyck@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Tejun Heo <tj@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
kbuild test robot <lkp@...el.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, cgroups@...r.kernel.org,
Shakeel Butt <shakeelb@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Wei Yang <richard.weiyang@...il.com>,
"Kirill A. Shutemov" <kirill@...temov.name>
Subject: Re: [PATCH v16 00/22] per memcg lru_lock
On Fri, 17 Jul 2020, Alex Shi wrote:
> On 2020/7/16 10:11 PM, Alexander Duyck wrote:
> >> Thanks for testing support from Intel 0day and Rong Chen, Fengguang Wu,
> >> and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!
> > Hi Alex,
> >
> > I think I am seeing a regression with this patch set when I run the
> > will-it-scale/page_fault3 test. Specifically the processes result is
> > dropping from 56371083 to 43127382 when I apply these patches.
> >
> > I haven't had a chance to bisect and figure out what is causing it,
> > and wanted to let you know in case you are aware of anything specific
> > that may be causing this.
>
>
> Thanks a lot for the info!
>
> Actually, patches 17 and 13 may have changed performance a little;
> for patch 17, Intel LKP found a 68.0% vm-scalability.throughput improvement,
> a -76.3% stress-ng.remap.ops_per_sec regression, a +23.2% gain in
> stress-ng.memfd.ops_per_sec, etc.
>
> This kind of performance interference is known and acceptable.

That may be too blithe a response.
I can see that I've lots of other mails to reply to, from you and from
others - I got held up for a week in advancing from gcc 4.8 on my test
machines. But I'd better rush this to you before reading further, because
what I was hunting the last few days rather invalidates earlier testing.
And I'm glad that I held back from volunteering a Tested-by - though,
yes, v13 and later are stable where the older versions were unstable.
I noticed that 5.8-rc5, with lrulock v16 applied, took significantly
longer to run loads than without it applied, when there should have been
only slight differences in system time. Comparing /proc/vmstat, something
that stood out was "pgrotated 0" for the patched kernels, which led here:
If pagevec_lru_move_fn() is now to TestClearPageLRU (I have still not
decided whether that's good or not, but assume here that it is good),
then the functions called through it must be changed not to expect PageLRU!
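
To make that concrete, here is a stand-alone sketch (userspace model, not
kernel code: the struct, flag and helper names below are only stand-ins for
the page flags, TestClearPageLRU and the pagevec move functions). The caller
clears the LRU flag before invoking the per-page function, so a callee that
still checks the flag bails out every time - which is how pgrotated ends up
stuck at 0.

/*
 * Illustration only: the "caller" test-and-clears the lru flag before
 * calling the move function, so a callee that re-checks the flag never
 * does its work.
 */
#include <stdbool.h>
#include <stdio.h>

struct page { bool lru; bool unevictable; };

static bool test_clear_lru(struct page *page)
{
	bool was_lru = page->lru;

	page->lru = false;	/* the caller now owns this flag */
	return was_lru;
}

/* Callee before this fix: still insists on seeing the flag set. */
static void move_tail_old(struct page *page, int *moved)
{
	if (page->lru && !page->unevictable)	/* always false here */
		(*moved)++;
}

/* Callee after this fix: trusts the caller's test-and-clear. */
static void move_tail_new(struct page *page, int *moved)
{
	if (!page->unevictable)
		(*moved)++;
}

int main(void)
{
	struct page page = { .lru = true, .unevictable = false };
	int old_moved = 0, new_moved = 0;

	if (test_clear_lru(&page)) {	/* as pagevec_lru_move_fn now does */
		move_tail_old(&page, &old_moved);
		move_tail_new(&page, &new_moved);
	}
	printf("old callee moved %d, new callee moved %d\n",
	       old_moved, new_moved);
	return 0;
}

(The activate_page() hunk below keeps a PageLRU check in the caller, since
that path calls __activate_page() directly rather than through
pagevec_lru_move_fn().)
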
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
mm/swap.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
--- 5.8-rc5-lru16/mm/swap.c 2020-07-15 21:03:42.781236769 -0700
+++ linux/mm/swap.c 2020-07-18 13:28:14.000000000 -0700
@@ -227,7 +227,7 @@ static void pagevec_lru_move_fn(struct p
 
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageLRU(page) && !PageUnevictable(page)) {
+	if (!PageUnevictable(page)) {
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
@@ -300,7 +300,7 @@ void lru_note_cost_page(struct page *pag
 
 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
-	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
+	if (!PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
 		int nr_pages = hpage_nr_pages(page);
 
@@ -357,7 +357,8 @@ void activate_page(struct page *page)
 
 	page = compound_head(page);
 	lruvec = lock_page_lruvec_irq(page);
-	__activate_page(page, lruvec);
+	if (PageLRU(page))
+		__activate_page(page, lruvec);
 	unlock_page_lruvec_irq(lruvec);
 }
 #endif
@@ -515,9 +516,6 @@ static void lru_deactivate_file_fn(struc
 	bool active;
 	int nr_pages = hpage_nr_pages(page);
 
-	if (!PageLRU(page))
-		return;
-
 	if (PageUnevictable(page))
 		return;
 
@@ -558,7 +556,7 @@ static void lru_deactivate_file_fn(struc
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
+	if (PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
 		int nr_pages = hpage_nr_pages(page);
 
@@ -575,7 +573,7 @@ static void lru_deactivate_fn(struct pag
 
 static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
+	if (PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
 		int nr_pages = hpage_nr_pages(page);