Message-ID: <CAHk-=wjEd58q7Seh18w9S0+UWAGRYgjTiOahAyYiHCrc1N6YZw@mail.gmail.com>
Date: Mon, 12 Oct 2020 09:33:33 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Xing Zhengjun <zhengjun.xing@...ux.intel.com>
Cc: kernel test robot <rong.a.chen@...el.com>,
Michael Larabel <Michael@...haellarabel.com>,
Matthieu Baerts <matthieu.baerts@...sares.net>,
Dave Chinner <david@...morbit.com>,
Matthew Wilcox <willy@...radead.org>, Chris Mason <clm@...com>,
Jan Kara <jack@...e.cz>, Amir Goldstein <amir73il@...il.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
kernel test robot <lkp@...el.com>, zhengjun.xing@...el.com
Subject: Re: [LKP] [mm] 5ef64cc898: vm-scalability.throughput -20.6% regression
On Sun, Oct 11, 2020 at 11:57 PM Xing Zhengjun
<zhengjun.xing@...ux.intel.com> wrote:
>
> Hi Linus,
>
> Do you have time to look at this? Thanks. I re-tested it on v5.9-rc8,
> and the regression is still there.
This is one of the vm-scalability tests that got a huge improvement
(up to 160%) when I did the complete page fairness patch
(ie commit 2a9127fcf229).
But since that fairness change caused regressions elsewhere, it got
mostly limited again (by commit 5ef64cc898, the commit this report
bisected to), so we are likely back in the same ballpark as before
(although hopefully without some of the absolute _worst_ latency
peaks, who knows).
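To make that "limited unfairness" concrete, here is a hypothetical
userspace sketch of the idea -- not the kernel code (the names, the
atomics, and the constant are all made up for illustration; the real
logic lives in the page waitqueue code, and the actual bound is a
sysctl): a contending thread may steal the lock from sleeping waiters
only a bounded number of times, after which it demands a direct
handoff.

/*
 * Hypothetical sketch of a "bounded unfairness" lock -- NOT the
 * kernel's implementation.  All names and the constant are made up.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

#define MAX_UNFAIR_ROUNDS 5	/* made-up bound on lock stealing */

struct bounded_lock {
	atomic_bool locked;	/* the lock word itself */
	atomic_bool handoff;	/* a starved waiter said "no more barging" */
};

static void bounded_lock_acquire(struct bounded_lock *l)
{
	int rounds = MAX_UNFAIR_ROUNDS;
	bool starved = false;

	for (;;) {
		bool expected = false;

		/*
		 * Barging (grabbing the lock ahead of sleepers) is the
		 * fast, unfair path.  It is allowed until some waiter
		 * has been starved and demands a direct handoff; the
		 * starved waiter itself may still take the lock.
		 */
		if ((starved || !atomic_load(&l->handoff)) &&
		    atomic_compare_exchange_weak(&l->locked, &expected,
						 true)) {
			if (starved)
				atomic_store(&l->handoff, false);
			return;
		}

		if (!starved && rounds-- <= 0) {
			/* Out of patience: force fair behavior. */
			starved = true;
			atomic_store(&l->handoff, true);
		}
		sched_yield();	/* stands in for sleeping on a waitqueue */
	}
}

static void bounded_lock_release(struct bounded_lock *l)
{
	atomic_store(&l->locked, false);
	/*
	 * A real implementation would wake the head waiter here; with
	 * 'handoff' set, newcomers then can't steal the lock from it.
	 */
}

The point being that allowing a few steals recovers most of the
throughput of the old unfair behavior, while the bound still caps the
worst-case latency a waiter can see -- which is exactly the trade-off
this regression is probing.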
All these vm-scalability tests seem to be very noisy and unreliable,
and the exact details of the page locking can cause huge differences
for almost random reasons.
I think the main issue is just that bad timing luck, and the fact
that the page lock is too contended under some loads, together make
for test cases that can show fairly bi-modal behavior.
Linus