Message-ID: <20190806110024.GA32615@google.com>
Date: Tue, 6 Aug 2019 20:00:24 +0900
From: Minchan Kim <minchan@...nel.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: kernel test robot <oliver.sang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Miguel de Dios <migueldedios@...gle.com>,
Wei Wang <wvw@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...hsingularity.net>, lkp@...org
Subject: Re: [mm] 755d6edc1a: will-it-scale.per_process_ops -4.1% regression
On Tue, Aug 06, 2019 at 10:04:15AM +0200, Michal Hocko wrote:
> On Tue 06-08-19 15:05:47, kernel test robot wrote:
> > Greeting,
> >
> > FYI, we noticed a -4.1% regression of will-it-scale.per_process_ops due to commit:
>
> I have to confess I cannot make much sense from numbers because they
> seem to be too volatile and the main contributor doesn't stand up for
> me. Anyway, regressions on microbenchmarks like this are not all that
> surprising when a locking is slightly changed and the critical section
> made shorter. I have seen that in the past already.
I guess that is because it's a multi-process workload. The patch gives the
task more chances to be scheduled out, so the TLB miss ratio would be
higher than before.
I see it as a natural trade-off between latency and performance, so the
only thing I can think of is to increase the threshold from 32 to 64 or
128? A minimal sketch of that knob is below.
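
(Just to illustrate the knob with a user-space sketch; the names below are
made up and this is not the actual zap_pte_range code. The point is that
the threshold decides how often the lock is dropped, so raising it trades
lock-waiter latency for lock-holder throughput.)

#include <pthread.h>
#include <stdio.h>

#define BATCH 32            /* hypothetical threshold: 32 -> 64 or 128 */
#define TOTAL 1000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long processed;

static void process_one(void)
{
	processed++;        /* stands in for the per-page work under the lock */
}

static void process_all(void)
{
	int batch = 0;

	pthread_mutex_lock(&lock);
	for (int i = 0; i < TOTAL; i++) {
		process_one();
		if (++batch >= BATCH) {
			batch = 0;
			/*
			 * Give waiters / the scheduler a chance; this is the
			 * latency knob the threshold controls.
			 */
			pthread_mutex_unlock(&lock);
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	process_all();
	printf("processed %ld items in batches of %d\n", processed, BATCH);
	return 0;
}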
>
> That being said I would still love to get to bottom of this bug rather
> than play with the lock duration by a magic. In other words
> http://lkml.kernel.org/r/20190730125751.GS9330@dhcp22.suse.cz
Yes, if we could remove mark_page_accessed there, it would be best.
I added a comment in the thread.