Message-ID: <CAHk-=wgXr1JcW3hyomWh8Y8Kr9wNq-+6r+CocY8EfXvuW7giHg@mail.gmail.com>
Date: Mon, 24 Feb 2020 14:12:51 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: "Eric W. Biederman" <ebiederm@...ssion.com>
Cc: Feng Tang <feng.tang@...el.com>, Oleg Nesterov <oleg@...hat.com>,
Jiri Olsa <jolsa@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
kernel test robot <rong.a.chen@...el.com>,
Ingo Molnar <mingo@...nel.org>,
Vince Weaver <vincent.weaver@...ne.edu>,
Jiri Olsa <jolsa@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
"Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>,
Stephane Eranian <eranian@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
andi.kleen@...el.com, "Huang, Ying" <ying.huang@...el.com>
Subject: Re: [LKP] Re: [perf/x86] 81ec3f3c4c: will-it-scale.per_process_ops
-5.5% regression
On Mon, Feb 24, 2020 at 2:02 PM Eric W. Biederman <ebiederm@...ssion.com> wrote:
>
> Other than scratching my head about why we are optimizing, neither do I.
You can see the background on lore
https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
and the thread about the largely unexplained regression there. I had a
wild handwaving theory on what's going on in
https://lore.kernel.org/lkml/CAHk-=wjkSb1OkiCSn_fzf2v7A=K0bNsUEeQa+06XMhTO+oQUaA@mail.gmail.com/
but yes, the contention only happens once you have a lot of cores.
That said, I suspect it actually improves performance on that
microbenchmark even without the contention - just not as noticeably.
I'm running a kernel with the patch right now, but I wasn't going to
boot back into an old kernel just to test that. I was hoping that the
kernel test robot people would just check it out.
> It would help to have a comment somewhere in the code or the commit
> message that says the issue is contention under load.
Note that even without the contention, on that "send a lot of signals"
case it does avoid the second atomic op, and the profile really does
look better.
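The actual patch isn't quoted in this thread, but the shape of the "avoid
the second atomic op" change can be sketched in userspace C11 atomics.
Everything here (the names, the limit of 128, the charge/undo structure) is
hypothetical illustration, not the kernel code:

```c
#include <stdatomic.h>

/* Hypothetical sketch: charging a per-user pending-signal count
 * against a limit.  Names and the limit value are made up. */
static atomic_long sigpending;
static const long limit = 128;

/* Common path costs two atomic operations: one load plus one RMW.
 * It is also racy: concurrent callers can overshoot the limit. */
int charge_two_ops(void)
{
	if (atomic_load(&sigpending) >= limit)	/* atomic op #1 */
		return -1;
	atomic_fetch_add(&sigpending, 1);	/* atomic op #2 */
	return 0;
}

/* Common path costs a single atomic RMW; a second op happens only
 * on the rare over-limit path, where we undo the increment. */
int charge_one_op(void)
{
	if (atomic_fetch_add(&sigpending, 1) >= limit) {
		atomic_fetch_sub(&sigpending, 1);	/* rare undo */
		return -1;
	}
	return 0;
}
```

On the "send a lot of signals" fast path, the one-RMW version is the kind
of change that shows up directly in a profile even without any contention.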
That profile improvement I can see even on my own machine, and I see
how the nasty CPU bug avoidance (the "verw" on the system call exit
path) goes from 30% to 31% cost.
And that increase in the relative cost of the "verw" on the profile
must mean that the actual real code just improved in performance (even
if I didn't actually time it).
With the contention, you get that odd extra regression that seems to
depend on exact cacheline placement.
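Why placement matters can be illustrated with a false-sharing sketch: two
hot counters in the same 64-byte line bounce between cores, while padding
each to its own line avoids that. The struct and field names below are
hypothetical, and 64 bytes is just the usual x86 line size:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Both counters likely land in one 64-byte cache line: writers of
 * `a` and writers of `b` contend even though the data is unrelated. */
struct counters_packed {
	atomic_long a;
	atomic_long b;
};

/* Each counter aligned to its own cache line: no false sharing. */
struct counters_padded {
	_Alignas(64) atomic_long a;
	_Alignas(64) atomic_long b;
};
```

A small layout change elsewhere in a structure can silently move two hot
fields into or out of the same line, which is one way a patch can produce
a regression that "depends on exact cacheline placement".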
So I think the patch improves performance (for this "lots of queued
signals" case) in general, and I hope it will also then get rid of
that contention regression.
Linus