Message-ID: <CAHk-=wjKFTzfDWjAAabHTZcityeLpHmEQRrKdTuk0f4GWcoohQ@mail.gmail.com>
Date: Sun, 23 Feb 2020 09:37:06 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Feng Tang <feng.tang@...el.com>
Cc: Jiri Olsa <jolsa@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
kernel test robot <rong.a.chen@...el.com>,
Ingo Molnar <mingo@...nel.org>,
Vince Weaver <vincent.weaver@...ne.edu>,
Jiri Olsa <jolsa@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
"Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>,
Stephane Eranian <eranian@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
andi.kleen@...el.com, "Huang, Ying" <ying.huang@...el.com>
Subject: Re: [LKP] Re: [perf/x86] 81ec3f3c4c: will-it-scale.per_process_ops
-5.5% regression
On Sun, Feb 23, 2020 at 6:11 AM Feng Tang <feng.tang@...el.com> wrote:
>
> I tried to use perf-c2c on one platform (not the one that shows
> the 5.5% regression), and found the main "hitm" points to the
> "root_user" global data, as there is a task for each CPU doing
> the signal stress test, and both __sigqueue_alloc() and
> __sigqueue_free() will call get_uid() and free_uid() to inc/dec
> this root_user's refcount.
What's around it for you?
There might be that 'uidhash_lock' spinlock right next to it, and
maybe that exacerbates the issue?
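(Purely to illustrate the access pattern being described: a minimal
userspace sketch, not the kernel code. The struct layout, thread count
and iteration count are made up; only the get_uid()/free_uid() names in
the comments come from the report above. Builds with something like
"gcc -O2 -pthread".)

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Stands in for root_user plus a neighbouring lock word that may
 * happen to share its cache line (the way uidhash_lock might sit
 * right next to root_user). */
static struct {
	atomic_long refcount;	/* stands in for user_struct.__count */
	atomic_long lockword;	/* stands in for an adjacent spinlock */
} shared;

#define ITERS 10000000L

/* Each thread does what every CPU does in the signal stress test:
 * an atomic inc and dec of one global refcount (get_uid()/free_uid()),
 * forcing the cache line to bounce between cores. */
static void *worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++) {
		atomic_fetch_add(&shared.refcount, 1);	/* get_uid()  */
		atomic_fetch_sub(&shared.refcount, 1);	/* free_uid() */
	}
	return NULL;
}

int main(void)
{
	enum { NTHREADS = 8 };	/* arbitrary; the kernel case is one per CPU */
	pthread_t t[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);

	printf("final refcount: %ld\n", atomic_load(&shared.refcount));
	return 0;
}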
> Then I added some alignment inside struct "user_struct" (for
> "root_user"), then the -5.5% is gone, with a +2.6% instead.
Do you actually need to align things inside the struct, or is it
sufficient to just align the structure itself?
IOW, are the cache conflicts _within_ the user_struct itself, or are
they with some nearby data (like that uidhash_lock or whatever)?
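(The two options in question, in an illustrative userspace sketch -
again not the real user_struct; 64-byte cache lines and gcc-style
attributes are assumed.)

#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

/* Option 1: align (and thereby pad) the whole object to a cache line,
 * so nothing else - e.g. a neighbouring lock - can share a line with
 * it.  The internal layout is unchanged. */
struct opt1 {
	long refcount;
	long other_fields[3];
} __attribute__((aligned(64)));

/* Option 2: additionally pad *inside* the struct, so that two hot
 * fields of the struct itself land on different cache lines. */
struct opt2 {
	long refcount;
	char pad[64 - sizeof(long)];	/* push the next field onto a new line */
	long other_hot_field;
} __attribute__((aligned(64)));

int main(void)
{
	printf("opt1: size %zu, align %zu\n",
	       sizeof(struct opt1), alignof(struct opt1));
	printf("opt2: size %zu, other_hot_field at offset %zu\n",
	       sizeof(struct opt2), offsetof(struct opt2, other_hot_field));
	return 0;
}

If the contention is with neighbouring data rather than between fields
of user_struct itself, option 1 alone should be enough.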
> One thing I don't understand is that this -5.5% only happens on
> one 2-socket, 96C/192T Cascade Lake platform, even though we've run
> the same test on several different platforms. In theory, the
> false sharing should show up on those as well?
Is that the biggest machine you have access to?
Maybe it just isn't noticeable with smaller core counts. A lot of
cache-conflict loads tend to have "exponential" behavior - when things
get overloaded, performance plummets, because everybody getting slower
at that contention point just makes it even more contended...
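(One way to see that in miniature: time the same shared-counter hammer
at increasing thread counts - an illustrative sketch only, absolute
numbers will vary wildly by machine; build with "gcc -O2 -pthread".)

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_long counter;

#define ITERS 2000000L

/* Every thread hammers the same cache line. */
static void *hammer(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++)
		atomic_fetch_add(&counter, 1);
	return NULL;
}

/* Time identical per-thread work at growing thread counts; the per-op
 * cost typically rises sharply once the line is seriously contended. */
int main(void)
{
	for (int n = 1; n <= 16; n *= 2) {
		pthread_t t[16];
		struct timespec a, b;

		clock_gettime(CLOCK_MONOTONIC, &a);
		for (int i = 0; i < n; i++)
			pthread_create(&t[i], NULL, hammer, NULL);
		for (int i = 0; i < n; i++)
			pthread_join(t[i], NULL);
		clock_gettime(CLOCK_MONOTONIC, &b);

		double secs = (b.tv_sec - a.tv_sec) +
			      (b.tv_nsec - a.tv_nsec) / 1e9;
		printf("%2d threads: %6.1f ns/op\n",
		       n, secs * 1e9 / ((double)ITERS * n));
	}
	return 0;
}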
Linus