Message-ID: <20200222124359.GA86836@shbuild999.sh.intel.com>
Date: Sat, 22 Feb 2020 20:43:59 +0800
From: Feng Tang <feng.tang@...el.com>
To: "Kleen, Andi" <andi.kleen@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Chen, Rong A" <rong.a.chen@...el.com>,
Jiri Olsa <jolsa@...hat.com>, Ingo Molnar <mingo@...nel.org>,
Vince Weaver <vincent.weaver@...ne.edu>,
Jiri Olsa <jolsa@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>,
Stephane Eranian <eranian@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
"lkp@...ts.01.org" <lkp@...ts.01.org>,
"Huang, Ying" <ying.huang@...el.com>
Subject: Re: [LKP] Re: [perf/x86] 81ec3f3c4c: will-it-scale.per_process_ops
-5.5% regression
Hi Andi,
On Sat, Feb 22, 2020 at 02:05:02AM +0800, Kleen, Andi wrote:
>
>
> >So likely, this commit changes the layout of the kernel text
> >and data,
>
> It should be only data here. Text changes all the time anyway,
> but data tends to be more stable.
Yes, I also did an experiment by modifying the gcc options so that all
function addresses are aligned to 32 or 64 bytes, and the 5.5% gap still
exists between the 2 commits.
> > which may trigger some cacheline-level change. From
> >the System.map of the 2 kernels, a big chunk of symbols which follow
> >the global "pmu" have their addresses changed,
>
> I wonder if it's the effect Andrew predicted a long time ago from
> using __read_mostly. If all the __read_mostlies are moved somewhere
> else, the remaining read/write variables will get more sensitive to
> false sharing.
>
> A simple experiment would be to add a __cacheline_aligned to align it,
> and then add
>
> ____cacheline_aligned char dummy[0];
>
> at the end to pad it to 64 bytes.
Thanks for the suggestion, I tried this and the 5.5% regression is gone!
This also confirms that it is the shifted offsets of the bulk of data
following "pmu" that cause the performance drop.
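To make the padding trick concrete, here is a minimal userspace sketch
(my own illustration only, assuming a 64-byte cacheline; the members are
placeholders, not the real struct pmu fields):

    #include <stdio.h>

    /* userspace stand-in for the kernel's ____cacheline_aligned */
    #define ____cacheline_aligned __attribute__((__aligned__(64)))

    struct padded {
            long a, b, c;                            /* 24 bytes of real data */
            ____cacheline_aligned char dummy[0];     /* rounds sizeof() up to 64 */
    };

    int main(void)
    {
            /*
             * Prints 64: the zero-length aligned member pads the struct
             * out to a full cache line, so whatever the linker places
             * after an aligned instance of it starts on a new cache line.
             */
            printf("sizeof(struct padded) = %zu\n", sizeof(struct padded));
            return 0;
    }

With "pmu" itself cacheline aligned and its size rounded up like this,
the symbols placed after it keep their cacheline offsets even when the
struct grows by a member.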
>
> Or hopefully Jiri can figure it out from the C2C data.
I'm also trying to debug this further following Jiri's "perf c2c" suggestion.
Thanks,
Feng