Message-ID: <20200818082943.GA65567@shbuild999.sh.intel.com>
Date: Tue, 18 Aug 2020 16:29:43 +0800
From: Feng Tang <feng.tang@...el.com>
To: Borislav Petkov <bp@...e.de>
Cc: kernel test robot <rong.a.chen@...el.com>,
Tony Luck <tony.luck@...el.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org
Subject: Re: [LKP] Re: [x86/mce] 1de08dccd3: will-it-scale.per_process_ops
-14.1% regression
Hi Borislav,
On Sat, Apr 25, 2020 at 03:01:36PM +0200, Borislav Petkov wrote:
> On Sat, Apr 25, 2020 at 07:44:14PM +0800, kernel test robot wrote:
> > Greeting,
> >
> > FYI, we noticed a -14.1% regression of will-it-scale.per_process_ops due to commit:
> >
> >
> > commit: 1de08dccd383482a3e88845d3554094d338f5ff9 ("x86/mce: Add a struct mce.kflags field")
>
> I don't see how a struct mce member addition will cause any performance
> regression. Please check your test case.
Sorry for the late response.
We've done more rounds of testing, and the results are consistent.
We suspect the commit changes the data alignment of kernel data
outside of mce, which causes the performance change in this malloc
microbenchmark.
Without the patch, the size of 'struct mce' is 120 bytes; after
adding the '__u64 kflags' it grows to 128 bytes.
We also debugged further:
* Adding "mce=off" to the kernel cmdline: the performance change
  persists.
* Changing 'kflags' from __u64 to __u32 (the size of 'struct mce'
  goes back to 120 bytes): the performance change is gone, as the
  sketch below illustrates.
* Commenting out '__u64 kflags' entirely: the performance change
  is gone.
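For illustration, here is a minimal user-space sketch of the size
effect (a simplified, hypothetical stand-in, not the kernel's actual
'struct mce' layout): a struct whose largest member is 8 bytes but
which ends on a 4-byte member carries 4 bytes of tail padding, which
a trailing __u32 can reuse while a __u64 cannot.

#include <stdio.h>
#include <stdint.h>

struct mce_before {
	uint64_t words[14];	/* 112 bytes of 8-byte members */
	uint32_t last;		/* 116 bytes used, padded up to 120 */
};

struct mce_u64_kflags {
	uint64_t words[14];
	uint32_t last;
	uint64_t kflags;	/* 8-byte aligned: pad 116 to 120, +8 = 128 */
};

struct mce_u32_kflags {
	uint64_t words[14];
	uint32_t last;
	uint32_t kflags;	/* reuses the old tail padding: still 120 */
};

int main(void)
{
	printf("no kflags:    %zu\n", sizeof(struct mce_before));	/* 120 */
	printf("__u64 kflags: %zu\n", sizeof(struct mce_u64_kflags));	/* 128 */
	printf("__u32 kflags: %zu\n", sizeof(struct mce_u32_kflags));	/* 120 */
	return 0;
}

A tool like pahole should show the corresponding 4-byte hole in the
real struct.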
We also tried the perf c2c tool to capture some data, but the
platform is a Xeon Phi, which doesn't support it. Capturing raw
HITM events also did not provide useful data.
0day has reported quite a few strange performance bumps like this:
https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
https://lore.kernel.org/lkml/20200114085637.GA29297@shao2-debian/
https://lore.kernel.org/lkml/20200330011254.GA14393@feng-iot/
For some of them, the bump goes away if we hack the build to force
all kernel functions to be aligned, but that doesn't help in this
case.
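(For reference, a hypothetical sketch of that kind of alignment hack,
assuming GCC; the exact mechanism 0day used isn't shown in this
thread. Building with -falign-functions=64 does the same thing
kernel-wide.)

#include <stdio.h>

/*
 * Hypothetical illustration: force one function onto a 64-byte
 * boundary with GCC's aligned attribute. Building everything with
 * -falign-functions=64 has the same effect globally. This is not
 * the exact hack 0day used.
 */
__attribute__((aligned(64)))
static void hot_func(void)
{
	puts("in hot_func");
}

int main(void)
{
	hot_func();
	printf("hot_func at %p, 64-byte aligned: %s\n",
	       (void *)hot_func,
	       ((unsigned long)(void *)hot_func % 64) == 0 ? "yes" : "no");
	return 0;
}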
So, together with the debugging above, we think this is a
performance bump caused by the data alignment change.
Thanks,
Feng
> Thx.
>
> --
> Regards/Gruss,
> Boris.
>
> SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg
> _______________________________________________
> LKP mailing list -- lkp@...ts.01.org
> To unsubscribe send an email to lkp-leave@...ts.01.org