Message-ID: <40dc40d5-58ff-e5b9-4e33-b0ce08ca0521@redhat.com>
Date: Fri, 12 Apr 2019 12:43:03 -0400
From: Waiman Long <longman@...hat.com>
To: kernel test robot <rong.a.chen@...el.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Will Deacon <will.deacon@....com>,
LKML <linux-kernel@...r.kernel.org>,
"H. Peter Anvin" <hpa@...or.com>, tipbuild@...or.com, lkp@...org
Subject: Re: [locking/rwsem] 1b94536f2d: stress-ng.bad-altstack.ops_per_sec -32.7% regression
On 04/12/2019 10:20 AM, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a -32.7% regression of stress-ng.bad-altstack.ops_per_sec due to commit:
>
>
> commit: 1b94536f2debc98260fb17b44f7f262e3336f7e0 ("locking/rwsem: Implement lock handoff to prevent lock starvation")
> https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git WIP.locking/core
>
> in testcase: stress-ng
> on test machine: 272 threads Intel(R) Xeon Phi(TM) CPU 7255 @ 1.10GHz with 112G memory
> with following parameters:
>
> nr_threads: 100%
> disk: 1HDD
> testtime: 5s
> class: memory
> cpufreq_governor: performance
>
>
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml # job file is attached in this email
> bin/lkp run job.yaml
>
> =========================================================================================
> class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
> memory/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2018-04-03.cgz/lkp-knm02/stress-ng/5s
>
> commit:
> 1bcfe0e4cb ("locking/rwsem: Improve scalability via a new locking scheme")
> 1b94536f2d ("locking/rwsem: Implement lock handoff to prevent lock starvation")
>
> 1bcfe0e4cb0efdba 1b94536f2debc98260fb17b44f7
> ---------------- ---------------------------
>        fail:runs  %reproduction    fail:runs
>            |             |             |
>           1:4          -25%            :4     dmesg.WARNING:at_ip__mutex_lock/0x
>            :4           25%           1:4     kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
>          %stddev     %change         %stddev
>              \          |                \
>      52766 ± 19%     -32.8%      35434 ±  3%  stress-ng.bad-altstack.ops
>      10521 ± 19%     -32.7%       7081 ±  3%  stress-ng.bad-altstack.ops_per_sec
>      71472 ± 16%     -37.1%      44986        stress-ng.stackmmap.ops
>      14281 ± 16%     -37.0%       9001        stress-ng.stackmmap.ops_per_sec
The lock handoff patch does have the side effect of reducing throughput
for better fairness when there is extreme contention on a rwsem. I
believe later patches that enable reader optimistic spinning should
bring back some of the lost performance.
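For anyone unfamiliar with the handoff idea, here is a minimal, self-contained
sketch of the general concept, not the actual rwsem code from the patch; names
such as HANDOFF_BIT, try_fast_write_lock() and handoff_write_lock() are purely
illustrative. Once a starved waiter sets the handoff bit, fast-path stealing is
disabled and the lock can only go to the waiter at the head of the queue, which
is where the fairness-vs-throughput trade-off comes from:

/*
 * Illustrative user-space sketch only -- not the kernel implementation.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED_BIT   0x1UL   /* lock currently held (writer, for simplicity) */
#define HANDOFF_BIT  0x2UL   /* set by a starved waiter: no more stealing */

static _Atomic unsigned long sem_count;

/* Fast path: any thread may steal the lock unless a handoff is pending. */
static bool try_fast_write_lock(void)
{
	unsigned long old = atomic_load(&sem_count);

	while (!(old & (LOCKED_BIT | HANDOFF_BIT))) {
		if (atomic_compare_exchange_weak(&sem_count, &old,
						 old | LOCKED_BIT))
			return true;	/* stole the lock */
	}
	return false;
}

/* Called by the waiter at the head of the queue after waiting too long. */
static void set_handoff(void)
{
	atomic_fetch_or(&sem_count, HANDOFF_BIT);
}

/*
 * Once the handoff bit is set, only the first waiter takes the lock and
 * clears the bit, giving fairness at the cost of fast-path throughput.
 */
static bool handoff_write_lock(void)
{
	unsigned long old = atomic_load(&sem_count);

	while (!(old & LOCKED_BIT)) {
		if (atomic_compare_exchange_weak(&sem_count, &old,
						 (old | LOCKED_BIT) & ~HANDOFF_BIT))
			return true;	/* lock handed to the first waiter */
	}
	return false;
}

Under heavy contention, lock stealing is what keeps throughput high, so
forcing the queued hand-over is exactly where benchmarks like these can lose
some ops/sec.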
Cheers,
Longman