Message-ID: <53A0CAE5.9000702@intel.com>
Date:	Tue, 17 Jun 2014 16:10:29 -0700
From:	Dave Hansen <dave.hansen@...el.com>
To:	paulmck@...ux.vnet.ibm.com
CC:	LKML <linux-kernel@...r.kernel.org>,
	Josh Triplett <josh@...htriplett.org>,
	"Chen, Tim C" <tim.c.chen@...el.com>,
	Andi Kleen <ak@...ux.intel.com>,
	Christoph Lameter <cl@...ux.com>
Subject: Re: [bisected] pre-3.16 regression on open() scalability

On 06/13/2014 03:45 PM, Paul E. McKenney wrote:
>> > Could the additional RCU quiescent states be causing us to be doing more
>> > RCU frees than we were before, and getting less benefit from the lock
>> > batching that RCU normally provides?
> Quite possibly.  One way to check would be to use the debugfs files
> rcu/*/rcugp, which give a count of grace periods since boot for each
> RCU flavor.  Here "*" is rcu_preempt for CONFIG_PREEMPT and rcu_sched
> for !CONFIG_PREEMPT.
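For reference, a trivial way to dump those files from user space is
something like the program below.  This assumes debugfs is mounted at
/sys/kernel/debug and that the kernel has the RCU tracing files built
in (CONFIG_RCU_TRACE=y, if I remember right -- treat that as an
assumption):

#include <stdio.h>

int main(void)
{
	/*
	 * Paul's "*": rcu_sched for !CONFIG_PREEMPT, rcu_preempt for
	 * CONFIG_PREEMPT.  A missing file just means that flavor is
	 * not built into the running kernel.
	 */
	const char *paths[] = {
		"/sys/kernel/debug/rcu/rcu_sched/rcugp",
		"/sys/kernel/debug/rcu/rcu_preempt/rcugp",
	};
	char line[256];
	unsigned int i;

	for (i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
		FILE *f = fopen(paths[i], "r");

		if (!f)
			continue;	/* flavor not present */
		while (fgets(line, sizeof(line), f))
			printf("%s: %s", paths[i], line);
		fclose(f);
	}
	return 0;
}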

With the previously-mentioned workload, rcugp's "age" averages 9 with
the old kernel (or with RCU_COND_RESCHED_LIM set to a high value) and 2
with the current kernel, which contains this regression.

I also checked the rate of, and the call sites for, my cond_resched()
calls.  I'm calling it 5x for every open/close() pair in my test case,
and each pair takes about 7us.  So, _cond_resched() is, on average,
only being called about every 1.4us (5 calls / 7us).  That doesn't
seem _too_ horribly extreme.

>  3895.165846 |     8)               |  SyS_open() {
>  3895.165846 |     8)   0.065 us    |    _cond_resched();
>  3895.165847 |     8)   0.064 us    |    _cond_resched();
>  3895.165849 |     8)   2.406 us    |  }
>  3895.165849 |     8)   0.199 us    |  SyS_close();
>  3895.165850 |     8)               |  do_notify_resume() {
>  3895.165850 |     8)   0.063 us    |    _cond_resched();
>  3895.165851 |     8)   0.069 us    |    _cond_resched();
>  3895.165852 |     8)   0.060 us    |    _cond_resched();
>  3895.165852 |     8)   2.194 us    |  }
>  3895.165853 |     8)               |  SyS_open() {

The more I think about it, the more I think we can improve on a purely
call-based counter.

First, it directly couples the number of cond_resched() calls to the
benefit we see out of RCU.  We really don't *need* to see more grace
periods just because we have more cond_resched() calls.

It also ends up dirtying a new cacheline in a bunch of pretty hot
paths.  It would be nice to be able to keep the fast-path part of this
at least read-only.
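
To make that concrete, the call-counting scheme has roughly this shape
(made-up names for illustration, not the actual symbols):

#include <linux/percpu.h>

extern void note_forced_quiescent_state(void);	/* hypothetical */

/*
 * Illustrative only -- not the real code.  The point is the
 * read-modify-write: every call dirties this per-CPU cacheline,
 * whether or not RCU actually needs a quiescent state from us.
 */
static DEFINE_PER_CPU(unsigned int, qs_hint_count);
#define QS_HINT_LIM	256	/* made-up threshold */

static inline void qs_hint_cond_resched(void)
{
	if (unlikely(__this_cpu_inc_return(qs_hint_count) >= QS_HINT_LIM)) {
		__this_cpu_write(qs_hint_count, 0);
		note_forced_quiescent_state();
	}
}

Even per-CPU, that's a store in every one of the open()/close() and
signal-delivery paths in the trace above.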

Could we do something (functionally) like the attached patch?  Instead
of counting cond_resched() calls, we could just specify some future time
by which we want to have a quiescent state.  We could even push that
time out to _just_ before we would have declared a stall.
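
The fast path in that world is just a compare against a prearmed
deadline, something like this (again made-up names; the attached patch
is the real thing):

#include <linux/jiffies.h>

extern void note_forced_quiescent_state(void);	/* hypothetical */

/*
 * Illustrative only.  The fast path is a pure read: the cacheline
 * holding qs_deadline stays clean and shared until a grace period is
 * actually waiting on us.  The grace-period machinery would arm
 * qs_deadline, e.g. to just before the stall-warning timeout.
 */
static unsigned long qs_deadline;

static inline void qs_deadline_cond_resched(void)
{
	if (unlikely(time_after(jiffies, ACCESS_ONCE(qs_deadline))))
		note_forced_quiescent_state();
}

Writers only touch qs_deadline when they arm or rearm it, so the hot
paths go back to being read-only.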


View attachment "rcu-halfstall.patch" of type "text/x-patch" (2697 bytes)
