Message-ID: <20150701004214.GA30853@x>
Date:	Tue, 30 Jun 2015 17:42:14 -0700
From:	Josh Triplett <josh@...htriplett.org>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
	dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
	fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH RFC tip/core/rcu 0/5] Expedited grace periods encouraging
 normal ones

On Tue, Jun 30, 2015 at 05:15:58PM -0700, Paul E. McKenney wrote:
> On Tue, Jun 30, 2015 at 04:46:33PM -0700, josh@...htriplett.org wrote:
> > On Tue, Jun 30, 2015 at 03:12:24PM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 30, 2015 at 03:00:15PM -0700, josh@...htriplett.org wrote:
> > > > On Tue, Jun 30, 2015 at 02:48:05PM -0700, Paul E. McKenney wrote:
> > > > > Hello!
> > > > > 
> > > > > This series contains some highly experimental patches that allow normal
> > > > > grace periods to take advantage of the work done by concurrent expedited
> > > > > grace periods.  This can reduce the overhead incurred by normal grace
> > > > > periods by eliminating the need for force-quiescent-state scans that
> > > > > would otherwise have happened after the expedited grace period completed.
> > > > > It is not clear whether this is a useful tradeoff.  Nevertheless, this
> > > > > series contains the following patches:
> > > > 
> > > > While it makes sense to avoid unnecessarily delaying a normal grace
> > > > period if the expedited machinery has provided the necessary delay, I'm
> > > > also *deeply* concerned that this will create a new class of
> > > > nondeterministic performance issues.  Something that uses RCU may
> > > > perform badly due to grace period latency, but then suddenly start
> > > > performing well because an unrelated task starts hammering expedited
> > > > grace periods.  This seems particularly likely during boot, for
> > > > instance, where RCU grace periods can be a significant component of boot
> > > > time (when you're trying to boot to userspace in small fractions of a
> > > > second).
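
For anyone less familiar with the two flavors under discussion: a normal
grace period waits for every CPU to pass through a quiescent state on its
own schedule, while an expedited grace period actively forces matters along
at higher CPU cost.  The caller-visible difference is just which primitive
the updater invokes, roughly along these lines (a hypothetical sketch, not
code from this series; the example_* names are made up):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct example_data {
		int value;
	};

	static struct example_data __rcu *example_ptr;

	/* Publish a new version, then wait for pre-existing readers to
	 * finish before freeing the old one.  A single updater is assumed,
	 * hence the "1" passed as the rcu_dereference_protected() condition. */
	static void example_update(int new_value, bool expedite)
	{
		struct example_data *newp, *oldp;

		newp = kmalloc(sizeof(*newp), GFP_KERNEL);
		if (!newp)
			return;
		newp->value = new_value;

		oldp = rcu_dereference_protected(example_ptr, 1);
		rcu_assign_pointer(example_ptr, newp);

		if (expedite)
			synchronize_rcu_expedited();  /* fast, but disturbs other CPUs */
		else
			synchronize_rcu();            /* waits for a normal grace period */

		kfree(oldp);
	}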
> > > 
> > > I will take that as another vote against.  And for a reason that I had
> > > not yet come up with, so good show!  ;-)
> > 
> > Consider it a fairly weak concern against.  Increasing performance seems
> > like a good thing in general; I just don't relish the future "feels less
> > responsive" bug reports that take a long time to track down and turn out
> > to be "this completely unrelated driver was loaded and started using
> > expedited grace periods".
> 
> From what I can see, this one needs a good reason to go in, as opposed
> to a good reason to stay out.
> 
> > Then again, perhaps the more relevant concern would be why drivers use
> > expedited grace periods in the first place.
> 
> Networking uses expedited grace periods when RTNL is held to reduce
> contention on that lock.

Wait, what?  Why is anything using traditional (non-S) RCU while *any*
lock is held?
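
The mechanism being referred to appears to be the synchronize_net()
helper: when the caller already holds RTNL, waiting out a full normal
grace period keeps that lock held for the entire wait, so the expedited
variant is used instead.  Roughly this shape (an approximation, not a
verbatim copy of net/core/dev.c):

	void synchronize_net(void)
	{
		might_sleep();
		/* RTNL is held: expedite so the lock is released sooner. */
		if (rtnl_is_locked())
			synchronize_rcu_expedited();
		else
			synchronize_rcu();
	}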

> Several other places have used it to minimize
> user-visible grace-period slowdown.  But there are probably places that
> would be better served doing something different.  That is after all
> the common case for most synchronization primitives.  ;-)

Sounds likely. :)

- Josh Triplett
