Message-ID: <20140613224926.GW4581@linux.vnet.ibm.com>
Date:	Fri, 13 Jun 2014 15:49:26 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Josh Triplett <josh@...htriplett.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH] rcu: Only pin GP kthread when full dynticks is actually used

On Fri, Jun 13, 2014 at 02:10:35PM -0700, Josh Triplett wrote:
> On Fri, Jun 13, 2014 at 01:48:22PM -0700, Paul E. McKenney wrote:
> > On Fri, Jun 13, 2014 at 09:44:41AM -0700, Josh Triplett wrote:
> > > On Fri, Jun 13, 2014 at 06:21:32PM +0200, Frederic Weisbecker wrote:
> > > > On Fri, Jun 13, 2014 at 09:16:30AM -0700, Paul E. McKenney wrote:
> > > > > > Is it because we have dynticks CPUs staying too long in the kernel without
> > > > > > taking any quiescent states? Are we perhaps missing some rcu_user_enter()
> > > > > > calls or something similar?
> > > > > 
> > > > > Sort of the former, but combined with the fact that in-kernel CPUs still
> > > > > need scheduling-clock interrupts for RCU to make progress.  I could
> > > > > move this to RCU's context-switch hook, but that could be very bad for
> > > > > workloads that do lots of context switching.
> > > > 
> > > > Or I can restart the tick if the CPU stays in the kernel for too long without
> > > > a tick. I think that's what we were doing before, but we removed that because
> > > > we never implemented it correctly (we sent a scheduler IPI that did nothing...)
> > > 
> > > I wonder if timer slack would make sense here: when you have at least
> > > one RCU callback pending, set a timer with a huge amount of timer slack,
> > > and cancel it if you end up handling the callback via a trip through the
> > > scheduler.
> > 
> > But in this case, we need the tick even if the current CPU has no callbacks
> > because it might be in an RCU read-side critical section.
> 
> Don't we handle that case via the slowpath of rcu_read_unlock, and a
> flag set via IPI?  ("Oh, that CPU has taken too long to note a quiescent
> state; send it an IPI to set the special flag that makes unlock do the
> work.")
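
A minimal sketch of the timer-slack idea above, for concreteness: the
names rcu_qs_timer, rcu_qs_timer_fn(), and rcu_arm_qs_timer() are made
up for illustration, but hrtimer_start_range_ns() is the real interface
that accepts a slack range.

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer rcu_qs_timer;	/* hypothetical */

/* Fires only if no trip through the scheduler handled the callback
 * first; would force a quiescent-state check much as the tick does. */
static enum hrtimer_restart rcu_qs_timer_fn(struct hrtimer *t)
{
	/* ... note/report the quiescent state here ... */
	return HRTIMER_NORESTART;
}

static void rcu_arm_qs_timer(u64 timeout_ns, unsigned long slack_ns)
{
	hrtimer_init(&rcu_qs_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	rcu_qs_timer.function = rcu_qs_timer_fn;
	/* The large slack lets the timer core coalesce this with other
	 * timers rather than waking the CPU at a precise instant. */
	hrtimer_start_range_ns(&rcu_qs_timer, ns_to_ktime(timeout_ns),
			       slack_ns, HRTIMER_MODE_REL);
}

The cancel side would be hrtimer_cancel(&rcu_qs_timer) from whatever
path ends up invoking the callback.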

There was once such logic on the force-quiescent-state path, and making
that handle this new case was my first proposal.  As Frederic pointed
out, that change requires rcu_needs_cpu()'s cooperation, because otherwise
the CPU will take the IPI, see that it still has but one runnable task,
and then keep its scheduling-clock interrupt off.
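
Concretely, the cooperation would need to look something like the
sketch below. rcu_needs_cpu() is the real hook that the nohz code
consults before stopping the tick; the per-CPU rcu_forced_tick flag,
which the force-quiescent-state IPI would set, is made up for
illustration.

#include <linux/percpu.h>

static DEFINE_PER_CPU(bool, rcu_forced_tick);	/* hypothetical */

int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
{
	/* If the FQS path asked this CPU to keep ticking, say so, and
	 * the tick stays on despite the single runnable task. */
	if (per_cpu(rcu_forced_tick, cpu)) {
		*delta_jiffies = 0;
		return 1;
	}
	/* ... existing checks based on this CPU's callbacks ... */
	return 0;
}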

The thing that involves rcu_read_unlock_special() is a flag set
by the scheduling-clock interrupt, which doesn't help here.  Also,
if a CPU stays in the kernel for a very long time without passing
through any RCU read-side critical sections, there is nothing that
rcu_read_unlock_special() can do to help.
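
For reference, the mechanism Josh describes has roughly this shape,
heavily simplified from kernel/rcu/tree_plugin.h (the real
__rcu_read_unlock() also handles negative nesting and memory ordering):

#include <linux/sched.h>

void __rcu_read_unlock(void)
{
	struct task_struct *t = current;

	if (--t->rcu_read_lock_nesting == 0) {
		barrier();	/* critical section before flag check */
		if (unlikely(t->rcu_read_unlock_special))
			rcu_read_unlock_special(t);  /* report the QS */
	}
}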

							Thanx, Paul
