lists.openwall.net - Open Source and information security mailing list archives
Date:	Tue, 5 Aug 2014 18:21:39 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	mingo@...nel.org, laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
	dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
	fweisbec@...il.com, bobby.prani@...il.com
Subject: Re: [PATCH v3 tip/core/rcu 3/9] rcu: Add synchronous grace-period
 waiting for RCU-tasks

On Tue, Aug 05, 2014 at 08:57:11PM -0400, Steven Rostedt wrote:
> On Sat, 2 Aug 2014 15:58:57 -0700
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > On Sat, Aug 02, 2014 at 04:47:19PM +0200, Oleg Nesterov wrote:
> > > On 08/01, Paul E. McKenney wrote:
> > > >
> > > > On Fri, Aug 01, 2014 at 11:32:51AM -0700, Paul E. McKenney wrote:
> > > > > On Fri, Aug 01, 2014 at 05:09:26PM +0200, Oleg Nesterov wrote:
> > > > > > On 07/31, Paul E. McKenney wrote:
> > > > > > >
> > > > > > > +void synchronize_rcu_tasks(void)
> > > > > > > +{
> > > > > > > +	/* Complain if the scheduler has not started.  */
> > > > > > > +	rcu_lockdep_assert(rcu_scheduler_active,
> > > > > > > +			   "synchronize_rcu_tasks called too soon");
> > > > > > > +
> > > > > > > +	/* Wait for the grace period. */
> > > > > > > +	wait_rcu_gp(call_rcu_tasks);
> > > > > > > +}
> > > > > >
> > > > > > Btw, what about CONFIG_PREEMPT=n ?
> > > > > >
> > > > > > I mean, can't synchronize_rcu_tasks() be synchronize_sched() in this
> > > > > > case?
> > > > >
> > > > > Excellent point, indeed it can!
> > > > >
> > > > > And if I do it right, it will make CONFIG_TASKS_RCU=y safe for kernel
> > > > > tinification.  ;-)
> > > >
> > > > Unless, that is, we need to wait for trampolines in the idle loop...
> > > >
> > > > Sounds like a question for Steven.  ;-)
> > > 
> > > Sure, but the full-blown synchronize_rcu_tasks() can't handle the idle
> > > threads anyway.  An idle thread cannot be deactivated, and
> > > for_each_process() can't see it in any case.
> > 
> > Indeed, if idle threads need to be tracked, their tracking will need to
> > be at least partially special-cased.
> 
> Yeah, idle threads can be affected by the trampolines. That is, we can
> still hook a trampoline to some function in the idle loop.
> 
> But we should be able to make the hardware call that puts the CPU to
> sleep a quiescent state too. May need to be arch dependent. :-/

OK, my plan for this eventuality is to do the following:

1.	Ignore the ->on_rq field, as idle tasks are always on a runqueue.

2.	Watch the context-switch counter.

3.	Ignore dyntick-idle state for idle tasks.

4.	If there is no quiescent state from a given idle task after
	a few seconds, schedule rcu_tasks_kthread() on top of the
	offending CPU.

Your idea is an interesting one, but it does require another set of
dyntick-idle-like functions and counters, or else moving the current
rcu_idle_enter() and rcu_idle_exit() calls deeper into the idle loop.

Not sure which is a better approach.  Alternatively, we could just
rely on #4 above, on the grounds that battery life should not be
too badly degraded by the occasional RCU-tasks interference.

Note that this is a different situation than NO_HZ_FULL in realtime
environments, where the worst case causes trouble even if it happens
very infrequently.

						Thanx, Paul
