Date:	Thu, 31 Jul 2014 11:34:03 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	josh@...htriplett.org
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	tglx@...utronix.de, peterz@...radead.org, rostedt@...dmis.org,
	dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
	fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH v2 tip/core/rcu 03/10] rcu: Add synchronous grace-period
 waiting for RCU-tasks

On Thu, Jul 31, 2014 at 09:58:52AM -0700, josh@...htriplett.org wrote:
> On Wed, Jul 30, 2014 at 05:39:35PM -0700, Paul E. McKenney wrote:
> > From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> > 
> > It turns out to be easier to add the synchronous grace-period waiting
> > functions to RCU-tasks than to work around their absence in rcutorture,
> > so this commit adds them.  The key point is that the existence of
> > call_rcu_tasks() means that rcutorture needs an rcu_barrier_tasks().
> > 
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> 
> With rcu_barrier_tasks being a trivial wrapper, why not just let
> rcutorture call synchronize_rcu_tasks directly?

I considered that, but took the rcu_barrier_tasks() approach so that,
should anyone ever use call_rcu_tasks() from a module, they would have
rcu_barrier_tasks() available at module-exit time.
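
To make that use case concrete, here is a rough sketch (all names
below are hypothetical, not from the patch) of a module that frees
objects via call_rcu_tasks() and therefore needs rcu_barrier_tasks()
on its exit path:

	#include <linux/module.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		struct rcu_head rh;
		/* ... payload ... */
	};

	/* Invoked once an RCU-tasks grace period has elapsed. */
	static void foo_free_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rh));
	}

	/* Retire an object: deferred free that waits for all tasks
	 * to pass through a voluntary context switch. */
	static void foo_retire(struct foo *fp)
	{
		call_rcu_tasks(&fp->rh, foo_free_cb);
	}

	static int __init foo_init(void)
	{
		return 0;
	}
	module_init(foo_init);

	static void __exit foo_exit(void)
	{
		/* Wait for all in-flight call_rcu_tasks() callbacks;
		 * otherwise foo_free_cb() could run after this
		 * module's text has been unloaded. */
		rcu_barrier_tasks();
	}
	module_exit(foo_exit);

	MODULE_LICENSE("GPL");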

But I don't feel all that strongly about it.

							Thanx, Paul

> >  include/linux/rcupdate.h |  2 ++
> >  kernel/rcu/update.c      | 55 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 57 insertions(+)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 3299ff98ad03..17c7e25c38be 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -216,6 +216,8 @@ void synchronize_sched(void);
> >   * memory ordering guarantees.
> >   */
> >  void call_rcu_tasks(struct rcu_head *head, void (*func)(struct rcu_head *head));
> > +void synchronize_rcu_tasks(void);
> > +void rcu_barrier_tasks(void);
> >  
> >  #ifdef CONFIG_PREEMPT_RCU
> >  
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index b92268647a01..c8d304dc6d8a 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -387,6 +387,61 @@ void call_rcu_tasks(struct rcu_head *rhp, void (*func)(struct rcu_head *rhp))
> >  }
> >  EXPORT_SYMBOL_GPL(call_rcu_tasks);
> >  
> > +/**
> > + * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
> > + *
> > + * Control will return to the caller some time after a full rcu-tasks
> > + * grace period has elapsed, in other words after all currently
> > + * executing rcu-tasks read-side critical sections have elapsed.  These
> > + * read-side critical sections are delimited by calls to schedule(),
> > + * cond_resched_rcu_qs(), idle execution, userspace execution, calls
> > + * to synchronize_rcu_tasks(), and (in theory, anyway) cond_resched().
> > + *
> > + * This is a very specialized primitive, intended only for a few uses in
> > + * tracing and other situations requiring manipulation of function
> > + * preambles and profiling hooks.  The synchronize_rcu_tasks() function
> > + * is not (yet) intended for heavy use from multiple CPUs.
> > + *
> > + * Note that this guarantee implies further memory-ordering guarantees.
> > + * On systems with more than one CPU, when synchronize_rcu_tasks() returns,
> > + * each CPU is guaranteed to have executed a full memory barrier since the
> > + * end of its last RCU-tasks read-side critical section whose beginning
> > + * preceded the call to synchronize_rcu_tasks().  In addition, each CPU
> > + * having an RCU-tasks read-side critical section that extends beyond
> > + * the return from synchronize_rcu_tasks() is guaranteed to have executed
> > + * a full memory barrier after the beginning of synchronize_rcu_tasks()
> > + * and before the beginning of that RCU-tasks read-side critical section.
> > + * Note that these guarantees include CPUs that are offline, idle, or
> > + * executing in user mode, as well as CPUs that are executing in the kernel.
> > + *
> > + * Furthermore, if CPU A invoked synchronize_rcu_tasks(), which returned
> > + * to its caller on CPU B, then both CPU A and CPU B are guaranteed
> > + * to have executed a full memory barrier during the execution of
> > + * synchronize_rcu_tasks() -- even if CPU A and CPU B are the same CPU
> > + * (but again only if the system has more than one CPU).
> > + */
> > +void synchronize_rcu_tasks(void)
> > +{
> > +	/* Complain if the scheduler has not started.  */
> > +	rcu_lockdep_assert(rcu_scheduler_active,
> > +			   "synchronize_rcu_tasks called too soon");
> > +
> > +	/* Wait for the grace period. */
> > +	wait_rcu_gp(call_rcu_tasks);
> > +}
> > +
> > +/**
> > + * rcu_barrier_tasks - Wait for in-flight call_rcu_tasks() callbacks.
> > + *
> > + * Although the current implementation is guaranteed to wait, it is not
> > + * obligated to; for example, it need not wait if no callbacks are pending.
> > + */
> > +void rcu_barrier_tasks(void)
> > +{
> > +	/* There is only one callback queue, so this is easy.  ;-) */
> > +	synchronize_rcu_tasks();
> > +}
> > +
> >  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
> >  static int __noreturn rcu_tasks_kthread(void *arg)
> >  {
> > -- 
> > 1.8.1.5
> > 
> 

