Message-Id: <20170123191207.GG28085@linux.vnet.ibm.com>
Date:   Mon, 23 Jan 2017 11:12:07 -0800
From:   "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:     Lance Roy <ldr709@...il.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
        bobby.prani@...il.com
Subject: Re: [PATCH v2 tip/core/rcu 2/3] srcu: Force full grace-period
 ordering

On Mon, Jan 23, 2017 at 12:38:29AM -0800, Lance Roy wrote:
> On Sun, 15 Jan 2017 14:42:34 -0800
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > If a process invokes synchronize_srcu(), is delayed just the right amount
> > of time, and thus does not sleep when waiting for the grace period to
> > complete, there is no ordering between the end of the grace period and
> > the code following the synchronize_srcu().  Similarly, there can be a
> > lack of ordering between the end of the SRCU grace period and callback
> > invocation.
> > 
> > This commit adds the necessary ordering.
> > 
> > Reported-by: Lance Roy <ldr709@...il.com>
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > ---
> >  include/linux/rcupdate.h | 12 ++++++++++++
> >  kernel/rcu/srcu.c        |  5 +++++
> >  kernel/rcu/tree.h        | 12 ------------
> >  3 files changed, 17 insertions(+), 12 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 01f71e1d2e94..6ade6a52d9d4 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -1161,5 +1161,17 @@ do { \
> >  		ftrace_dump(oops_dump_mode); \
> >  } while (0)
> >  
> > +/*
> > + * Place this after a lock-acquisition primitive to guarantee that
> > + * an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
> > + * if the UNLOCK and LOCK are executed by the same CPU or if the
> > + * UNLOCK and LOCK operate on the same lock variable.
> > + */
> > +#ifdef CONFIG_PPC
> > +#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> > +#else /* #ifdef CONFIG_PPC */
> > +#define smp_mb__after_unlock_lock()	do { } while (0)
> > +#endif /* #else #ifdef CONFIG_PPC */
> > +
> >  
> >  #endif /* __LINUX_RCUPDATE_H */
> > diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
> > index ddabf5fbf562..f2abfbae258c 100644
> > --- a/kernel/rcu/srcu.c
> > +++ b/kernel/rcu/srcu.c
> > @@ -359,6 +359,7 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *head,
> >  	head->next = NULL;
> >  	head->func = func;
> >  	spin_lock_irqsave(&sp->queue_lock, flags);
> > +	smp_mb__after_unlock_lock(); /* Caller's prior accesses before GP. */
> >  	rcu_batch_queue(&sp->batch_queue, head);
> >  	if (!sp->running) {
> >  		sp->running = true;
> > @@ -392,6 +393,7 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
> >  	head->next = NULL;
> >  	head->func = wakeme_after_rcu;
> >  	spin_lock_irq(&sp->queue_lock);
> > +	smp_mb__after_unlock_lock(); /* Caller's prior accesses before GP. */
> >  	if (!sp->running) {
> >  		/* steal the processing owner */
> >  		sp->running = true;
> > @@ -413,6 +415,8 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
> > 
> >  	if (!done)
> >  		wait_for_completion(&rcu.completion);
> > +
> > +	smp_mb(); /* Caller's later accesses after GP. */
> 
> I think that this memory barrier is only necessary when done == false, as
> otherwise srcu_advance_batches() should provide sufficient memory ordering.

Let me make sure that I understand your rationale here.

The idea is that although srcu_readers_active_idx_check() executed
a full memory barrier, it might have done so on some other CPU, which
would not provide ordering on the current CPU in the race case where
the current CPU didn't actually sleep.  (This can happen when the
current task is preempted, and then resumes just as the grace period
completes.)

Or are you concerned about some other sequence of events?

(I have moved the smp_mb() inside the "if (!done)" in the meantime.)
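
To make the above sequence concrete (an illustrative sketch, not code
from the patch):

	CPU 0 (synchronize_srcu() caller)   CPU 1 (grace-period machinery)
	---------------------------------   ------------------------------
	...is preempted just before
	it would sleep...
	                                    srcu_readers_active_idx_check()
	                                    executes smp_mb(), sees that all
	                                    pre-existing readers are done
	                                    grace period completes
	...resumes just in time, so it
	never actually sleeps...
	runs the code after
	synchronize_srcu(), which absent
	the smp_mb() is not ordered after
	the grace period

And with the smp_mb() inside the "if (!done)", the tail of
__synchronize_srcu() would look roughly like this:

	if (!done) {
		wait_for_completion(&rcu.completion);
		smp_mb(); /* Caller's later accesses after GP. */
	}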

> >  }
> >  
> >  /**
> > @@ -587,6 +591,7 @@ static void srcu_invoke_callbacks(struct srcu_struct *sp)
> >  	int i;
> >  	struct rcu_head *head;
> >  
> > +	smp_mb(); /* Callback accesses after GP. */
> 
> Shouldn't srcu_advance_batches() have already run all necessary memory barriers?

It does look that way:

o	process_srcu() is the only thing that invokes srcu_invoke_callbacks().

o	process_srcu() invokes srcu_advance_batches() immediately before
	srcu_invoke_callbacks(), so any memory barriers invoked from
	srcu_advance_batches() affect process_srcu() (unlike the earlier
	example where srcu_advance_batches() might be executed in the
	context of some other task).

o	srcu_advance_batches() unconditionally invokes try_check_zero(),
	which in turn unconditionally invokes srcu_readers_active_idx_check(),
	which in turn invokes smp_mb().

	This smp_mb() precedes a successful check that all pre-existing
	readers are done; otherwise, srcu_advance_batches() would not have
	returned (or would not have advanced the callbacks, which in turn
	would prevent them from being invoked).

I have removed this memory barrier and added a comment.
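
For reference, the call structure I am relying on is roughly the
following (a from-memory sketch of the current srcu.c, eliding
unrelated details):

	static void process_srcu(struct work_struct *work)
	{
		struct srcu_struct *sp;

		sp = container_of(work, struct srcu_struct, work.work);

		srcu_collect_new(sp);
		srcu_advance_batches(sp, 1);	/* -> try_check_zero()
						 * -> srcu_readers_active_idx_check()
						 * -> smp_mb() */
		srcu_invoke_callbacks(sp);	/* thus ordered after all
						 * pre-existing readers */
		srcu_reschedule(sp);
	}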

> >  	for (i = 0; i < SRCU_CALLBACK_BATCH; i++) {
> >  		head = rcu_batch_dequeue(&sp->batch_done);
> >  		if (!head)
> > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > index fe98dd24adf8..abcc25bdcb29 100644
> > --- a/kernel/rcu/tree.h
> > +++ b/kernel/rcu/tree.h
> > @@ -688,18 +688,6 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
> >  #endif /* #ifdef CONFIG_RCU_TRACE */
> >  
> >  /*
> > - * Place this after a lock-acquisition primitive to guarantee that
> > - * an UNLOCK+LOCK pair act as a full barrier.  This guarantee applies
> > - * if the UNLOCK and LOCK are executed by the same CPU or if the
> > - * UNLOCK and LOCK operate on the same lock variable.
> > - */
> > -#ifdef CONFIG_PPC
> > -#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> > -#else /* #ifdef CONFIG_PPC */
> > -#define smp_mb__after_unlock_lock()	do { } while (0)
> > -#endif /* #else #ifdef CONFIG_PPC */
> > -
> > -/*
> >   * Wrappers for the rcu_node::lock acquire and release.
> >   *
> >   * Because the rcu_nodes form a tree, the tree traversal locking will observe

And thank you for your review and comments!!!

								Thanx, Paul

> Thanks,
> Lance
> 
