Date:   Mon, 23 Jan 2017 00:38:29 -0800
From:   Lance Roy <ldr709@...il.com>
To:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:     linux-kernel@...r.kernel.org, mingo@...nel.org,
        jiangshanlai@...il.com, dipankar@...ibm.com,
        akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
        josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
        rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
        dvhart@...ux.intel.com, fweisbec@...il.com, oleg@...hat.com,
        bobby.prani@...il.com
Subject: Re: [PATCH v2 tip/core/rcu 2/3] srcu: Force full grace-period
 ordering

On Sun, 15 Jan 2017 14:42:34 -0800
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:

> If a process invokes synchronize_srcu(), is delayed just the right amount
> of time, and thus does not sleep when waiting for the grace period to
> complete, there is no ordering between the end of the grace period and
> the code following the synchronize_srcu().  Similarly, there can be a
> lack of ordering between the end of the SRCU grace period and callback
> invocation.
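
(For concreteness, the guarantee at stake is something like the following
sketch, with x, y, and sp made up for illustration:

	/* updater */				/* pre-existing reader */
	WRITE_ONCE(x, 1);			idx = srcu_read_lock(sp);
	synchronize_srcu(sp);			r1 = READ_ONCE(x);
	WRITE_ONCE(y, 1);			srcu_read_unlock(sp, idx);

The reader's critical-section accesses must be ordered before the
updater's store to y, even on the fast path where synchronize_srcu()
returns without sleeping.)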
> 
> This commit adds the necessary ordering.
> 
> Reported-by: Lance Roy <ldr709@...il.com>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> ---
>  include/linux/rcupdate.h | 12 ++++++++++++
>  kernel/rcu/srcu.c        |  5 +++++
>  kernel/rcu/tree.h        | 12 ------------
>  3 files changed, 17 insertions(+), 12 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 01f71e1d2e94..6ade6a52d9d4 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -1161,5 +1161,17 @@ do { \
>  		ftrace_dump(oops_dump_mode); \
>  } while (0)
>  
> +/*
> + * Place this after a lock-acquisition primitive to guarantee that
> + * an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
> + * if the UNLOCK and LOCK are executed by the same CPU or if the
> + * UNLOCK and LOCK operate on the same lock variable.
> + */
> +#ifdef CONFIG_PPC
> +#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> +#else /* #ifdef CONFIG_PPC */
> +#define smp_mb__after_unlock_lock()	do { } while (0)
> +#endif /* #else #ifdef CONFIG_PPC */
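
(If I understand the intent, the usage pattern is along these lines, with
the two lock names made up for illustration:

	spin_unlock(&this_lock);
	spin_lock(&that_lock);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now a full barrier */

This compiles to nothing except on PPC, where an UNLOCK+LOCK pair is not
otherwise guaranteed to act as a full barrier.)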
> +
>  
>  #endif /* __LINUX_RCUPDATE_H */
> diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
> index ddabf5fbf562..f2abfbae258c 100644
> --- a/kernel/rcu/srcu.c
> +++ b/kernel/rcu/srcu.c
> @@ -359,6 +359,7 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *head,
>  	head->next = NULL;
>  	head->func = func;
>  	spin_lock_irqsave(&sp->queue_lock, flags);
> +	smp_mb__after_unlock_lock(); /* Caller's prior accesses before GP. */
>  	rcu_batch_queue(&sp->batch_queue, head);
>  	if (!sp->running) {
>  		sp->running = true;
> @@ -392,6 +393,7 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
>  	head->next = NULL;
>  	head->func = wakeme_after_rcu;
>  	spin_lock_irq(&sp->queue_lock);
> +	smp_mb__after_unlock_lock(); /* Caller's prior accesses before GP. */
>  	if (!sp->running) {
>  		/* steal the processing owner */
>  		sp->running = true;
> @@ -413,6 +415,8 @@ static void __synchronize_srcu(struct srcu_struct *sp, int trycount)
> 
>  	if (!done)
>  		wait_for_completion(&rcu.completion);
> +
> +	smp_mb(); /* Caller's later accesses after GP. */
I think that this memory barrier is only necessary when done == false, as
otherwise srcu_advance_batches() should provide sufficient memory ordering.
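
Something like this (untested sketch) would restrict the barrier to that
case:

	if (!done) {
		wait_for_completion(&rcu.completion);
		smp_mb(); /* Caller's later accesses after GP. */
	}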

>  }
>  
>  /**
> @@ -587,6 +591,7 @@ static void srcu_invoke_callbacks(struct srcu_struct *sp)
>  	int i;
>  	struct rcu_head *head;
>  
> +	smp_mb(); /* Callback accesses after GP. */
Shouldn't srcu_advance_batches() have already run all necessary memory barriers?
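
If I am reading process_srcu() correctly, its tail is roughly

	srcu_advance_batches(sp, 1);	/* GP-ordering barriers, as I understand it */
	srcu_invoke_callbacks(sp);
	srcu_reschedule(sp);

with the first two steps running back to back on the same CPU, which is
why the extra smp_mb() here looks redundant to me.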

>  	for (i = 0; i < SRCU_CALLBACK_BATCH; i++) {
>  		head = rcu_batch_dequeue(&sp->batch_done);
>  		if (!head)
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index fe98dd24adf8..abcc25bdcb29 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -688,18 +688,6 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
>  #endif /* #ifdef CONFIG_RCU_TRACE */
>  
>  /*
> - * Place this after a lock-acquisition primitive to guarantee that
> - * an UNLOCK+LOCK pair act as a full barrier.  This guarantee applies
> - * if the UNLOCK and LOCK are executed by the same CPU or if the
> - * UNLOCK and LOCK operate on the same lock variable.
> - */
> -#ifdef CONFIG_PPC
> -#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
> -#else /* #ifdef CONFIG_PPC */
> -#define smp_mb__after_unlock_lock()	do { } while (0)
> -#endif /* #else #ifdef CONFIG_PPC */
> -
> -/*
>   * Wrappers for the rcu_node::lock acquire and release.
>   *
>   * Because the rcu_nodes form a tree, the tree traversal locking will observe

Thanks,
Lance
