Message-Id: <20160828134321.GC19706@linux.vnet.ibm.com>
Date: Sun, 28 Aug 2016 06:43:22 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Manfred Spraul <manfred@...orfullife.com>
Cc: benh@...nel.crashing.org, Ingo Molnar <mingo@...e.hu>,
Boqun Feng <boqun.feng@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, 1vier1@....de,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH 2/4] barrier.h: Move smp_mb__after_unlock_lock to barrier.h

On Sun, Aug 28, 2016 at 01:56:14PM +0200, Manfred Spraul wrote:
> spin_unlock() + spin_lock() together do not form a full memory barrier:
>
> a=1;
> spin_unlock(&b);
> spin_lock(&c);
> + smp_mb__after_unlock_lock();
> d=1;
Better would be s/d=1/r1=d/ above.
Then another process doing this:

	d=1
	smp_mb()
	r2=a

might have the after-the-dust-settles outcome of r1==0&&r2==0.
The advantage of this scenario is that it can happen on real hardware.
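
For concreteness, here is a userspace analogue of that two-CPU litmus
test, modeling the release/acquire semantics of spin_unlock() and
spin_lock() with C11 atomics. This is only a sketch under those
assumptions; cpu0()/cpu1() and the lock_b/lock_c variables are
illustrative stand-ins, not the kernel primitives:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int a, d;
static atomic_int lock_b = 1, lock_c;	/* CPU 0 starts out holding b */

static void *cpu0(void *arg)
{
	atomic_store_explicit(&a, 1, memory_order_relaxed);	 /* a = 1 */
	atomic_store_explicit(&lock_b, 0, memory_order_release); /* spin_unlock(&b) */
	while (atomic_exchange_explicit(&lock_c, 1, memory_order_acquire))
		;						 /* spin_lock(&c) */
	atomic_thread_fence(memory_order_seq_cst); /* smp_mb__after_unlock_lock() */
	printf("r1=%d\n", atomic_load_explicit(&d, memory_order_relaxed));
	return NULL;
}

static void *cpu1(void *arg)
{
	atomic_store_explicit(&d, 1, memory_order_relaxed);	/* d = 1 */
	atomic_thread_fence(memory_order_seq_cst);		/* smp_mb() */
	printf("r2=%d\n", atomic_load_explicit(&a, memory_order_relaxed));
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}

Without the seq_cst fence in cpu0(), the release store and the acquire
exchange hit different lock variables, so the C11 model permits
r1==0&&r2==0; with the fence, that outcome is forbidden. Actually
observing it requires a weakly ordered machine and many iterations
(on x86 the locked exchange is already a full barrier, which is why
the patch below keys the definition off CONFIG_PPC).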
>
> Without the smp_mb__after_unlock_lock(), other CPUs can observe the
> write to d without seeing the write to a.
>
> Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
With the upgraded commit log, I am OK with the patch below.
However, others will probably want to see at least one use of
smp_mb__after_unlock_lock() outside of RCU.
							Thanx, Paul
> ---
> include/asm-generic/barrier.h | 16 ++++++++++++++++
> kernel/rcu/tree.h | 12 ------------
> 2 files changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index fe297b5..9b4d28f 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -244,6 +244,22 @@ do { \
> smp_acquire__after_ctrl_dep(); \
> VAL; \
> })
> +
> +#ifndef smp_mb__after_unlock_lock
> +/*
> + * Place this after a lock-acquisition primitive to guarantee that
> + * an UNLOCK+LOCK pair act as a full barrier. This guarantee applies
> + * if the UNLOCK and LOCK are executed by the same CPU or if the
> + * UNLOCK and LOCK operate on the same lock variable.
> + */
> +#ifdef CONFIG_PPC
> +#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
> +#else /* #ifdef CONFIG_PPC */
> +#define smp_mb__after_unlock_lock() do { } while (0)
> +#endif /* #else #ifdef CONFIG_PPC */
> +
> +#endif
> +
> #endif
>
> #endif /* !__ASSEMBLY__ */
> diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> index e99a523..a0cd9ab 100644
> --- a/kernel/rcu/tree.h
> +++ b/kernel/rcu/tree.h
> @@ -687,18 +687,6 @@ static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
> #endif /* #ifdef CONFIG_RCU_TRACE */
>
> /*
> - * Place this after a lock-acquisition primitive to guarantee that
> - * an UNLOCK+LOCK pair act as a full barrier. This guarantee applies
> - * if the UNLOCK and LOCK are executed by the same CPU or if the
> - * UNLOCK and LOCK operate on the same lock variable.
> - */
> -#ifdef CONFIG_PPC
> -#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
> -#else /* #ifdef CONFIG_PPC */
> -#define smp_mb__after_unlock_lock() do { } while (0)
> -#endif /* #else #ifdef CONFIG_PPC */
> -
> -/*
> * Wrappers for the rcu_node::lock acquire and release.
> *
> * Because the rcu_nodes form a tree, the tree traversal locking will observe
> --
> 2.5.5
>
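
For reference, the existing in-tree user this pairs with is RCU's
rcu_node lock wrapper, which at the time looked roughly like the
following (simplified from kernel/rcu/tree.h, so treat the exact form
as approximate):

#define raw_spin_lock_rcu_node(p)					\
do {									\
	raw_spin_lock(&ACCESS_PRIVATE(p, lock));			\
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK is now a full barrier */ \
} while (0)

With that in place, any CPU that observed a prior unlock of the same
rcu_node lock also observes everything before it, which is what RCU's
grace-period machinery relies on.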