Message-ID: <20160829173802.GA27002@linux-80c1.suse>
Date:   Mon, 29 Aug 2016 10:38:02 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Manfred Spraul <manfred@...orfullife.com>
Cc:     benh@...nel.crashing.org, paulmck@...ux.vnet.ibm.com,
        Ingo Molnar <mingo@...e.hu>, Boqun Feng <boqun.feng@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, 1vier1@....de
Subject: Re: [PATCH 1/4 v4] spinlock: Document memory barrier rules

On Mon, 29 Aug 2016, Manfred Spraul wrote:
>Right now, the spinlock machinery tries to guarantee barriers even for
>unorthodox locking cases, which results in a constant stream of updates
>as the architectures try to support each new unorthodox idea.
>
>The patch proposes to clarify the rules:
>spin_lock is ACQUIRE, spin_unlock is RELEASE.
>spin_unlock_wait is also ACQUIRE.
>Code that needs further guarantees must use appropriate explicit barriers.
>
>Architectures that can implement some barriers for free can define the
>barriers as NOPs.
>
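To make that last rule concrete, this is the kind of code that still
needs an explicit barrier (a minimal sketch, not taken from the patch;
'o', 'global_flag' and 'do_slow_path()' are made-up names):

	spin_lock(&o->lock);	/* ACQUIRE */
	/*
	 * ACQUIRE keeps later accesses after the lock acquisition, but
	 * it does not order the store that takes the lock against the
	 * load below.  If another CPU pairs with this by setting
	 * global_flag and then doing spin_unlock_wait(&o->lock), a full
	 * barrier is needed here so that we cannot miss the flag while
	 * the other side misses the lock.
	 */
	smp_mb();
	if (READ_ONCE(global_flag))
		do_slow_path();

With this series, that explicit smp_mb() is the spot that becomes
smp_mb__after_spin_lock().
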
>As the initial step, the patch converts ipc/sem.c to the new defines:
>- With commit 2c6100227116
>  ("locking/qspinlock: Fix spin_unlock_wait() some more"),
>  (and the commits for the other archs), spin_unlock_wait() is an
>  ACQUIRE.
>  Therefore the smp_rmb() after spin_unlock_wait() can be removed.
>- Use smp_mb__after_spin_lock() instead of a direct smp_mb().
>  This allows architectures to override it with a less expensive
>  barrier if that is sufficient for their hardware/spinlock
>  implementation.
>
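For reference, if I read the two ipc/sem.c changes correctly, the
resulting pattern is roughly the one below (a sketch from memory, not
the actual hunks; field names such as sem_base/complex_mode may not
match the file exactly):

	/* sem_lock(), simple-op fast path: */
	spin_lock(&sem->lock);
	/* was a bare smp_mb(): orders the lock store vs. the load below */
	smp_mb__after_spin_lock();
	if (!smp_load_acquire(&sma->complex_mode)) {
		/* simple op: only sem->lock is held */
		return sops->sem_num;
	}
	spin_unlock(&sem->lock);
	/* ... fall back to the global lock ... */

	/* complexmode_enter(): */
	/* full barrier: the store must be visible before the lock reads */
	smp_store_mb(sma->complex_mode, true);
	for (i = 0; i < sma->sem_nsems; i++)
		spin_unlock_wait(&sma->sem_base[i].lock);
	/* spin_unlock_wait() is ACQUIRE now, so the trailing smp_rmb() is gone */
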
>For overriding, the same approach as for smp_mb__before_spin_lock()
>is used: If smp_mb__after_spin_lock is already defined, then it is
>not changed.
>
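(For readers following along: the override mechanism described above is
presumably the usual pattern, along these lines; a sketch only, the
exact name and placement are in the patch itself:)

	/* include/linux/spinlock.h */
	#ifndef smp_mb__after_spin_lock
	/*
	 * Generic fallback: a full barrier.  An architecture whose
	 * spin_lock() already provides the required ordering can define
	 * the macro itself (e.g. to barrier()) before this point, and
	 * this default is then skipped.
	 */
	#define smp_mb__after_spin_lock()	smp_mb()
	#endif
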
>Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
>---
> Documentation/locking/spinlocks.txt |  5 +++++
> include/linux/spinlock.h            | 12 ++++++++++++
> ipc/sem.c                           | 16 +---------------
Preferably this would have been two patches, especially since you
remove the redundant barrier in complexmode_enter(), which kind of
mixes core spinlocking and core sysv sems. But anyway, this will be
the patch that we _don't_ backport to stable, right?

Reviewed-by: Davidlohr Bueso <dave@...olabs.net>