Message-ID: <20160701165258.GA27566@linux-80c1.suse>
Date: Fri, 1 Jul 2016 09:52:58 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Manfred Spraul <manfred@...orfullife.com>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, linux-next@...r.kernel.org,
1vier1@....de, felixh@...ormatik.uni-bremen.de,
stable@...r.kernel.org
Subject: Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race
On Thu, 30 Jun 2016, Manfred Spraul wrote:
>On 06/28/2016 07:27 AM, Davidlohr Bueso wrote:
>If I understand it right, it means that spin_lock() is both an acquire
>and a release - for qspinlocks.
I wouldn't say that: the _Q_LOCKED_VAL stores are still unordered inside
the acquire region, but no further than the unlock store-release; which
is fine for regular mutual exclusion.
>It this valid for all spinlock implementations, for all architectures?
>Otherwise: How can I detect in generic code if I can rely on a release
>inside spin_lock()?
With ticket locks this was never an issue, as special lock readers
(spin_unlock_wait(), spin_is_locked()) will always see the lock taken:
waiters make themselves visible by adding themselves to the tail --
something the above commit mimics and piggybacks on to save an
expensive smp_rmb().
In addition, see 726328d92a4 (locking/spinlock, arch: Update and fix
spin_unlock_wait() implementations).
Thanks,
Davidlohr