Date:   Mon, 14 May 2018 23:26:17 -0700
From:   tip-bot for Andrea Parri <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     catalin.marinas@....com, andrea.parri@...rulasolutions.com,
        hpa@...or.com, will.deacon@....com, peterz@...radead.org,
        mingo@...nel.org, akpm@...ux-foundation.org,
        torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
        paulmck@...ux.vnet.ibm.com, tglx@...utronix.de
Subject: [tip:locking/core] locking/spinlocks/arm64: Remove smp_mb() from
 arch_spin_is_locked()

Commit-ID:  c6f5d02b6a0fb91be5d656885ce02cf28952181d
Gitweb:     https://git.kernel.org/tip/c6f5d02b6a0fb91be5d656885ce02cf28952181d
Author:     Andrea Parri <andrea.parri@...rulasolutions.com>
AuthorDate: Mon, 14 May 2018 16:01:28 -0700
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()

The following commit:

  38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks")

... added an smp_mb() to arch_spin_is_locked(), in order "to ensure that
the lock value is always loaded after any other locks have been taken by
the current CPU", and pointed to one example (the "insane case" in
ipc/sem.c) relying on such a guarantee.

It is, however, understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee that not all implementations/archs
currently provide), and that callers relying on such ordering should instead
insert suitable memory barriers before acting on the result of
spin_is_locked().
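
As a minimal, hypothetical sketch (not part of this patch; the function and
lock names are made up), a caller that genuinely depended on such ordering
would now have to supply the barrier itself rather than rely on it being
implied by arch_spin_is_locked():

  #include <linux/spinlock.h>

  /*
   * Hypothetical caller: take one lock, then test whether another lock
   * is held, where correctness requires that test to be ordered after
   * the acquisition.  The smp_mb() formerly implied by the arm64
   * arch_spin_is_locked() must now be written out by the caller.
   */
  static bool other_lock_held(spinlock_t *mine, spinlock_t *other)
  {
          bool held;

          spin_lock(mine);

          /*
           * Order the acquisition of 'mine' above against the
           * lock-value load performed by spin_is_locked(other) below.
           */
          smp_mb();

          held = spin_is_locked(other);

          spin_unlock(mine);
          return held;
  }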

Following a recent audit [1] of the callers of {,raw_}spin_is_locked(),
which revealed that none of them rely on the ordering guarantee anymore,
this commit removes the leading smp_mb() from the primitive, thus reverting
38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: akiyks@...il.com
Cc: boqun.feng@...il.com
Cc: dhowells@...hat.com
Cc: j.alglave@....ac.uk
Cc: linux-arch@...r.kernel.org
Cc: luc.maranget@...ia.fr
Cc: npiggin@...il.com
Cc: parri.andrea@...il.com
Cc: stern@...land.harvard.edu
Link: http://lkml.kernel.org/r/1526338889-7003-2-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
