Date:   Mon, 3 Jun 2019 06:39:42 -0700
From:   tip-bot for Mark Rutland <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     will.deacon@....com, paul.burton@...s.com, tglx@...utronix.de,
        mark.rutland@....com, linux-kernel@...r.kernel.org,
        torvalds@...ux-foundation.org, peterz@...radead.org,
        jhogan@...nel.org, mingo@...nel.org, ralf@...ux-mips.org,
        hpa@...or.com
Subject: [tip:locking/core] locking/atomic, mips: Use s64 for atomic64

Commit-ID:  d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Gitweb:     https://git.kernel.org/tip/d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Author:     Mark Rutland <mark.rutland@....com>
AuthorDate: Wed, 22 May 2019 14:22:41 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, mips: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.
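(For context, a minimal sketch of the type mismatch being addressed;
these are simplified declarations, not the verbatim kernel headers:

	/* Generic type, pre-conversion: on 64-bit the counter is a
	 * bare long (include/linux/types.h, simplified). */
	typedef struct {
		long counter;
	} atomic64_t;

	/* The generated atomic headers instead use s64 throughout
	 * their prototypes, e.g.: */
	s64 atomic64_fetch_add(s64 i, atomic64_t *v);

Switching the mips implementation to s64 keeps the arch ops in line
with those generated prototypes.)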

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.
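
(Concretely, a simplified sketch of why the long return type lingers:

	/* mips reads the counter field directly: */
	#define atomic64_read(v)	READ_ONCE((v)->counter)

With the generic atomic64_t still declared as { long counter; } on
64-bit, READ_ONCE() above yields long; that only changes once
atomic64_t itself moves to s64 in the later patch.)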

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@....com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: James Hogan <jhogan@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul Burton <paul.burton@...s.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ralf Baechle <ralf@...ux-mips.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will.deacon@....com>
Cc: aou@...s.berkeley.edu
Cc: arnd@...db.de
Cc: bp@...en8.de
Cc: catalin.marinas@....com
Cc: davem@...emloft.net
Cc: fenghua.yu@...el.com
Cc: heiko.carstens@...ibm.com
Cc: herbert@...dor.apana.org.au
Cc: ink@...assic.park.msu.ru
Cc: linux@...linux.org.uk
Cc: mattst88@...il.com
Cc: mpe@...erman.id.au
Cc: palmer@...ive.com
Cc: paulus@...ba.org
Cc: rth@...ddle.net
Cc: tony.luck@...el.com
Cc: vgupta@...opsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-10-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 94096299fc56..9a82dd11c0e9 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
 #define atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))
 
 #define ATOMIC64_OP(op, c_op, asm_op)					      \
-static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		      \
 {									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
 }
 
 #define ATOMIC64_OP_RETURN(op, c_op, asm_op)				      \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)   \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op, asm_op)				      \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)  \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)    \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor)
  * Atomically test @v and subtract @i if @v is greater or equal than @i.
  * The function returns the old value of @v minus @i.
  */
-static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v)
+static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v)
 {
-	long result;
+	s64 result;
 
 	smp_mb__before_llsc();
 
 	if (kernel_uses_llsc) {
-		long temp;
+		s64 temp;
 
 		__asm__ __volatile__(
 		"	.set	push					\n"

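For readers unfamiliar with the sub_if_positive helper touched above,
a hedged usage sketch follows. The caller code and the budget/reserve
names are illustrative only, not part of this patch; on mips,
atomic64_dec_if_positive() is defined in terms of this helper.

	#include <linux/atomic.h>

	static atomic64_t budget = ATOMIC64_INIT(100);

	/* Attempt to reserve 'cost' units. atomic64_sub_if_positive()
	 * subtracts only when the result stays non-negative and
	 * returns the old value minus 'cost', so a negative return
	 * means @budget was left untouched. */
	static bool reserve(s64 cost)
	{
		return atomic64_sub_if_positive(cost, &budget) >= 0;
	}
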