Message-ID: <lsq.1578512578.228195979@decadent.org.uk>
Date: Wed, 08 Jan 2020 19:43:48 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, Denis Kirjanov <kda@...ux-powerpc.org>,
"Ingo Molnar" <mingo@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Andrey Ryabinin" <aryabinin@...tuozzo.com>,
"Mark Rutland" <mark.rutland@....com>,
"Dmitry Vyukov" <dvyukov@...gle.com>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>,
"Thomas Gleixner" <tglx@...utronix.de>
Subject: [PATCH 3.16 50/63] locking/x86: Remove the unused atomic_inc_short()
 method

3.16.81-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Dmitry Vyukov <dvyukov@...gle.com>

commit 31b35f6b4d5285a311e10753f4eb17304326b211 upstream.

It is completely unused and implemented only on x86.
Remove it.
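
For context only (this sketch is not part of the upstream commit or of the
3.16 backport): the removed helper performed a locked 16-bit increment and
then returned the value of *v via a separate plain read. A rough user-space
equivalent, using the GCC/Clang __atomic builtins instead of the kernel's
inline asm and a hypothetical function name, might look like:

#include <stdio.h>

/*
 * Illustrative sketch only, not kernel code: an atomic 16-bit
 * increment built on the __atomic builtins.  It adds 1 to *v
 * atomically and returns the resulting value (the removed
 * atomic_inc_short() instead re-read *v after the locked add).
 */
static inline short inc_short(short *v)
{
	return __atomic_add_fetch(v, 1, __ATOMIC_SEQ_CST);
}

int main(void)
{
	short counter = 41;

	printf("%d\n", inc_short(&counter));	/* prints 42 */
	return 0;
}
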
Suggested-by: Mark Rutland <mark.rutland@....com>
Signed-off-by: Dmitry Vyukov <dvyukov@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/20170526172900.91058-1-dvyukov@google.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
[bwh: Backported to 3.16 because this function is broken after
"x86/atomic: Fix smp_mb__{before,after}_atomic()":
- Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
arch/tile/lib/atomic_asm_32.S | 3 +--
arch/x86/include/asm/atomic.h | 13 -------------
 2 files changed, 1 insertion(+), 15 deletions(-)

--- a/arch/tile/lib/atomic_asm_32.S
+++ b/arch/tile/lib/atomic_asm_32.S
@@ -24,8 +24,7 @@
* has an opportunity to return -EFAULT to the user if needed.
* The 64-bit routines just return a "long long" with the value,
* since they are only used from kernel space and don't expect to fault.
- * Support for 16-bit ops is included in the framework but we don't provide
- * any (x86_64 has an atomic_inc_short(), so we might want to some day).
+ * Support for 16-bit ops is included in the framework but we don't provide any.
*
* Note that the caller is advised to issue a suitable L1 or L2
* prefetch on the address being manipulated to avoid extra stalls.
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -205,19 +205,6 @@ static inline int __atomic_add_unless(at
return c;
}

-/**
- * atomic_inc_short - increment of a short integer
- * @v: pointer to type int
- *
- * Atomically adds 1 to @v
- * Returns the new value of @u
- */
-static inline short int atomic_inc_short(short int *v)
-{
- asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
- return *v;
-}
-
/* These are x86-specific, used by some header files */
#define atomic_clear_mask(mask, addr) \
asm volatile(LOCK_PREFIX "andl %0,%1" \