Message-Id: <20210804191554.1252776-9-vgupta@synopsys.com>
Date: Wed, 4 Aug 2021 12:15:51 -0700
From: Vineet Gupta <Vineet.Gupta1@...opsys.com>
To: linux-snps-arc@...ts.infradead.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>, Arnd Bergmann <arnd@...db.de>,
Mark Rutland <mark.rutland@....com>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
Vladimir Isaev <Vladimir.Isaev@...opsys.com>,
Vineet Gupta <Vineet.Gupta1@...opsys.com>
Subject: [PATCH 08/11] ARC: xchg: !LLSC: remove UP micro-optimization/hack
It gets in the way of cleaning things up and is a maintenance
pain in the neck!
Signed-off-by: Vineet Gupta <vgupta@...opsys.com>
---
arch/arc/include/asm/cmpxchg.h | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)
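
For context, a rough sketch of the pre-patch arrangement on a !LLSC build
(macro bodies are abridged/paraphrased here for illustration, not copied
verbatim from the file):

	#if !defined(CONFIG_ARC_HAS_LLSC) && defined(CONFIG_SMP)
	/* SMP: serialize against the spinlock-based cmpxchg() on the same data */
	#define arch_xchg(ptr, with)			\
	({						\
		unsigned long flags;			\
		typeof(*(ptr)) old_val;			\
							\
		atomic_ops_lock(flags);			\
		old_val = _xchg(ptr, with);		\
		atomic_ops_unlock(flags);		\
		old_val;				\
	})
	#else
	/* UP: bare single-instruction exchange, no lock taken */
	#define arch_xchg(ptr, with)	_xchg(ptr, with)
	#endif

After the patch only the locked definition remains, guarded by a plain
#ifndef CONFIG_ARC_HAS_LLSC (see the hunks below). On UP the lock boils
down to irq save/restore, as the removed comment notes, so behaviour is
unchanged and the special case simply disappears.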
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index d42917e803e1..bac9b564a140 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -113,15 +113,9 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
* - For !LLSC, cmpxchg() needs to use that lock (see above) and there is lot
* of kernel code which calls xchg()/cmpxchg() on same data (see llist.h)
* Hence xchg() needs to follow same locking rules.
- *
- * Technically the lock is also needed for UP (boils down to irq save/restore)
- * but we can cheat a bit since cmpxchg() atomic_ops_lock() would cause irqs to
- * be disabled thus can't possibly be interrupted/preempted/clobbered by xchg()
- * Other way around, xchg is one instruction anyways, so can't be interrupted
- * as such
*/
-#if !defined(CONFIG_ARC_HAS_LLSC) && defined(CONFIG_SMP)
+#ifndef CONFIG_ARC_HAS_LLSC
#define arch_xchg(ptr, with) \
({ \
@@ -134,10 +128,6 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
old_val; \
})
-#else
-
-#define arch_xchg(ptr, with) _xchg(ptr, with)
-
#endif
/*
--
2.25.1