Message-Id: <20210512093310.5635-1-bp@alien8.de>
Date: Wed, 12 May 2021 11:33:10 +0200
From: Borislav Petkov <bp@alien8.de>
To: X86 ML <x86@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH] x86/asm: Simplify __smp_mb() definition

From: Borislav Petkov <bp@suse.de>

Drop the bitness ifdeffery in favor of using _ASM_SP, which is the
helper macro for the rSP register specification for 32 and 64 bit,
depending on the build.

No functional changes.

Signed-off-by: Borislav Petkov <bp@suse.de>
---
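For reference: _ASM_SP is defined in arch/x86/include/asm/asm.h and,
in C code, expands to the stack pointer's register name as a string
literal, so the compiler pastes it into the asm() template and the
bitness choice is made once, in the header, instead of at every use
site. Below is a minimal user-space sketch of the same technique;
MY_ASM_SP and my_smp_mb() are made-up stand-ins, not the kernel macros:

  #include <stdio.h>

  /*
   * Stand-in for the kernel's _ASM_SP: select the stack pointer name
   * as a string literal, once, depending on the build's bitness.
   */
  #ifdef __x86_64__
  #define MY_ASM_SP "rsp"
  #else
  #define MY_ASM_SP "esp"
  #endif

  /* One definition serves both builds via string-literal pasting. */
  #define my_smp_mb() \
          asm volatile("lock; addl $0,-4(%%" MY_ASM_SP ")" ::: "memory", "cc")

  int main(void)
  {
          my_smp_mb();
          printf("barrier executed\n");
          return 0;
  }
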
 arch/x86/include/asm/barrier.h | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 4819d5e5a335..3ba772a69cc8 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 #define dma_rmb() barrier()
 #define dma_wmb() barrier()
 
-#ifdef CONFIG_X86_32
-#define __smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
-#else
-#define __smp_mb() asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
-#endif
+#define __smp_mb() asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
+
 #define __smp_rmb() dma_rmb()
 #define __smp_wmb() barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
--
2.29.2
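
The reason a dummy LOCK ADD to a stack slot works as a full barrier at
all: on x86, any LOCK-prefixed read-modify-write has full-fence memory
ordering semantics, and it is typically cheaper than MFENCE. A
user-space store-buffering litmus test which shows the barrier doing
its job (again, my_smp_mb()/MY_ASM_SP are made-up stand-ins, not the
kernel macros; build with gcc -O2 -pthread):

  #include <pthread.h>
  #include <stdio.h>

  #ifdef __x86_64__
  #define MY_ASM_SP "rsp"
  #else
  #define MY_ASM_SP "esp"
  #endif

  /* The same dummy LOCK ADD full barrier as in the patch. */
  #define my_smp_mb() \
          asm volatile("lock; addl $0,-4(%%" MY_ASM_SP ")" ::: "memory", "cc")

  static volatile int X, Y, r0, r1;

  static void *t0(void *arg)
  {
          (void)arg;
          X = 1;
          my_smp_mb();    /* order the store to X before the load of Y */
          r0 = Y;
          return NULL;
  }

  static void *t1(void *arg)
  {
          (void)arg;
          Y = 1;
          my_smp_mb();    /* order the store to Y before the load of X */
          r1 = X;
          return NULL;
  }

  int main(void)
  {
          /* With the barriers, r0 == 0 && r1 == 0 must never happen;
           * comment out my_smp_mb() and the CPU's store buffer can
           * produce it. */
          for (int i = 0; i < 100000; i++) {
                  pthread_t a, b;

                  X = Y = 0;
                  pthread_create(&a, NULL, t0, NULL);
                  pthread_create(&b, NULL, t1, NULL);
                  pthread_join(a, NULL);
                  pthread_join(b, NULL);
                  if (r0 == 0 && r1 == 0)
                          printf("reordering observed at iteration %d\n", i);
          }
          return 0;
  }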