Message-Id: <20240424191740.3088894-2-keescook@chromium.org>
Date: Wed, 24 Apr 2024 12:17:35 -0700
From: Kees Cook <keescook@...omium.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Kees Cook <keescook@...omium.org>,
Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Boqun Feng <boqun.feng@...il.com>,
Catalin Marinas <catalin.marinas@....com>,
linux-arm-kernel@...ts.infradead.org,
Jakub Kicinski <kuba@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Arnd Bergmann <arnd@...db.de>,
Andrew Morton <akpm@...ux-foundation.org>,
"David S. Miller" <davem@...emloft.net>,
David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Uros Bizjak <ubizjak@...il.com>,
linux-kernel@...r.kernel.org,
x86@...nel.org,
linux-arch@...r.kernel.org,
netdev@...r.kernel.org,
linux-hardening@...r.kernel.org
Subject: [PATCH 2/4] arm64: atomics: lse: Silence intentional wrapping addition

Annotate atomic_add_return() and atomic_sub_return() to avoid signed
overflow instrumentation: these atomics are expected to wrap around, so
perform the final addition and subtraction with the wrapping_add() and
wrapping_sub() helpers from <linux/overflow.h>, which make the
wrap-around explicit.

Signed-off-by: Kees Cook <keescook@...omium.org>
---
Cc: Will Deacon <will@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Mark Rutland <mark.rutland@....com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: linux-arm-kernel@...ts.infradead.org
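A note for reviewers unfamiliar with the helpers: wrapping_add() and
wrapping_sub() live in <linux/overflow.h> and, roughly speaking, perform
the arithmetic through the compiler's overflow builtins, so the
wrap-around is well-defined two's-complement behavior that the
signed-overflow sanitizer does not instrument. Below is a standalone
userspace sketch of the idea; it is an approximation for illustration,
not the kernel's exact macro definitions, and the sketch_* names are
made up here:

/*
 * Standalone approximation of the wrapping_add()/wrapping_sub() idea.
 * Build with e.g.: gcc -fsanitize=signed-integer-overflow sketch.c
 */
#include <stdio.h>
#include <limits.h>

/*
 * __builtin_add_overflow() always stores the wrapped two's-complement
 * result and merely reports whether the value wrapped; the builtin
 * itself is defined behavior, so UBSAN's signed-overflow check stays
 * silent.
 */
#define sketch_wrapping_add(type, a, b) ({			\
	type __val;						\
	(void)__builtin_add_overflow(a, b, &__val);		\
	__val;							\
})

#define sketch_wrapping_sub(type, a, b) ({			\
	type __val;						\
	(void)__builtin_sub_overflow(a, b, &__val);		\
	__val;							\
})

int main(void)
{
	/* INT_MAX + 1 wraps to INT_MIN without tripping the sanitizer. */
	printf("%d\n", sketch_wrapping_add(int, INT_MAX, 1));
	/* INT_MIN - 1 wraps to INT_MAX. */
	printf("%d\n", sketch_wrapping_sub(int, INT_MIN, 1));
	return 0;
}

The expectation is that the generated code for the LSE helpers does not
change; only the C-level semantics become explicitly wrapping.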
---
arch/arm64/include/asm/atomic_lse.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 87f568a94e55..a33576b20b52 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -10,6 +10,8 @@
#ifndef __ASM_ATOMIC_LSE_H
#define __ASM_ATOMIC_LSE_H

+#include <linux/overflow.h>
+
#define ATOMIC_OP(op, asm_op) \
static __always_inline void \
__lse_atomic_##op(int i, atomic_t *v) \
@@ -82,13 +84,13 @@ ATOMIC_FETCH_OP_SUB( )
static __always_inline int \
__lse_atomic_add_return##name(int i, atomic_t *v) \
{ \
- return __lse_atomic_fetch_add##name(i, v) + i; \
+ return wrapping_add(int, __lse_atomic_fetch_add##name(i, v), i);\
} \
\
static __always_inline int \
__lse_atomic_sub_return##name(int i, atomic_t *v) \
{ \
- return __lse_atomic_fetch_sub(i, v) - i; \
+ return wrapping_sub(int, __lse_atomic_fetch_sub(i, v), i); \
}

ATOMIC_OP_ADD_SUB_RETURN(_relaxed)
@@ -189,13 +191,13 @@ ATOMIC64_FETCH_OP_SUB( )
static __always_inline long \
__lse_atomic64_add_return##name(s64 i, atomic64_t *v) \
{ \
- return __lse_atomic64_fetch_add##name(i, v) + i; \
+ return wrapping_add(s64, __lse_atomic64_fetch_add##name(i, v), i); \
} \
\
static __always_inline long \
__lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
- return __lse_atomic64_fetch_sub##name(i, v) - i; \
+ return wrapping_sub(s64, __lse_atomic64_fetch_sub##name(i, v), i); \
}

ATOMIC64_OP_ADD_SUB_RETURN(_relaxed)
--
2.34.1