Message-ID: <20260126002936.2676435-3-elver@google.com>
Date: Mon, 26 Jan 2026 01:25:11 +0100
From: Marco Elver <elver@...gle.com>
To: elver@...gle.com, Peter Zijlstra <peterz@...radead.org>, Will Deacon <will@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>, Waiman Long <longman@...hat.com>,
Bart Van Assche <bvanassche@....org>, llvm@...ts.linux.dev,
Catalin Marinas <catalin.marinas@....com>, Arnd Bergmann <arnd@...db.de>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: [PATCH 2/3] arm64: Optimize __READ_ONCE() with CONFIG_LTO=y
Rework arm64 LTO __READ_ONCE() to improve code generation as follows:
1. Replace the _Generic-based __unqual_scalar_typeof() with the builtin
   typeof_unqual(). This strips qualifiers from all types, not just
   integer types, which is required so that __u.__val can be assigned to
   (i.e. is non-const) in the non-atomic case; #2 relies on this. See
   the sketch below.

   One subtle point: with the old __unqual_scalar_typeof(), a non-integer
   __val inherits the const or volatile qualifier of the variable passed
   in. A volatile __u.__val then forces a load from the stack; a const
   __u.__val looks odd if the underlying storage changes, since the
   compiler is told said member is "const" -- it smells like UB.
2. Eliminate the 'atomic' flag and the ternary conditional expression.
   Move the fallback volatile load into the default case of the switch,
   ensuring __u is initialized on all paths, so the statement expression
   can unconditionally return __u.__val.
This refactoring appears to help the compiler improve (or fix) code
generation.
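For illustration, a minimal userspace sketch of the qualifier-stripping
difference referred to in #1 (not kernel code; assumes a C23 compiler
providing typeof_unqual(), the equivalent of the builtin that the kernel's
TYPEOF_UNQUAL() wraps where available):

	struct pair { int a, b; };

	int main(void)
	{
		const struct pair p = { 1, 2 };

		/*
		 * typeof_unqual() strips the const, so 'val' is a plain
		 * struct pair. With the _Generic-based helper, a non-integer
		 * type keeps its qualifiers, and this assignment would not
		 * compile (const) or would force extra loads (volatile).
		 */
		union { typeof_unqual(p) val; char c[1]; } u;

		u.val = p;	/* mirrors the plain assignment in the default case */
		return u.val.a + u.val.b;
	}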
With a defconfig + LTO + debug options build, we observe different
codegen for the following functions:
btrfs_reclaim_sweep (708 -> 1032 bytes)
btrfs_sinfo_bg_reclaim_threshold_store (200 -> 204 bytes)
check_mem_access (3652 -> 3692 bytes) [inlined bpf_map_is_rdonly]
console_flush_all (1268 -> 1264 bytes)
console_lock_spinning_disable_and_check (180 -> 176 bytes)
igb_add_filter (640 -> 636 bytes)
igb_config_tx_modes (2404 -> 2400 bytes)
kvm_vcpu_on_spin (480 -> 476 bytes)
map_freeze (376 -> 380 bytes)
netlink_bind (1664 -> 1656 bytes)
nmi_cpu_backtrace (404 -> 400 bytes)
set_rps_cpu (516 -> 520 bytes)
swap_cluster_readahead (944 -> 932 bytes)
tcp_accecn_third_ack (328 -> 336 bytes)
tcp_create_openreq_child (1764 -> 1772 bytes)
tcp_data_queue (5784 -> 5892 bytes)
tcp_ecn_rcv_synack (620 -> 628 bytes)
xen_manage_runstate_time (944 -> 896 bytes)
xen_steal_clock (340 -> 296 bytes)
The size increase of some functions is due to more aggressive inlining
enabled by the better codegen (in this build, e.g. bpf_map_is_rdonly is
no longer emitted standalone because it is now inlined completely).
Signed-off-by: Marco Elver <elver@...gle.com>
---
arch/arm64/include/asm/rwonce.h | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
index fc0fb42b0b64..9963948f4b44 100644
--- a/arch/arm64/include/asm/rwonce.h
+++ b/arch/arm64/include/asm/rwonce.h
@@ -32,8 +32,7 @@
#define __READ_ONCE(x) \
({ \
typeof(&(x)) __x = &(x); \
- int atomic = 1; \
- union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u; \
+ union { TYPEOF_UNQUAL(*__x) __val; char __c[1]; } __u; \
switch (sizeof(x)) { \
case 1: \
asm volatile(__LOAD_RCPC(b, %w0, %1) \
@@ -56,9 +55,9 @@
: "Q" (*__x) : "memory"); \
break; \
default: \
- atomic = 0; \
+ __u.__val = *(volatile typeof(*__x) *)__x; \
} \
- atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\
+ __u.__val; \
})
#endif /* !BUILD_VDSO */
--
2.52.0.457.g6b5491de43-goog