Message-Id: <20251202074121.81364-1-maohan4761@gmail.com>
Date: Tue, 2 Dec 2025 15:41:20 +0800
From: maohan4761@...il.com
To: pjw@...nel.org,
palmer@...belt.com
Cc: guoren@...nel.org,
linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Mao Han <han_mao@...ux.alibaba.com>
Subject: [PATCH 0/1] riscv: Optimize user copy with efficient unaligned access support
From: Mao Han <han_mao@...ux.alibaba.com>

Many modern high-performance processors handle unaligned memory
accesses with performance nearly on par with aligned accesses. However,
the current kernel implementation of
fallback_scalar_usercopy_sum_enabled still defaults to aligning accesses
to register-size boundaries. This path incurs additional
shift-and-combine operations that underuse the hardware's capabilities
and cannot issue wide load/store instructions for trailing data chunks
smaller than 9 * SZREG.
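
For illustration, here is a rough C sketch of that shift-and-combine
pattern (the kernel's actual implementation is the assembly in
arch/riscv/lib/uaccess.S; the function and variable names below are
made up for this example):

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch only: dst is word-aligned, src is not, so every stored
 * word is assembled from two aligned loads plus shifts. Assumes
 * little-endian, off != 0, and that the aligned word containing
 * src's first byte may be read in full; the real assembly handles
 * those edge cases explicitly.
 */
static void copy_shift_combine(unsigned long *dst,
			       const unsigned char *src, size_t words)
{
	size_t off = (uintptr_t)src % sizeof(unsigned long);
	const unsigned long *s = (const unsigned long *)(src - off);
	unsigned int rshift = off * 8;
	unsigned int lshift = sizeof(unsigned long) * 8 - rshift;
	unsigned long lo = *s++;			/* aligned load */

	while (words--) {
		unsigned long hi = *s++;		/* aligned load */
		*dst++ = (lo >> rshift) | (hi << lshift);
		lo = hi;
	}
}
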
This patch introduces an optimized code path, enabled by
RISCV_EFFICIENT_UNALIGNED_ACCESS, that relies on hardware support for
efficient unaligned memory accesses. With it, the copy loop can use the
maximum available load/store width and avoid the complex
bit-manipulation logic otherwise needed to reassemble words.
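
As a rough C illustration of the idea (again, not the actual patch,
which is written in assembly), the fast path boils down to issuing
full-width accesses regardless of alignment:

#include <stddef.h>
#include <string.h>

/*
 * Sketch only: with efficient hardware unaligned access, the copy
 * can use word-sized loads/stores at any alignment and fall back to
 * bytes just for the tail. memcpy() of a word is the portable way
 * to express an unaligned access in C; compilers lower it to plain
 * loads/stores on such targets.
 */
static void copy_unaligned_words(unsigned char *dst,
				 const unsigned char *src, size_t len)
{
	while (len >= sizeof(unsigned long)) {
		unsigned long w;

		memcpy(&w, src, sizeof(w));	/* unaligned load */
		memcpy(dst, &w, sizeof(w));	/* unaligned store */
		src += sizeof(w);
		dst += sizeof(w);
		len -= sizeof(w);
	}
	while (len--)				/* tail bytes */
		*dst++ = *src++;
}
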
The optimization significantly improves the performance of
__asm_copy_to/from_user for transfers smaller than
riscv_v_usercopy_threshold. In particular, for small transfers (8-72
bytes) with unaligned buffers, the gains range from 40% to 600%.

lmbench's sig hndl/catch test shows approximately a 20% improvement,
and iperf small-packet throughput sees noticeable gains as well.

Mao Han (1):
  riscv: Optimize user copy with efficient unaligned access support

 arch/riscv/lib/uaccess.S | 113 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

--
2.25.1