Message-Id: <20210617012509.34265-1-mcroce@linux.microsoft.com>
Date: Thu, 17 Jun 2021 03:25:06 +0200
From: Matteo Croce <mcroce@...ux.microsoft.com>
To: linux-riscv@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Atish Patra <atish.patra@....com>,
Emil Renner Berthing <kernel@...il.dk>,
Akira Tsukamoto <akira.tsukamoto@...il.com>,
Drew Fustini <drew@...gleboard.org>,
Bin Meng <bmeng.cn@...il.com>,
David Laight <David.Laight@...lab.com>,
Guo Ren <guoren@...nel.org>
Subject: [PATCH v2 0/3] riscv: optimized mem* functions

From: Matteo Croce <mcroce@...rosoft.com>

Replace the assembly mem{cpy,move,set} with C equivalents.
Try to access RAM with the largest bit width possible, but without
doing unaligned accesses.
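
The gist of the approach, as a rough user-space sketch (illustrative
only, not the actual arch/riscv/lib/string.c code; the function name is
made up here and the real version also handles the small-size threshold
mentioned in the changelog below):

#include <stddef.h>
#include <stdint.h>

void *sketch_memcpy(void *dest, const void *src, size_t count)
{
	unsigned char *d = dest;
	const unsigned char *s = src;

	/*
	 * Word-sized copies only work if source and destination can
	 * reach a long-aligned address at the same time.
	 */
	if (((uintptr_t)d % sizeof(long)) == ((uintptr_t)s % sizeof(long))) {
		/* Byte copy until the destination is long-aligned. */
		while (count && ((uintptr_t)d % sizeof(long))) {
			*d++ = *s++;
			count--;
		}

		/* Bulk copy one machine word at a time. */
		while (count >= sizeof(long)) {
			*(long *)d = *(const long *)s;
			d += sizeof(long);
			s += sizeof(long);
			count -= sizeof(long);
		}
	}

	/* Tail bytes (or the whole buffer, if the alignments differ). */
	while (count--)
		*d++ = *s++;

	return dest;
}
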
Tested on a BeagleV Starlight with a SiFive U74 core, where the
improvement is noticeable.

v1 -> v2:
- reduce the threshold from 64 to 16 bytes
- fix KASAN build
- optimize memset (a rough word-fill sketch follows this list)
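
For the memset change, the idea is the same word-wide access: broadcast
the fill byte into a long and store the aligned middle of the buffer one
word at a time. Again just an illustrative sketch, not the code in the
series:

#include <stddef.h>
#include <stdint.h>

void *sketch_memset(void *s, int c, size_t count)
{
	unsigned char *p = s;
	unsigned long pattern = 0;
	size_t i;

	/* Replicate the fill byte into every byte of a machine word. */
	for (i = 0; i < sizeof(long); i++)
		pattern = (pattern << 8) | (unsigned char)c;

	/* Byte stores until the pointer is long-aligned. */
	while (count && ((uintptr_t)p % sizeof(long))) {
		*p++ = (unsigned char)c;
		count--;
	}

	/* Word stores for the bulk of the buffer. */
	while (count >= sizeof(long)) {
		*(unsigned long *)p = pattern;
		p += sizeof(long);
		count -= sizeof(long);
	}

	/* Remaining tail bytes. */
	while (count--)
		*p++ = (unsigned char)c;

	return s;
}
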
Matteo Croce (3):
  riscv: optimized memcpy
  riscv: optimized memmove
  riscv: optimized memset

 arch/riscv/include/asm/string.h |  18 ++--
 arch/riscv/kernel/Makefile      |   1 -
 arch/riscv/kernel/riscv_ksyms.c |  17 ----
 arch/riscv/lib/Makefile         |   4 +-
 arch/riscv/lib/memcpy.S         | 108 ---------------------
 arch/riscv/lib/memmove.S        |  64 -------------
 arch/riscv/lib/memset.S         | 113 ----------------------
 arch/riscv/lib/string.c         | 162 ++++++++++++++++++++++++++++++++
 8 files changed, 172 insertions(+), 315 deletions(-)
 delete mode 100644 arch/riscv/kernel/riscv_ksyms.c
 delete mode 100644 arch/riscv/lib/memcpy.S
 delete mode 100644 arch/riscv/lib/memmove.S
 delete mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/string.c

--
2.31.1