Message-Id: <20210625010200.362755-1-mcroce@linux.microsoft.com>
Date: Fri, 25 Jun 2021 03:01:57 +0200
From: Matteo Croce <mcroce@...ux.microsoft.com>
To: linux-kernel@...r.kernel.org, Nick Kossifidis <mick@....forth.gr>,
Guo Ren <guoren@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
David Laight <David.Laight@...lab.com>,
Palmer Dabbelt <palmer@...belt.com>,
Emil Renner Berthing <kernel@...il.dk>,
Drew Fustini <drew@...gleboard.org>
Cc: linux-arch@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
linux-riscv@...ts.infradead.org
Subject: [PATCH 0/3] lib/string: optimized mem* functions

From: Matteo Croce <mcroce@...rosoft.com>

Rewrite the generic mem{cpy,move,set} so that memory is accessed with
the widest size possible, but without doing unaligned accesses.
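
For reference, here is a rough plain-C sketch of the idea. This is not the
patch itself (the real code is in lib/string.c and is more complete); it only
shows the general shape: copy bytes until the destination is word aligned,
then move word-sized chunks, then the tail. The name sketch_memcpy is just
for illustration.

#include <stddef.h>
#include <stdint.h>

/*
 * Simplified sketch, not the lib/string.c code: byte-copy until the
 * destination is word aligned, then copy full words, then the tail.
 * This sketch simply falls back to byte copies when the source cannot
 * be word aligned as well.
 */
void *sketch_memcpy(void *dest, const void *src, size_t count)
{
	unsigned char *d = dest;
	const unsigned char *s = src;

	/* Align the destination to sizeof(long), one byte at a time. */
	while (count && ((uintptr_t)d & (sizeof(long) - 1))) {
		*d++ = *s++;
		count--;
	}

	/* Bulk copy in word-sized chunks if the source is aligned too. */
	if (!((uintptr_t)s & (sizeof(long) - 1))) {
		while (count >= sizeof(long)) {
			*(long *)d = *(const long *)s;
			d += sizeof(long);
			s += sizeof(long);
			count -= sizeof(long);
		}
	}

	/* Copy the remaining bytes (or everything, if src stayed unaligned). */
	while (count--)
		*d++ = *s++;

	return dest;
}
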
This was originally posted as C string functions for RISC-V[1], but since
the code contained nothing RISC-V specific, it was proposed for the generic
lib/string.c implementation instead.

Tested on RISC-V and on x86_64 by undefining __HAVE_ARCH_MEM{CPY,SET,MOVE}
and HAVE_EFFICIENT_UNALIGNED_ACCESS.
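
(As background, the generic routines in lib/string.c are guarded by those
same __HAVE_ARCH_* macros, so hiding the architecture's definitions is
enough to build and exercise the generic code. A simplified illustration of
the guard, not the actual source:)

#include <stddef.h>

/*
 * Simplified illustration: lib/string.c only provides the fallback
 * when the architecture does not define __HAVE_ARCH_MEMSET, so
 * dropping that define on x86_64 makes the build use the generic
 * implementation, which is what the test above relies on.
 */
#ifndef __HAVE_ARCH_MEMSET
void *memset(void *s, int c, size_t count)
{
	unsigned char *p = s;

	while (count--)
		*p++ = (unsigned char)c;
	return s;
}
#endif
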
Further testing on big-endian machines would be appreciated, as I don't
have such hardware at the moment.

[1] https://lore.kernel.org/linux-riscv/20210617152754.17960-1-mcroce@linux.microsoft.com/

Matteo Croce (3):
  lib/string: optimized memcpy
  lib/string: optimized memmove
  lib/string: optimized memset

 lib/string.c | 129 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 112 insertions(+), 17 deletions(-)

--
2.31.1