Message-ID: <59bed43df37b4361a8a1cb31b8582e9b@AcuMS.aculab.com>
Date: Sun, 28 Jan 2024 12:47:00 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Jisheng Zhang' <jszhang@...nel.org>, Paul Walmsley
<paul.walmsley@...ive.com>, Palmer Dabbelt <palmer@...belt.com>, Albert Ou
<aou@...s.berkeley.edu>
CC: "linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Matteo Croce
<mcroce@...rosoft.com>, kernel test robot <lkp@...el.com>
Subject: RE: [PATCH 2/3] riscv: optimized memmove
From: Jisheng Zhang
> Sent: 28 January 2024 11:10
>
> When the destination buffer is before the source one, or when the
> buffers don't overlap, it's safe to use memcpy() instead, which is
> optimized to use the largest data size possible.
>
..
> + * Simply check if the buffers overlap and call memcpy() in that case,
> + * otherwise do a simple one-byte-at-a-time backward copy.
I'd at least do a 64-bit copy loop if the addresses are aligned.
Thinks a bit more....
Put the 'copy 64 bytes' code (the body of the memcpy() loop)
into an inline function and call it with increasing addresses
in memcpy() and decreasing addresses in memmove().
So memcpy() contains:
	src_lim = src + count;
	... alignment copy
	for (; src + 64 <= src_lim; src += 64, dest += 64)
		copy_64_bytes(dest, src);
... tail copy
Then you can do something very similar for backwards copies.
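The suggestion above could be sketched in plain C roughly as follows. The names copy_64_bytes, fwd_copy and bwd_copy are illustrative, not from the patch; the helper stages through a temporary buffer (which the compiler can turn into word loads/stores), so the same block copy is safe in both directions even for overlapping buffers, and the backward loop simply walks the blocks from the high end down:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: copy one 64-byte block, staged through a
 * temporary so a partially overlapping block is still copied safely. */
static inline void copy_64_bytes(unsigned char *dest, const unsigned char *src)
{
	uint64_t tmp[8];

	memcpy(tmp, src, sizeof(tmp));	/* 8 x 64-bit loads */
	memcpy(dest, tmp, sizeof(tmp));	/* 8 x 64-bit stores */
}

/* Forward copy: 64-byte blocks at increasing addresses, then a byte tail. */
static void *fwd_copy(void *d, const void *s, size_t count)
{
	unsigned char *dest = d;
	const unsigned char *src = s;
	const unsigned char *src_lim = src + count;

	for (; src + 64 <= src_lim; src += 64, dest += 64)
		copy_64_bytes(dest, src);
	while (src < src_lim)
		*dest++ = *src++;
	return d;
}

/* Backward copy: the same block helper, decreasing addresses,
 * for the dest-overlaps-end-of-src case in memmove(). */
static void *bwd_copy(void *d, const void *s, size_t count)
{
	unsigned char *dest = (unsigned char *)d + count;
	const unsigned char *src = (const unsigned char *)s + count;
	const unsigned char *src_base = s;

	while (src - src_base >= 64) {
		src -= 64;
		dest -= 64;
		copy_64_bytes(dest, src);
	}
	while (src > src_base)
		*--dest = *--src;
	return d;
}
```

A real kernel implementation would additionally handle the alignment head before entering the block loop; this sketch only shows the block/tail structure being described.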
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)