Message-ID: <6cff2a895db94e6fadd4ddffb8906a73@AcuMS.aculab.com>
Date: Tue, 15 Jun 2021 08:57:07 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Matteo Croce' <mcroce@...ux.microsoft.com>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Atish Patra <atish.patra@....com>,
"Emil Renner Berthing" <kernel@...il.dk>,
Akira Tsukamoto <akira.tsukamoto@...il.com>,
Drew Fustini <drew@...gleboard.org>,
Bin Meng <bmeng.cn@...il.com>
Subject: RE: [PATCH 1/3] riscv: optimized memcpy
From: Matteo Croce
> Sent: 15 June 2021 03:38
>
> Write a C version of memcpy() which uses the biggest data size allowed,
> without generating unaligned accesses.
I'm surprised that the C loop:
> + for (; count >= bytes_long; count -= bytes_long)
> + *d.ulong++ = *s.ulong++;
ends up being faster than the ASM 'read lots' - 'write lots' loop.
Especially since there was an earlier patch to convert
copy_to/from_user() to use the ASM 'read lots' - 'write lots' loop
instead of a tight single register copy loop.
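Roughly the two loop shapes being compared - an illustrative sketch only,
not the patch's code, assuming src/dst are already long-aligned and the
tail bytes are handled elsewhere:

#include <stddef.h>

void *copy_words_tight(void *dst, const void *src, size_t count)
{
	unsigned long *d = dst;
	const unsigned long *s = src;

	/* One load and one store per iteration. */
	for (; count >= sizeof(long); count -= sizeof(long))
		*d++ = *s++;
	return dst;
}

void *copy_words_unrolled(void *dst, const void *src, size_t count)
{
	unsigned long *d = dst;
	const unsigned long *s = src;

	/* 'Read lots' - 'write lots': several loads are issued before
	 * any store, so the loads can overlap and a simple cpu is not
	 * asked to do a read and a write in the same clock. */
	for (; count >= 4 * sizeof(long); count -= 4 * sizeof(long)) {
		unsigned long a = s[0], b = s[1], c = s[2], e = s[3];
		d[0] = a; d[1] = b; d[2] = c; d[3] = e;
		s += 4;
		d += 4;
	}
	/* Remaining bytes handled by the caller (omitted here). */
	return dst;
}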
I'd also guess that the performance needs to be measured on
different classes of riscv cpu.
A simple cpu will behave differently to one that can execute
multiple instructions per clock.
Any form of 'out of order' execution also changes things.
The other big difference is whether the cpu can do a memory
read and a write in the same clock.
I'd guess that riscv cpus exist with some/all of those features.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)