Message-ID: <48db247e-f6fd-cb4b-7cc5-455bf26bb153@gmail.com>
Date: Mon, 11 Jul 2022 18:00:27 +0300
From: Andrey Semashev <andrey.semashev@...il.com>
To: Yu-Jen Chang <arthurchang09@...il.com>
Cc: andy@...nel.org, akinobu.mita@...il.com,
Ching-Chun Huang <jserv@...s.ncku.edu.tw>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] lib/string.c: Optimize memchr()
On 7/11/22 17:52, Yu-Jen Chang wrote:
> On Mon, Jul 11, 2022 at 4:01 AM, Andrey Semashev <andrey.semashev@...il.com> wrote:
>>
>> On 7/10/22 17:28, Yu-Jen Chang wrote:
>>> The original version of memchr() is implemented with a byte-wise
>>> comparison technique, which does not make full use of 64-bit or
>>> 32-bit CPU registers. We use word-wise comparison so that 8
>>> characters can be compared at the same time. This code is based on
>>> David Laight's implementation.
>>>
>>> We create two files to measure the performance. The first file
>>> contains on average 10 characters before the target character.
>>> The second file contains at least 1000 characters before the
>>> target character. Our implementation of memchr() is slightly
>>> better in the first test and nearly 4x faster than the original
>>> implementation in the second test.
>>>
>>> Signed-off-by: Yu-Jen Chang <arthurchang09@...il.com>
>>> Signed-off-by: Ching-Chun (Jim) Huang <jserv@...s.ncku.edu.tw>
>>> ---
>>> lib/string.c | 28 +++++++++++++++++++++-------
>>> 1 file changed, 21 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/lib/string.c b/lib/string.c
>>> index 80469e6c3..8ca965431 100644
>>> --- a/lib/string.c
>>> +++ b/lib/string.c
>>> @@ -905,21 +905,35 @@ EXPORT_SYMBOL(strnstr);
>>> #ifndef __HAVE_ARCH_MEMCHR
>>> /**
>>> * memchr - Find a character in an area of memory.
>>> - * @s: The memory area
>>> + * @p: The memory area
>>> * @c: The byte to search for
>>> - * @n: The size of the area.
>>> + * @length: The size of the area.
>>> *
>>> * returns the address of the first occurrence of @c, or %NULL
>>> * if @c is not found
>>> */
>>> -void *memchr(const void *s, int c, size_t n)
>>> +void *memchr(const void *p, int c, unsigned long length)
I didn't comment on this initially, but is the change of length type
intentional? Why?
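For reference, the prototype being replaced (and the standard C one)
takes the length as size_t:

	void *memchr(const void *s, int c, size_t n);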
>>> {
>>> - const unsigned char *p = s;
>>> - while (n-- != 0) {
>>> - if ((unsigned char)c == *p++) {
>>> - return (void *)(p - 1);
>>> + u64 mask, val;
>>> + const void *end = p + length;
>>> +
>>> + c &= 0xff;
>>> + if (p <= end - 8) {
>>> + mask = c;
>>> + MEMCHR_MASK_GEN(mask);
>>> +
>>> + for (; p <= end - 8; p += 8) {
>>> + val = *(u64 *)p ^ mask;
>>
>> What if p is not aligned to 8 (or 4 on 32-bit targets) bytes? Not all
>> targets support (efficient) unaligned loads, do they?
>
> I think it works if p is not aligned to 8 or 4 bytes.
>
> Let's say the string is 10 bytes. The for loop here will search the first
> 8 bytes. If the target character is in the last 2 bytes, the second for
> loop will find it. It also works this way on a 32-bit machine.
I think you're missing the point. Loads at unaligned addresses may not
be supported by the hardware with conventional load instructions, or may
be inefficient. Given that this memchr implementation is used as a fallback
when no hardware-specific version is available, you should be
conservative wrt. hardware capabilities and behavior. You should
probably have a pre-alignment loop.
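
Something along these lines (untested, just to illustrate the idea; it
assumes c has already been masked to a byte value, as the patch does):

	/* Scan byte-wise until p is 8-byte aligned (or the end is
	 * reached), so that the word loop below only performs
	 * aligned loads. */
	for (; p < end && ((unsigned long)p & 7) != 0; p++)
		if (*(const unsigned char *)p == c)
			return (void *)p;

On 32-bit targets the alignment and word size would be 4 rather than 8.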
>>
>>> + if ((val + 0xfefefefefefefeffu) &
>>> + (~val & 0x8080808080808080u))
>>> + break;
>>> }
>>> }
>>> +
>>> + for (; p < end; p++)
>>> + if (*(unsigned char *)p == c)
>>> + return (void *)p;
>>> +
>>> return NULL;
>>> }
>>> EXPORT_SYMBOL(memchr);
>>