Message-ID: <CAD4RrFPfwu4Ascj5tdz8qq2Qgnu5GN2eHjVwMW5AqUa1H7JapA@mail.gmail.com>
Date:   Tue, 12 Jul 2022 22:58:32 +0800
From:   Yu-Jen Chang <arthurchang09@...il.com>
To:     Andrey Semashev <andrey.semashev@...il.com>
Cc:     andy@...nel.org, akinobu.mita@...il.com,
        Ching-Chun Huang <jserv@...s.ncku.edu.tw>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] lib/string.c: Optimize memchr()

Andrey Semashev <andrey.semashev@...il.com> wrote on Mon, Jul 11, 2022 at 11:00 PM:
>
> On 7/11/22 17:52, Yu-Jen Chang wrote:
> > Andrey Semashev <andrey.semashev@...il.com> wrote on Mon, Jul 11, 2022 at 4:01 AM:
> >>
> >> On 7/10/22 17:28, Yu-Jen Chang wrote:
> >>> The original version of memchr() is implemented with a byte-wise
> >>> comparison technique, which does not fully use the 64-bit or 32-bit
> >>> registers of the CPU. We use word-wise comparison so that 8 characters
> >>> can be compared at the same time on the CPU. This code is based on
> >>> David Laight's implementation.
> >>>
> >>> We created two files to measure the performance. The first file
> >>> contains on average 10 characters before the target character.
> >>> The second file contains at least 1000 characters before the
> >>> target character. Our implementation of memchr() is slightly
> >>> better in the first test and nearly 4x faster than the original
> >>> implementation in the second test.
> >>>
> >>> Signed-off-by: Yu-Jen Chang <arthurchang09@...il.com>
> >>> Signed-off-by: Ching-Chun (Jim) Huang <jserv@...s.ncku.edu.tw>
> >>> ---
> >>>  lib/string.c | 28 +++++++++++++++++++++-------
> >>>  1 file changed, 21 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/lib/string.c b/lib/string.c
> >>> index 80469e6c3..8ca965431 100644
> >>> --- a/lib/string.c
> >>> +++ b/lib/string.c
> >>> @@ -905,21 +905,35 @@ EXPORT_SYMBOL(strnstr);
> >>>  #ifndef __HAVE_ARCH_MEMCHR
> >>>  /**
> >>>   * memchr - Find a character in an area of memory.
> >>> - * @s: The memory area
> >>> + * @p: The memory area
> >>>   * @c: The byte to search for
> >>> - * @n: The size of the area.
> >>> + * @length: The size of the area.
> >>>   *
> >>>   * returns the address of the first occurrence of @c, or %NULL
> >>>   * if @c is not found
> >>>   */
> >>> -void *memchr(const void *s, int c, size_t n)
> >>> +void *memchr(const void *p, int c, unsigned long length)
>
> I didn't comment on this initially, but is the change of length type
> intentional? Why?

It is my mistake. I will change the type back to size_t.

>
> >>>  {
> >>> -     const unsigned char *p = s;
> >>> -     while (n-- != 0) {
> >>> -             if ((unsigned char)c == *p++) {
> >>> -                     return (void *)(p - 1);
> >>> +     u64 mask, val;
> >>> +     const void *end = p + length;
> >>> +
> >>> +     c &= 0xff;
> >>> +     if (p <= end - 8) {
> >>> +             mask = c;
> >>> +             MEMCHR_MASK_GEN(mask);
> >>> +
> >>> +             for (; p <= end - 8; p += 8) {
> >>> +                     val = *(u64 *)p ^ mask;
> >>
> >> What if p is not aligned to 8 (or 4 on 32-bit targets) bytes? Not all
> >> targets support (efficient) unaligned loads, do they?
> >
> > I think it works even if p is not aligned to 8 or 4 bytes.
> >
> > Let's say the string is 10 bytes. The for loop here will search the first
> > 8 bytes. If the target character is in the last 2 bytes, the second for
> > loop will find it. It also works like this on a 32-bit machine.
>
> I think you're missing the point. Loads at unaligned addresses may not
> be allowed by hardware using conventional load instructions or may be
> inefficient. Given that this memchr implementation is used as a fallback
> when no hardware-specific version is available, you should be
> conservative wrt. hardware capabilities and behavior. You should
> probably have a pre-alignment loop.

Got it. I added a pre-alignment loop. It aligns the address to 8 or 4 bytes.

void *memchr(const void *p, int c, size_t length)
{
    u64 mask, val;
    const void *end = p + length;
    c &= 0xff;
    while ((long)p & (sizeof(long) - 1)) {
        if (p >= end)
            return NULL;
        if (*(unsigned char *)p == c)
            return (void *)p;
        p++;
    }
    if (p <= end - 8) {
        mask = c;
        MEMCHR_MASK_GEN(mask);

        for (; p <= end - 8; p += 8) {
            val = *(u64 *)p ^ mask;
            if ((val + 0xfefefefefefefeffull) &
                (~val & 0x8080808080808080ull))
                break;
        }
        }
    }

    for (; p < end; p++)
        if (*(unsigned char *)p == c)
            return (void *)p;

    return NULL;
}
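
For reference, here is a small userspace sketch of the two tricks the word
loop relies on: replicating the target byte across a 64-bit word (the
MEMCHR_MASK_GEN below is only a hypothetical stand-in; the macro actually
added by this series may be defined differently), and the classic
"has a zero byte" test, where 0xfefefefefefefeff is just
-0x0101010101010101:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stand-in for MEMCHR_MASK_GEN: replicate the low byte of
 * `mask` into every byte of a 64-bit word, e.g. 0x78 -> 0x7878787878787878.
 * The macro in the real patch may be written differently (and may be
 * 32-bit wide on 32-bit kernels).
 */
#define MEMCHR_MASK_GEN(mask)          \
    do {                               \
        (mask) |= (mask) << 8;         \
        (mask) |= (mask) << 16;        \
        (mask) |= (mask) << 32;        \
    } while (0)

/*
 * Word-at-a-time test from the loop body: XOR-ing a word with the
 * replicated mask turns every byte equal to the target into 0x00, and
 * (v - 0x0101..01) & ~v & 0x8080..80 is non-zero iff v has a zero byte.
 */
static int word_has_target(uint64_t word, uint64_t mask)
{
    uint64_t val = word ^ mask;

    return ((val + 0xfefefefefefefeffull) &
            (~val & 0x8080808080808080ull)) != 0;
}

int main(void)
{
    uint64_t mask = 'x', word;

    MEMCHR_MASK_GEN(mask);

    memcpy(&word, "abcdefgx", 8);
    printf("%d\n", word_has_target(word, mask));    /* prints 1 */

    memcpy(&word, "abcdefgh", 8);
    printf("%d\n", word_has_target(word, mask));    /* prints 0 */

    return 0;
}

The same test is then repeated byte-wise in the tail loop for the last
(length % 8) bytes, so a hit inside the word only needs to tell us which
8-byte block to rescan.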

>
> >>
> >>> +                     if ((val + 0xfefefefefefefeffu) &
> >>> +                         (~val & 0x8080808080808080u))
> >>> +                             break;
> >>>               }
> >>>       }
> >>> +
> >>> +     for (; p < end; p++)
> >>> +             if (*(unsigned char *)p == c)
> >>> +                     return (void *)p;
> >>> +
> >>>       return NULL;
> >>>  }
> >>>  EXPORT_SYMBOL(memchr);
> >>
>
