Date:   Mon, 16 Sep 2019 14:29:52 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     Rasmus Villemoes <linux@...musvillemoes.dk>,
        Borislav Petkov <bp@...en8.de>,
        Rasmus Villemoes <mail@...musvillemoes.dk>,
        x86-ml <x86@...nel.org>, Josh Poimboeuf <jpoimboe@...hat.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Improve memset

On Mon, Sep 16, 2019 at 10:41 AM Andy Lutomirski <luto@...nel.org> wrote:
>
> After some experimentation, I think y'all are just doing it wrong.
> GCC is very clever about this as long as it's given the chance.  This
> test, for example, generates excellent code:
>
> #include <string.h>
>
> __THROW __nonnull ((1)) __attribute__((always_inline)) void
> *memset(void *s, int c, size_t n)
> {
>     asm volatile ("nop");
>     return s;
> }
>
> /* generates 'nop' */
> void zero(void *dest, size_t size)
> {
>     __builtin_memset(dest, 0, size);
> }

I think the point was that we'd like to get the default memset (for
when __builtin_memset() doesn't generate inline code) also inlined
into just "rep stosb", instead of that tail-call "jmp memset".

> So I'm thinking maybe compiler.h should actually do something like:
>
> #define memset __builtin_memset
>
> and we should have some appropriate magic so that the memset inline is
> exempt from the macro.

That "appropriate magic" is easy enough: make sure the memset inline
shows up before the macro definition.
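
Ie. something like this ordering - a sketch only, with a dumb byte
loop standing in for the real arch inline:

        #include <stddef.h>     /* size_t */

        /*
         * The inline has to be visible before the macro exists;
         * otherwise the macro would mangle the definition itself.
         */
        static inline __attribute__((always_inline))
        void *memset(void *s, int c, size_t n)
        {
                unsigned char *p = s;

                while (n--)
                        *p++ = c;
                return s;
        }

        /* Everything after this point goes through the builtin. */
        #define memset __builtin_memset

That way only the uses after the macro get redirected to the builtin,
and the definition itself is left alone.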

However, gcc never actually inlines the memset() for me, always doing
that "jmp memset".

> FWIW, this obviously wrong code:
>
> __THROW __nonnull ((1)) __attribute__((always_inline)) void
> *memset(void *s, int c, size_t n)
> {
>     __builtin_memset(s, c, n);
>     return s;
> }
>
> generates 'jmp memset'.  It's not entirely clear to me exactly what's
> happening here.

I think calling memset() is always the default fallback for
__builtin_memset, and because it can't be recursively inlined, it's
done as a call. Which is then turned into a tailcall because the
calling conventions match, thus the "jmp memset".

But as mentioned, the example you claim generates excellent code
doesn't actually inline memset() at all for me, and the test functions
all end up doing "jmp memset" except for the cases that get turned
into direct stores.

Eg (removing the cfi annotations etc):

        zero:
                movq    %rsi, %rdx
                xorl    %esi, %esi
                jmp     memset

rather than that "nop" showing up inside the zero function.

But I agree that when __builtin_memset() generates manual inline code,
it does the right thing, i.e.

        memset_a_bit:
                movl    $0, (%rdi)
                ret

is clearly the right thing to do. We knew that.

                  Linus
