Date:   Mon, 16 Sep 2019 10:40:53 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Rasmus Villemoes <linux@...musvillemoes.dk>,
        Borislav Petkov <bp@...en8.de>,
        Rasmus Villemoes <mail@...musvillemoes.dk>,
        x86-ml <x86@...nel.org>, Andy Lutomirski <luto@...nel.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Improve memset

On Mon, Sep 16, 2019 at 10:25 AM Linus Torvalds
<torvalds@...ux-foundation.org> wrote:
>
> On Mon, Sep 16, 2019 at 2:18 AM Rasmus Villemoes
> <linux@...musvillemoes.dk> wrote:
> >
> > Eh, this benchmark doesn't seem to provide any hints on where to set the
> > cut-off for a compile-time constant n, i.e. the 32 in
>
> Yes, you'd need to use proper fixed-size memset's with
> __builtin_memset() to test that case. Probably easy enough with some
> preprocessor macros to expand to a lot of cases.
>
> But even then it will not show some of the advantages of inlining the
> memset (quite often you have a "memset structure to zero, then
> initialize a couple of fields" pattern, and gcc does much better for
> that when it just inlines the memset to stores - to the point of just
> removing all the memset entirely and just storing a couple of zeroes
> between the fields you initialized).
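For reference, the pattern Linus describes looks like the following minimal sketch (the struct and field values are hypothetical, purely for illustration): with the memset inlined, GCC can merge the zeroing into the subsequent field stores and emit only the zero stores that are still needed.

```c
#include <string.h>

/* Hypothetical structure, just to show the common pattern:
 * zero the whole thing, then initialize a couple of fields. */
struct point {
    int x, y, z;
};

void init(struct point *p)
{
    memset(p, 0, sizeof(*p));   /* inlined, GCC sees the stores below... */
    p->x = 1;                   /* ...and drops the redundant zeroing of */
    p->z = 2;                   /* x and z, keeping only the y = 0 store */
}
```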

After some experimentation, I think y'all are just doing it wrong.
GCC is very clever about this as long as it's given the chance.  This
test, for example, generates excellent code:

#include <string.h>

__THROW __nonnull ((1)) __attribute__((always_inline)) void
*memset(void *s, int c, size_t n)
{
    asm volatile ("nop");
    return s;
}

/* generates 'nop' */
void zero(void *dest, size_t size)
{
    __builtin_memset(dest, 0, size);
}

/* xorl %eax, %eax */
int test(void)
{
    int x;
    __builtin_memset(&x, 0, sizeof(x));
    return x;
}

/* movl $0, (%rdi) */
void memset_a_bit(int *ptr)
{
    __builtin_memset(ptr, 0, sizeof(*ptr));
}

So I'm thinking maybe compiler.h should actually do something like:

#define memset __builtin_memset

and we should have some appropriate magic so that the memset inline is
exempt from the macro.  Or maybe there's some very clever way to put
all of this into the memset inline function.  FWIW, this obviously
wrong code:

__THROW __nonnull ((1)) __attribute__((always_inline)) void
*memset(void *s, int c, size_t n)
{
    __builtin_memset(s, c, n);
    return s;
}

generates 'jmp memset'.  It's not entirely clear to me exactly what's
happening here.
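One hypothetical shape for that "appropriate magic", sketched below: if the macro is made function-like rather than object-like, it only expands where the name is immediately followed by a parenthesis, so the out-of-line definition can opt out by parenthesizing its name. This is just an illustration of the preprocessor trick, not the actual kernel arrangement:

```c
#include <stddef.h>

/* Function-like macro: expands at ordinary call sites only. */
#define memset(s, c, n) __builtin_memset((s), (c), (n))

/* Parenthesized name suppresses function-like macro expansion,
 * so this defines the real out-of-line fallback. (Byte loop is a
 * placeholder body; redefining a libc function like this is only
 * really legitimate in a freestanding environment.) */
void *(memset)(void *s, int c, size_t n)
{
    unsigned char *p = s;

    while (n--)
        *p++ = (unsigned char)c;
    return s;
}
```

Call sites written as `memset(dst, 0, len)` then get the builtin, while the definition (and any `(memset)(...)` call) bypasses the macro.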

--Andy
