Message-ID: <3fc31917-9452-3a10-d11d-056bf2d8b97d@rasmusvillemoes.dk>
Date: Mon, 16 Sep 2019 11:18:33 +0200
From: Rasmus Villemoes <linux@...musvillemoes.dk>
To: Borislav Petkov <bp@...en8.de>,
Rasmus Villemoes <mail@...musvillemoes.dk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
x86-ml <x86@...nel.org>, Andy Lutomirski <luto@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] Improve memset
On 13/09/2019 18.36, Borislav Petkov wrote:
> On Fri, Sep 13, 2019 at 12:42:32PM +0200, Borislav Petkov wrote:
>> Or should we talk to Intel hw folks about it...
>
> Or, I can do something like this, while waiting. Benchmark at the end.
>
> The numbers are from a KBL box:
>
> model : 158
> model name : Intel(R) Core(TM) i5-9600K CPU @ 3.70GHz
> stepping : 12
>
> and if I'm not doing anything wrong with the benchmark
Eh, this benchmark doesn't seem to provide any hints on where to set the
cut-off for a compile-time constant n, i.e. the 32 in
  __b_c_p(n) && n <= 32
- unless gcc has unrolled your loop completely, which I find highly
unlikely.
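For concreteness, the shape I'm talking about is roughly this (just a
sketch of my reading of the RFC, not your actual patch; the out-of-line
helper name memset_rep_stosb() is made up here):

  #include <stddef.h>

  /* out-of-line rep;stosb-based implementation, name made up */
  void *memset_rep_stosb(void *dst, int c, size_t n);

  static inline void *kmemset(void *dst, int c, size_t n)
  {
          if (__builtin_constant_p(n) && n <= 32)
                  return __builtin_memset(dst, c, n);
          return memset_rep_stosb(dst, c, n);
  }

and the question is where (or whether, see below) that 32 should sit.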
> (the asm looks
> ok
By "looks ok", do you mean the the builtin_memset() have been made into
calls to libc memset(), or how has gcc expanded that? And if so, what's
the disassembly of your libc's memset()? The thing is, what needs to be
compared is how a rep;stosb of 32 bytes compares to 4 immediate stores.
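I.e., roughly these two code shapes (illustration only, not taken from
your benchmark):

  #include <stddef.h>
  #include <stdint.h>

  /* 32 bytes via rep;stosb: value in al, count in rcx, dest in rdi */
  static inline void zero32_rep_stosb(void *dst)
  {
          size_t n = 32;

          asm volatile("rep stosb"
                       : "+D" (dst), "+c" (n)
                       : "a" (0)
                       : "memory");
  }

  /* what gcc typically emits for __builtin_memset(dst, 0, 32) */
  static inline void zero32_stores(void *dst)
  {
          uint64_t *p = dst;

          p[0] = 0;
          p[1] = 0;
          p[2] = 0;
          p[3] = 0;
  }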
In fact, perhaps we shouldn't even try to find a cutoff. If __b_c_p(n),
just use __builtin_memset unconditionally. If n is smallish, gcc will do
a few stores, and if n is largish and gcc ends up emitting a call to
memset(), well, we can optimize memset() itself based on cpu
capabilities _and_ it's not the call/ret that will dominate. There are
also optimization and diagnostic advantages of having gcc know the
semantics of the memset() call (e.g. the tr.b DSE you showed).
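That is, the same wrapper as above, just without the size cutoff (again
only a sketch, same made-up out-of-line helper):

  #include <stddef.h>

  void *memset_rep_stosb(void *dst, int c, size_t n);

  static inline void *kmemset(void *dst, int c, size_t n)
  {
          if (__builtin_constant_p(n))
                  return __builtin_memset(dst, c, n);
          return memset_rep_stosb(dst, c, n);
  }

gcc is then free to expand the builtin into a few stores for small n, or
to emit a call to the out-of-line memset() for larger n - which is
exactly the memset() we can optimize based on cpu capabilities anyway.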
> but I could very well be missing something), the numbers say that
> the REP; STOSB is better from sizes of 8 and upwards and up to two
> cachelines we're pretty much on-par with the builtin variant.
I don't think that's what the numbers say.
Rasmus