lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <YnWiasChfzbEP67C@zx2c4.com>
Date:   Sat, 7 May 2022 00:34:18 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     David Laight <David.Laight@...LAB.COM>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Borislav Petkov <bp@...en8.de>,
        LKML <linux-kernel@...r.kernel.org>,
        "x86@...nel.org" <x86@...nel.org>,
        Filipe Manana <fdmanana@...e.com>,
        "linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>
Subject: Re: [patch 3/3] x86/fpu: Make FPU protection more robust

Hi David,

On Thu, May 05, 2022 at 11:34:40AM +0000, David Laight wrote:
> OTOH the entropy mixing is very likely to be 'cold cache'
> and all the unrolling in BLAKE2s will completely kill
> performance.

I've seen you mention the BLAKE2s unrolling in like 8 different threads
now, and I'm not convinced that you're entirely wrong, nor am I
convinced that you're entirely right. My response to you is the same as
always: please send a patch with some measurements! I'd love to get this
worked out in a real way.

The last time I went benching these, the unrolled code was ~100 cycles
faster, if I recall correctly, than the rolled code, when used from
WireGuard's hot path. I don't doubt that a cold path would be more
fraught, though, as that's a decent amount of code. So the question is
how to re-roll the rounds without sacrificing those 100 cycles.

In order to begin to figure that out, we have to look at why the
re-rolled loop is slow and the unrolled loop fast. It's not because of
complicated pipeline things. It's because the BLAKE2s permutation is
actually 10 different permutations, one for each round. Take a look at
the core function, G, and its uses of the round number, r:

    #define G(r, i, a, b, c, d) do { \
        a += b + m[blake2s_sigma[r][2 * i + 0]]; \
        d = ror32(d ^ a, 16); \
        c += d; \
        b = ror32(b ^ c, 12); \
        a += b + m[blake2s_sigma[r][2 * i + 1]]; \
        d = ror32(d ^ a, 8); \
        c += d; \
        b = ror32(b ^ c, 7); \
    } while (0)

The blake2s_sigma array is a `static const u8 blake2s_sigma[10][16]`,
with a row for every one of the 10 rounds. What this is actually doing
is reading the message words in a different order each round, so that
the whole permutation is different.

When the loop is unrolled, blake2s_sigma gets inlined, and then there
are no memory accesses. When it's re-rolled, every round accesses
blake2s_sigma 16 times, which hinders performance.
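To make that concrete, here's a standalone sketch of what the re-rolled rounds look like (using the reference sigma table from the BLAKE2 spec, not a copy of the kernel's file). Note that every one of the 16 blake2s_sigma loads per round feeds the very first add of a G, i.e. sits at the head of the dependency chain:

```c
#include <stdint.h>

static inline uint32_t ror32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* Message schedule from the BLAKE2 specification (RFC 7693). */
static const uint8_t blake2s_sigma[10][16] = {
    {  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15 },
    { 14, 10,  4,  8,  9, 15, 13,  6,  1, 12,  0,  2, 11,  7,  5,  3 },
    { 11,  8, 12,  0,  5,  2, 15, 13, 10, 14,  3,  6,  7,  1,  9,  4 },
    {  7,  9,  3,  1, 13, 12, 11, 14,  2,  6,  5, 10,  4,  0, 15,  8 },
    {  9,  0,  5,  7,  2,  4, 10, 15, 14,  1, 11, 12,  6,  8,  3, 13 },
    {  2, 12,  6, 10,  0, 11,  8,  3,  4, 13,  7,  5, 15, 14,  1,  9 },
    { 12,  5,  1, 15, 14, 13,  4, 10,  0,  7,  6,  3,  9,  2,  8, 11 },
    { 13, 11,  7, 14, 12,  1,  3,  9,  5,  0, 15,  4,  8,  6,  2, 10 },
    {  6, 15, 14,  9, 11,  3,  0,  8, 12,  2, 13,  7,  1,  4, 10,  5 },
    { 10,  2,  8,  4,  7,  6,  1,  5, 15, 11,  9, 14,  3, 12, 13,  0 },
};

#define G(r, i, a, b, c, d) do { \
    a += b + m[blake2s_sigma[r][2 * i + 0]]; \
    d = ror32(d ^ a, 16); \
    c += d; \
    b = ror32(b ^ c, 12); \
    a += b + m[blake2s_sigma[r][2 * i + 1]]; \
    d = ror32(d ^ a, 8); \
    c += d; \
    b = ror32(b ^ c, 7); \
} while (0)

/* Re-rolled: each of the 10 iterations does 16 sigma table loads,
 * and each loaded index gates the first add of its G. */
void blake2s_rounds_rolled(uint32_t v[16], const uint32_t m[16])
{
    for (int r = 0; r < 10; ++r) {
        G(r, 0, v[0], v[4], v[ 8], v[12]);
        G(r, 1, v[1], v[5], v[ 9], v[13]);
        G(r, 2, v[2], v[6], v[10], v[14]);
        G(r, 3, v[3], v[7], v[11], v[15]);
        G(r, 4, v[0], v[5], v[10], v[15]);
        G(r, 5, v[1], v[6], v[11], v[12]);
        G(r, 6, v[2], v[7], v[ 8], v[13]);
        G(r, 7, v[3], v[4], v[ 9], v[14]);
    }
}
```

When the loop is unrolled instead, r is a constant at every G, so the compiler folds blake2s_sigma[r][...] down to immediate indices into m and the table loads disappear entirely.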

You'll notice, on the other hand, that the hand-coded SIMD assembly
implementations do not unroll. The trick is to hide the cost of the
blake2s_sigma indirection in the data dependencies, so that performance
isn't affected. Naively re-rolling the generic code does not inspire the
compiler to do that. But maybe you can figure something out?
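One possible shape for that in C -- and this is just my guess at how you might coax the compiler into it, not what the asm implementations literally do -- is to gather the round's message schedule through sigma into locals at the top of each round. The 16 gather loads are independent of one another, so they can in principle overlap the previous round's tail, and the G body itself is left with no table indirection at all. Sketch, self-contained (sigma table again from RFC 7693), with a reference version alongside so the two can be checked against each other:

```c
#include <stdint.h>

static inline uint32_t ror32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* Message schedule from the BLAKE2 specification (RFC 7693). */
static const uint8_t blake2s_sigma[10][16] = {
    {  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15 },
    { 14, 10,  4,  8,  9, 15, 13,  6,  1, 12,  0,  2, 11,  7,  5,  3 },
    { 11,  8, 12,  0,  5,  2, 15, 13, 10, 14,  3,  6,  7,  1,  9,  4 },
    {  7,  9,  3,  1, 13, 12, 11, 14,  2,  6,  5, 10,  4,  0, 15,  8 },
    {  9,  0,  5,  7,  2,  4, 10, 15, 14,  1, 11, 12,  6,  8,  3, 13 },
    {  2, 12,  6, 10,  0, 11,  8,  3,  4, 13,  7,  5, 15, 14,  1,  9 },
    { 12,  5,  1, 15, 14, 13,  4, 10,  0,  7,  6,  3,  9,  2,  8, 11 },
    { 13, 11,  7, 14, 12,  1,  3,  9,  5,  0, 15,  4,  8,  6,  2, 10 },
    {  6, 15, 14,  9, 11,  3,  0,  8, 12,  2, 13,  7,  1,  4, 10,  5 },
    { 10,  2,  8,  4,  7,  6,  1,  5, 15, 11,  9, 14,  3, 12, 13,  0 },
};

/* Reference shape: sigma indirection inside G. */
#define G(r, i, a, b, c, d) do { \
    a += b + m[blake2s_sigma[r][2 * i + 0]]; \
    d = ror32(d ^ a, 16); \
    c += d; \
    b = ror32(b ^ c, 12); \
    a += b + m[blake2s_sigma[r][2 * i + 1]]; \
    d = ror32(d ^ a, 8); \
    c += d; \
    b = ror32(b ^ c, 7); \
} while (0)

void blake2s_rounds_direct(uint32_t v[16], const uint32_t m[16])
{
    for (int r = 0; r < 10; ++r) {
        G(r, 0, v[0], v[4], v[ 8], v[12]);
        G(r, 1, v[1], v[5], v[ 9], v[13]);
        G(r, 2, v[2], v[6], v[10], v[14]);
        G(r, 3, v[3], v[7], v[11], v[15]);
        G(r, 4, v[0], v[5], v[10], v[15]);
        G(r, 5, v[1], v[6], v[11], v[12]);
        G(r, 6, v[2], v[7], v[ 8], v[13]);
        G(r, 7, v[3], v[4], v[ 9], v[14]);
    }
}

/* Gathered shape: no table indirection inside the G body. */
#define G2(a, b, c, d, m0, m1) do { \
    a += b + (m0); \
    d = ror32(d ^ a, 16); \
    c += d; \
    b = ror32(b ^ c, 12); \
    a += b + (m1); \
    d = ror32(d ^ a, 8); \
    c += d; \
    b = ror32(b ^ c, 7); \
} while (0)

void blake2s_rounds_gathered(uint32_t v[16], const uint32_t m[16])
{
    for (int r = 0; r < 10; ++r) {
        const uint8_t *s = blake2s_sigma[r];
        uint32_t w[16];

        /* 16 independent loads; nothing here depends on v[]. */
        for (int j = 0; j < 16; ++j)
            w[j] = m[s[j]];
        G2(v[0], v[4], v[ 8], v[12], w[ 0], w[ 1]);
        G2(v[1], v[5], v[ 9], v[13], w[ 2], w[ 3]);
        G2(v[2], v[6], v[10], v[14], w[ 4], w[ 5]);
        G2(v[3], v[7], v[11], v[15], w[ 6], w[ 7]);
        G2(v[0], v[5], v[10], v[15], w[ 8], w[ 9]);
        G2(v[1], v[6], v[11], v[12], w[10], w[11]);
        G2(v[2], v[7], v[ 8], v[13], w[12], w[13]);
        G2(v[3], v[4], v[ 9], v[14], w[14], w[15]);
    }
}
```

Whether a compiler actually schedules the w[] loads early enough to hide them is exactly the sort of thing that needs measuring rather than guessing.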

Anyway, that's about where my thinking is on this, but I'd love to see
some patches from you at some point if you're interested.

Jason
