Message-ID: <20251226202433.107af09a@pumpkin>
Date: Fri, 26 Dec 2025 20:24:33 +0000
From: david laight <david.laight@...box.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org, Ard
Biesheuvel <ardb@...nel.org>, "Jason A . Donenfeld" <Jason@...c4.com>,
Herbert Xu <herbert@...dor.apana.org.au>, Thorsten Blum
<thorsten.blum@...ux.dev>, Nathan Chancellor <nathan@...nel.org>, Nick
Desaulniers <nick.desaulniers+lkml@...il.com>, Bill Wendling
<morbo@...gle.com>, Justin Stitt <justinstitt@...gle.com>, David Sterba
<dsterba@...e.com>, llvm@...ts.linux.dev, linux-btrfs@...r.kernel.org
Subject: Re: [PATCH] lib/crypto: blake2b: Roll up BLAKE2b round loop on
32-bit
On Fri, 5 Dec 2025 12:14:11 -0800
Eric Biggers <ebiggers@...nel.org> wrote:
> On Fri, Dec 05, 2025 at 02:16:44PM +0000, david laight wrote:
> > Note that executing two G() in parallel probably requires the source
> > interleave the instructions for the two G() rather than relying on the
> > cpu's 'out of order execution' to do all the work
> > (Intel cpu might manage it...).
>
> I actually tried that earlier, and it didn't help. Either the compiler
> interleaved the calculations already, or the CPU did, or both.
>
> It definitely could use some more investigation to better understand
> exactly what is going on, though.
>
> You're welcome to take a closer look, if you're interested.
I had a quick look at the objdump output for the 'not unrolled loop'
of blake2s on x86-64 compiled with gcc 12.2.
The generated code seemed reasonable.
A single register tracked the array of offsets for the data buffer,
so on x86 there was a read of the offset then an nn(%rsp,%reg,4)
access to get the value ((%rsp,%reg,8) for blake2b).
There weren't many spills to stack; I suspect that 14 of the v[]
were assigned to registers - but I didn't analyse the entire loop.
The fully unrolled loop is harder to read, but one of the v[] still
needs spilling to stack.
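For illustration, a stand-in sketch (not the kernel's actual code;
sum_scheduled() is a made-up name, and the two schedule rows are just
the first two from RFC 7693) of how indexing the message words through
a schedule table produces the indexed loads described above:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Stand-in sketch (not the kernel's code): the compiler keeps one
 * register walking the per-round offset table s[], and each load of
 * m[s[i]] becomes an indexed access like nn(%rsp,%reg,8) on x86-64.
 * The two rows below are the first two rows of the RFC 7693 schedule.
 */
static const uint8_t demo_sigma[2][16] = {
	{  0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15 },
	{ 14, 10,  4,  8,  9, 15, 13,  6,  1, 12,  0,  2, 11,  7,  5,  3 },
};

static uint64_t sum_scheduled(const uint64_t m[16], unsigned round)
{
	const uint8_t *s = demo_sigma[round];	/* one register tracks s */
	uint64_t acc = 0;
	unsigned i;

	for (i = 0; i < 16; i++)
		acc += m[s[i]];		/* (base, index, 8) scaled load */
	return acc;
}
```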
Each 1/2G has at least one memory read and seven ALU operations.
Intel cpus (Haswell onwards) can execute 4 ALU instructions
every clock - so however well the multiple G get scheduled, each
1/2G will take (pretty much) two clocks.
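For reference, one half of BLAKE2b's G per RFC 7693 (the half_g() name
and pointer interface are mine, not the kernel's): the body is three
adds, two xors and two rotates - seven ALU operations - plus the one
message-word read that supplies mx:

```c
#include <assert.h>
#include <stdint.h>

static inline uint64_t ror64(uint64_t x, unsigned n)
{
	return (x >> n) | (x << (64 - n));
}

/*
 * First half of BLAKE2b's G (per RFC 7693; half_g() is a made-up
 * name): mx is the message-word memory read, and the body is three
 * adds, two xors and two rotates - seven ALU operations.
 */
static void half_g(uint64_t *a, uint64_t *b, uint64_t *c, uint64_t *d,
		   uint64_t mx)
{
	*a += *b + mx;			/* 2 adds, mx from memory */
	*d = ror64(*d ^ *a, 32);	/* xor + rotate */
	*c += *d;			/* add */
	*b = ror64(*b ^ *c, 24);	/* xor + rotate */
}
```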
That really means it should be possible to include the second
memory read (for the not-unrolled loop) without slowing things down.
Even if the nn(%rsp,%reg,8) needs an extra ALU operation, the change
shouldn't be massive.
Which makes me wonder whether the slowdown from rolling up the loop
is due to data cache effects rather than the actual ALU instructions.
Of course this is x86 and the nn(%rsp,%reg,8) addressing mode helps.
Otherwise you'd want to multiply the offsets by 8 and, ideally, add
in the stack offset of the data[] array allowing the simpler (%sp,%reg)
addressing mode.
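As a sketch of that idea (hypothetical helper names, not from the
kernel source): pre-multiplying the schedule entries by 8 turns each
message load into a plain base-plus-byte-offset access, with no
scaling needed per load:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helpers (not from the kernel source): pre-multiply
 * the schedule entries by 8 so each message load is a plain
 * base + byte-offset access, avoiding a shift per load on ISAs
 * without a scaled-index addressing mode.
 */
static void make_byte_offsets(uint16_t off8[16], const uint8_t s[16])
{
	unsigned i;

	for (i = 0; i < 16; i++)
		off8[i] = s[i] * sizeof(uint64_t);
}

static uint64_t load_via_byte_offset(const uint64_t *m,
				     const uint16_t *off8, unsigned i)
{
	/* off8[i] == s[i] * 8, so the simpler (%sp,%reg) mode works */
	return *(const uint64_t *)((const char *)m + off8[i]);
}
```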
I've still not done any timings - I'm on holiday with the wrong computers.
David
>
> - Eric
>