Message-ID: <20251205141644.313404db@pumpkin>
Date: Fri, 5 Dec 2025 14:16:44 +0000
From: david laight <david.laight@...box.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org, Ard
 Biesheuvel <ardb@...nel.org>, "Jason A . Donenfeld" <Jason@...c4.com>,
 Herbert Xu <herbert@...dor.apana.org.au>, Thorsten Blum
 <thorsten.blum@...ux.dev>, Nathan Chancellor <nathan@...nel.org>, Nick
 Desaulniers <nick.desaulniers+lkml@...il.com>, Bill Wendling
 <morbo@...gle.com>, Justin Stitt <justinstitt@...gle.com>, David Sterba
 <dsterba@...e.com>, llvm@...ts.linux.dev, linux-btrfs@...r.kernel.org
Subject: Re: [PATCH] lib/crypto: blake2b: Roll up BLAKE2b round loop on
 32-bit

On Wed,  3 Dec 2025 11:06:52 -0800
Eric Biggers <ebiggers@...nel.org> wrote:

> BLAKE2b has a state of 16 64-bit words.  Add the message data in and
> there are 32 64-bit words.  With the current code where all the rounds
> are unrolled to enable constant-folding of the blake2b_sigma values,
> this results in a very large code size on 32-bit kernels, including a
> recurring issue where gcc uses a large amount of stack.
> 
> There's just not much benefit to this unrolling when the code is already
> so large.  Let's roll up the rounds when !CONFIG_64BIT.  Then, remove
> the now-unnecessary override of the stack frame size warning.
> 
> Code size improvements for blake2b_compress_generic():
> 
>                   Size before (bytes)    Size after (bytes)
>                   -------------------    ------------------
>     i386, gcc           27584                 3632
>     i386, clang         18208                 3248
>     arm32, gcc          19912                 2860
>     arm32, clang        21336                 3344
> 
> Running the BLAKE2b benchmark on a !CONFIG_64BIT kernel on an x86_64
> processor shows a 16384B throughput change of 351 => 340 MB/s (gcc)
> or 442 => 375 MB/s (clang).  So clearly not much of a slowdown.  And
> that microbenchmark effectively disregards cache usage, which matters
> in practice and is far better with the smaller code.

Any idea how many clocks those are for each G()?
That number would give an idea of the actual 'quality' of the code.

A quick count shows 14 alu operations with a register dependency
chain of length 12.
So however hard you try, G() will take 12 clocks (on 64-bit) provided
none of the instructions have extra result latency (probably true).
That means there is plenty of time for the two memory reads needed by
each of the m[*b2b_sigma++] accesses (including the increment).
On x86-64 there aren't enough registers to hold all of v[], so another
4 reads and 4 writes are also needed for each G().
Total: 8 memory reads, 4 memory writes and 12 alu clocks - it
shouldn't be too hard to get that to run in 12 clocks.
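
For reference, the guts of G() (per RFC 7693, sketched here with the
kernel's ror64(); x and y stand for the two sigma-selected message
words) are:

	/* 14 alu ops in all, but the x and y adds are off the critical
	 * path, leaving a serial chain of 12 through a/b/c/d */
	a = a + b + x;
	d = ror64(d ^ a, 32);
	c = c + d;
	b = ror64(b ^ c, 24);
	a = a + b + y;
	d = ror64(d ^ a, 16);
	c = c + d;
	b = ror64(b ^ c, 63);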

Because of the long register dependency chain there will be gains from
running two G() in parallel.
I don't think you'll get two to run in 12 clocks on x86-64.
While two reads and a write can be done in each clock on recent cpus,
it definitely needs a 'following wind'.
Arm64 is another matter: it should be able to hold all of v[] in
registers throughout.
(Although the memcpy(v, ctx->h, 64) probably needs replacing with 8
separate assignments.)
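
Something like this (a sketch; names follow the parenthetical above):

	/* instead of memcpy(v, ctx->h, 64): copy the 8 hash words with
	 * explicit assignments so the compiler can allocate v[0..7]
	 * straight into registers */
	v[0] = ctx->h[0];
	v[1] = ctx->h[1];
	v[2] = ctx->h[2];
	v[3] = ctx->h[3];
	v[4] = ctx->h[4];
	v[5] = ctx->h[5];
	v[6] = ctx->h[6];
	v[7] = ctx->h[7];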

Note that executing two G() in parallel probably requires that the
source interleave the instructions for the two G()s rather than
relying on the cpu's out-of-order execution to do all the work
(Intel cpus might manage it...).
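
e.g. the first two column-step G()s, hand-interleaved (a sketch;
mx0/my0/mx1/my1 are the sigma-selected message words, names
illustrative).  The two chains touch disjoint words, so adjacent
instructions are independent:

	v[0] += v[4] + mx0;              v[1] += v[5] + mx1;
	v[12] = ror64(v[12] ^ v[0], 32); v[13] = ror64(v[13] ^ v[1], 32);
	v[8] += v[12];                   v[9] += v[13];
	v[4] = ror64(v[4] ^ v[8], 24);   v[5] = ror64(v[5] ^ v[9], 24);
	v[0] += v[4] + my0;              v[1] += v[5] + my1;
	v[12] = ror64(v[12] ^ v[0], 16); v[13] = ror64(v[13] ^ v[1], 16);
	v[8] += v[12];                   v[9] += v[13];
	v[4] = ror64(v[4] ^ v[8], 63);   v[5] = ror64(v[5] ^ v[9], 63);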

While all that is a lot of changes to get right, I suspect that just:

	const u8 *b2b_sigma = blake2b_sigma[0];

#define G(a, b, c, d) \
	a += b + m[b2b_sigma[0]]; \
	... \
	a += b + m[b2b_sigma[1]]; \
	b2b_sigma += 2; \
	...

will remove almost all the benefit of unrolling the 'ROUND' loop,
especially since the loop termination condition can use b2b_sigma.
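
Fleshed out, the rolled-up version might look like this (a sketch
assuming the usual u8 blake2b_sigma[12][16] table; sigma_end is an
invented end-pointer):

	const u8 *b2b_sigma = blake2b_sigma[0];
	const u8 *sigma_end = blake2b_sigma[0] + sizeof(blake2b_sigma);

#define G(a, b, c, d) \
do { \
	a += b + m[b2b_sigma[0]]; \
	d = ror64(d ^ a, 32); \
	c += d; \
	b = ror64(b ^ c, 24); \
	a += b + m[b2b_sigma[1]]; \
	d = ror64(d ^ a, 16); \
	c += d; \
	b = ror64(b ^ c, 63); \
	b2b_sigma += 2; \
} while (0)

	do {
		/* column step */
		G(v[0], v[4], v[8],  v[12]);
		G(v[1], v[5], v[9],  v[13]);
		G(v[2], v[6], v[10], v[14]);
		G(v[3], v[7], v[11], v[15]);
		/* diagonal step */
		G(v[0], v[5], v[10], v[15]);
		G(v[1], v[6], v[11], v[12]);
		G(v[2], v[7], v[8],  v[13]);
		G(v[3], v[4], v[9],  v[14]);
	} while (b2b_sigma != sigma_end);

Each pass consumes 16 sigma bytes, so the pointer comparison gives
the 12 rounds without a separate counter.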

Everything except x86 will also benefit from multiplying all the
blake2b_sigma entries by 8 and doing *(u64 *)((u8 *)m + b2b_sigma[0])
for the accesses.
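
i.e. with a pre-scaled copy of the table (entries multiplied by 8 so
they are byte offsets into m[]), the message adds in G() become:

	/* b2b_sigma entries are byte offsets 0..120 here, so the u64
	 * load needs no shift-and-add address arithmetic */
	a += b + *(const u64 *)((const u8 *)m + b2b_sigma[0]);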

32-bit is another matter entirely.
I think gcc can handle a u64 either as a pair of 32-bit values or as
a register pair (so pairs of SImode ops, or single DImode ones).
The code seems better if you stop it doing the latter, but breathe
on anything that joins up the two halves and you are stuck with it.

I can't help thinking that swapping the order of the bit-pairs in the
index to v[] would make it easier to write ROUND() as a real loop.
The code size (and I-cache) reduction might make up for any losses,
especially since the code is really memory-limited (because of the
accesses to v[]) rather than alu-limited - so a few more alu
instructions may make little difference.
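
For comparison, ROUND() as a real loop with the current index layout
needs a small table of the v[] slots (a sketch; gidx is just an
illustrative name):

	/* each row is the (a, b, c, d) quadruple for one G();
	 * rows 0-3 are the column step, rows 4-7 the diagonal step */
	static const u8 gidx[8][4] = {
		{ 0, 4,  8, 12 }, { 1, 5,  9, 13 },
		{ 2, 6, 10, 14 }, { 3, 7, 11, 15 },
		{ 0, 5, 10, 15 }, { 1, 6, 11, 12 },
		{ 2, 7,  8, 13 }, { 3, 4,  9, 14 },
	};
	int i;

	for (i = 0; i < 8; i++) {
		u64 *a = &v[gidx[i][0]], *b = &v[gidx[i][1]];
		u64 *c = &v[gidx[i][2]], *d = &v[gidx[i][3]];

		*a += *b + m[b2b_sigma[2 * i]];
		*d = ror64(*d ^ *a, 32);
		*c += *d;
		*b = ror64(*b ^ *c, 24);
		*a += *b + m[b2b_sigma[2 * i + 1]];
		*d = ror64(*d ^ *a, 16);
		*c += *d;
		*b = ror64(*b ^ *c, 63);
	}

Renumbering v[] as above would presumably let gidx be replaced by a
little arithmetic on the loop counter.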

	David

