Message-ID: <20240329093130.GA65937@sol.localdomain>
Date: Fri, 29 Mar 2024 02:31:30 -0700
From: Eric Biggers <ebiggers@...nel.org>
To: Ard Biesheuvel <ardb@...nel.org>
Cc: linux-crypto@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org, Andy Lutomirski <luto@...nel.org>,
	"Chang S . Bae" <chang.seok.bae@...el.com>
Subject: Re: [PATCH v2 0/6] Faster AES-XTS on modern x86_64 CPUs

On Fri, Mar 29, 2024 at 11:03:07AM +0200, Ard Biesheuvel wrote:
> 
> Retested this v2:
> 
> Tested-by: Ard Biesheuvel <ardb@...nel.org>
> Reviewed-by: Ard Biesheuvel <ardb@...nel.org>
> 
> Hopefully, the AES-KL keylocker implementation can be based on this
> template as well.

As-is, it would be a bit ugly to add keylocker support to my template, because
the template always processes 4 registers of AES blocks per iteration of the
main loop (like the existing aes-xts-aesni), whereas the keylocker instructions
are hardcoded to operate on 8 AES blocks at a time in xmm0-xmm7, presumably to
amortize the overhead of unwrapping the key.
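
To make the mismatch concrete, here is a rough userspace C-intrinsics sketch of
the two shapes (not the actual code, which is assembly; the function and
parameter names are made up, and it assumes the <immintrin.h> AES-NI and Key
Locker intrinsics built with -maes/-mwidekl):

#include <immintrin.h>

/*
 * 4-wide AES-NI shape: the expanded round keys are ordinary data, so the
 * rounds can be applied one at a time to 4 blocks kept in flight.
 */
static void aesni_rounds_4wide(__m128i b[4], const __m128i *round_keys,
			       int nrounds)
{
	for (int i = 0; i < 4; i++)
		b[i] = _mm_xor_si128(b[i], round_keys[0]);
	for (int r = 1; r < nrounds; r++)
		for (int i = 0; i < 4; i++)
			b[i] = _mm_aesenc_si128(b[i], round_keys[r]);
	for (int i = 0; i < 4; i++)
		b[i] = _mm_aesenclast_si128(b[i], round_keys[nrounds]);
}

/*
 * Key Locker shape: AESENCWIDE128KL performs *all* the rounds on 8 blocks at
 * once from a wrapped key handle, and the instruction itself uses xmm0-xmm7
 * for the blocks; there is no per-round, per-register form that could slot
 * into a 4-wide loop.
 */
static unsigned char keylocker_encrypt_8wide(__m128i out[8],
					     const __m128i in[8],
					     const void *handle)
{
	/* The return value is the instruction's handle-check flag (ZF). */
	return _mm_aesencwide128kl_u8(out, in, handle);
}

In other words, a keylocker path really wants its own 8-block main loop rather
than reusing the 4-wide round structure.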

I did briefly try an 8-wide version.  There are some older CPUs on which it
helps.  (On newer CPUs, AES latency is lower, and the extra width comes from
moving to ymm or zmm registers anyway.)  But it didn't seem too attractive to
me: it causes registers to spill, and unrolling the AES rounds becomes a bit
awkward when the code size doubles, so the rounds may need to be re-rolled.  I
should take a closer look, but I decided to just stay with the 4-wide version
for now.

So I *think* AES-KL is best kept separate for now.  I do wonder if the AES-KL
code should adopt the idea of using VEX-coded instructions, though --- surely
it's the case that in practice, any CPU with AES-KL also supports AVX.
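
For illustration only (a made-up fragment, not taken from the code in
question): the legacy SSE encodings are 2-operand and destructive, so keeping a
value alive, e.g. an XTS tweak that is needed again after the AES rounds, costs
a register copy, while the 3-operand VEX forms don't:

#include <immintrin.h>

/*
 * Same source either way; only the compiler flags (or, in .S code, the
 * mnemonics) change:
 *
 *   legacy SSE:  movdqa %xmm1, %xmm2         # copy the tweak
 *                pxor   %xmm0, %xmm2         # %xmm2 = block ^ tweak
 *
 *   VEX:         vpxor  %xmm0, %xmm1, %xmm2  # both sources left intact
 */
static inline __m128i xor_with_tweak(__m128i block, __m128i tweak)
{
	return _mm_xor_si128(block, tweak);	/* pxor vs. vpxor */
}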

> I wouldn't mind retiring the existing xts(aesni)
> code entirely, and using the xts() wrapper around ecb-aes-aesni on
> 32-bit and on non-AVX uarchs with AES-NI.

Yes, it will need to be benchmarked, but that probably makes sense.  If
Wikipedia is to be trusted, on the Intel side only Westmere (from 2010) has
AES-NI but not AVX, and on the AMD side all CPUs with AES-NI have AVX...
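
For reference, a rough sketch of the registration decision being discussed
(the helper functions are hypothetical stand-ins, and the real glue code in
arch/x86/crypto is more involved):

#include <linux/module.h>
#include <linux/errno.h>
#include <asm/cpufeature.h>

/* Hypothetical stand-ins for the real skcipher registration calls. */
static int register_vex_xts_skcipher(void)   { return 0; /* stub */ }
static int register_ecb_aesni_skcipher(void) { return 0; /* stub */ }

static int __init aes_xts_glue_init(void)
{
	if (!boot_cpu_has(X86_FEATURE_AES))
		return -ENODEV;

	/* 64-bit with AVX: register the VEX-coded AES-XTS implementation. */
	if (IS_ENABLED(CONFIG_X86_64) && boot_cpu_has(X86_FEATURE_AVX))
		return register_vex_xts_skcipher();

	/*
	 * 32-bit, or a non-AVX CPU with AES-NI (e.g. Westmere): register
	 * only ecb-aes-aesni and let the generic xts template wrap it,
	 * i.e. end up with "xts(ecb-aes-aesni)".
	 */
	return register_ecb_aesni_skcipher();
}
module_init(aes_xts_glue_init);
MODULE_LICENSE("GPL");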

- Eric
