Message-ID: <CAMj1kXGLP7QaEcUcqKMqtKCOcTTTh9n=8hEG_24kn8k3O9bmpg@mail.gmail.com>
Date: Mon, 3 Jun 2024 10:55:41 +0200
From: Ard Biesheuvel <ardb@...nel.org>
To: Eric Biggers <ebiggers@...nel.org>
Cc: linux-crypto@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 0/2] x86_64 AES-GCM improvements
On Mon, 3 Jun 2024 at 00:24, Eric Biggers <ebiggers@...nel.org> wrote:
>
> This patchset adds a VAES and AVX512 / AVX10 implementation of AES-GCM
> (Galois/Counter Mode), which improves AES-GCM performance by up to 162%.
> In addition, it replaces the old AES-NI GCM code from Intel with new
> code that is slightly faster and fixes a number of issues including the
> massive binary size of over 250 KB. See the patches for details.
>
> The end state of the x86_64 AES-GCM assembly code is that we end up with
> two assembly files, one that generates AES-NI code with or without AVX,
> and one that generates VAES code with AVX512 / AVX10 with 256-bit or
> 512-bit vectors. There's no support for VAES alone (without AVX512 /
> AVX10). This differs slightly from what I did with AES-XTS where one
> file generates both AVX and AVX512 / AVX10 code including code using
> VAES alone (without AVX512 / AVX10), and another file generates non-AVX
> code only. For now this seems like the right choice for each particular
> algorithm, though, based on how much being limited to 16 SIMD registers
> and 128-bit vectors resulted in some significantly different design
> choices for AES-GCM, but not quite as much for AES-XTS. CPUs shipping
> with VAES alone also seem to be a temporary thing, so we perhaps
> shouldn't go too much out of our way to support that combination.
>
> Changed in v5:
> - Fixed sparse warnings in gcm_setkey()
> - Fixed some comments in aes-gcm-aesni-x86_64.S
>
For this version:
Tested-by: Ard Biesheuvel <ardb@...nel.org>
Acked-by: Ard Biesheuvel <ardb@...nel.org>