Message-ID: <89A68071-B57B-49E4-B034-514CA43BC46C@intel.com>
Date: Fri, 14 May 2021 20:36:06 +0000
From: "Bae, Chang Seok" <chang.seok.bae@...el.com>
To: Ard Biesheuvel <ardb@...nel.org>
CC: Herbert Xu <herbert@...dor.apana.org.au>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...e.de>,
Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
"Williams, Dan J" <dan.j.williams@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"Shankar, Ravi V" <ravi.v.shankar@...el.com>,
"Sun, Ning" <ning.sun@...el.com>,
"Dwarakanath, Kumar N" <kumar.n.dwarakanath@...el.com>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using
Key Locker instructions
On Dec 17, 2020, at 02:16, Ard Biesheuvel <ardb@...nel.org> wrote:
>
> We will need to refactor this - cloning the entire driver and just
> replacing aes-ni with aes-kl is a maintenance nightmare.
>
> Please refer to the arm64 tree for an example of how to combine chaining
> mode routines implemented in assembler with different implementations
> of the core AES transforms (aes-modes.S is combined with either
> aes-ce.S or aes-neon.S to produce two different drivers)
I just posted v2 [1]. PATCH9 [2] refactors some glue code out of AES-NI to
prepare for AES-KL.
[ The past few months were not entirely spent on this, but it took a while
to address the review comments and to debug test cases. ]
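Roughly, the idea is a C-level version of the split you describe for arm64:
the mode-handling glue that is identical for AES-NI and AES-KL moves into
shared helpers, and each driver only supplies its own core transform. A
sketch for illustration only; the header and helper names below are
placeholders, not necessarily what PATCH9 uses:

/* Hypothetical shared header, e.g. arch/x86/crypto/aes-intel_glue.h */

/* Core transform supplied by either the AES-NI or the AES-KL driver. */
typedef int (*aes_crypt_fn)(struct crypto_aes_ctx *ctx, u8 *out,
			    const u8 *in, unsigned int len);

/* Mode loops implemented once and shared by both glue drivers. */
int ecb_crypt_common(struct skcipher_request *req, aes_crypt_fn crypt);
int cbc_crypt_common(struct skcipher_request *req, aes_crypt_fn crypt);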
> ...
>> diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c
>> new file mode 100644
>> index 000000000000..9e3f900ad4af
>> --- /dev/null
>> +++ b/arch/x86/crypto/aeskl-intel_glue.c
>> @@ -0,0 +1,697 @@
> ...
>> +static void aeskl_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
>> +{
>> + struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
>> + int err = 0;
>> +
>> + if (!crypto_simd_usable())
>> + return;
>> +
>
> It is clear that AES-KL cannot be handled by a fallback algorithm,
> given that the key is no longer available. But that doesn't mean that
> you can just give up like this.
>
> This basically implies that we cannot expose the cipher interface at
> all, and so AES-KL can only be used by callers that use the
> asynchronous interface, which rules out 802.11, s/w kTLS, macsec and
> kerberos.
In v2, the synchronous cipher interface is no longer exposed.
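Concretely (an illustrative sketch only, with hypothetical names such as
aeskl_skciphers and aeskl_skcipher_setkey; the exact registration in v2 may
differ), the AES-KL modes are registered only through the skcipher/SIMD
wrapper, so callers always get the asynchronous, cryptd-backed path and no
bare synchronous "aes" cipher is exposed:

#include <crypto/aes.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
#include <linux/module.h>

static struct skcipher_alg aeskl_skciphers[] = {
	{
		.base.cra_name		= "__ecb(aes)",
		.base.cra_driver_name	= "__ecb-aes-aeskl",
		.base.cra_flags		= CRYPTO_ALG_INTERNAL,
		.base.cra_blocksize	= AES_BLOCK_SIZE,
		.base.cra_ctxsize	= sizeof(struct crypto_aes_ctx),
		.base.cra_module	= THIS_MODULE,
		.min_keysize		= AES_MIN_KEY_SIZE,
		.max_keysize		= AES_MAX_KEY_SIZE,
		.setkey			= aeskl_skcipher_setkey, /* hypothetical */
		.encrypt		= ecb_encrypt,
		.decrypt		= ecb_decrypt,
	},
	/* cbc(aes), ... */
};

static struct simd_skcipher_alg *aeskl_simd_skciphers[ARRAY_SIZE(aeskl_skciphers)];

static int __init aeskl_init(void)
{
	/* Note: no crypto_register_alg() for a synchronous single-block cipher. */
	return simd_register_skciphers_compat(aeskl_skciphers,
					      ARRAY_SIZE(aeskl_skciphers),
					      aeskl_simd_skciphers);
}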
> This ties in to a related discussion that is going on about when to
> allow kernel mode SIMD. I am currently investigating whether we can
> change the rules a bit so that crypto_simd_usable() is guaranteed to
> be true.
I saw your series [3]. Yes, I’m very interested in it.
>> +static int ecb_encrypt(struct skcipher_request *req)
>> +{
>> + struct crypto_skcipher *tfm;
>> + struct crypto_aes_ctx *ctx;
>> + struct skcipher_walk walk;
>> + unsigned int nbytes;
>> + int err;
>> +
>> + tfm = crypto_skcipher_reqtfm(req);
>> + ctx = aes_ctx(crypto_skcipher_ctx(tfm));
>> +
>> + err = skcipher_walk_virt(&walk, req, true);
>> + if (err)
>> + return err;
>> +
>> + while ((nbytes = walk.nbytes)) {
>> + unsigned int len = nbytes & AES_BLOCK_MASK;
>> + const u8 *src = walk.src.virt.addr;
>> + u8 *dst = walk.dst.virt.addr;
>> +
>> + kernel_fpu_begin();
>> + if (unlikely(ctx->key_length == AES_KEYSIZE_192))
>> + aesni_ecb_enc(ctx, dst, src, len);
>
> Could we please use a proper fallback here, and relay the entire request?
I made a change like this in v2:
+static int ecb_encrypt(struct skcipher_request *req)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+
+ if (likely(keylength(crypto_skcipher_ctx(tfm)) != AES_KEYSIZE_192))
+ return ecb_crypt_common(req, aeskl_ecb_enc);
+ else
+ return ecb_crypt_common(req, aesni_ecb_enc);
+}
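Here ecb_crypt_common() is the shared helper from the refactoring. For
illustration only (the actual v2 code may differ, and this assumes both core
transforms report failure through a negative int return), such a helper
could look roughly like this:

static int ecb_crypt_common(struct skcipher_request *req,
			    int (*crypt)(struct crypto_aes_ctx *ctx, u8 *out,
					 const u8 *in, unsigned int len))
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
	struct skcipher_walk walk;
	unsigned int nbytes;
	int err;

	err = skcipher_walk_virt(&walk, req, true);
	if (err)
		return err;

	while ((nbytes = walk.nbytes)) {
		kernel_fpu_begin();
		err = crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
			    nbytes & AES_BLOCK_MASK);
		kernel_fpu_end();

		/*
		 * On failure, feed the negative error into
		 * skcipher_walk_done() so the walk is unmapped and
		 * terminated, instead of returning with it still live.
		 */
		if (err)
			return skcipher_walk_done(&walk, err);

		err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
		if (err)
			return err;
	}

	return 0;
}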
>> + else
>> + err = __aeskl_ecb_enc(ctx, dst, src, len);
>> + kernel_fpu_end();
>> +
>> + if (err) {
>> + skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
>
> This doesn't look right. The skcipher scatterlist walker may have a
> live kmap() here so you can't just return.
I’ve added a preparatory patch [4] to deal with cases like this.
Thanks,
Chang
[1] https://lore.kernel.org/lkml/20210514201508.27967-1-chang.seok.bae@intel.com/
[2] https://lore.kernel.org/lkml/20210514201508.27967-10-chang.seok.bae@intel.com/
[3] https://lore.kernel.org/lkml/20201218170106.23280-1-ardb@kernel.org/
[4] https://lore.kernel.org/lkml/20210514201508.27967-9-chang.seok.bae@intel.com/