Message-ID: <20230526072301.GC875@sol.localdomain>
Date: Fri, 26 May 2023 00:23:01 -0700
From: Eric Biggers <ebiggers@...nel.org>
To: "Chang S. Bae" <chang.seok.bae@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
dm-devel@...hat.com, elliott@....com, gmazyland@...il.com,
luto@...nel.org, dave.hansen@...ux.intel.com, tglx@...utronix.de,
bp@...en8.de, mingo@...nel.org, x86@...nel.org,
herbert@...dor.apana.org.au, ardb@...nel.org,
dan.j.williams@...el.com, bernie.keany@...el.com,
charishma1.gairuboyina@...el.com,
lalithambika.krishnakumar@...el.com, nhuck@...gle.com,
"David S. Miller" <davem@...emloft.net>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v7 12/12] crypto: x86/aes-kl - Implement the AES-XTS
algorithm
On Wed, May 24, 2023 at 09:57:17AM -0700, Chang S. Bae wrote:
> == API Limitation ==
>
> The setkey() function transforms an AES key into a handle. But, an
> extended key is a usual outcome of setkey() in other AES cipher
> implementations. For this reason, a setkey() failure does not fall
> back to the other. So, expose AES-KL methods via synchronous
> interfaces only.
I don't understand what this paragraph is trying to say.
> +/*
> + * The below wrappers for the encryption/decryption functions
> + * incorporate the feature availability check:
> + *
> + * In the rare event of hardware failure, the wrapping key can be lost
> + * after wake-up from a deep sleep state. Then, this check helps to
> + * avoid any subsequent misuse with populating a proper error code.
> + */
> +
> +static inline int aeskl_enc(const void *ctx, u8 *out, const u8 *in)
> +{
> +	if (!valid_keylocker())
> +		return -ENODEV;
> +
> +	return __aeskl_enc(ctx, out, in);
> +}
Is it not sufficient for the valid_keylocker() check to occur at the top level
(in xts_encrypt() and xts_decrypt()), which would seem to be a better place to
do it? Is this because valid_keylocker() needs to be checked in every
kernel_fpu_begin() section separately, to avoid a race condition? If that's
indeed the reason, can you explain that in the comment?
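For illustration, the top-level check I have in mind would look roughly
like this (a sketch only -- the aeskl_xts_crypt() helper and its signature
are my assumption, not something taken from the patch):

	static int xts_encrypt(struct skcipher_request *req)
	{
		/* The wrapping key may have been lost, e.g. across deep sleep. */
		if (!valid_keylocker())
			return -ENODEV;

		return aeskl_xts_crypt(req, true);	/* true == encrypt */
	}
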
> +static inline int xts_keylen(struct skcipher_request *req, u32 *keylen)
> +{
> +	struct aes_xts_ctx *ctx = aes_xts_ctx(crypto_skcipher_reqtfm(req));
> +
> +	if (ctx->crypt_ctx.key_length != ctx->tweak_ctx.key_length)
> +		return -EINVAL;
> +
> +	*keylen = ctx->crypt_ctx.key_length;
> +	return 0;
> +}
This is odd. Why would the key lengths be different here?
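For reference, the usual XTS setkey pattern derives both halves from the
same buffer, so the two lengths cannot end up different.  Roughly (a
sketch; aeskl_setkey_half() is a stand-in for whatever per-half setup the
patch actually does):

	static int xts_setkey_sketch(struct crypto_skcipher *tfm, const u8 *key,
				     unsigned int keylen)
	{
		struct aes_xts_ctx *ctx = aes_xts_ctx(tfm);
		int err;

		err = xts_verify_key(tfm, key, keylen);
		if (err)
			return err;

		keylen /= 2;

		/* Both halves are set up with the same length... */
		err = aeskl_setkey_half(&ctx->crypt_ctx, key, keylen);
		if (err)
			return err;

		/* ...so crypt_ctx.key_length always equals tweak_ctx.key_length. */
		return aeskl_setkey_half(&ctx->tweak_ctx, key + keylen, keylen);
	}
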
> +	err = simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
> +					     aeskl_simd_skciphers);
> +	if (err)
> +		return err;
> +
> +	return 0;
This can be simplified to:

	return simd_register_skciphers_compat(aeskl_skciphers,
					      ARRAY_SIZE(aeskl_skciphers),
					      aeskl_simd_skciphers);
- Eric