Message-ID: <20230721054036.GD847@sol.localdomain>
Date: Thu, 20 Jul 2023 22:40:36 -0700
From: Eric Biggers <ebiggers@...nel.org>
To: Heiko Stuebner <heiko@...ech.de>
Cc: palmer@...belt.com, paul.walmsley@...ive.com,
aou@...s.berkeley.edu, herbert@...dor.apana.org.au,
davem@...emloft.net, conor.dooley@...rochip.com,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-crypto@...r.kernel.org, christoph.muellner@...ll.eu,
Heiko Stuebner <heiko.stuebner@...ll.eu>
Subject: Re: [PATCH v4 10/12] RISC-V: crypto: add Zvkned accelerated AES
encryption implementation
On Tue, Jul 11, 2023 at 05:37:41PM +0200, Heiko Stuebner wrote:
> +config CRYPTO_AES_RISCV
> + tristate "Ciphers: AES (RISCV)"
> + depends on 64BIT && RISCV_ISA_V
> + select CRYPTO_AES
> + help
> + Block ciphers: AES cipher algorithms (FIPS-197)
> + Length-preserving ciphers: AES with ECB, CBC, CTR, CTS,
> + XCTR, and XTS modes
> + AEAD cipher: AES with CBC, ESSIV, and SHA-256
> + for fscrypt and dm-crypt
> +
> + Architecture: riscv using one of
> + - Zvkns
I'm looking forward to having direct support for these AES modes, especially
the ones needed for storage encryption: XTS, and CBC or CTS!  However, none of
these AES modes is actually implemented in this patch yet, so they can't be
claimed in the kconfig help text.  This patch is just a starting point: it only
adds support for the bare AES block cipher ("aes" in the crypto API).
(BTW, I'm much more interested in, say, AES-XTS support than SM4 support, which
this patchset does include. SM4 is a "national pride cipher" which is somewhat
of a niche thing. I suppose there are already people pushing it for RISC-V
though, as they are everywhere else, so that's to be expected...)
> diff --git a/arch/riscv/crypto/aes-riscv-glue.c b/arch/riscv/crypto/aes-riscv-glue.c
> new file mode 100644
> index 000000000000..85e1187aee22
> --- /dev/null
> +++ b/arch/riscv/crypto/aes-riscv-glue.c
> @@ -0,0 +1,168 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Linux/riscv port of the OpenSSL AES implementation for RISCV
> + *
> + * Copyright (C) 2023 VRULL GmbH
> + * Author: Heiko Stuebner <heiko.stuebner@...ll.eu>
> + */
> +
> +#include <linux/crypto.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> +#include <linux/module.h>
> +#include <linux/types.h>
> +#include <asm/simd.h>
> +#include <asm/vector.h>
> +#include <crypto/aes.h>
> +#include <crypto/internal/cipher.h>
> +#include <crypto/internal/simd.h>
> +
> +struct aes_key {
> + u8 key[AES_MAX_KEYLENGTH];
> + int rounds;
> +};
> +
> +/* variant using the zvkned vector crypto extension */
> +void rv64i_zvkned_encrypt(const u8 *in, u8 *out, const struct aes_key *key);
> +void rv64i_zvkned_decrypt(const u8 *in, u8 *out, const struct aes_key *key);
> +int rv64i_zvkned_set_encrypt_key(const u8 *userKey, const int bits,
> + struct aes_key *key);
> +int rv64i_zvkned_set_decrypt_key(const u8 *userKey, const int bits,
> + struct aes_key *key);
> +
> +struct riscv_aes_ctx {
> + struct crypto_cipher *fallback;
> + struct aes_key enc_key;
> + struct aes_key dec_key;
> + unsigned int keylen;
> +};
Can it just use 'struct crypto_aes_ctx'? That's what most of the other AES
implementations use.
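For reference, that's this context from include/crypto/aes.h:

	struct crypto_aes_ctx {
		u32 key_enc[AES_MAX_KEYLENGTH_U32];
		u32 key_dec[AES_MAX_KEYLENGTH_U32];
		u32 key_length;
	};

It already holds both expanded keys and the key length (from which the number
of rounds can be derived), so it should cover what this driver stores, provided
the asm key schedule layout can be made to match.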
> +static int riscv64_aes_init_zvkned(struct crypto_tfm *tfm)
> +{
> + struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);
> + const char *alg = crypto_tfm_alg_name(tfm);
> + struct crypto_cipher *fallback;
> +
> + fallback = crypto_alloc_cipher(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
> + if (IS_ERR(fallback)) {
> + pr_err("Failed to allocate transformation for '%s': %ld\n",
> + alg, PTR_ERR(fallback));
> + return PTR_ERR(fallback);
> + }
> +
> + crypto_cipher_set_flags(fallback,
> + crypto_cipher_get_flags((struct
> + crypto_cipher *)
> + tfm));
> + ctx->fallback = fallback;
> +
> + return 0;
> +}
> +
> +static void riscv_aes_exit(struct crypto_tfm *tfm)
> +{
> + struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);
> +
> + if (ctx->fallback) {
> + crypto_free_cipher(ctx->fallback);
> + ctx->fallback = NULL;
> + }
> +}
> +
> +static int riscv64_aes_setkey_zvkned(struct crypto_tfm *tfm, const u8 *key,
> + unsigned int keylen)
> +{
> + struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);
> + int ret;
> +
> + ctx->keylen = keylen;
> +
> + if (keylen == 16 || keylen == 32) {
> + kernel_rvv_begin();
> + ret = rv64i_zvkned_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
> + if (ret != 1) {
> + kernel_rvv_end();
> + return -EINVAL;
> + }
> +
> + ret = rv64i_zvkned_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
> + kernel_rvv_end();
> + if (ret != 1)
> + return -EINVAL;
> + }
> +
> + ret = crypto_cipher_setkey(ctx->fallback, key, keylen);
> +
> + return ret ? -EINVAL : 0;
> +}
It's a bit annoying that RISC-V doesn't support AES-192, though also not
particularly surprising, seeing as AES-192 is almost never used. (Intel's Key
Locker, for example, is another recent CPU feature that doesn't support
AES-192.) IMO the issue here is really with the kernel crypto API -- it should
treat AES-128, AES-192, and AES-256 as separate algorithms so that
implementations aren't forced to support all three key sizes...
Anyway, for now, as you noticed, you do need a fallback to handle AES-192 to
keep the kernel crypto API happy.
But the fallback doesn't have to be a 'crypto_cipher' as you've implemented it.
You could just use the AES library instead.  See what
arch/arm64/crypto/aes-ce-glue.c does, for example.  Have you considered that?
It would be simpler than the crypto_cipher based approach.
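Roughly, with the library as the fallback, the glue could look something like
the following -- an untested sketch just to show the shape, reusing your
kernel_rvv_*() helpers and asm routines (error checks on the asm setkey calls
elided for brevity):

	#include <crypto/aes.h>			/* aes_expandkey(), aes_encrypt() */
	#include <crypto/internal/simd.h>	/* crypto_simd_usable() */

	struct riscv_aes_ctx {
		struct crypto_aes_ctx fallback_key;	/* library key schedule */
		struct aes_key enc_key;
		struct aes_key dec_key;
		unsigned int keylen;
	};

	static int riscv64_aes_setkey_zvkned(struct crypto_tfm *tfm,
					     const u8 *key, unsigned int keylen)
	{
		struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);
		int err;

		/* The library validates keylen and expands the fallback key. */
		err = aes_expandkey(&ctx->fallback_key, key, keylen);
		if (err)
			return err;

		ctx->keylen = keylen;

		if (keylen == AES_KEYSIZE_128 || keylen == AES_KEYSIZE_256) {
			kernel_rvv_begin();
			rv64i_zvkned_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
			rv64i_zvkned_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
			kernel_rvv_end();
		}
		return 0;
	}

	static void riscv64_aes_encrypt_zvkned(struct crypto_tfm *tfm, u8 *dst,
					       const u8 *src)
	{
		const struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);

		/* Use the vector unit when allowed; otherwise the C library. */
		if (ctx->keylen != AES_KEYSIZE_192 && crypto_simd_usable()) {
			kernel_rvv_begin();
			rv64i_zvkned_encrypt(src, dst, &ctx->enc_key);
			kernel_rvv_end();
		} else {
			aes_encrypt(&ctx->fallback_key, dst, src);
		}
	}

(And if the context is switched over to 'struct crypto_aes_ctx' entirely, as
mentioned above, the library calls can operate on it directly, which is what
arch/arm64/crypto/aes-ce-glue.c does.)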
> +
> +static void riscv64_aes_encrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
> +{
> + struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);
Always use 'const' for the tfm_ctx in encrypt and decrypt functions, please, as
it must never be modified there.
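I.e.:

	const struct riscv_aes_ctx *ctx = crypto_tfm_ctx(tfm);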
> +struct crypto_alg riscv64_aes_zvkned_alg = {
This should be 'static', since it isn't used outside this file.
> + .cra_type = NULL,
Omit that line -- NULL is already the default.
> + .cra_alignmask = 0,
Same here -- 0 is already the default.
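With the above addressed, the registration would look roughly like this (the
field values below are illustrative guesses, not copied from your patch, and
riscv64_aes_decrypt_zvkned is assumed to exist alongside the encrypt helper):

	static struct crypto_alg riscv64_aes_zvkned_alg = {
		.cra_name		= "aes",
		.cra_driver_name	= "aes-riscv64-zvkned",	/* guessed */
		.cra_priority		= 300,			/* guessed */
		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER |
					  CRYPTO_ALG_NEED_FALLBACK,
		.cra_blocksize		= AES_BLOCK_SIZE,
		.cra_ctxsize		= sizeof(struct riscv_aes_ctx),
		.cra_module		= THIS_MODULE,
		.cra_init		= riscv64_aes_init_zvkned,
		.cra_exit		= riscv_aes_exit,
		.cra_cipher = {
			.cia_min_keysize	= AES_MIN_KEY_SIZE,
			.cia_max_keysize	= AES_MAX_KEY_SIZE,
			.cia_setkey		= riscv64_aes_setkey_zvkned,
			.cia_encrypt		= riscv64_aes_encrypt_zvkned,
			.cia_decrypt		= riscv64_aes_decrypt_zvkned,
		},
	};

(If you move to the AES library fallback as suggested earlier, cra_init,
cra_exit, and CRYPTO_ALG_NEED_FALLBACK can go away as well.)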
> +MODULE_DESCRIPTION("AES (accelerated)");
Maybe "RISC-V accelerated"?
- Eric