Message-ID: <20230721044252.GB847@sol.localdomain>
Date: Thu, 20 Jul 2023 21:42:52 -0700
From: Eric Biggers <ebiggers@...nel.org>
To: Heiko Stuebner <heiko@...ech.de>
Cc: palmer@...belt.com, paul.walmsley@...ive.com,
aou@...s.berkeley.edu, herbert@...dor.apana.org.au,
davem@...emloft.net, conor.dooley@...rochip.com,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-crypto@...r.kernel.org, christoph.muellner@...ll.eu,
Heiko Stuebner <heiko.stuebner@...ll.eu>,
Charalampos Mitrodimas <charalampos.mitrodimas@...ll.eu>
Subject: Re: [PATCH v4 08/12] RISC-V: crypto: add a vector-crypto-accelerated
SHA256 implementation
On Tue, Jul 11, 2023 at 05:37:39PM +0200, Heiko Stuebner wrote:
> diff --git a/arch/riscv/crypto/sha256-riscv64-glue.c b/arch/riscv/crypto/sha256-riscv64-glue.c
> new file mode 100644
> index 000000000000..1c9c88029f60
> --- /dev/null
> +++ b/arch/riscv/crypto/sha256-riscv64-glue.c
> @@ -0,0 +1,115 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Linux/riscv64 port of the OpenSSL SHA256 implementation for RISCV64
> + *
> + * Copyright (C) 2022 VRULL GmbH
> + * Author: Heiko Stuebner <heiko.stuebner@...ll.eu>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/types.h>
> +#include <asm/simd.h>
> +#include <asm/vector.h>
> +#include <crypto/internal/hash.h>
> +#include <crypto/internal/simd.h>
> +#include <crypto/sha2.h>
> +#include <crypto/sha256_base.h>
> +
> +asmlinkage void sha256_block_data_order_zvbb_zvknha(u32 *digest, const void *data,
> + unsigned int num_blks);
> +
> +static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
> + int blocks)
> +{
> + sha256_block_data_order_zvbb_zvknha(sst->state, src, blocks);
> +}
Having a double-underscored function wrap around a non-underscored one like this
isn't conventional for Linux kernel code. IIRC some of the other crypto code
happens to do this, but it really is supposed to be the other way around.
I think you should just declare the assembly function to take a 'struct
sha256_state', with a comment mentioning that only the 'u32 state[8]' at the
beginning is actually used. That's what arch/x86/crypto/sha256_ssse3_glue.c
does, for example. Then, __sha256_block_data_order() would be unneeded.
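I.e., roughly the following (untested sketch; the function name is the one from your patch, and the comment convention mirrors the x86 glue code):

```c
/*
 * Note that struct sha256_state is passed to the assembly function, but
 * only the 'u32 state[8]' at the beginning of it is actually used, as in
 * arch/x86/crypto/sha256_ssse3_glue.c.
 */
asmlinkage void sha256_block_data_order_zvbb_zvknha(struct sha256_state *state,
						    const u8 *data, int blocks);
```

Then the function can be passed to sha256_base_do_update() directly, with no wrapper.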
> +static int riscv64_sha256_update(struct shash_desc *desc, const u8 *data,
> + unsigned int len)
> +{
> + if (crypto_simd_usable()) {
crypto_simd_usable() uses may_use_simd() which isn't wired up for RISC-V, so it
gets the default implementation of '!in_interrupt()'. RISC-V does have
may_use_vector() which looks like right thing. I think RISC-V needs a header
arch/riscv/include/asm/simd.h which defines may_use_simd() as a wrapper around
may_use_vector().
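I.e., something like this (untested sketch for a new arch/riscv/include/asm/simd.h, assuming may_use_vector() has the obvious semantics):

```c
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_SIMD_H
#define __ASM_SIMD_H

#include <asm/vector.h>

/*
 * may_use_simd - whether it is allowable at this time to issue vector
 *                instructions or access the vector register file
 */
static __must_check inline bool may_use_simd(void)
{
	return may_use_vector();
}

#endif /* __ASM_SIMD_H */
```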
> + int ret;
> +
> + kernel_rvv_begin();
> + ret = sha256_base_do_update(desc, data, len,
> + __sha256_block_data_order);
> + kernel_rvv_end();
> + return ret;
> + } else {
> + sha256_update(shash_desc_ctx(desc), data, len);
> + return 0;
> + }
> +}
> +
> +static int riscv64_sha256_finup(struct shash_desc *desc, const u8 *data,
> + unsigned int len, u8 *out)
> +{
> + if (!crypto_simd_usable()) {
> + sha256_update(shash_desc_ctx(desc), data, len);
> + sha256_final(shash_desc_ctx(desc), out);
> + return 0;
> + }
Keep things consistent please. riscv64_sha256_update() could use
!crypto_simd_usable() and an early return too.
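I.e., restructured so that both functions read the same way (sketch only, using the helpers from your patch as-is):

```c
static int riscv64_sha256_update(struct shash_desc *desc, const u8 *data,
				 unsigned int len)
{
	int ret;

	if (!crypto_simd_usable()) {
		sha256_update(shash_desc_ctx(desc), data, len);
		return 0;
	}

	kernel_rvv_begin();
	ret = sha256_base_do_update(desc, data, len,
				    __sha256_block_data_order);
	kernel_rvv_end();
	return ret;
}
```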
> +static int __init sha256_mod_init(void)
riscv64_sha256_mod_init()
> +{
> + /*
> + * From the spec:
> + * Zvknhb supports SHA-256 and SHA-512. Zvknha supports only SHA-256.
> + */
> + if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
> + riscv_isa_extension_available(NULL, ZVKNHB)) &&
> + riscv_isa_extension_available(NULL, ZVBB) &&
> + riscv_vector_vlen() >= 128)
> +
> + return crypto_register_shash(&sha256_alg);
> +
> + return 0;
> +}
> +
> +static void __exit sha256_mod_fini(void)
riscv64_sha256_mod_exit()
> +{
> + if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
> + riscv_isa_extension_available(NULL, ZVKNHB)) &&
> + riscv_isa_extension_available(NULL, ZVBB) &&
> + riscv_vector_vlen() >= 128)
> + crypto_unregister_shash(&sha256_alg);
> +}
If the needed CPU features aren't present, return -ENODEV from the module_init
function instead of 0. Then, the module_exit function can unconditionally
unregister the algorithm.
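I.e., roughly (sketch, keeping the extension checks from your patch):

```c
static int __init riscv64_sha256_mod_init(void)
{
	/* Zvknhb supports SHA-256 and SHA-512; Zvknha supports only SHA-256. */
	if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
	     riscv_isa_extension_available(NULL, ZVKNHB)) &&
	    riscv_isa_extension_available(NULL, ZVBB) &&
	    riscv_vector_vlen() >= 128)
		return crypto_register_shash(&sha256_alg);

	return -ENODEV;
}

static void __exit riscv64_sha256_mod_exit(void)
{
	crypto_unregister_shash(&sha256_alg);
}
```

That way the duplicated extension checks don't have to be kept in sync between init and exit.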
- Eric