Message-ID: <20231109071623.GB1245@sol.localdomain>
Date: Wed, 8 Nov 2023 23:16:23 -0800
From: Eric Biggers <ebiggers@...nel.org>
To: Jerry Shih <jerry.shih@...ive.com>
Cc: Paul Walmsley <paul.walmsley@...ive.com>, palmer@...belt.com,
Albert Ou <aou@...s.berkeley.edu>, herbert@...dor.apana.org.au,
davem@...emloft.net, andy.chiu@...ive.com, greentime.hu@...ive.com,
conor.dooley@...rochip.com, guoren@...nel.org, bjorn@...osinc.com,
heiko@...ech.de, ardb@...nel.org, phoebe.chen@...ive.com,
hongrong.hsu@...ive.com, linux-riscv@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org
Subject: Re: [PATCH 06/12] RISC-V: crypto: add accelerated AES-CBC/CTR/ECB/XTS implementations
On Tue, Nov 07, 2023 at 04:53:13PM +0800, Jerry Shih wrote:
> On Nov 2, 2023, at 13:16, Eric Biggers <ebiggers@...nel.org> wrote:
> > On Thu, Oct 26, 2023 at 02:36:38AM +0800, Jerry Shih wrote:
> >> +static int ecb_encrypt(struct skcipher_request *req)
> >> +{
> >> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> >> + const struct riscv64_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
> >> + struct skcipher_walk walk;
> >> + unsigned int nbytes;
> >> + int err;
> >> +
> >> + /* If an error occurs here, `nbytes` will be zero. */
> >> + err = skcipher_walk_virt(&walk, req, false);
> >> + while ((nbytes = walk.nbytes)) {
> >> + kernel_vector_begin();
> >> + rv64i_zvkned_ecb_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
> >> + nbytes & AES_BLOCK_VALID_SIZE_MASK,
> >> + &ctx->key);
> >> + kernel_vector_end();
> >> + err = skcipher_walk_done(
> >> + &walk, nbytes & AES_BLOCK_REMAINING_SIZE_MASK);
> >> + }
> >> +
> >> + return err;
> >> +}
> >
> > There's no fallback for !crypto_simd_usable() here. I really like it this way.
> > However, for it to work (for skciphers and aeads), RISC-V needs to allow the
> > vector registers to be used in softirq context. Is that already the case?
>
> The kernel-mode vector could be enabled in softirq context, but we don't
> support nested vector contexts. Could the kernel enter a softirq that does
> encryption while a regular crypto function is already running? If so, we
> would need fallbacks for all algorithms.
Are you asking what happens if a softirq is taken while the CPU is between
kernel_vector_begin() and kernel_vector_end()? I think that needs to be
prevented by making kernel_vector_begin() and kernel_vector_end() disable and
re-enable softirqs, like what kernel_neon_begin() and kernel_neon_end() do on
arm64. Refer to commit 13150149aa6ded which implemented that behavior on arm64.
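For illustration only, a minimal sketch of that approach, modeled on the arm64
pattern from that commit.  The *_user_vector_state() helpers below are
hypothetical placeholders, not the actual RISC-V kernel-mode-vector API:

#include <linux/bottom_half.h>
#include <linux/sched.h>

/* Hypothetical helpers standing in for the real save/restore code. */
void save_user_vector_state(struct task_struct *tsk);
void restore_user_vector_state(struct task_struct *tsk);

void kernel_vector_begin(void)
{
	/*
	 * Disable softirqs so that a softirq cannot start using the vector
	 * unit while this kernel-mode vector section is in progress.
	 */
	local_bh_disable();

	/* Preserve the user task's vector state before clobbering it. */
	save_user_vector_state(current);
}

void kernel_vector_end(void)
{
	/* Give the user task its vector state back. */
	restore_user_vector_state(current);

	/* Softirqs may run (and use their own begin/end pairs) again. */
	local_bh_enable();
}

With that in place, a softirq can never observe a half-used vector unit, so the
skcipher/aead code needs no !crypto_simd_usable() fallback.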
- Eric