Message-Id: <5787872fd00f3f761beb2f6e3ee3f5bddd8f2c3a.1745816372.git.herbert@gondor.apana.org.au>
Date: Mon, 28 Apr 2025 13:17:09 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Linux Crypto Mailing List <linux-crypto@...r.kernel.org>
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org, linux-mips@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org, sparclinux@...r.kernel.org, linux-s390@...r.kernel.org, x86@...nel.org, Ard Biesheuvel <ardb@...nel.org>, "Jason A. Donenfeld" <Jason@...c4.com>, Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [v3 PATCH 03/13] crypto: arm64/sha256 - remove obsolete chunking logic
From: Eric Biggers <ebiggers@...gle.com>

Since kernel-mode NEON sections are now preemptible on arm64, there is
no longer any need to limit their length.

Signed-off-by: Eric Biggers <ebiggers@...gle.com>
Reviewed-by: Ard Biesheuvel <ardb@...nel.org>
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
---
 arch/arm64/crypto/sha256-glue.c | 19 ++-----------------
 1 file changed, 2 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index 26f9fdfae87b..d63ea82e1374 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -86,23 +86,8 @@ static struct shash_alg algs[] = { {
 static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
 			      unsigned int len)
 {
-	do {
-		unsigned int chunk = len;
-
-		/*
-		 * Don't hog the CPU for the entire time it takes to process all
-		 * input when running on a preemptible kernel, but process the
-		 * data block by block instead.
-		 */
-		if (IS_ENABLED(CONFIG_PREEMPTION))
-			chunk = SHA256_BLOCK_SIZE;
-
-		chunk -= sha256_base_do_update_blocks(desc, data, chunk,
-						      sha256_neon_transform);
-		data += chunk;
-		len -= chunk;
-	} while (len >= SHA256_BLOCK_SIZE);
-	return len;
+	return sha256_base_do_update_blocks(desc, data, len,
+					    sha256_neon_transform);
 }
 
 static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
--
2.39.5
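
For context, the deleted loop was a scheduling-latency workaround rather
than part of the hashing logic: kernel_neon_begin() used to disable
preemption until kernel_neon_end(), so on CONFIG_PREEMPTION kernels the
NEON code was fed one SHA256_BLOCK_SIZE chunk at a time to keep other
tasks runnable. A minimal sketch of that general pattern (hypothetical
neon_transform() helper, not the actual glue code):

	/*
	 * Old world: each kernel-mode NEON section ran with preemption
	 * disabled, so its length was capped to bound scheduling latency.
	 */
	while (len) {
		unsigned int chunk = len;

		if (IS_ENABLED(CONFIG_PREEMPTION))
			chunk = min(chunk, (unsigned int)SHA256_BLOCK_SIZE);

		kernel_neon_begin();		/* preemption off... */
		neon_transform(data, chunk);	/* hypothetical helper */
		kernel_neon_end();		/* ...back on here */

		data += chunk;
		len -= chunk;
	}

Now that kernel-mode NEON sections are preemptible on arm64, the cap
buys nothing, and the whole input can be handed to
sha256_base_do_update_blocks() in a single call, as the patch above does.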