Message-Id: <20221219220223.3982176-9-elliott@hpe.com>
Date: Mon, 19 Dec 2022 16:02:18 -0600
From: Robert Elliott <elliott@hpe.com>
To: herbert@...dor.apana.org.au, davem@...emloft.net, Jason@...c4.com,
ardb@...nel.org, ap420073@...il.com, David.Laight@...LAB.COM,
ebiggers@...nel.org, tim.c.chen@...ux.intel.com, peter@...jl.ca,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com
Cc: linux-crypto@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org, Robert Elliott <elliott@hpe.com>
Subject: [PATCH 08/13] crypto: x86/ghash - yield FPU context during long loops

The x86 assembly language implementations using SIMD process data
between kernel_fpu_begin() and kernel_fpu_end() calls. Doing so
disables scheduler preemption, which prevents the CPU core from being
used by other threads.

The update() and finup() functions might be called to process large
quantities of data, which can result in RCU stalls and soft lockups.

Periodically check if the kernel scheduler wants to run something else
on the CPU. If so, yield the kernel FPU context and let the scheduler
intervene.

Fixes: 0e1227d356e9 ("crypto: ghash - Add PCLMULQDQ accelerated implementation")
Suggested-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
arch/x86/crypto/ghash-clmulni-intel_glue.c | 26 ++++++++++++++++------
1 file changed, 19 insertions(+), 7 deletions(-)
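
Note on the chunking arithmetic below: chunk & ~(GHASH_BLOCK_SIZE - 1)
rounds the processed byte count down to a whole number of 16-byte GHASH
blocks, since clmul_ghash_update() only consumes complete blocks. A
stand-alone userspace sketch of the same bookkeeping (a hypothetical
model, not kernel code), e.g. for a 5000-byte input:

	#include <stdio.h>

	#define GHASH_BLOCK_SIZE 16U

	int main(void)
	{
		unsigned int srclen = 5000;	/* example input length */

		while (srclen >= GHASH_BLOCK_SIZE) {
			/* at most 4096 bytes per FPU pass, as in the loop */
			unsigned int chunk = srclen < 4096U ? srclen : 4096U;
			/* round down to complete 16-byte blocks */
			unsigned int done = chunk & ~(GHASH_BLOCK_SIZE - 1);

			printf("process %u bytes\n", done);	/* 4096, then 896 */
			srclen -= done;
		}
		/* prints 8: handled by the partial-block path */
		printf("%u bytes remain\n", srclen);
		return 0;
	}
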
diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 1bfde099de0f..cd44339abdbb 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -82,7 +82,7 @@ static int ghash_update(struct shash_desc *desc,
 
 	if (dctx->bytes) {
 		int n = min(srclen, dctx->bytes);
-		u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes);
+		u8 *pos = dst + GHASH_BLOCK_SIZE - dctx->bytes;
 
 		dctx->bytes -= n;
 		srclen -= n;
@@ -97,13 +97,25 @@ static int ghash_update(struct shash_desc *desc,
 		}
 	}
 
-	kernel_fpu_begin();
-	clmul_ghash_update(dst, src, srclen, &ctx->shash);
-	kernel_fpu_end();
+	if (srclen >= GHASH_BLOCK_SIZE) {
+		kernel_fpu_begin();
+		for (;;) {
+			const unsigned int chunk = min(srclen, 4096U);
+
+			clmul_ghash_update(dst, src, chunk, &ctx->shash);
+
+			srclen -= chunk & ~(GHASH_BLOCK_SIZE - 1);
+			src += chunk & ~(GHASH_BLOCK_SIZE - 1);
+
+			if (srclen < GHASH_BLOCK_SIZE)
+				break;
+
+			kernel_fpu_yield();
+		}
+		kernel_fpu_end();
+	}
 
-	if (srclen & 0xf) {
-		src += srclen - (srclen & 0xf);
-		srclen &= 0xf;
+	if (srclen) {
 		dctx->bytes = GHASH_BLOCK_SIZE - srclen;
 		while (srclen--)
 			*dst++ ^= *src++;
--
2.38.1