Message-Id: <20221219220223.3982176-6-elliott@hpe.com>
Date:   Mon, 19 Dec 2022 16:02:15 -0600
From:   Robert Elliott <elliott@....com>
To:     herbert@...dor.apana.org.au, davem@...emloft.net, Jason@...c4.com,
        ardb@...nel.org, ap420073@...il.com, David.Laight@...LAB.COM,
        ebiggers@...nel.org, tim.c.chen@...ux.intel.com, peter@...jl.ca,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com
Cc:     linux-crypto@...r.kernel.org, x86@...nel.org,
        linux-kernel@...r.kernel.org, Robert Elliott <elliott@....com>
Subject: [PATCH 05/13] crypto: x86/sm3 - yield FPU context during long loops

The x86 assembly language implementations using SIMD process data
between kernel_fpu_begin() and kernel_fpu_end() calls. That disables
scheduler preemption, which prevents the CPU core from being used by
other threads.

The update() and finup() functions might be called to process
large quantities of data, which can result in RCU stalls and
soft lockups.

Periodically check if the kernel scheduler wants to run something
else on the CPU. If so, yield the kernel FPU context and let the
scheduler intervene.
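
The kernel_fpu_yield() helper used in the diff below is not defined
in this patch and is assumed to be introduced earlier in this series.
A minimal sketch of the assumed behavior (check whether the scheduler
has pending work and, if so, briefly leave the FPU section so
preemption can happen) might look like this:

	#include <linux/sched.h>  /* need_resched(), cond_resched() */
	#include <asm/fpu/api.h>  /* kernel_fpu_begin(), kernel_fpu_end() */

	/*
	 * Hypothetical sketch, not the actual helper from this series:
	 * if the scheduler wants the CPU, temporarily end the FPU
	 * section so a reschedule can occur, then re-enter it before
	 * continuing with the next chunk of data.
	 */
	static inline void kernel_fpu_yield(void)
	{
		if (need_resched()) {
			kernel_fpu_end();
			cond_resched();
			kernel_fpu_begin();
		}
	}

On preemptible kernels the kernel_fpu_end()/kernel_fpu_begin() pair
alone would allow a reschedule; the explicit cond_resched() also
covers non-preemptible configurations.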

Fixes: 930ab34d906d ("crypto: x86/sm3 - add AVX assembly implementation")
Suggested-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Robert Elliott <elliott@....com>
---
 arch/x86/crypto/sm3_avx_glue.c | 34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c
index 661b6f22ffcd..9e4b21c0e748 100644
--- a/arch/x86/crypto/sm3_avx_glue.c
+++ b/arch/x86/crypto/sm3_avx_glue.c
@@ -25,8 +25,7 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
 {
 	struct sm3_state *sctx = shash_desc_ctx(desc);
 
-	if (!crypto_simd_usable() ||
-			(sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) {
+	if (((sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) || !crypto_simd_usable()) {
 		sm3_update(sctx, data, len);
 		return 0;
 	}
@@ -38,7 +37,19 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data,
 	BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0);
 
 	kernel_fpu_begin();
-	sm3_base_do_update(desc, data, len, sm3_transform_avx);
+	for (;;) {
+		const unsigned int chunk = min(len, 4096U);
+
+		sm3_base_do_update(desc, data, chunk, sm3_transform_avx);
+
+		len -= chunk;
+
+		if (!len)
+			break;
+
+		data += chunk;
+		kernel_fpu_yield();
+	}
 	kernel_fpu_end();
 
 	return 0;
@@ -58,8 +69,21 @@ static int sm3_avx_finup(struct shash_desc *desc, const u8 *data,
 	}
 
 	kernel_fpu_begin();
-	if (len)
-		sm3_base_do_update(desc, data, len, sm3_transform_avx);
+	if (len) {
+		for (;;) {
+			const unsigned int chunk = min(len, 4096U);
+
+			sm3_base_do_update(desc, data, chunk, sm3_transform_avx);
+			len -= chunk;
+
+			if (!len)
+				break;
+
+			data += chunk;
+			kernel_fpu_yield();
+		}
+	}
+
 	sm3_base_do_finalize(desc, sm3_transform_avx);
 	kernel_fpu_end();
 
-- 
2.38.1
