Message-Id: <20221219220223.3982176-14-elliott@hpe.com>
Date:   Mon, 19 Dec 2022 16:02:23 -0600
From:   Robert Elliott <elliott@....com>
To:     herbert@...dor.apana.org.au, davem@...emloft.net, Jason@...c4.com,
        ardb@...nel.org, ap420073@...il.com, David.Laight@...LAB.COM,
        ebiggers@...nel.org, tim.c.chen@...ux.intel.com, peter@...jl.ca,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        dave.hansen@...ux.intel.com
Cc:     linux-crypto@...r.kernel.org, x86@...nel.org,
        linux-kernel@...r.kernel.org, Robert Elliott <elliott@....com>
Subject: [PATCH 13/13] crypto: x86/aria - yield FPU context only when needed

The x86 assembly language implementations using SIMD process data
between kernel_fpu_begin() and kernel_fpu_end() calls, which
disables scheduler preemption and thus prevents the CPU core from
being used by other threads.

In CTR mode, rather than breaking the processing into 256-byte
passes, each of which unconditionally calls kernel_fpu_begin() and
kernel_fpu_end(), periodically check whether the kernel scheduler
wants to run something else on the CPU. If so, yield the kernel FPU
context and let the scheduler intervene.

Signed-off-by: Robert Elliott <elliott@....com>
---
 arch/x86/crypto/aria_aesni_avx_glue.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/aria_aesni_avx_glue.c b/arch/x86/crypto/aria_aesni_avx_glue.c
index c561ea4fefa5..6657ce576e6c 100644
--- a/arch/x86/crypto/aria_aesni_avx_glue.c
+++ b/arch/x86/crypto/aria_aesni_avx_glue.c
@@ -5,6 +5,7 @@
  * Copyright (c) 2022 Taehee Yoo <ap420073@...il.com>
  */
 
+#include <asm/simd.h>
 #include <crypto/algapi.h>
 #include <crypto/internal/simd.h>
 #include <crypto/aria.h>
@@ -85,17 +86,19 @@ static int aria_avx_ctr_encrypt(struct skcipher_request *req)
 		const u8 *src = walk.src.virt.addr;
 		u8 *dst = walk.dst.virt.addr;
 
+		kernel_fpu_begin();
 		while (nbytes >= ARIA_AESNI_PARALLEL_BLOCK_SIZE) {
 			u8 keystream[ARIA_AESNI_PARALLEL_BLOCK_SIZE];
 
-			kernel_fpu_begin();
 			aria_ops.aria_ctr_crypt_16way(ctx, dst, src, keystream,
 						      walk.iv);
-			kernel_fpu_end();
 			dst += ARIA_AESNI_PARALLEL_BLOCK_SIZE;
 			src += ARIA_AESNI_PARALLEL_BLOCK_SIZE;
 			nbytes -= ARIA_AESNI_PARALLEL_BLOCK_SIZE;
+
+			kernel_fpu_yield();
 		}
+		kernel_fpu_end();
 
 		while (nbytes >= ARIA_BLOCK_SIZE) {
 			u8 keystream[ARIA_BLOCK_SIZE];
-- 
2.38.1
