Message-Id: <20221116041342.3841-4-elliott@hpe.com>
Date: Tue, 15 Nov 2022 22:13:21 -0600
From: Robert Elliott <elliott@....com>
To: herbert@...dor.apana.org.au, davem@...emloft.net,
tim.c.chen@...ux.intel.com, ap420073@...il.com, ardb@...nel.org,
Jason@...c4.com, David.Laight@...LAB.COM, ebiggers@...nel.org,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Robert Elliott <elliott@....com>
Subject: [PATCH v4 03/24] crypto: tcrypt - reschedule during cycles speed tests
commit 2af632996b89 ("crypto: tcrypt - reschedule during speed tests")
added cond_resched() calls to "Avoid RCU stalls in the case of
non-preemptible kernel and lengthy speed tests by rescheduling when
advancing from one block size to another."
It only makes those calls when the sec module parameter is used
(i.e., when running each speed test for a specified number of
seconds), not in the default "cycles" mode.

Expand those calls to also run in "cycles" mode to reduce the rate
of RCU stall warnings such as:
rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks:
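
For illustration only (not part of the patch itself), the per-block-size
loops end up with roughly this shape; test_jiffies_mode(),
test_cycles_mode(), req, and secs are hypothetical stand-ins for the
various test_*_jiffies()/test_*_cycles() helpers and their arguments in
tcrypt.c, and the block sizes are arbitrary example values:

	/*
	 * Sketch of the restructured loop; helper names and block
	 * sizes are illustrative, not the actual tcrypt code.
	 */
	static const unsigned int block_sizes[] = { 16, 256, 1024, 0 };
	unsigned int i;
	int ret;

	for (i = 0; block_sizes[i]; i++) {
		if (secs)
			ret = test_jiffies_mode(req, block_sizes[i], secs);
		else
			ret = test_cycles_mode(req, block_sizes[i]);

		/* reschedule in both modes, not only when secs is set */
		cond_resched();

		if (ret) {
			pr_err("speed test failed: %d\n", ret);
			break;
		}
	}

Moving cond_resched() after the if/else keeps a single call site per
loop and guarantees it runs regardless of which timing mode was
selected.
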
Suggested-by: Herbert Xu <herbert@...dor.apana.org.au>
Tested-by: Taehee Yoo <ap420073@...il.com>
Signed-off-by: Robert Elliott <elliott@....com>
---
crypto/tcrypt.c | 44 ++++++++++++++++++--------------------------
1 file changed, 18 insertions(+), 26 deletions(-)
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 7a6a56751043..c025ba26b663 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -408,14 +408,13 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
}
- if (secs) {
+ if (secs)
ret = test_mb_aead_jiffies(data, enc, bs,
secs, num_mb);
- cond_resched();
- } else {
+ else
ret = test_mb_aead_cycles(data, enc, bs,
num_mb);
- }
+ cond_resched();
if (ret) {
pr_err("%s() failed return code=%d\n", e, ret);
@@ -661,13 +660,11 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
bs + (enc ? 0 : authsize),
iv);
- if (secs) {
- ret = test_aead_jiffies(req, enc, bs,
- secs);
- cond_resched();
- } else {
+ if (secs)
+ ret = test_aead_jiffies(req, enc, bs, secs);
+ else
ret = test_aead_cycles(req, enc, bs);
- }
+ cond_resched();
if (ret) {
pr_err("%s() failed return code=%d\n", e, ret);
@@ -917,14 +914,13 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
ahash_request_set_crypt(req, sg, output, speed[i].plen);
- if (secs) {
+ if (secs)
ret = test_ahash_jiffies(req, speed[i].blen,
speed[i].plen, output, secs);
- cond_resched();
- } else {
+ else
ret = test_ahash_cycles(req, speed[i].blen,
speed[i].plen, output);
- }
+ cond_resched();
if (ret) {
pr_err("hashing failed ret=%d\n", ret);
@@ -1184,15 +1180,14 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
cur->sg, bs, iv);
}
- if (secs) {
+ if (secs)
ret = test_mb_acipher_jiffies(data, enc,
bs, secs,
num_mb);
- cond_resched();
- } else {
+ else
ret = test_mb_acipher_cycles(data, enc,
bs, num_mb);
- }
+ cond_resched();
if (ret) {
pr_err("%s() failed flags=%x\n", e,
@@ -1401,14 +1396,11 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
skcipher_request_set_crypt(req, sg, sg, bs, iv);
- if (secs) {
- ret = test_acipher_jiffies(req, enc,
- bs, secs);
- cond_resched();
- } else {
- ret = test_acipher_cycles(req, enc,
- bs);
- }
+ if (secs)
+ ret = test_acipher_jiffies(req, enc, bs, secs);
+ else
+ ret = test_acipher_cycles(req, enc, bs);
+ cond_resched();
if (ret) {
pr_err("%s() failed flags=%x\n", e,
--
2.38.1