Message-Id: <20201023192203.400040-6-nivedita@alum.mit.edu>
Date: Fri, 23 Oct 2020 15:22:03 -0400
From: Arvind Sankar <nivedita@...m.mit.edu>
To: Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
Eric Biggers <ebiggers@...nel.org>,
David Laight <David.Laight@...lab.com>
Cc: linux-kernel@...r.kernel.org, Eric Biggers <ebiggers@...gle.com>
Subject: [PATCH v3 5/5] crypto: lib/sha256 - Unroll LOAD and BLEND loops

Unrolling the LOAD and BLEND loops improves performance by ~8% on x86_64
(tested on a Broadwell Xeon) while keeping the code-size increase modest.

Signed-off-by: Arvind Sankar <nivedita@...m.mit.edu>
Reviewed-by: Eric Biggers <ebiggers@...gle.com>
---
lib/crypto/sha256.c | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index e2e29d9b0ccd..cdef37c05972 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -76,12 +76,28 @@ static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 	int i;
 
 	/* load the input */
-	for (i = 0; i < 16; i++)
-		LOAD_OP(i, W, input);
+	for (i = 0; i < 16; i += 8) {
+		LOAD_OP(i + 0, W, input);
+		LOAD_OP(i + 1, W, input);
+		LOAD_OP(i + 2, W, input);
+		LOAD_OP(i + 3, W, input);
+		LOAD_OP(i + 4, W, input);
+		LOAD_OP(i + 5, W, input);
+		LOAD_OP(i + 6, W, input);
+		LOAD_OP(i + 7, W, input);
+	}
 
 	/* now blend */
-	for (i = 16; i < 64; i++)
-		BLEND_OP(i, W);
+	for (i = 16; i < 64; i += 8) {
+		BLEND_OP(i + 0, W);
+		BLEND_OP(i + 1, W);
+		BLEND_OP(i + 2, W);
+		BLEND_OP(i + 3, W);
+		BLEND_OP(i + 4, W);
+		BLEND_OP(i + 5, W);
+		BLEND_OP(i + 6, W);
+		BLEND_OP(i + 7, W);
+	}
 
 	/* load the state into our registers */
 	a = state[0]; b = state[1]; c = state[2]; d = state[3];
--
2.26.2
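
For readers following along, below is a minimal, self-contained user-space
sketch of what the BLEND loop computes (the SHA-256 message-schedule
expansion from FIPS 180-4), using the same 8-way unrolling as the patch.
The ror32()/s0()/s1() helpers and the BLEND_OP macro here are illustrative
reconstructions rather than the kernel's exact definitions, and the W[]
initial values correspond to a padded empty message:

/*
 * Sketch: SHA-256 message-schedule expansion (the "blend" step),
 * unrolled by eight as in the patch above.  Standalone user-space
 * code; the helpers mirror the FIPS 180-4 sigma0/sigma1 functions
 * and are not the kernel's exact definitions.
 */
#include <stdint.h>
#include <stdio.h>

static inline uint32_t ror32(uint32_t x, unsigned int n)
{
	return (x >> n) | (x << (32 - n));
}

static inline uint32_t s0(uint32_t x)
{
	return ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3);	/* sigma0 */
}

static inline uint32_t s1(uint32_t x)
{
	return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10);	/* sigma1 */
}

#define BLEND_OP(I, W) \
	(W[I] = s1(W[(I) - 2]) + W[(I) - 7] + s0(W[(I) - 15]) + W[(I) - 16])

int main(void)
{
	/* W[0..15]: one padded block for the empty message
	 * (0x80 padding byte, message bit length 0). */
	uint32_t W[64] = { 0x80000000u };
	int i;

	/* Expand W[16..63], eight words per iteration. */
	for (i = 16; i < 64; i += 8) {
		BLEND_OP(i + 0, W);
		BLEND_OP(i + 1, W);
		BLEND_OP(i + 2, W);
		BLEND_OP(i + 3, W);
		BLEND_OP(i + 4, W);
		BLEND_OP(i + 5, W);
		BLEND_OP(i + 6, W);
		BLEND_OP(i + 7, W);
	}

	printf("W[16] = %08x, W[63] = %08x\n", W[16], W[63]);
	return 0;
}

The shortest dependence in the recurrence is two words back, through
s1(W[I - 2]), so within each unrolled group more than one BLEND_OP can be
in flight at a time, and the unrolling removes the loop-control overhead
between them; that is presumably where the ~8% improvement comes from.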