Message-Id: <20220209011919.493762-9-Jason@zx2c4.com>
Date: Wed, 9 Feb 2022 02:19:18 +0100
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: tytso@....edu, linux@...inikbrodowski.net, ebiggers@...nel.org,
"Jason A. Donenfeld" <Jason@...c4.com>
Subject: [PATCH v2 8/9] random: use hash function for crng_slow_load()

Since we have a hash function that's really fast, and the goal of
crng_slow_load() is reportedly to "touch all of the crng's state", we
can just hash the old state together with the new state and call it a
day. This way we don't need to reason about another LFSR or worry about
various attacks there. This code is only ever used at early boot and
then never again.

Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
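
As an illustration of the scheme described above, here is a minimal
userspace sketch of the same "hash the old key together with the new
input" pattern. It is only a sketch: libsodium's BLAKE2b
(crypto_generichash) stands in for the kernel's BLAKE2s, and
slow_load_sketch(), KEY_LEN, and the sample buffer are hypothetical
names invented for the example, not kernel code.

/*
 * Userspace sketch, not the kernel implementation: absorb new bytes
 * into an existing key by hashing old_key || input back into the key,
 * so every byte of the resulting key depends on all prior state.
 */
#include <sodium.h>
#include <stddef.h>

#define KEY_LEN 32 /* matches the 32-byte hash output used below */

static unsigned char key[KEY_LEN];

static void slow_load_sketch(const unsigned char *buf, size_t len)
{
        crypto_generichash_state hash;

        /* Unkeyed BLAKE2b with output length equal to the key length. */
        crypto_generichash_init(&hash, NULL, 0, sizeof(key));
        /* Old state first, then the new (possibly unvarying) bytes. */
        crypto_generichash_update(&hash, key, sizeof(key));
        crypto_generichash_update(&hash, buf, len);
        /* The digest becomes the new key, touching all of its bytes. */
        crypto_generichash_final(&hash, key, sizeof(key));
}

int main(void)
{
        static const unsigned char dmi[] = "fixed DMI-table-like bytes";

        if (sodium_init() < 0)
                return 1;
        slow_load_sketch(dmi, sizeof(dmi) - 1);
        return 0;
}

Because the old key is itself an input to the hash, the new key is
guaranteed to depend on all of the previous state, which is the
property the removed LFSR loop tried to provide by hand.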
drivers/char/random.c | 42 +++++++++++++++---------------------------
1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 359fd2501c45..f7f9cbfe13f7 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -470,42 +470,30 @@ static size_t crng_fast_load(const u8 *cp, size_t len)
  * all), and (2) it doesn't have the performance constraints of
  * crng_fast_load().
  *
- * So we do something more comprehensive which is guaranteed to touch
- * all of the primary_crng's state, and which uses a LFSR with a
- * period of 255 as part of the mixing algorithm. Finally, we do
- * *not* advance crng_init_cnt since buffer we may get may be something
- * like a fixed DMI table (for example), which might very well be
- * unique to the machine, but is otherwise unvarying.
+ * So, we simply hash the contents in with the current key. Finally,
+ * we do *not* advance crng_init_cnt, since the buffer we get may be
+ * something like a fixed DMI table (for example), which might very
+ * well be unique to the machine, but is otherwise unvarying.
  */
-static int crng_slow_load(const u8 *cp, size_t len)
+static void crng_slow_load(const u8 *cp, size_t len)
 {
 	unsigned long flags;
-	static u8 lfsr = 1;
-	u8 tmp;
-	unsigned int i, max = sizeof(base_crng.key);
-	const u8 *src_buf = cp;
-	u8 *dest_buf = base_crng.key;
+	struct blake2s_state hash;
+
+	blake2s_init(&hash, sizeof(base_crng.key));
 
 	if (!spin_trylock_irqsave(&base_crng.lock, flags))
-		return 0;
+		return;
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
-		return 0;
-	}
-	if (len > max)
-		max = len;
-
-	for (i = 0; i < max; i++) {
-		tmp = lfsr;
-		lfsr >>= 1;
-		if (tmp & 1)
-			lfsr ^= 0xE1;
-		tmp = dest_buf[i % sizeof(base_crng.key)];
-		dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr;
-		lfsr += (tmp << 3) | (tmp >> 5);
+		return;
 	}
+
+	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
+	blake2s_update(&hash, cp, len);
+	blake2s_final(&hash, base_crng.key);
+
 	spin_unlock_irqrestore(&base_crng.lock, flags);
-	return 1;
 }
 
 static void crng_reseed(void)
--
2.35.0