Message-Id: <20220527084905.195699040@linuxfoundation.org>
Date: Fri, 27 May 2022 10:50:21 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Theodore Tso <tytso@....edu>,
Dominik Brodowski <linux@...inikbrodowski.net>,
"Jason A. Donenfeld" <Jason@...c4.com>
Subject: [PATCH 5.15 120/145] random: do not use batches when !crng_ready()

From: "Jason A. Donenfeld" <Jason@...c4.com>

commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream.

It's too hard to keep the batches synchronized, and pointless anyway,
since in !crng_ready(), we're updating the base_crng key really often,
where batching only hurts. So instead, if the crng isn't ready, just
call into get_random_bytes(). At this stage nothing is performance
critical anyhow.

Cc: Theodore Ts'o <tytso@....edu>
Reviewed-by: Dominik Brodowski <linux@...inikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/char/random.c |   14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -467,10 +467,8 @@ static void crng_pre_init_inject(const v
 
 	if (account) {
 		crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
-		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-			++base_crng.generation;
+		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH)
 			crng_init = 1;
-		}
 	}
 
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -626,6 +624,11 @@ u64 get_random_u64(void)
 
 	warn_unseeded_randomness(&previous);
 
+	if (!crng_ready()) {
+		_get_random_bytes(&ret, sizeof(ret));
+		return ret;
+	}
+
 	local_lock_irqsave(&batched_entropy_u64.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u64);
 
@@ -660,6 +663,11 @@ u32 get_random_u32(void)
 
 	warn_unseeded_randomness(&previous);
 
+	if (!crng_ready()) {
+		_get_random_bytes(&ret, sizeof(ret));
+		return ret;
+	}
+
 	local_lock_irqsave(&batched_entropy_u32.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u32);
 
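
For readers who want to see the control flow in isolation, below is a minimal
userspace C sketch of the pattern this patch introduces: while the generator is
not yet ready, bypass the cached batch entirely and draw fresh bytes on every
call; once it is ready, serve values from the batch and refill it only when it
runs dry. All names in the sketch (rng_ready, fill_bytes, struct u64_batch,
sketch_get_random_u64) are illustrative stand-ins rather than the kernel API,
and the real code's per-CPU batches, local locks and generation counters are
deliberately left out; libc rand() stands in for the actual extraction path
purely so the example compiles and runs.

/*
 * Minimal sketch of "skip the batch until seeded".  Hypothetical names
 * throughout; this is not the kernel's implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static bool rng_ready;			/* stand-in for crng_ready() */

/* Stand-in for _get_random_bytes(); libc rand() keeps the sketch runnable. */
static void fill_bytes(void *buf, size_t len)
{
	unsigned char *p = buf;
	for (size_t i = 0; i < len; i++)
		p[i] = (unsigned char)rand();
}

struct u64_batch {
	uint64_t entropy[16];
	unsigned int position;		/* next unused slot; 16 means "refill" */
};

static struct u64_batch batch = { .position = 16 };

static uint64_t sketch_get_random_u64(void)
{
	uint64_t ret;

	/*
	 * Not ready yet: the key is still being reseeded constantly, so a
	 * cached batch would go stale immediately.  Draw fresh bytes instead.
	 */
	if (!rng_ready) {
		fill_bytes(&ret, sizeof(ret));
		return ret;
	}

	/* Ready: serve from the batch, refilling only when it is exhausted. */
	if (batch.position >= 16) {
		fill_bytes(batch.entropy, sizeof(batch.entropy));
		batch.position = 0;
	}
	ret = batch.entropy[batch.position];
	batch.entropy[batch.position] = 0;	/* scrub the consumed slot */
	batch.position++;
	return ret;
}

int main(void)
{
	printf("unseeded: %llu\n", (unsigned long long)sketch_get_random_u64());
	rng_ready = true;
	printf("seeded:   %llu\n", (unsigned long long)sketch_get_random_u64());
	return 0;
}

The design point mirrored from the patch is that the unready path returns
before the batch is ever touched, so no cached values can go stale while an
immature key is still being reseeded.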