Message-Id: <20220623164345.368031397@linuxfoundation.org>
Date: Thu, 23 Jun 2022 18:41:51 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ahmed Darwish <darwish.07@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Theodore Tso <tytso@....edu>,
Nicholas Mc Guire <hofrat@...ntech.at>,
Andy Lutomirski <luto@...nel.org>,
Kees Cook <keescook@...omium.org>, Willy Tarreau <w@....eu>,
"Alexander E. Patrakov" <patrakov@...il.com>,
Lennart Poettering <mzxreary@...inter.de>,
Noah Meyerhans <noahm@...ian.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.14 077/237] random: try to actively add entropy rather than passively wait for it
From: Linus Torvalds <torvalds@...ux-foundation.org>
commit 50ee7529ec4500c88f8664560770a7a1b65db72b upstream.
For 5.3 we had to revert a nice ext4 IO pattern improvement, because it
caused a bootup regression due to lack of entropy at bootup together
with arguably broken user space that was asking for secure random
numbers when it really didn't need to.
See commit 72dbcf721566 (Revert "ext4: make __ext4_get_inode_loc plug").
This aims to solve the issue by actively generating entropy noise using
the CPU cycle counter when waiting for the random number generator to
initialize. This only works when you have a high-frequency time stamp
counter available, but that's the case on all modern x86 CPUs, and on
most other modern CPUs too.
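For reference (background, not part of the patch): the generic fallback for
random_get_entropy() in kernels of this era lives in include/linux/timex.h and
is roughly the snippet below. Since get_cycles() returns 0 on architectures
without a cycle counter, this is what lets the new code detect a slow or
missing counter simply by reading it twice.

	#ifndef random_get_entropy
	#define random_get_entropy()	get_cycles()
	#endif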
What we do is to generate jitter entropy from the CPU cycle counter
under a somewhat complex load: calling the scheduler while also
guaranteeing a certain amount of timing noise by also triggering a
timer.
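To make the idea concrete, here is a minimal user-space sketch (not the kernel
code; the real implementation is in the diff below) showing how successive
cycle-counter reads jump by hard-to-predict amounts across timer-like delays.
The use of __rdtsc() from x86intrin.h and usleep() as a stand-in for the
kernel timer are assumptions for illustration only.

	#include <stdio.h>
	#include <unistd.h>
	#include <x86intrin.h>

	int main(void)
	{
		unsigned long long prev = __rdtsc();
		int i;

		for (i = 0; i < 8; i++) {
			/* usleep() stands in for the timer tick in the kernel loop */
			usleep(1000);

			unsigned long long now = __rdtsc();

			/* the low bits of these deltas are the jitter being harvested */
			printf("delta %d: %llu cycles\n", i, now - prev);
			prev = now;
		}
		return 0;
	}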
I'm sure we can tweak this, and that people will want to look at other
alternatives, but there's been a number of papers written on jitter
entropy, and this should really be fairly conservative by crediting one
bit of entropy for every timer-induced jump in the cycle counter. Not
because the timer itself would be all that unpredictable, but because
the interaction between the timer and the loop is going to be.
Even if (and perhaps particularly if) the timer actually happens on
another CPU, the cacheline interaction between the loop that reads the
cycle counter and the timer itself firing is going to add perturbations
to the cycle counter values that get mixed into the entropy pool.
As Thomas pointed out, with a modern out-of-order CPU, even quite simple
loops show a fair amount of hard-to-predict timing variability even in
the absense of external interrupts. But this tries to take that further
by actually having a fairly complex interaction.
This is not going to solve the entropy issue for architectures that have
no CPU cycle counter, but it's not clear how (and if) that is solvable,
and the hardware in question is largely starting to be irrelevant. And
by doing this we can at least avoid some of the even more contentious
approaches (like making the entropy waiting time out in order to avoid
the possibly unbounded waiting).
Cc: Ahmed Darwish <darwish.07@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Theodore Ts'o <tytso@....edu>
Cc: Nicholas Mc Guire <hofrat@...ntech.at>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Kees Cook <keescook@...omium.org>
Cc: Willy Tarreau <w@....eu>
Cc: Alexander E. Patrakov <patrakov@...il.com>
Cc: Lennart Poettering <mzxreary@...inter.de>
Cc: Noah Meyerhans <noahm@...ian.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/char/random.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 61 insertions(+), 1 deletion(-)
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1522,6 +1522,56 @@ void get_random_bytes(void *buf, int nby
}
EXPORT_SYMBOL(get_random_bytes);
+
+/*
+ * Each time the timer fires, we expect that we got an unpredictable
+ * jump in the cycle counter. Even if the timer is running on another
+ * CPU, the timer activity will be touching the stack of the CPU that is
+ * generating entropy..
+ *
+ * Note that we don't re-arm the timer in the timer itself - we are
+ * happy to be scheduled away, since that just makes the load more
+ * complex, but we do not want the timer to keep ticking unless the
+ * entropy loop is running.
+ *
+ * So the re-arming always happens in the entropy loop itself.
+ */
+static void entropy_timer(unsigned long data)
+{
+	credit_entropy_bits(&input_pool, 1);
+}
+
+/*
+ * If we have an actual cycle counter, see if we can
+ * generate enough entropy with timing noise
+ */
+static void try_to_generate_entropy(void)
+{
+	struct {
+		unsigned long now;
+		struct timer_list timer;
+	} stack;
+
+	stack.now = random_get_entropy();
+
+	/* Slow counter - or none. Don't even bother */
+	if (stack.now == random_get_entropy())
+		return;
+
+	__setup_timer_on_stack(&stack.timer, entropy_timer, 0, 0);
+	while (!crng_ready()) {
+		if (!timer_pending(&stack.timer))
+			mod_timer(&stack.timer, jiffies+1);
+		mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+		schedule();
+		stack.now = random_get_entropy();
+	}
+
+	del_timer_sync(&stack.timer);
+	destroy_timer_on_stack(&stack.timer);
+	mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now));
+}
+
/*
* Wait for the urandom pool to be seeded and thus guaranteed to supply
* cryptographically secure random numbers. This applies to: the /dev/urandom
@@ -1536,7 +1586,17 @@ int wait_for_random_bytes(void)
{
	if (likely(crng_ready()))
		return 0;
-	return wait_event_interruptible(crng_init_wait, crng_ready());
+
+	do {
+		int ret;
+		ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
+		if (ret)
+			return ret > 0 ? 0 : ret;
+
+		try_to_generate_entropy();
+	} while (!crng_ready());
+
+	return 0;
}
EXPORT_SYMBOL(wait_for_random_bytes);
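For context (not part of the patch), a hypothetical in-kernel caller of the
API changed above might look like the sketch below; example_init() and the
key buffer are invented for illustration.

	/* hypothetical caller, for illustration only */
	static int example_init(void)
	{
		u8 key[32];
		int ret;

		/* with this change, waiting also actively generates entropy */
		ret = wait_for_random_bytes();
		if (ret)
			return ret;	/* interrupted by a signal */

		get_random_bytes(key, sizeof(key));
		/* key is now safe to use as cryptographic material */
		return 0;
	}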