Message-ID: <YgSqAbggkgdAtkcm@owl.dominikbrodowski.net>
Date: Thu, 10 Feb 2022 07:00:33 +0100
From: Dominik Brodowski <linux@...inikbrodowski.net>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: linux-kernel@...r.kernel.org, Theodore Ts'o <tytso@....edu>
Subject: Re: [PATCH] random: tie batched entropy generation to base_crng generation
On Wed, Feb 09, 2022 at 10:54:06PM +0100, Jason A. Donenfeld wrote:
> Now that we have an explicit base_crng generation counter, we don't need
> a separate one for batched entropy. Rather, we can just move the
> generation forward every time we change crng_init state.
>
> Cc: Dominik Brodowski <linux@...inikbrodowski.net>
> Cc: Theodore Ts'o <tytso@....edu>
> Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
> ---
> drivers/char/random.c | 28 +++++++---------------------
> 1 file changed, 7 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 999f1d164e72..f4d432305869 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -431,8 +431,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = {
>
> static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
>
> -static void invalidate_batched_entropy(void);
> -
> /*
> * crng_fast_load() can be called by code in the interrupt service
> * path. So we can't afford to dilly-dally. Returns the number of
> @@ -455,7 +453,7 @@ static size_t crng_fast_load(const void *cp, size_t len)
> src++; crng_init_cnt++; len--; ret++;
> }
> if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> - invalidate_batched_entropy();
> + ++base_crng.generation;
> crng_init = 1;
> }
> spin_unlock_irqrestore(&base_crng.lock, flags);
This will only ever increase base_crng.generation from 0 to 1, and the
proper lock is held. The base_crng.key has changed, so it's appropriate
to state that it has reached a new generation.
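The consumer side is not visible in the quoted hunks, but as I understand the
patch, each per-CPU batch remembers the generation it was filled at and refills
once that no longer matches base_crng.generation. Roughly like this (only a
sketch; the struct and field names are illustrative and need not match the
patch exactly):

	unsigned long next_gen = READ_ONCE(base_crng.generation);

	if (batch->position >= ARRAY_SIZE(batch->entropy_u64) ||
	    next_gen != batch->generation) {
		/* ... refill batch->entropy_u64 from the current base_crng key ... */
		batch->position = 0;
		batch->generation = next_gen;
	}
	ret = batch->entropy_u64[batch->position++];

So bumping base_crng.generation here is enough to invalidate every batch that
was filled from the pre-init key.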
> @@ -536,7 +534,6 @@ static void crng_reseed(void)
> spin_unlock_irqrestore(&base_crng.lock, flags);
> memzero_explicit(key, sizeof(key));
> if (finalize_init) {
> - invalidate_batched_entropy();
> process_random_ready_list();
> wake_up_interruptible(&crng_init_wait);
> kill_fasync(&fasync, SIGIO, POLL_IN);
In crng_reseed(), base_crng.generation is incremented above while holding the
lock, and it is checked that the counter does not end up at ULONG_MAX. OK.
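That is, as I read it (a sketch, not quoted from the patch), something like:

	next_gen = base_crng.generation + 1;
	if (next_gen == ULONG_MAX)
		++next_gen;
	WRITE_ONCE(base_crng.generation, next_gen);

presumably so that ULONG_MAX stays reserved as a "never used" generation value
for freshly initialized per-CPU state.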
> @@ -1278,7 +1275,7 @@ int __init rand_initialize(void)
>
> extract_entropy(base_crng.key, sizeof(base_crng.key));
> if (arch_init && trust_cpu && crng_init < 2) {
> - invalidate_batched_entropy();
> + ++base_crng.generation;
> crng_init = 2;
> pr_notice("crng init done (trusting CPU's manufacturer)\n");
> }
Here we do not need to take a lock (single-threaded operation), we can only be
at generation 0 or 1, and the base_crng.key has changed. Which leads me to
ask: shouldn't we increase the generation counter always (or at least if
arch_init is true)? And just make incrementing crng_init to 2 depend on
trust_cpu?
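Concretely, I mean something along these lines, showing the "always" variant
(only a sketch using the identifiers from the quoted hunk, not a tested
change):

	extract_entropy(base_crng.key, sizeof(base_crng.key));
	/* the key was just (re)written, so always move to a new generation */
	++base_crng.generation;
	if (arch_init && trust_cpu && crng_init < 2) {
		crng_init = 2;
		pr_notice("crng init done (trusting CPU's manufacturer)\n");
	}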
To sum it up:
Reviewed-by: Dominik Brodowski <linux@...inikbrodowski.net>
Thanks,
Dominik