Message-ID: <CAHmME9pucLWXDofvOgHEau3y-7RmdtU91_jQHSt7psuR22eXBg@mail.gmail.com>
Date: Fri, 4 Feb 2022 16:58:58 +0100
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: LKML <linux-kernel@...r.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"Theodore Ts'o" <tytso@....edu>,
Sultan Alsawaf <sultan@...neltoast.com>,
Jonathan Neuschäfer <j.neuschaefer@....net>
Subject: Re: [PATCH RFC v1] random: do not take spinlocks in irq handler

FWIW, the biggest issue with this
On Fri, Feb 4, 2022 at 4:32 PM Jason A. Donenfeld <Jason@...c4.com> wrote:
> +static void mix_interrupt_randomness(struct work_struct *work)
> +{
> [...]
> +	if (unlikely(crng_init == 0)) {
> +		if (crng_fast_load((u8 *)&fast_pool->pool, sizeof(fast_pool->pool)) > 0)
> +			atomic_set(&fast_pool->count, 0);
> +		else
> +			atomic_and(~FAST_POOL_MIX_INFLIGHT, &fast_pool->count);
> +		return;
> +	}
> [...]
>  void add_interrupt_randomness(int irq)
> -	if (unlikely(crng_init == 0)) {
> -		if ((fast_pool->count >= 64) &&
> -		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
> -			fast_pool->count = 0;
> -			fast_pool->last = now;
> -		}
> -		return;

The point of crng_fast_load is to shuffle bytes into the crng as fast
as possible for very early boot usage. Deferring that to a workqueue
seems problematic. So I think at the very least _that_ part will have
to stay in the IRQ handler. That means we've still got a spinlock. But
at least it's a less problematic one than the input pool spinlock, and
perhaps we can deal with that some other way than this patch's
approach.

In other words, this approach for the calls to mix_pool_bytes, and a
different approach for that call to crng_fast_load.
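
Concretely, the sort of hybrid I mean would look roughly like this. It's
an illustrative sketch only, not a patch: the ->mix work item name, the
use of system_highpri_wq, and the thresholds (which just mirror the
existing 64-event / once-a-second logic) are placeholders layered on top
of the RFC's atomic count and FAST_POOL_MIX_INFLIGHT bit.

void add_interrupt_randomness(int irq)
{
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);

	/* ... mix cycles/jiffies/irq into fast_pool->pool as before ... */

	if (unlikely(crng_init == 0)) {
		/*
		 * Early boot: feed the crng immediately, from hard IRQ
		 * context, rather than waiting on a workqueue. This still
		 * takes the crng spinlock inside crng_fast_load(), but
		 * not the input pool spinlock.
		 */
		if (atomic_read(&fast_pool->count) >= 64 &&
		    crng_fast_load((u8 *)fast_pool->pool,
				   sizeof(fast_pool->pool)) > 0)
			atomic_set(&fast_pool->count, 0);
		return;
	}

	/*
	 * Once the crng is initialized: no spinlocks at all in IRQ
	 * context. When enough events have accumulated (or a second has
	 * passed), defer the mix_pool_bytes() path to process context,
	 * using the inflight bit to avoid double-scheduling.
	 */
	if (atomic_read(&fast_pool->count) < 64 &&
	    time_before(jiffies, fast_pool->last + HZ))
		return;

	if (!(atomic_fetch_or(FAST_POOL_MIX_INFLIGHT, &fast_pool->count) &
	      FAST_POOL_MIX_INFLIGHT))
		queue_work(system_highpri_wq, &fast_pool->mix);
}

With that split, mix_interrupt_randomness() would only need to handle the
crng_init != 0 case and its mix_pool_bytes() call, and the crng_init == 0
branch it currently has could go away.
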
Jason