Message-ID: <YgYLnspXs5bwfOiD@linutronix.de>
Date: Fri, 11 Feb 2022 08:09:18 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Theodore Ts'o <tytso@....edu>,
Sultan Alsawaf <sultan@...neltoast.com>,
Jonathan Neuschäfer <j.neuschaefer@....net>,
Eric Biggers <ebiggers@...nel.org>
Subject: Re: [PATCH v4 2/2] random: defer fast pool mixing to worker
On 2022-02-11 01:42:56 [+0100], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,
> Thanks for pointing this out. I'll actually fix this using atomics,
> and fix another minor issue at the same time the same way, and move to
> making sure the worker is running on the right CPU like we originally
> discussed. I'm going to send that as an additional patch so that we
> can narrow in on the issue there. It's a little bit involved but not
> too bad. I'll have that for you shortly.
oki.
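Just so we're picturing the same thing, the atomic handoff could look
roughly like this (a hypothetical sketch with assumed names, not your
actual patch; FAST_POOL_MIX_INFLIGHT is assumed to be a reserved high
bit of ->count, and ->count an atomic_t):

	/* hard IRQ path: mark a worker as queued; bail if one already is */
	if (atomic_fetch_or(FAST_POOL_MIX_INFLIGHT, &fast_pool->count) &
	    FAST_POOL_MIX_INFLIGHT)
		return;
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      &fast_pool->mix);

	/* worker path, in mix_interrupt_randomness(): one store resets the
	 * sample counter and clears FAST_POOL_MIX_INFLIGHT */
	atomic_set(&fast_pool->count, 0);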
> > crng_fast_load() does spin_trylock_irqsave() in hardirq context. It does
> > not produce any warning on RT but is still wrong IMHO:
> > If we could just move this, too.
> > I don't know how timing critical this is, but the first backtrace from
> > crng_fast_load() came (to my surprise) from hwrng_fillfn() (a kthread)
> > and added 64 bytes in one go.
>
> I'll look into seeing if I can do it. On my first pass a few days ago,
> it seemed a bit too tricky, but I'll revisit after this part here
> settles. Thanks for your benchmarks, by the way. That's useful.
If you want me to run it again, or on other machines, just let me know.
That was on a bigger x86 machine. I added that series, plus the patch at
the bottom, to my RT queue.
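In case a concrete picture helps, the pattern in question is roughly
this (simplified sketch of the current code, details trimmed):

	static size_t crng_fast_load(const u8 *cp, size_t len)
	{
		unsigned long flags;

		/* hard IRQ context, so only a trylock is possible; on
		 * PREEMPT_RT primary_crng.lock is a sleeping lock */
		if (!spin_trylock_irqsave(&primary_crng.lock, flags))
			return 0;

		/* ... mix len bytes of cp into the pre-init CRNG state ... */

		spin_unlock_irqrestore(&primary_crng.lock, flags);
		return len;
	}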
-------->8--------
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Date: Thu, 10 Feb 2022 18:22:05 +0100
Subject: [PATCH] random: Move crng_fast_load() to the worker.
crng_fast_load() is invoked from hard IRQ context and acquires a
spinlock_t via a trylock. On PREEMPT_RT a spinlock_t is a sleeping lock,
and a lock taken in hard IRQ context is attributed to the interrupted
task as owner; the next locking attempt (on another CPU) will therefore
PI-boost the wrong task.

Move the crng_fast_load() invocation into the worker.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
drivers/char/random.c | 27 ++++++++++++++++++++++-----
1 file changed, 22 insertions(+), 5 deletions(-)
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1047,6 +1047,17 @@ static void mix_interrupt_randomness(str
 	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
 	u8 pool[sizeof(fast_pool->pool)];
 
+	if (unlikely(crng_init == 0)) {
+		size_t ret;
+
+		ret = crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool));
+		if (ret) {
+			WRITE_ONCE(fast_pool->count, 0);
+			fast_pool->last = jiffies;
+			return;
+		}
+	}
+
 	/*
	 * Since this is the result of a trip through the scheduler, xor in
	 * a cycle counter. It can't hurt, and might help.
@@ -1089,11 +1100,17 @@ void add_interrupt_randomness(int irq)
 	new_count = ++fast_pool->count;
 
 	if (unlikely(crng_init == 0)) {
-		if (new_count >= 64 &&
-		    crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
-			fast_pool->count = 0;
-			fast_pool->last = now;
-		}
+		if (new_count & FAST_POOL_MIX_INFLIGHT)
+			return;
+
+		if (new_count < 64)
+			return;
+
+		fast_pool->count |= FAST_POOL_MIX_INFLIGHT;
+		if (unlikely(!fast_pool->mix.func))
+			INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
+		queue_work_on(raw_smp_processor_id(), system_highpri_wq,
+			      &fast_pool->mix);
 		return;
 	}
 
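One note on the identifiers that are not visible in the hunk context:
FAST_POOL_MIX_INFLIGHT comes from your series that this sits on top of,
presumably something like

	#define FAST_POOL_MIX_INFLIGHT	(1U << 31)

so the worker's WRITE_ONCE(fast_pool->count, 0) resets the sample
counter and re-arms queueing in one store. And queue_work_on() with
raw_smp_processor_id() deliberately keeps the worker on the CPU that
owns this per-CPU fast_pool, so ->pool is not accessed cross-CPU.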
> Jason
Sebastian