Message-ID: <YfOqsOiNfURyvFRX@linutronix.de>
Date: Fri, 28 Jan 2022 09:34:56 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: Jonathan Neuschäfer <j.neuschaefer@....net>,
Andy Lutomirski <luto@...capital.net>,
LKML <linux-kernel@...r.kernel.org>,
Theodore Ts'o <tytso@....edu>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>
Subject: Re: "BUG: Invalid wait context" in invalidate_batched_entropy
On 2022-01-27 23:26:32 [+0100], Jason A. Donenfeld wrote:
> Hi Jonathan,
Hi Jason,
> Thanks for the report. I'll try to reproduce this and see what's going on.
>
> I'm emailing back right away, though, so that I can CC in Andy
> Lutomirski, who I know has been sitting on a stack of patches that fix
> up (actually, remove) the locking, so this might be one path to fixing
> this.
This report is due to CONFIG_PROVE_LOCKING=y _and_
CONFIG_PROVE_RAW_LOCK_NESTING=y. It reports a nesting problem (a
spinlock_t acquired while a raw_spinlock_t is held) which becomes a
real problem on PREEMPT_RT, where spinlock_t is a sleeping lock and
therefore must not be taken inside a raw_spinlock_t critical section.
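To make the constraint concrete, here is a minimal sketch of the
pattern the nesting check flags (lock names are made up, this is not
the actual code in drivers/char/random.c):

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(outer_lock);	/* stays a true spinlock on RT */
static DEFINE_SPINLOCK(inner_lock);	/* becomes a sleeping lock on RT */

static void bad_nesting(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&outer_lock, flags);
	/*
	 * Invalid on PREEMPT_RT: spin_lock() on a spinlock_t may sleep,
	 * but we are inside a non-preemptible raw spinlock section.
	 */
	spin_lock(&inner_lock);
	spin_unlock(&inner_lock);
	raw_spin_unlock_irqrestore(&outer_lock, flags);
}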
I've been testing my old series on top of 5.17-rc1. With blake2 the
numbers went down a little: I'm getting 3-6us on average and 16-26us
worst case, and with NUMA it still goes up to 40-50us.
If you still object to the previous approach, and neither tglx nor
peterz disagrees, we could try making the lock a raw_spinlock_t and
adding a mutex around the userspace interface to lower the lock
contention. But even then we need to find a way to move the crng init
part (crng_fast_load()) out of hard-IRQ context.
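Roughly along these lines, a sketch only, with made-up names and no
claim about where exactly this would live in drivers/char/random.c:

#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/uaccess.h>

/*
 * Sketch: the raw_spinlock_t protects the pool in the hot path; the
 * mutex serializes the (sleepable) userspace interface so contention
 * on the raw lock stays low. All names here are hypothetical.
 */
static DEFINE_RAW_SPINLOCK(pool_lock);
static DEFINE_MUTEX(user_iface_mutex);

static ssize_t user_write_pool(const char __user *ubuf, size_t len)
{
	unsigned long flags;
	u8 buf[32];

	if (len > sizeof(buf))
		len = sizeof(buf);
	if (copy_from_user(buf, ubuf, len))
		return -EFAULT;

	mutex_lock(&user_iface_mutex);	/* may sleep, process context only */
	raw_spin_lock_irqsave(&pool_lock, flags);
	/* ... mix buf into the pool, short critical section ... */
	raw_spin_unlock_irqrestore(&pool_lock, flags);
	mutex_unlock(&user_iface_mutex);

	return len;
}

The point of the mutex is that userspace callers queue on it instead of
piling up on the raw lock, so the IRQ-side path only ever competes with
one of them at a time.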
> Thanks,
> Jason
Sebastian