Message-ID: <YfgPWatDzkn2ozhm@linutronix.de>
Date: Mon, 31 Jan 2022 17:33:29 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: LKML <linux-kernel@...r.kernel.org>, Theodore Ts'o <tytso@....edu>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: Re: [PATCH 5/5] random: Defer processing of randomness on PREEMPT_RT.
On 2022-01-30 23:55:09 [+0100], Jason A. Donenfeld wrote:
> Hey Sebastian,
Hi,
> I spent the weekend thinking about this some more. I'm actually
> warming up a bit to the general approach of the original solution
> here, though still have questions. To summarize my understanding of
> where we are:
>
> Alternative solution we've been discussing:
> - Replace spinlock_t with raw spinlocks.
> - Ratelimit userspace-triggered latency inducing ioctls with
> ratelimit() and an additional mutex of sorts.
> - Result: pretty much the same structure we have now, but with some
> added protection for PREEMPT_RT.
>
> Your original solution:
> - Absorb into the fast pool during the actual IRQ, but never dump it
> into the main pool (nor fast load into the crng directly if
> crng_init==0) from the hard irq.
> - Instead, have irq_thread() check to see if the calling CPU's fast
> pool is >= 64, and if so, dump it into the main pool (or fast load
> into the crng directly if crng_init==0).
>
> I have two questions about the implications of your original solution:
>
> 1) How often does irq_thread() run? With what we have now, we dump the
Almost every interrupt gets threaded. After the primary handler (the
one running in hard irq context) has been invoked, the threaded handler
is woken up. The threaded handler runs before the primary handler can
run again.
Not every interrupt gets threaded, though: interrupts marked as TIMER,
PER_CPU, ONESHOT or NO_THREAD stay in hard irq context.
So on a system with 4 CPUs you can move all peripheral interrupts to
CPU0, leaving CPU1-3 with TIMER interrupts only. In that case there
would be no irq_thread() invocations on CPU1-3.
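Roughly, the rule can be modeled like this (illustrative C only, not
the kernel's actual irq_settings code; the flag names mirror the ones
above):

	#include <stdbool.h>

	enum irq_flags {
		IRQ_TIMER     = 1 << 0,	/* timer interrupt          */
		IRQ_PER_CPU   = 1 << 1,	/* per-CPU interrupt        */
		IRQ_ONESHOT   = 1 << 2,	/* threaded at the source   */
		IRQ_NO_THREAD = 1 << 3,	/* explicitly not threaded  */
	};

	/*
	 * With forced threading (PREEMPT_RT, or "threadirqs" on !RT)
	 * every interrupt gets a handler thread unless one of these
	 * flags is set.
	 */
	static bool irq_gets_threaded(unsigned int flags)
	{
		return !(flags & (IRQ_TIMER | IRQ_PER_CPU |
				  IRQ_ONESHOT | IRQ_NO_THREAD));
	}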
> fast pool into the main pool at exactly 64 events. With what you're
> proposing, we're now in >= 64 territory. How do we conceptualize how
> far beyond 64 it's likely to grow before irq_thread() does something?
In theory, on a busy RT system (like the one mentioned above) CPU0
could process all HW interrupts and wake the interrupt threads, but
those threads could be blocked by a user task (a userland task with a
higher priority than the interrupt thread). During that time further
HW interrupts can still trigger on CPU0 for the non-threaded interrupts
like the timer interrupt, so there is no hard upper bound on how far
the count can grow.
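To make the ">= 64" point concrete, here is a minimal userspace model
of the deferred scheme; all names are illustrative, not the actual
random.c API:

	#include <stddef.h>

	struct fast_pool {
		unsigned long pool[4];
		unsigned int count;	/* events since the last flush */
	};

	/* Stand-in for the cheap per-event mixing in hard irq context. */
	static void fast_mix_event(struct fast_pool *fp, unsigned long cycles)
	{
		fp->pool[fp->count & 3] ^= cycles;
		fp->count++;		/* no upper bound enforced here */
	}

	/* Stub: stands in for the dump that takes the (sleeping-on-RT)
	 * lock, which is why it must run from irq_thread(). */
	static void dump_to_input_pool(const unsigned long *pool, size_t n)
	{
		(void)pool; (void)n;
	}

	/* Called from irq_thread(): by the time this runs, count may be
	 * far beyond 64 if a higher-priority task kept the thread off
	 * the CPU while timer interrupts kept feeding the pool. */
	static void process_fast_pool(struct fast_pool *fp)
	{
		if (fp->count < 64)
			return;
		dump_to_input_pool(fp->pool, 4);
		fp->count = 0;
	}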
> Is it easy to make guarantees like, "at most, probably around 17"? Or
> is it potentially unbounded? Growing beyond isn't actually necessarily
> a bad thing, but it could potentially *slow* the collection of
> entropy. That probably matters more in the crng_init==0 mode, where
> we're just desperate to get whatever we can as fast as we can. But
> depending on how large that is, it could matter for the main case too.
> Having some handle on the latency added here would be helpful for
> thinking about this.
I have a bigger system with only network and SATA, and here:
PREEMPT, unpatched
[ 10.545739] random: crng init done
[ 10.549548] random: 7 urandom warning(s) missed due to ratelimiting
PREEMPT_RT, patched
[ 11.884035] random: crng init done
[ 11.884037] random: 7 urandom warning(s) missed due to ratelimiting
This is just from two boots, no real testing, but I wouldn't even
mention this as a problem during boot-up.
> 2) If we went with this solution, I think I'd prefer to actually do it
> unconditionally, for PREEMPT_RT=y and PREEMPT_RT=n. It's easier to
> track how this thing works if the state machine always works in one
> way instead of two. It also makes thinking about performance margins
> for the various components easier if there's only one way used. Do you
> see any downsides in doing this unconditionally?
On !PREEMPT_RT you need to specify `threadirqs' on the kernel command
line to enable forced interrupt threading, which is otherwise always
enabled on PREEMPT_RT. To compensate for configurations without it, we
would need something as a backup. Say a timer or so…
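Something like a per-CPU timer that periodically flushes the fast pool
could serve as that backup. A rough sketch using the regular timer API;
flush_fast_pool() is a hypothetical helper here, not an existing
random.c function:

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	static struct timer_list fast_pool_timer;

	/* Hypothetical stand-in for whatever dumps the per-CPU fast
	 * pool into the input pool. */
	static void flush_fast_pool(void)
	{
	}

	static void fast_pool_timer_fn(struct timer_list *t)
	{
		flush_fast_pool();
		/* Re-arm: once a second is plenty for a fallback path. */
		mod_timer(t, jiffies + HZ);
	}

	static void fast_pool_timer_init(void)
	{
		timer_setup(&fast_pool_timer, fast_pool_timer_fn, 0);
		mod_timer(&fast_pool_timer, jiffies + HZ);
	}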
> Regards,
> Jason
Sebastian