Message-ID: <YgU8Tdoxa0XC1oRy@linutronix.de>
Date: Thu, 10 Feb 2022 17:24:45 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Theodore Ts'o <tytso@....edu>,
Sultan Alsawaf <sultan@...neltoast.com>,
Jonathan Neuschäfer <j.neuschaefer@....net>,
Eric Biggers <ebiggers@...nel.org>,
Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH v4 1/2] random: remove batched entropy locking
On 2022-02-09 13:56:43 [+0100], Jason A. Donenfeld wrote:
> Rather than use spinlocks to protect batched entropy, we can instead
> disable interrupts locally, since we're dealing with per-cpu data, and
> manage resets with a basic generation counter. At the same time, we
> can't quite do this on PREEMPT_RT, where we still want spinlocks-as-
> mutexes semantics. So we use a local_lock_t, which provides the right
> behavior for each. Because this is a per-cpu lock, that generation
> counter is still doing the necessary CPU-to-CPU communication.
>
> This should improve performance a bit. It will also fix the linked splat
> that Jonathan received with PROVE_RAW_LOCK_NESTING=y enabled.
>
> Suggested-by: Andy Lutomirski <luto@...nel.org>
> Reported-by: Jonathan Neuschäfer <j.neuschaefer@....net>
> Tested-by: Jonathan Neuschäfer <j.neuschaefer@....net>
> Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/
> Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> Cc: Sultan Alsawaf <sultan@...neltoast.com>
> Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
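
For reference, a minimal sketch of the pattern the commit message
describes: a per-cpu batch guarded by a local_lock_t (IRQ-off on
!PREEMPT_RT, a per-CPU sleeping lock on PREEMPT_RT) plus a generation
counter that lets a reseed elsewhere invalidate every CPU's batch.
Identifiers such as entropy_batch, crng_generation and get_batched_u64
are illustrative, not necessarily the exact names in the patch:

    #include <linux/kernel.h>
    #include <linux/local_lock.h>
    #include <linux/percpu.h>
    #include <linux/random.h>

    struct entropy_batch {
            local_lock_t    lock;       /* IRQ-off on !RT, per-CPU mutex on RT */
            unsigned long   generation; /* snapshot of the global counter */
            u64             entropy[16];
            unsigned int    position;
    };

    static DEFINE_PER_CPU(struct entropy_batch, entropy_batch_u64) = {
            .lock = INIT_LOCAL_LOCK(entropy_batch_u64.lock),
            .position = UINT_MAX,       /* force a refill on first use */
    };

    /* Bumped whenever the base CRNG reseeds; stale batches notice below. */
    extern unsigned long crng_generation;

    static u64 get_batched_u64(void)
    {
            struct entropy_batch *batch;
            unsigned long flags, next_gen;
            u64 ret;

            /* Disables IRQs on !RT; takes the per-CPU lock on RT. */
            local_lock_irqsave(&entropy_batch_u64.lock, flags);
            batch = raw_cpu_ptr(&entropy_batch_u64);

            /*
             * A reseed on another CPU only bumps the global counter;
             * the comparison here is the CPU-to-CPU communication.
             */
            next_gen = READ_ONCE(crng_generation);
            if (batch->position >= ARRAY_SIZE(batch->entropy) ||
                next_gen != batch->generation) {
                    get_random_bytes(batch->entropy, sizeof(batch->entropy));
                    batch->position = 0;
                    batch->generation = next_gen;
            }

            ret = batch->entropy[batch->position++];
            local_unlock_irqrestore(&entropy_batch_u64.lock, flags);
            return ret;
    }
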
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Sebastian