Message-ID: <CAMj1kXGLYYn9Fa+dbB2EWNbwJ6+PhKBz3vd=gUJDizvYwVSjfw@mail.gmail.com>
Date: Thu, 27 Nov 2025 11:32:48 +0100
From: Ard Biesheuvel <ardb@...nel.org>
To: Ard Biesheuvel <ardb+git@...gle.com>
Cc: linux-hardening@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Kees Cook <kees@...nel.org>,
Ryan Roberts <ryan.roberts@....com>, Will Deacon <will@...nel.org>, Arnd Bergmann <arnd@...db.de>,
Jeremy Linton <jeremy.linton@....com>, Catalin Marinas <Catalin.Marinas@....com>,
Mark Rutland <mark.rutland@....com>, "Jason A. Donenfeld" <Jason@...c4.com>
Subject: Re: [RFC/RFT PATCH 4/6] random: Use a lockless fast path for get_random_uXX()
On Thu, 27 Nov 2025 at 10:22, Ard Biesheuvel <ardb+git@...gle.com> wrote:
>
> From: Ard Biesheuvel <ardb@...nel.org>
>
> Currently, the implementations of the get_random_uXX() API protect their
> critical section by taking a local lock and disabling interrupts, to
> ensure that the code does not race with itself when called from
> interrupt context.
>
> Given that the fast path does nothing more than read a single uXX
> quantity from a linear buffer and bump the position pointer, poking the
> hardware registers to disable and re-enable interrupts is
> disproportionately costly, and best avoided.
>
> There are two conditions under which the batched entropy buffer is
> replenished, and that replenishment is what forms the critical section:
> - the buffer is exhausted;
> - the base_crng generation counter has incremented.
>
> By combining the position and generation counters into a single u64, we
> can use compare and exchange to implement the fast path without taking
> the local lock or disabling interrupts. By constructing the expected and
> next values carefully, the compare and exchange will only succeed if
> - we did not race with ourselves, i.e., the compare and exchange
> increments the position counter by exactly 1;
> - the buffer is not exhausted;
> - the generation counter equals the base_crng generation counter.
>
> Only if the compare and exchange fails is the original slow path taken,
> and only in that case do we take the local lock. This results in a
> considerable speedup (3-5x) when benchmarking get_random_u8() in a tight
> loop.
>
> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
> ---
> drivers/char/random.c | 44 ++++++++++++++------
> 1 file changed, 31 insertions(+), 13 deletions(-)
>
This needs the following fixup applied to ensure correct behavior when
get_random_uXX() is first called after the base_crng generation counter
has already been incremented to 1:

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e8ba460c5c9c..dddbec7cf856 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -523,7 +523,7 @@ struct batch_ ##type {
 										\
 static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = {	\
 	.lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock),			\
-	.position = UINT_MAX							\
+	.position = ARRAY_SIZE(batched_entropy_ ##type.entropy),		\
 };										\
 										\
 type get_random_ ##type(void)							\
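For readers who want to see the shape of the idea without digging through
the patch itself, here is a rough, standalone sketch of the lockless fast
path described in the quoted changelog above. It is NOT the actual kernel
code: the names (struct batch, pos_gen, BATCH_SIZE, base_crng_generation(),
slow_path_refill_and_get()) are invented for illustration, and C11 atomics
stand in for the kernel's per-CPU cmpxchg helpers.

/*
 * Illustrative sketch only -- not the code from the patch. All names
 * below are made up for this example.
 */
#include <stdatomic.h>
#include <stdint.h>

#define BATCH_SIZE 96u	/* size of the per-CPU entropy buffer, in bytes */

struct batch {
	uint8_t entropy[BATCH_SIZE];
	/* low 32 bits: next read position; high 32 bits: generation */
	_Atomic uint64_t pos_gen;
};

/*
 * Starting the position at BATCH_SIZE means the very first call always
 * fails the "buffer not exhausted" check and takes the slow path, which
 * fills the buffer and records the real generation -- the same idea as
 * the UINT_MAX -> ARRAY_SIZE() fixup above.
 */
static struct batch example_batch = { .pos_gen = BATCH_SIZE };

uint32_t base_crng_generation(void);		   /* current base_crng generation */
uint8_t slow_path_refill_and_get(struct batch *b); /* locked refill path */

uint8_t get_random_u8_sketch(struct batch *b)
{
	uint64_t old = atomic_load_explicit(&b->pos_gen, memory_order_relaxed);
	uint32_t pos = (uint32_t)old;
	uint32_t gen = (uint32_t)(old >> 32);

	if (pos < BATCH_SIZE && gen == base_crng_generation()) {
		/* Read the candidate byte before trying to claim the slot. */
		uint8_t ret = b->entropy[pos];
		/* The new value keeps the generation and bumps the position by 1. */
		uint64_t next = ((uint64_t)gen << 32) | (pos + 1);

		/*
		 * The exchange only succeeds if nothing raced with us in the
		 * meantime: same generation, same position, buffer not yet
		 * exhausted. In that case the byte we read above is ours.
		 */
		if (atomic_compare_exchange_strong(&b->pos_gen, &old, next))
			return ret;
	}

	/* Exhausted, stale generation, or we raced: take the locked slow path. */
	return slow_path_refill_and_get(b);
}

Again, this is just a sketch under the assumptions stated above; the real
patch operates on per-CPU data where the only concurrency is an interrupt
on the same CPU, and the locked slow path handles the refill and the
generation update.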