Message-ID: <YuO00GBXXNBhU4yL@alley>
Date: Fri, 29 Jul 2022 12:22:08 +0200
From: Petr Mladek <pmladek@...e.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org,
"Jason A. Donenfeld" <Jason@...c4.com>,
Theodore Ts'o <tytso@....edu>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
John Ogness <john.ogness@...utronix.de>,
Mike Galbraith <efault@....de>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] random: Initialize vsprintf's pointer hash once the
random core is ready.
On Fri 2022-07-29 10:52:58, Sebastian Andrzej Siewior wrote:
> The printk code invokes vsnprintf() in order to compute the complete
> string before adding it into its buffer. This happens in an IRQ-off
> region, which leads to a warning on PREEMPT_RT in the random code if
> the format string contains a %p for pointer printing. This happens
> because the random core acquires locks which become sleeping locks on
> PREEMPT_RT and must not be acquired with disabled interrupts or
> preemption. By default, pointers are hashed, which requires a random
> value on the first invocation (either by printk or another user,
> whichever comes first).
>
> One could argue that there is no need for printk to disable interrupts
> during the vsprintf() invocation, which would fix the problem just
> mentioned. However, printk itself can be invoked in a context with
> disabled interrupts, which would lead to the very same problem.
>
> This late init via printk can be avoided by explicitly initializing
> vsprintf's random value once the random core has been initialized.
>
> Remove the on-demand init from __ptr_to_hashval() and keep the -EAGAIN
> if the init has not yet been performed. Move the actual init bits to
> vsprintf_init_hash_pointer(), which is invoked from the random core
> once it has been initialized and get_random_bytes() is available.
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -221,10 +222,15 @@ static void crng_reseed(void)
> ++next_gen;
> WRITE_ONCE(base_crng.generation, next_gen);
> WRITE_ONCE(base_crng.birth, jiffies);
> - if (!static_branch_likely(&crng_is_ready))
> + if (!static_branch_likely(&crng_is_ready)) {
> crng_init = CRNG_READY;
> + init_hash_pointer = true;
I am not familiar with the crng code. I wonder if the following would work:
	if (!static_branch_likely(&crng_is_ready) && crng_init != CRNG_READY) {
		crng_init = CRNG_READY;
		init_hash_pointer = true;
	}
The point is that vsprintf_init_hash_pointer() would be called only
once, by the first caller. It would allow removing the @filling spin
lock.
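
With that guarantee, the vsprintf side could shrink to something like
this (untested sketch, keeping the static key for now):

void vsprintf_init_hash_pointer(void)
{
	/* crng_reseed() guarantees that this is called exactly once. */
	get_random_bytes(&ptr_key, sizeof(ptr_key));
	static_branch_enable(&filled_random_ptr_key);
}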
> + }
> spin_unlock_irqrestore(&base_crng.lock, flags);
> memzero_explicit(key, sizeof(key));
> +
> + if (init_hash_pointer)
> + vsprintf_init_hash_pointer();
> }
>
> /*
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 3c1853a9d1c09..6fa2ebb9f9b9e 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -751,36 +751,30 @@ static int __init debug_boot_weak_hash_enable(char *str)
> early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
>
> static DEFINE_STATIC_KEY_FALSE(filled_random_ptr_key);
> +static siphash_key_t ptr_key __read_mostly;
>
> -static void enable_ptr_key_workfn(struct work_struct *work)
> +void vsprintf_init_hash_pointer(void)
> {
> - static_branch_enable(&filled_random_ptr_key);
> + static DEFINE_SPINLOCK(filling);
> + unsigned long flags;
> + static bool filled;
> +
> + spin_lock_irqsave(&filling, flags);
> + if (!filled) {
> + get_random_bytes(&ptr_key, sizeof(ptr_key));
> + filled = true;
> + static_branch_enable(&filled_random_ptr_key);
static_branch_enable() can't be called in an atomic context. Is
crng_reseed() always called in a non-atomic context?
That said, the static branch is overkill. vsprintf() is a slow
path. It should be enough to use a simple boolean. It might require
a simple memory barrier to serialize the reads and writes of @ptr_key
and the new boolean.
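
I mean something like this (untested sketch; @ptr_key_ready is just an
example name):

static bool ptr_key_ready;

void vsprintf_init_hash_pointer(void)
{
	get_random_bytes(&ptr_key, sizeof(ptr_key));
	/* Make @ptr_key visible before the flag is set. */
	smp_store_release(&ptr_key_ready, true);
}

and in __ptr_to_hashval():

	/* Pairs with smp_store_release() in vsprintf_init_hash_pointer(). */
	if (!smp_load_acquire(&ptr_key_ready))
		return -EAGAIN;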
> + }
> + spin_unlock_irqrestore(&filling, flags);
> }
Best Regards,
Petr