Message-ID: <YzMnb8NWlCpqsqSG@alley>
Date: Tue, 27 Sep 2022 18:40:15 +0200
From: Petr Mladek <pmladek@...e.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: linux-kernel@...r.kernel.org, Theodore Ts'o <tytso@....edu>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
"Jason A . Donenfeld " <Jason@...c4.com>,
John Ogness <john.ogness@...utronix.de>,
Mike Galbraith <efault@....de>,
Peter Zijlstra <peterz@...radead.org>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v4 2/2] lib/vsprintf: Initialize vsprintf's pointer hash
once the random core is ready.
On Tue 2022-09-27 12:49:12, Sebastian Andrzej Siewior wrote:
> The printk code invokes vsnprintf() in order to compute the complete
> string before adding it into its buffer. This happens in an IRQ-off
> region, which leads to a warning on PREEMPT_RT in the random code if the
> format string contains a %p for pointer printing. This happens because
> the random core acquires locks which become sleeping locks on PREEMPT_RT
> and which must not be acquired with interrupts or preemption disabled.
> By default, pointers are hashed, which requires a random value on the
> first invocation (either by printk or whichever other user comes first).
>
> One could argue that there is no need for printk to disable interrupts
> during the vsprintf() invocation, which would fix the just mentioned
> problem. However, printk itself can be invoked in a context with
> disabled interrupts, which would lead to the very same problem.
>
> Move the initialization of ptr_key into a worker and schedule it from
> subsys_initcall(). This happens early, but after the workqueue subsystem
> is ready. Use get_random_bytes() to retrieve the random value if the RNG
> core is ready; otherwise re-queue the work in two seconds and try again.
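JFYI, the subsys_initcall() part is not visible in the quoted hunk
below. It is basically a trivial initcall that runs the workfn once,
something like this (the function name is from memory, so take it as
a sketch rather than the exact code from the patch):

	static int __init vsprintf_init_hashval(void)
	{
		/* Fill ptr_key now, or re-queue the work until the RNG is ready. */
		fill_ptr_key_workfn(NULL);
		return 0;
	}
	subsys_initcall(vsprintf_init_hashval)
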
Another advantage is that it removes a nested lock from the printk()
code path. A deadlock was partly prevented by the trylock, but there
was still a risk of one when printk() was called under base_crng.lock.
> Reported-by: Mike Galbraith <efault@....de>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> Acked-by: Jason A. Donenfeld <Jason@...c4.com>
> ---
> lib/vsprintf.c | 44 ++++++++++++++++++++++++++------------------
> 1 file changed, 26 insertions(+), 18 deletions(-)
>
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index bce63cbf23779..44b39ba56b796 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -751,31 +751,39 @@ static int __init debug_boot_weak_hash_enable(char *str)
> early_param("debug_boot_weak_hash", debug_boot_weak_hash_enable);
>
> static bool filled_random_ptr_key __read_mostly;
> +static siphash_key_t ptr_key __read_mostly;
> +static void fill_ptr_key_workfn(struct work_struct *work);
> +static DECLARE_DELAYED_WORK(fill_ptr_key_work, fill_ptr_key_workfn);
> +
> +static void fill_ptr_key_workfn(struct work_struct *work)
> +{
> + if (!rng_is_initialized()) {
> + queue_delayed_work(system_unbound_wq, &fill_ptr_key_work, HZ * 2);
Is there any particular reason not to use system_wq, please?
The unbound workqueue should be used only in special situations:
--- cut Documentation/core-api/workqueue.rst ---
``WQ_UNBOUND``
Work items queued to an unbound wq are served by the special
worker-pools which host workers which are not bound to any
specific CPU. This makes the wq behave as a simple execution
context provider without concurrency management. The unbound
worker-pools try to start execution of work items as soon as
possible. Unbound wq sacrifices locality but is useful for
the following cases.
* Wide fluctuation in the concurrency level requirement is
expected and using bound wq may end up creating large number
of mostly unused workers across different CPUs as the issuer
hops through different CPUs.
* Long running CPU intensive workloads which can be better
managed by the system scheduler.
--- cut ---
The thing is that unbound workqueues always process work items
in parallel. They create a new worker when there is no idle one.
In contrast, system_wq serializes the work items. It uses
another worker only when a busy one goes to sleep.
This is perfectly fine, and even preferred, for work items that
do not need much CPU time.
I have tried it and system_wq works well here. It actually
even initializes the hash earlier. But that is only by chance
because it happens on the 2nd attempt instead of the 3rd one.
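To be concrete, the proposed change is just this one line, either
explicitly:

	queue_delayed_work(system_wq, &fill_ptr_key_work, HZ * 2);

or via the shortcut that queues on system_wq anyway:

	schedule_delayed_work(&fill_ptr_key_work, HZ * 2);
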
> + return;
> + }
> +
> + get_random_bytes(&ptr_key, sizeof(ptr_key));
> +
> + /* Pairs with smp_rmb() before reading ptr_key. */
> + smp_wmb();
> + WRITE_ONCE(filled_random_ptr_key, true);
> +}
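For anyone following along, the reader side in the hashing path is
supposed to check the flag first and only then read ptr_key, roughly
like this (a simplified sketch; error handling and the 32-bit case are
left out):

	/* Reader side, simplified: bail out if the key is not filled yet. */
	if (!READ_ONCE(filled_random_ptr_key))
		return -EAGAIN;		/* errno here is only illustrative */

	/* Pairs with smp_wmb() after writing ptr_key. */
	smp_rmb();

	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
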
With "system_wq":
Reviewed-by: Petr Mladek <pmladek@...e.com>
I could replace "system_unbound_wq" with "system_wq" when
pushing. Is anybody against it, please?
I am sorry that I missed this when looking at the previous
version.
Best Regards,
Petr