Message-ID: <20210311232357.GA29548@lothringen>
Date: Fri, 12 Mar 2021 00:23:57 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: paulmck@...nel.org
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 07/10] rcu: Prevent dyntick-idle until
ksoftirqd has been spawned

On Wed, Mar 03, 2021 at 04:00:16PM -0800, paulmck@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...nel.org>
>
> After interrupts have been enabled at boot but before some random point
> in early_initcall() processing, softirq processing is unreliable.
> If softirq sees a need to push softirq-handler invocation to ksoftirqd
> during this time, then those handlers can be delayed until the ksoftirqd
> kthreads have been spawned, which happens at some random point in the
> early_initcall() processing. In many cases, this delay is just fine.
> However, if the boot sequence blocks waiting for a wakeup from a softirq
> handler, this delay will result in a silent-hang deadlock.
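
For concreteness, a minimal sketch of the sort of boot-time pattern that can
hang this way (early_boot_wait(), boot_rcu_cb(), boot_rh, and boot_done are
hypothetical names; call_rcu(), complete(), and wait_for_completion() are the
usual kernel APIs):

	static DECLARE_COMPLETION(boot_done);
	static struct rcu_head boot_rh;

	/* Runs from RCU_SOFTIRQ once a grace period has elapsed. */
	static void boot_rcu_cb(struct rcu_head *rhp)
	{
		complete(&boot_done);
	}

	static void __init early_boot_wait(void)
	{
		call_rcu(&boot_rh, boot_rcu_cb);
		/*
		 * If RCU_SOFTIRQ was pushed to a not-yet-spawned ksoftirqd
		 * kthread and the tick is stopped, this never completes.
		 */
		wait_for_completion(&boot_done);
	}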
>
> This commit therefore prevents these hangs by ensuring that the tick
> stays active until after the ksoftirqd kthreads have been spawned.
> This change causes the tick to eventually drain the backlog of delayed
> softirq handlers, breaking this deadlock.
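
The drain happens because each tick interrupt, on its exit path, runs pending
softirqs directly rather than waiting for ksoftirqd. Roughly, from the generic
interrupt-exit logic in kernel/softirq.c (simplified, other steps elided):

	static inline void __irq_exit_rcu(void)
	{
		...
		preempt_count_sub(HARDIRQ_OFFSET);
		if (!in_interrupt() && local_softirq_pending())
			invoke_softirq();
		...
	}

So as long as the tick keeps firing, deferred handlers are eventually invoked
even with no ksoftirqd kthread to wake.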
>
> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> ---
> kernel/rcu/tree_plugin.h | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 2d60377..36212de 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -1255,6 +1255,11 @@ static void rcu_prepare_kthreads(int cpu)
> */
> int rcu_needs_cpu(u64 basemono, u64 *nextevt)
> {
> + /* Through early_initcall(), need tick for softirq handlers. */
> + if (!IS_ENABLED(CONFIG_HZ_PERIODIC) && !this_cpu_ksoftirqd()) {
> + *nextevt = 1;
> + return 1;
> + }
> *nextevt = KTIME_MAX;
> return !rcu_segcblist_empty(&this_cpu_ptr(&rcu_data)->cblist) &&
> !rcu_segcblist_is_offloaded(&this_cpu_ptr(&rcu_data)->cblist);
> @@ -1350,6 +1355,12 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt)
>
> lockdep_assert_irqs_disabled();
>
> + /* Through early_initcall(), need tick for softirq handlers. */
> + if (!IS_ENABLED(CONFIG_HZ_PERIODIC) && !this_cpu_ksoftirqd()) {
> + *nextevt = 1;
> + return 1;
> + }
> +
> /* If no non-offloaded callbacks, RCU doesn't need the CPU. */
> if (rcu_segcblist_empty(&rdp->cblist) ||
> rcu_segcblist_is_offloaded(&this_cpu_ptr(&rcu_data)->cblist)) {
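
For context, the helper tested above reads the per-CPU ksoftirqd task pointer,
which stays NULL until the kthreads are spawned at early_initcall() time
(excerpted from include/linux/interrupt.h and kernel/softirq.c; exact form may
differ by kernel version):

	DECLARE_PER_CPU(struct task_struct *, ksoftirqd);

	static inline struct task_struct *this_cpu_ksoftirqd(void)
	{
		return this_cpu_read(ksoftirqd);
	}

	/* kernel/softirq.c */
	early_initcall(spawn_ksoftirqd);

Returning 1 with *nextevt = 1 thus keeps the tick alive whenever those
kthreads do not yet exist (unless the kernel runs a periodic tick anyway).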
I suspect rcutiny should be concerned as well?

In fact, this patch doesn't look necessary, because can_stop_idle_tick()
refuses to stop the tick when softirqs are pending.
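
For reference, a simplified sketch of that check in kernel/time/tick-sched.c
(the ratelimited warning and the other bail-out conditions are elided):

	static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
	{
		...
		if (unlikely(local_softirq_pending())) {
			/* Softirq work is pending: keep the tick running. */
			...
			return false;
		}
		...
	}
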
Thanks.