Message-ID: <20220406200728.GQ4285@paulmck-ThinkPad-P17-Gen-1>
Date: Wed, 6 Apr 2022 13:07:28 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: "Zhang, Qiang1" <qiang1.zhang@...el.com>
Cc: "frederic@...nel.org" <frederic@...nel.org>,
"rcu@...r.kernel.org" <rcu@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] rcu: Use IRQ_WORK_INIT_HARD() to initialize defer_qs_iw
on PREEMPT_RT kernel
On Wed, Apr 06, 2022 at 04:41:46AM +0000, Zhang, Qiang1 wrote:
>
> On Sun, Apr 03, 2022 at 02:14:40PM +0800, Zqiang wrote:
> > On non-PREEMPT_RT kernels, init_irq_work() makes the defer_qs_iw
> > irq-work execute in interrupt context. However, on PREEMPT_RT
> > kernels, init_irq_work() makes the defer_qs_iw irq-work execute in
> > the RT-FIFO irq_work kthreads. During boot, when
> > CONFIG_RCU_STRICT_GRACE_PERIOD is enabled, a large number of
> > defer_qs_iw irq-work items must be processed by the RT-FIFO irq_work
> > kthreads. These occupy the boot CPU for a long time, other kthreads
> > cannot get the boot CPU, and the boot process hangs. Using
> > IRQ_WORK_INIT_HARD() to initialize the defer_qs_iw irq-work ensures
> > that it always executes in interrupt context, on both PREEMPT_RT and
> > non-PREEMPT_RT kernels.
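
For context, a rough sketch of the difference between the two
initializers (the behavior described here is how the irq_work code in
include/linux/irq_work.h behaves on kernels of roughly this vintage;
rdp and rcu_preempt_deferred_qs_handler are taken from the quoted
patch, and the details are approximate):

	#include <linux/irq_work.h>

	/* Plain initialization: no IRQ_WORK_HARD_IRQ flag.  On PREEMPT_RT
	 * the handler is deferred to the per-CPU SCHED_FIFO irq_work
	 * kthread rather than running from the interrupt itself. */
	init_irq_work(&rdp->defer_qs_iw, rcu_preempt_deferred_qs_handler);

	/* Hard-irq initialization: the IRQ_WORK_HARD_IRQ flag is set, so
	 * the handler runs from the irq_work IPI in interrupt context
	 * even on PREEMPT_RT kernels. */
	rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);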
>
> This is a much better justification of the need for a change, thank you!
>
> > But it looks like I need to clarify a sentence in my previous email.
> >
> > Please note that you were using the debugging RCU_STRICT_GRACE_PERIOD
> > Kconfig option, so this is a potential problem as opposed to an
> > immediate bug. Yes, we must fix bugs, but it is also very important
> > to avoid harming other workloads, which are after all the vast
> > majority of the uses of the Linux kernel.
> >
> > And a major purpose of things like RCU_STRICT_GRACE_PERIOD is to give
> > us advance warning of bugs so that we can fix them properly, without
> > hurting other workloads.
> >
> > So, does this patch guarantee exactly the same performance and
> > scalability as before for !PREEMPT_RT systems? If so, please add an
> > explanation to the commit log.
> >
> > Otherwise, please adjust the code to provide this guarantee.
>
> Thanks, I have adjusted the code and resent v2.
And there have been no objections, so I have queued and pushed it
for testing and further review, thank you!
Thanx, Paul
> Thanks
> Zqiang
>
> >
> > Thanx, Paul
>
> > Signed-off-by: Zqiang <qiang1.zhang@...el.com>
> > ---
> > kernel/rcu/tree_plugin.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 3037c2536e1f..cf7bd28af8ef 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -661,7 +661,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
> > expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
> > // Get scheduler to re-evaluate and call hooks.
> > // If !IRQ_WORK, FQS scan will eventually IPI.
> > - init_irq_work(&rdp->defer_qs_iw, rcu_preempt_deferred_qs_handler);
> > + rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);
> > rdp->defer_qs_iw_pending = true;
> > irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
> > }
> > --
> > 2.25.1
> >
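
The v2 patch Zqiang mentions is not quoted in this message. One way to
meet the requirement that !PREEMPT_RT behavior stay exactly as before
would be to select the initializer at build time; this is only a
hypothetical sketch along the lines of the quoted hunk, not necessarily
what v2 actually does:

	/* Hypothetical sketch: keep the original init_irq_work() path on
	 * !PREEMPT_RT kernels so their behavior is unchanged, and use the
	 * hard-irq variant only where deferral to the irq_work kthread
	 * can stall boot. */
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		rdp->defer_qs_iw = IRQ_WORK_INIT_HARD(rcu_preempt_deferred_qs_handler);
	else
		init_irq_work(&rdp->defer_qs_iw, rcu_preempt_deferred_qs_handler);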