Message-ID: <735d16d4-f727-4cc1-91d8-16155135f550@paulmck-laptop>
Date: Fri, 23 Jan 2026 08:49:53 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joelagnelf@...dia.com>
Cc: linux-kernel@...r.kernel.org, Boqun Feng <boqun.feng@...il.com>,
rcu@...r.kernel.org, Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Uladzislau Rezki <urezki@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang@...ux.dev>
Subject: Re: [PATCH -next v3 2/3] rcu/nocb: Remove dead callback overload handling

On Fri, Jan 23, 2026 at 10:30:00AM -0500, Joel Fernandes wrote:
> On Thu, Jan 22, 2026 at 09:46:58PM -0800, Paul E. McKenney wrote:
> > > Thanks. I will focus on this argument, then. I will resend with a better
> > > patch description in the morning.
> >
> > And my Reviewed-by does assume that change, so go ahead and send the
> > improved commit log with my Reviewed-by appended.
>
> Sure, will do.
>
> > > Hmm, true. There is also the case that any of the kthreads in the
> > > callback's path getting preempted by the hypervisor could be
> > > problematic, to your point about requiring a more principled approach.
> > > I guess we did not want the reader-side vCPU preemption workarounds
> > > either, for a similar reason.
> >
> > Well, principles only get you so far. We need both the principles and the
> > pragmatism to know when to depart from those principles when warranted.
>
> Agreed. Indeed, we have to balance the cost of workarounds, and in the
> case of per-CPU blocked lists, I agree that perhaps the balance tipped
> more in favor of not doing it, pending other more comprehensive fixes.

I would feel better about that balance if we actually had some of these
more comprehensive fixes in mind. ;-)
> > > One trick I found, irrespective of virtualization, is that
> > > rcu_nocb_poll can result in grace periods completing faster. I think
> > > this could help overload situations by retiring callbacks sooner
> > > rather than later. I can experiment with this idea in the future. I
> > > was considering a dynamic trigger to enable polling mode under
> > > overload. I guess there is only one way to find out how well this
> > > will work, but initial testing does look promising. :-D
> >
> > Careful of the effect on power consumption, especially for the world of
> > battery-powered embedded systems! ;-)
>
> Thanks, yes, to be honest I was already considering this as one of the
> potential pitfalls, but thanks for the reminder! FWIW, my inclination
> is that if we are in an overloaded situation, we would not benefit from
> idleness anyway. On the contrary, I think we may hurt idleness and
> power if we are unable to settle the system into a quiet state due to
> slowness in alleviating the callback overload. I will profile CPU
> consumption and maybe run turbostat once I have the prototype.

We could have one CPU flooding and the rest idle, and many other
combinations. And, if I recall correctly, polling can burn extra CPU
and cause extra wakeups even when the system is fully idle. Or has
that changed?

							Thanx, Paul