Message-ID: <20190702114030.GA5429@localhost>
Date: Tue, 2 Jul 2019 11:40:30 +0000
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org,
Josh Triplett <josh@...htriplett.org>, kernel-team@...roid.com,
Lai Jiangshan <jiangshanlai@...il.com>,
linux-kselftest@...r.kernel.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
rcu@...r.kernel.org,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Shuah Khan <shuah@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [RFC 1/3] rcu: Expedite the rcu quiescent state reporting if
help needed
On Mon, Jul 01, 2019 at 08:47:30PM -0700, Paul E. McKenney wrote:
> On Mon, Jul 01, 2019 at 12:04:13AM -0400, Joel Fernandes (Google) wrote:
> > The t->rcu_read_unlock_special union's need_qs bit can be set by the
> > scheduler tick (in rcu_flavor_sched_clock_irq) to indicate that help is
> > needed from the rcu_read_unlock path. When this help arrives, however, we
> > can do better and speed up the quiescent state reporting, which may be
> > quite urgent if rcu_read_unlock_special::need_qs is set. Make use of this
> > information when deciding whether to do the heavy-weight softirq raising,
> > where possible.
> >
> > Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
>
> Cute thought, but I am going to have to pass on this one. The reason
> is that by the time that ->rcu_read_unlock_special.b.need_qs gets set,
> the grace period is already one full second old. At that point, the
> extra tick of waiting is down in the noise.
>
> Right now, we do the extra work if we really are blocking an expedited
> grace period (the first two lines of the original condition) or we are
> running on a nohz_full CPU (which might never execute a scheduling clock
> tick, thus potentially delaying forever). And expedited grace periods
> are supposed to complete in tens or maybe hundreds of microseconds,
> assuming the RCU readers are being cooperative, which is a whole
> different level of urgent.
Makes sense; I agree the patch may not be that helpful right now. I mixed up
the different levels of urgency. No problem dropping it.
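
(For reference, and mostly to convince myself: going from memory of the same
tree, so the exact fields and condition may differ a bit, the tick path only
sets that bit once the grace period is already about a second old, roughly:

	/* Sketch from memory of rcu_flavor_sched_clock_irq(); details may differ. */
	if (t->rcu_read_lock_nesting > 0 &&
	    __this_cpu_read(rcu_data.core_needs_qs) &&
	    __this_cpu_read(rcu_data.cpu_no_qs.b.norm) &&
	    !t->rcu_read_unlock_special.b.need_qs &&
	    time_after(jiffies, rcu_state.gp_start + HZ))
		t->rcu_read_unlock_special.b.need_qs = true;

so by the time rcu_read_unlock() sees need_qs, the expedited level of urgency
is long gone, as you said.)
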
>
> Nevertheless, thank you for looking into this!
My pleasure! Will keep them coming.
- Joel
>
> Thanx, Paul
>
> > ---
> > kernel/rcu/tree_plugin.h | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index c588ef98efd3..bff6410fac06 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -622,7 +622,8 @@ static void rcu_read_unlock_special(struct task_struct *t)
> >  		t->rcu_read_unlock_special.b.exp_hint = false;
> >  		exp = (t->rcu_blocked_node && t->rcu_blocked_node->exp_tasks) ||
> >  		      (rdp->grpmask & rnp->expmask) ||
> > -		      tick_nohz_full_cpu(rdp->cpu);
> > +		      tick_nohz_full_cpu(rdp->cpu) ||
> > +		      t->rcu_read_unlock_special.b.need_qs;
> >  		// Need to defer quiescent state until everything is enabled.
> >  		if (irqs_were_disabled && use_softirq &&
> >  		    (in_interrupt() ||
> > --
> > 2.22.0.410.gd8fdbe21b5-goog
> >
>