Message-Id: <20180510131546.GN26088@linux.vnet.ibm.com>
Date: Thu, 10 May 2018 06:15:46 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel.opensrc@...il.com,
torvalds@...ux-foundation.org, npiggin@...il.com
Subject: Re: [tip/core/rcu, 05/21] rcu: Make rcu_gp_cleanup() more accurately
predict need for new GP

On Thu, May 10, 2018 at 12:21:33AM -0700, Joel Fernandes wrote:
> Hi Paul,
>
> On Sun, Apr 22, 2018 at 08:03:28PM -0700, Paul E. McKenney wrote:
> > Currently, rcu_gp_cleanup() scans the rcu_node tree in order to reset
> > state to reflect the end of the grace period. It also checks to see
> > whether a new grace period is needed, but in a number of cases, rather
> > than directly cause the new grace period to be immediately started, it
> > instead leaves the grace-period-needed state where various fail-safes
> > can find it. This works fine, but results in higher contention on the
> > root rcu_node structure's ->lock, which is undesirable, and contention
> > on that lock has recently become noticeable.
> >
> > This commit therefore makes rcu_gp_cleanup() immediately start a new
> > grace period if there is any need for one.
> >
> > It is quite possible that it will later be necessary to throttle the
> > grace-period rate, but that can be dealt with when and if.
> >
> > Reported-by: Nicholas Piggin <npiggin@...il.com>
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > ---
> > kernel/rcu/tree.c | 16 ++++++++++------
> > kernel/rcu/tree.h | 1 -
> > kernel/rcu/tree_plugin.h | 17 -----------------
> > 3 files changed, 10 insertions(+), 24 deletions(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 497f139056c7..afc5e32f0da4 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1763,14 +1763,14 @@ rcu_start_future_gp(struct rcu_node *rnp, struct rcu_data *rdp,
> > * Clean up any old requests for the just-ended grace period. Also return
> > * whether any additional grace periods have been requested.
> > */
> > -static int rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
> > +static bool rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
> > {
> > int c = rnp->completed;
> > - int needmore;
> > + bool needmore;
> > struct rcu_data *rdp = this_cpu_ptr(rsp->rda);
> >
> > need_future_gp_element(rnp, c) = 0;
> > - needmore = need_future_gp_element(rnp, c + 1);
> > + needmore = need_any_future_gp(rnp);
> > trace_rcu_future_gp(rnp, rdp, c,
> > needmore ? TPS("CleanupMore") : TPS("Cleanup"));
> > return needmore;
> > @@ -2113,7 +2113,6 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
> > {
> > unsigned long gp_duration;
> > bool needgp = false;
> > - int nocb = 0;
> > struct rcu_data *rdp;
> > struct rcu_node *rnp = rcu_get_root(rsp);
> > struct swait_queue_head *sq;
> > @@ -2152,7 +2151,7 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
> > if (rnp == rdp->mynode)
> > needgp = __note_gp_changes(rsp, rnp, rdp) || needgp;
> > /* smp_mb() provided by prior unlock-lock pair. */
> > - nocb += rcu_future_gp_cleanup(rsp, rnp);
> > + needgp = rcu_future_gp_cleanup(rsp, rnp) || needgp;
> > sq = rcu_nocb_gp_get(rnp);
> > raw_spin_unlock_irq_rcu_node(rnp);
> > rcu_nocb_gp_cleanup(sq);
> > @@ -2162,13 +2161,18 @@ static void rcu_gp_cleanup(struct rcu_state *rsp)
> > }
> > rnp = rcu_get_root(rsp);
> > raw_spin_lock_irq_rcu_node(rnp); /* Order GP before ->completed update. */
> > - rcu_nocb_gp_set(rnp, nocb);
> >
> > /* Declare grace period done. */
> > WRITE_ONCE(rsp->completed, rsp->gpnum);
> > trace_rcu_grace_period(rsp->name, rsp->completed, TPS("end"));
> > rsp->gp_state = RCU_GP_IDLE;
> > + /* Check for GP requests since above loop. */
> > rdp = this_cpu_ptr(rsp->rda);
> > + if (need_any_future_gp(rnp)) {
> > + trace_rcu_future_gp(rnp, rdp, rsp->completed - 1,
> > + TPS("CleanupMore"));
> > + needgp = true;
>
> Patch makes sense to me.
>
> I didn't get the "rsp->completed - 1" bit in the call to trace_rcu_future_gp.
> The grace period that just completed is in rsp->completed. The future one
> should be completed + 1. What is the meaning of the third argument 'c'
> to the trace event?

The thought was that the grace period must have been requested while
rsp->completed was one less than it is now.

In the current code, it uses rnp->gp_seq_needed, which is instead the
grace period that is being requested.
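
As a rough sketch of that timeline (values illustrative, not part of
the patch itself):

	/* While GP N+1 runs, rsp->completed == N and a request comes in. */
	WRITE_ONCE(rsp->completed, rsp->gpnum); /* ->completed is now N+1. */
	if (need_any_future_gp(rnp))
		trace_rcu_future_gp(rnp, rdp, rsp->completed - 1, /* == N */
				    TPS("CleanupMore"));

So the traced value is the one that ->completed had at the time the
request must have been recorded.
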
> Also in rcu_future_gp_cleanup, we call:
> trace_rcu_future_gp(rnp, rdp, c,
> needmore ? TPS("CleanupMore") : TPS("Cleanup"));
> For this case, in the final trace event record, rnp->completed and c will be
> the same, since c is set to rnp->completed before calling
> trace_rcu_future_gp. I was thinking they should be different; do you expect
> them to be the same?

Hmmm... That does look a bit inconsistent. And it currently uses
rnp->gp_seq instead of rnp->gp_seq_needed despite having the same
"CleanupMore" name.

Looks like a review of the calls to trace_rcu_this_gp() is in order.
Or did you have suggestions for name/gp associations for this trace
message type?

							Thanx, Paul