Message-ID: <20180626104447.GG9125@tardis>
Date: Tue, 26 Jun 2018 18:44:47 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, fweisbec@...il.com,
oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0
being offline
On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > + preempt_disable();
> > + for_each_leaf_node_possible_cpu(rnp, cpu) {
> > + if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > + continue;
>
> Create for_each_node_online_cpu() instead? Seems a bit pointless to
> iterate possible mask only to then check it against the online mask.
> Just iterate the online mask directly.
>
> Or better yet, write this as:
>
> preempt_disable();
> cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> if (cpu > rnp->grphi)
> cpu = WORK_CPU_UNBOUND;
> queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> preempt_enable();
>
> Which is what it appears to be doing.
>
Makes sense! Thanks ;-)

Applied this and am running a TREE03 rcutorture test. If all goes well, I
will send the updated patch.
Regards,
Boqun
> > + queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > + rnp->exp_need_flush = true;
> > + break;
> > + }
> > + preempt_enable();
> > + if (!rnp->exp_need_flush) { /* All offline, report QSes. */
> > + queue_work(rcu_par_gp_wq, &rnp->rew.rew_work);
> > + rnp->exp_need_flush = true;
> > + }
> > }
> >
> > /* Wait for workqueue jobs (if any) to complete. */
> > --
> > 2.17.1
> >