Message-ID: <20200817091633.GL35926@hirez.programming.kicks-ass.net>
Date: Mon, 17 Aug 2020 11:16:33 +0200
From: peterz@...radead.org
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: mingo@...nel.org, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, will@...nel.org, hch@....de,
axboe@...nel.dk, chris@...is-wilson.co.uk, davem@...emloft.net,
kuba@...nel.org, fweisbec@...il.com, oleg@...hat.com
Subject: Re: [RFC][PATCH 1/9] irq_work: Cleanup
On Mon, Aug 17, 2020 at 11:03:25AM +0200, peterz@...radead.org wrote:
> On Thu, Jul 23, 2020 at 09:14:11AM -0700, Paul E. McKenney wrote:
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -1287,8 +1287,6 @@ static int rcu_implicit_dynticks_qs(stru
> > > if (IS_ENABLED(CONFIG_IRQ_WORK) &&
> > > !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
> > > (rnp->ffmask & rdp->grpmask)) {
> > > - init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
> >
> > We are actually better off with the IRQ_WORK_INIT_HARD() here rather
> > than unconditionally at boot.
>
> Ah, but there isn't an init_irq_work() variant that does the HARD thing.
Ah you meant doing:
rdp->rcu_iw = IRQ_WORK_INIT_HARD(rcu_iw_handler)
But then it is non-obvious how that doesn't trample state. I suppose
that rcu_iw_pending thing ensures that... I'll think about it.
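Something like the below then, I suppose -- only a rough sketch, assuming the
IRQ_WORK_INIT_HARD() initializer from this series and that !rcu_iw_pending
really does mean the previous irq_work has completed (untested):

	if (IS_ENABLED(CONFIG_IRQ_WORK) &&
	    !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
	    (rnp->ffmask & rdp->grpmask)) {
		/*
		 * Re-init in place; !rcu_iw_pending should mean the previous
		 * irq_work has run, so no pending/busy state gets trampled.
		 */
		rdp->rcu_iw = IRQ_WORK_INIT_HARD(rcu_iw_handler);
		rdp->rcu_iw_pending = true;
		rdp->rcu_iw_gp_seq = rnp->gp_seq;
		irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
	}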
> > The reason for this is that we get here only if a single grace
> > period extends beyond 10.5 seconds (mainline) or beyond 30 seconds
> > (many distribution kernels). Which almost never happens. And yes,
> > rcutree_prepare_cpu() is also invoked as each CPU comes online,
> > not that this is all that common outside of rcutorture and boot time. ;-)
>
> What do you mean 'also'? AFAICT this is CPU-bringup-only code (initial
> and hotplug). We really don't care about code there. It's the slowest
> possible path we have in the kernel.
>
> > > - atomic_set(&rdp->rcu_iw.flags, IRQ_WORK_HARD_IRQ);
> > > rdp->rcu_iw_pending = true;
> > > rdp->rcu_iw_gp_seq = rnp->gp_seq;
> > > irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
>
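And if the bringup path really is a slow path nobody cares about, the other
option is to keep the one-time init there instead of doing it lazily -- again
just a sketch of where it could live, not a tested patch:

	/* e.g. in rcutree_prepare_cpu(), once per CPU coming online: */
	rdp->rcu_iw_pending = false;
	rdp->rcu_iw = IRQ_WORK_INIT_HARD(rcu_iw_handler);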