Message-ID: <20140613232208.GA4581@linux.vnet.ibm.com>
Date: Fri, 13 Jun 2014 16:22:08 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH] rcu: Only pin GP kthread when full dynticks is actually used
On Sat, Jun 14, 2014 at 01:13:25AM +0200, Frederic Weisbecker wrote:
> On Fri, Jun 13, 2014 at 01:49:03PM -0700, Paul E. McKenney wrote:
> > On Fri, Jun 13, 2014 at 06:21:32PM +0200, Frederic Weisbecker wrote:
> > > On Fri, Jun 13, 2014 at 09:16:30AM -0700, Paul E. McKenney wrote:
> > > > > Is it because we have dynticks CPUs staying in the kernel too long without
> > > > > taking any quiescent states? Are we perhaps missing some rcu_user_enter()
> > > > > calls or something similar?
> > > >
> > > > Sort of the former, but combined with the fact that in-kernel CPUs still
> > > > need scheduling-clock interrupts for RCU to make progress. I could
> > > > move this to RCU's context-switch hook, but that could be very bad for
> > > > workloads that do lots of context switching.
> > >
> > > Or I can restart the tick if the CPU stays in the kernel for too long without
> > > a tick. I think that's what we were doing before, but we removed it because we
> > > never implemented it correctly (we sent a scheduler IPI that did nothing...)
> >
> > That would work for me!
> >
> > Just out of curiosity, what would you use to determine that the CPU
> > had been in the kernel too long?
>
> I'd rather deduce that when grace-period completion takes longer than some delay.
> I think that's the requirement for calling rcu_kick_nohz_cpu()?
OK, that does work for me. ;-)
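
For concreteness, something like the following completely untested
sketch is what I have in mind.  The gp_too_long_jiffies threshold is
made up for illustration, but rcu_kick_nohz_cpu() and
tick_nohz_full_cpu() are the existing helpers:

	/*
	 * Return true if the current grace period has extended well past
	 * the point where we would normally expect each CPU to have
	 * reported a quiescent state.
	 */
	static bool rcu_gp_too_long(struct rcu_state *rsp)
	{
		return ULONG_CMP_GE(jiffies,
				    rsp->gp_start + gp_too_long_jiffies);
	}

	/*
	 * If a nohz_full CPU appears to be holding up the grace period,
	 * kick it with an IPI.  The handler on that CPU would then
	 * restart the tick so that the scheduling-clock interrupt can
	 * once again note quiescent states.
	 */
	static void rcu_check_nohz_cpu(struct rcu_state *rsp, int cpu)
	{
		if (tick_nohz_full_cpu(cpu) && rcu_gp_too_long(rsp))
			rcu_kick_nohz_cpu(cpu);
	}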
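
And for the $SUBJECT patch itself, I would expect the binding to reduce
to something like this rough sketch, with the GP kthread affined to the
timekeeping CPU only when full dynticks is actually in use (I am
hand-waving how the timekeeping CPU is identified, using
tick_do_timer_cpu as a stand-in):

	/*
	 * Bind the grace-period kthread to the timekeeping CPU, but only
	 * if full dynticks is actually in use.  Otherwise, leave the
	 * kthread free to be migrated as the scheduler sees fit.
	 */
	static void rcu_bind_gp_kthread(void)
	{
		if (!tick_nohz_full_enabled())
			return;
		set_cpus_allowed_ptr(current, cpumask_of(tick_do_timer_cpu));
	}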
Thanx, Paul