Message-ID: <20140114235015.GZ10038@linux.vnet.ibm.com>
Date: Tue, 14 Jan 2014 15:50:15 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, dhowells@...hat.com,
edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
sbw@....edu
Subject: Re: [PATCH tip/core/timers 1/3] timers: Reduce __run_timers() latency for empty list
On Tue, Jan 14, 2014 at 07:48:28PM +0100, Oleg Nesterov wrote:
> On 01/13, Paul E. McKenney wrote:
> >
> > The __run_timers() function currently steps through the list one jiffy at
> > a time in order to update the timer wheel. However, if the timer wheel
> > is empty, no adjustment is needed other than updating ->timer_jiffies.
>
> Yes, but ->active_timers == 0 doesn't necessarily mean "empty"; ->active_timers
> only counts the non-deferrable timers, doesn't it?
Right you are! Color me slow and stupid...
A separate counter that also includes the deferrable timers is clearly required.
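Roughly what I have in mind (completely untested sketch; the ->all_timers
name and the exact hook points are illustrative):

	/* In struct tvec_base, next to ->active_timers: */
	unsigned long all_timers;	/* All timers, deferrable ones included. */

	/* In internal_add_timer(), after the existing ->active_timers
	 * bookkeeping, count every enqueued timer: */
	base->all_timers++;

	/* With the matching decrement wherever a timer is detached from
	 * the wheel, e.g. in detach_expired_timer() and detach_if_pending(): */
	base->all_timers--;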
> > In this case, which is likely to be common for NO_HZ_FULL kernels, the
> > kernel currently incurs a large latency for no good reason. This commit
> > therefore short-circuits this case.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> > ---
> > kernel/timer.c | 15 +++++++++++++++
> > 1 file changed, 15 insertions(+)
> >
> > diff --git a/kernel/timer.c b/kernel/timer.c
> > index 6582b82fa966..21849275828f 100644
> > --- a/kernel/timer.c
> > +++ b/kernel/timer.c
> > @@ -337,6 +337,17 @@ void set_timer_slack(struct timer_list *timer, int slack_hz)
> > }
> > EXPORT_SYMBOL_GPL(set_timer_slack);
> >
> > +static bool catchup_timer_jiffies(struct tvec_base *base)
> > +{
> > +#ifdef CONFIG_NO_HZ_FULL
> > + if (!base->active_timers) {
> > + base->timer_jiffies = jiffies;
> > + return true;
> > + }
> > +#endif /* #ifdef CONFIG_NO_HZ_FULL */
> > + return false;
> > +}
> > +
> > static void
> > __internal_add_timer(struct tvec_base *base, struct timer_list *timer)
> > {
> > @@ -1146,6 +1157,10 @@ static inline void __run_timers(struct tvec_base *base)
> > struct timer_list *timer;
> >
> > spin_lock_irq(&base->lock);
>
> Do we really need to take base->lock before catchup_timer_jiffies() ?
> ->timer_jiffies can only be changed by us, and it seems that we do
> not care if we race with base->active_timers++.
Given that this lock should almost always be acquired by the current
CPU, and thus be uncontended and cache-hot, the penalty for acquiring it
should be low. After all, we were acquiring it prior to this patch as
many times as we are after this patch, right?
> > + if (catchup_timer_jiffies(base)) {
> > + spin_unlock_irq(&base->lock);
> > + return;
>
> This is what I can't understand... Doesn't this mean that, unless this
> base has a non-deferrable timer, we can never run the pending deferrable
> timers even if the system/cpu is "busy"?
It does, and that is a bug in my patch. Good catch!
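With a counter like ->all_timers above, the check could test it instead,
so that a base holding only deferrable timers is no longer skipped
(again untested, and whether the CONFIG_NO_HZ_FULL guard should remain
is a separate question):

	static bool catchup_timer_jiffies(struct tvec_base *base)
	{
		if (!base->all_timers) {
			base->timer_jiffies = jiffies;
			return true;
		}
		return false;
	}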
Thanx, Paul