Message-ID: <20140115162531.GA11499@redhat.com>
Date: Wed, 15 Jan 2014 17:25:31 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, dhowells@...hat.com,
edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
sbw@....edu
Subject: Re: [PATCH tip/core/timers 1/3] timers: Reduce __run_timers()
latency for empty list
On 01/14, Paul E. McKenney wrote:
>
> On Tue, Jan 14, 2014 at 07:48:28PM +0100, Oleg Nesterov wrote:
> > > __internal_add_timer(struct tvec_base *base, struct timer_list *timer)
> > > {
> > > @@ -1146,6 +1157,10 @@ static inline void __run_timers(struct tvec_base *base)
> > > struct timer_list *timer;
> > >
> > > spin_lock_irq(&base->lock);
> >
> > Do we really need to take base->lock before catchup_timer_jiffies() ?
> > ->timer_jiffies can only be changed by us, and it seems that we do
> > not care if we race with base->active_timers++.
>
> Given that this lock should be almost always acquired by the current
> CPU, the penalty for acquiring it should be low. After all, we were
> acquiring it prior to this patch as many times as we are after this patch,
> right?
Yes. But
if (catchup_timer_jiffies())
return;
looks a bit simpler and can save a couple of insns. I won't argue of course,
this is minor. And you already sent v2, so I'll try to add some comments...
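
For completeness, roughly what I mean (just a sketch from my reading of
the patch, not the actual code: the helper body is my guess, and the
!base->all_timers test stands in for whatever empty-list check v2 uses):

static bool catchup_timer_jiffies(struct tvec_base *base)
{
	/* Empty list: just catch ->timer_jiffies up to now. */
	if (!base->all_timers) {
		base->timer_jiffies = jiffies;
		return true;
	}
	return false;
}

static inline void __run_timers(struct tvec_base *base)
{
	struct timer_list *timer;

	/*
	 * Check before taking base->lock: ->timer_jiffies is only
	 * changed by us, and (as argued above) a race with another
	 * CPU doing base->active_timers++ does not matter here.
	 */
	if (catchup_timer_jiffies(base))
		return;

	spin_lock_irq(&base->lock);
	while (time_after_eq(jiffies, base->timer_jiffies)) {
		/* ... cascade and run expired timers as before ... */
	}
	spin_unlock_irq(&base->lock);
}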
Oleg.