Message-ID: <alpine.DEB.2.11.1607221453230.3906@nanos>
Date: Fri, 22 Jul 2016 15:04:02 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Chris Mason <clm@...com>,
Arjan van de Ven <arjan@...radead.org>, rt@...utronix.de,
Rik van Riel <riel@...hat.com>,
George Spelvin <linux@...encehorizons.net>,
Len Brown <lenb@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [patch 4 15/22] timer: Remove slack leftovers
On Fri, 22 Jul 2016, Jason A. Donenfeld wrote:
> Thomas Gleixner <tglx@...utronix.de> writes:
> > We now have implicit batching in the timer wheel. The slack is no longer
> > used. Remove it.
> From a brief look at timer.c, it looked like __mod_timer was rather
> expensive. So, as an optimization, I wanted the "timer_pending(timer)
> && timer->expires == expires" condition to be hit in most cases. I
> accomplished this by doing:
>
> set_timer_slack(timer, HZ / 4);
>
> This ensured that we'd only wind up calling __mod_timer 4 times per
> second, at most.
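
Just to make sure I read that correctly, I assume the pattern looks roughly
like the sketch below. The names and the 5 second timeout are made up for
illustration, they are not taken from your code:

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;

static void my_timer_fn(unsigned long data)
{
        /* expiry work goes here */
}

static void my_timer_init(void)
{
        setup_timer(&my_timer, my_timer_fn, 0);
        /*
         * Allow up to HZ / 4 of slack so that repeated mod_timer()
         * calls resolve to the same rounded expires value.
         */
        set_timer_slack(&my_timer, HZ / 4);
}

static void on_event(void)
{
        /*
         * Called at high frequency. Most of these calls hit the
         * "pending && same expires" early return and never take the
         * expensive requeueing path.
         */
        mod_timer(&my_timer, jiffies + 5 * HZ);
}
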
>
> With the removal of the slack concept, I can no longer do this. I
> haven't reviewed this series in depth, but I'm wondering if you'd
> recommend a different optimization instead. Or, have things been

Well, this really depends on the TIMEOUT value you have. The code now does
implicit batching for larger timeouts by queueing the timers into wheels with
coarse-grained granularity. As long as your new TIMEOUT value ends up in the
same bucket, that's equivalent to the slack thing.
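
To illustrate what I mean by implicit batching, here is a deliberately
simplified model (not the actual wheel code; the 64 buckets per level and
the 8x coarsening per level are illustrative assumptions, not the real
constants):

#include <linux/types.h>

/* Granularity grows with the distance of the expiry from now. */
static unsigned long rough_granularity(unsigned long delta)
{
        unsigned long gran = 1;

        while (delta >= 64 * gran)
                gran *= 8;

        return gran;
}

/*
 * Two expiry values which map to the same coarse bucket are queued
 * together and fire together. That is what replaces the explicit slack.
 */
static bool lands_in_same_bucket(unsigned long now, unsigned long exp_a,
                                 unsigned long exp_b)
{
        unsigned long gran = rough_granularity(exp_a - now);

        return exp_a / gran == exp_b / gran;
}
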
Can you give me a ballpark of your TIMEOUT value?
> reworked so much that calling mod_timer is now always inexpensive?

When you take the slow (queueing) path, it's still expensive: not as bad as
the previous one, but not really cheap either.
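
Schematically the split looks like this; this is not the literal kernel
code, and requeue_into_bucket() is just a stand-in for the real
lock/detach/enqueue internals:

#include <linux/timer.h>

/* stand-in for the real requeueing internals */
extern int requeue_into_bucket(struct timer_list *timer, unsigned long expires);

int mod_timer_sketch(struct timer_list *timer, unsigned long expires)
{
        /* fast path: already queued for exactly that expiry, nothing to do */
        if (timer_pending(timer) && timer->expires == expires)
                return 1;

        /*
         * slow path: lock the base, detach the timer, compute the new
         * bucket and enqueue it there. Cheaper than the old cascading
         * wheel, but not free.
         */
        return requeue_into_bucket(timer, expires);
}
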
Thanks,
tglx