Message-ID: <alpine.DEB.2.11.1606201541000.5839@nanos>
Date: Mon, 20 Jun 2016 15:56:10 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Eric Dumazet <edumazet@...gle.com>
cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Chris Mason <clm@...com>,
Arjan van de Ven <arjan@...radead.org>, rt@...utronix.de,
Rik van Riel <riel@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
George Spelvin <linux@...encehorizons.net>,
Len Brown <lenb@...nel.org>
Subject: Re: [patch V2 00/20] timer: Refactor the timer wheel
On Fri, 17 Jun 2016, Eric Dumazet wrote:
> To avoid increasing probability of such events we would need to have
> at least 4 ms difference between the RTO timer and delack timer.
>
> Meaning we have to increase both of them and increase P99 latencies of
> RPC workloads.
>
> Maybe a switch to hrtimer would be less risky.
> But I do not know yet if it is doable without big performance penalty.
That will be a big performance issue. So we have the following choices:
1) Increase the wheel size for HZ=1000. Doable, but an utter waste of space, and
it obviously means more pointless work when collecting expired timers.
2) Cut off at 37 hours for HZ=1000. We could make this configurable as a HZ=1000
option, so datacenter folks can use it, while people who don't care and want
better batching for power savings can use the 4 ms granularity.
3) Split the wheel granularities. That would leave the first wheel level with tick
granularity and the next three with 12.5% worst-case slack, and then for the
further-out timers we'd switch to 25%.
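For illustration, the arithmetic behind the 37-hour cutoff and the 12.5%
worst-case slack can be sketched as follows. This is not kernel code; it
assumes 8 levels, 64 buckets per level, and a 3-bit granularity shift
between levels at HZ=1000 (1 ms per tick), which are the parameters the
numbers in this thread imply:

```python
# Illustrative sketch of hierarchical timer wheel arithmetic.
# Assumptions (not taken from this mail): 8 levels, 64 buckets per
# level, each level 8x coarser than the previous, HZ=1000.

LVL_BITS = 6          # 64 buckets per level (assumed)
LVL_CLK_SHIFT = 3     # each level is 8x coarser (assumed)
LEVELS = 8            # number of levels (assumed)
HZ = 1000             # 1 ms per tick

def level_granularity_ticks(lvl):
    """Bucket width of a level, in ticks."""
    return 1 << (LVL_CLK_SHIFT * lvl)

def level_range_ticks(lvl):
    """Highest expiry (exclusive) a level can hold, in ticks."""
    return (1 << LVL_BITS) * level_granularity_ticks(lvl)

for lvl in range(1, LEVELS):
    g = level_granularity_ticks(lvl)
    r = level_range_ticks(lvl)
    # Worst-case relative slack: bucket width divided by the
    # smallest expiry that lands in this level.
    slack = g / level_range_ticks(lvl - 1)
    print(f"level {lvl}: granularity {g} ms, covers < {r} ms, "
          f"slack {slack:.1%}")

cutoff_hours = level_range_ticks(LEVELS - 1) / HZ / 3600
print(f"cutoff: ~{cutoff_hours:.1f} hours")
```

With a 3-bit shift every level yields 8/64 = 12.5% worst-case slack, and the
top level ends at 2^27 ms, i.e. roughly 37 hours; a coarser shift for the
outer levels is what pushes the slack toward 25%.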
Thoughts?
Thanks,
tglx