Message-ID: <alpine.DEB.1.10.0804191546430.4670@asgard>
Date: Sat, 19 Apr 2008 15:49:23 -0700 (PDT)
From: david@...g.hm
To: Thomas Gleixner <tglx@...utronix.de>
cc: David Brownell <david-b@...bell.net>,
linux-pm@...ts.linux-foundation.org,
"Woodruff, Richard" <r-woodruff2@...com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [linux-pm] Higher latency with dynamic tick (need for an io-ondemand governor?)
On Sat, 19 Apr 2008, Thomas Gleixner wrote:
> On Fri, 18 Apr 2008, David Brownell wrote:
>> On Friday 18 April 2008, Woodruff, Richard wrote:
>>> When capturing some traces with dynamic tick we were noticing the
>>> interrupt latency seems to go up a good amount. If you look at the trace
>>> the gpio IRQ is now offset a good amount. The good news, I guess, is
>>> that it's pretty predictable.
>>
>> That is, about 24 usec on this CPU ... an ARM v7, which I'm guessing
>> is an OMAP34xx running fairly fast (order of 4x faster than most ARMs).
>>
>> Similar issues were noted, also using ETM trace, on an ARM920 core [1]
>> from Atmel. There, NO_HZ was observed to add more like 150 usec of
>> per-IRQ overhead, which is enough to make NO_HZ non-viable in some
>> configurations.
>>
>>
>>> I was wondering what thoughts of optimizing this might be.
>>
>> Cutting down the math implied by jiffies updates might help.
>> The 64 bit math for ktime structs isn't cheap; purely by eyeball,
>> that was almost 1/3 the cost of that 24 usec (mostly __do_div64).
>
> Hmm, I have no really good idea how to avoid the div64 in the case of a
> long idle sleep. Any brilliant patches are welcome :)
How long is a 'long idle sleep', and how common are such sleeps? Is it
possibly worth the cost of a test in the hot path to see whether you need
the 64-bit math or can get away with 32-bit math (at least on some
platforms)?
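
To make that concrete, here is a minimal sketch of the kind of test I mean.
It is illustrative only, not the actual tick/jiffies code path:
NSEC_PER_TICK and ticks_elapsed() are made-up names, and a 10 ms tick
period is assumed. The point is just that a short sleep whose nanosecond
delta still fits in 32 bits can take a cheap 32-bit divide, and only the
rare long sleep pays for the 64-bit division (__do_div64 on 32-bit ARM).

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed 10 ms tick period (HZ=100); purely illustrative. */
    #define NSEC_PER_TICK 10000000ULL

    /*
     * Return how many whole tick periods fit in delta_ns.  Short idle
     * sleeps (delta under ~4.29 s) stay on the 32-bit path; only long
     * sleeps fall through to the full 64-bit division.
     */
    static uint64_t ticks_elapsed(uint64_t delta_ns)
    {
            if (delta_ns <= UINT32_MAX) {
                    /* Fast path: everything fits in 32 bits. */
                    return (uint32_t)delta_ns / (uint32_t)NSEC_PER_TICK;
            }
            /* Slow path: rare long sleep, pay for the 64-bit divide. */
            return delta_ns / NSEC_PER_TICK;
    }

    int main(void)
    {
            /* 3 ms of idle -> 0 ticks via the fast path. */
            printf("%llu\n", (unsigned long long)ticks_elapsed(3ULL * 1000 * 1000));
            /* 10 s of idle -> 1000 ticks via the slow path. */
            printf("%llu\n", (unsigned long long)ticks_elapsed(10ULL * 1000 * 1000 * 1000));
            return 0;
    }

On a 32-bit build the slow branch compiles to a call into the 64-bit
divide helper, while the fast path is a plain 32-bit divide (or a much
cheaper library routine on ARM cores without a hardware divider), so the
extra cost on the common path is just the compare against the upper half
of the delta.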
David Lang