Message-ID: <1291840941.2909.40.camel@work-vm>
Date: Wed, 08 Dec 2010 12:42:21 -0800
From: john stultz <johnstul@...ibm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Russell King - ARM Linux <linux@....linux.org.uk>,
Mikael Pettersson <mikpe@...uu.se>,
Venkatesh Pallipadi <venki@...gle.com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [BUG] 2.6.37-rc3 massive interactivity regression on ARM
On Wed, 2010-12-08 at 15:44 +0100, Peter Zijlstra wrote:
> On Wed, 2010-12-08 at 14:28 +0000, Russell King - ARM Linux wrote:
> > So, what I'm saying is that if wrapping in 4 seconds is a problem,
> > then maybe we shouldn't be providing sched_clock() at all.
>
> 4 seconds should be perfectly fine; we call it at least every scheduler
> tick (HZ), and NO_HZ will either have to limit the max sleep period or
> provide its own sleep duration (using a more expensive clock) so we can
> recover (much like GTOD already requires).
>
> > Also, if wrapping below 64-bits is also a problem, again, maybe we
> > shouldn't be providing it there either. Eg:
>
> That's mostly due to hysterical raisins and we should just fix that;
> the simplest way is to do something like kernel/sched_clock.c does and
> accumulate sched_clock() deltas into a running u64 value.
>
> Like I said, John Stultz was already looking at doing something like
> that, because there are a number of architectures suffering from this
> same problem and they're all already using part of the clocksource
> infrastructure to implement the sched_clock() interface simply because
> they only have a single hardware resource.
I'm not actively working on it right now, but reworking the sched_clock
code so it's more like the generic timekeeping code is on my list (I'm
looking to see if I can bump it up to the front in the near future).
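
To make the delta-accumulation idea concrete, here is a minimal,
untested sketch (read_counter(), cyc_to_ns(), COUNTER_MASK and
sketch_sched_clock() are made-up names, not the real kernel interfaces,
and a real implementation would also need per-cpu/locking care). The
point is that a masked delta stays correct across a single counter
wrap, so a fast-wrapping counter is fine as long as we are called at
least once per wrap period, e.g. from the scheduler tick:

#include <stdint.h>

#define COUNTER_MASK 0xffffffffULL  /* assume a 32-bit hardware counter */

static uint64_t epoch_ns;   /* accumulated nanoseconds since boot */
static uint64_t last_cyc;   /* counter value at the last update */

/* Placeholders for the real hardware read and mult/shift conversion. */
extern uint64_t read_counter(void);
extern uint64_t cyc_to_ns(uint64_t cyc);

uint64_t sketch_sched_clock(void)
{
	uint64_t cyc = read_counter();
	/* masked subtraction gives the right delta across one wrap */
	uint64_t delta = (cyc - last_cyc) & COUNTER_MASK;

	last_cyc = cyc;
	epoch_ns += cyc_to_ns(delta);
	return epoch_ns;
}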
thanks
-john