Message-ID: <Pine.LNX.4.58.0801091924150.5513@gandalf.stny.rr.com>
Date: Wed, 9 Jan 2008 19:25:00 -0500 (EST)
From: Steven Rostedt <rostedt@...dmis.org>
To: john stultz <johnstul@...ibm.com>
cc: LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Christoph Hellwig <hch@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Gregory Haskins <ghaskins@...ell.com>,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
Thomas Gleixner <tglx@...utronix.de>,
Tim Bird <tim.bird@...sony.com>,
Sam Ravnborg <sam@...nborg.org>,
"Frank Ch. Eigler" <fche@...hat.com>,
Steven Rostedt <srostedt@...hat.com>
Subject: Re: [RFC PATCH 13/22 -v2] handle accurate time keeping over long
delays
On Wed, 9 Jan 2008, john stultz wrote:
> > Index: linux-compile-i386.git/kernel/time/timekeeping.c
> > ===================================================================
> > --- linux-compile-i386.git.orig/kernel/time/timekeeping.c 2008-01-09 14:07:34.000000000 -0500
> > +++ linux-compile-i386.git/kernel/time/timekeeping.c 2008-01-09 15:17:31.000000000 -0500
> > @@ -448,27 +449,29 @@ static void clocksource_adjust(s64 offse
> > */
> > void update_wall_time(void)
> > {
> > - cycle_t offset;
> > + cycle_t cycle_now, offset;
> >
> > /* Make sure we're fully resumed: */
> > if (unlikely(timekeeping_suspended))
> > return;
> >
> > #ifdef CONFIG_GENERIC_TIME
> > - offset = (clocksource_read(clock) - clock->cycle_last) & clock->mask;
> > + cycle_now = clocksource_read(clock);
> > #else
> > - offset = clock->cycle_interval;
> > + cycle_now = clock->cycle_last + clock->cycle_interval;
> > #endif
> > + offset = (cycle_now - clock->cycle_last) & clock->mask;
>
> It seems this offset addition was to merge against the colliding
> xtime_cache changes in mainline. However, I don't think it's quite right,
> and it might be causing incorrect time() or vtime() results if NO_HZ is
> enabled.
Yeah, this one had its share of clashes during its life in the RT kernel.
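
For reference, here is a minimal user-space sketch of what the masked-offset
change above is doing (the struct and field names just mirror the patch; this
is an illustration under those assumptions, not the kernel code): the delta
from cycle_last is masked to the counter width so a wrap past cycle_last still
yields the right number of elapsed cycles, and the result is folded into
cycle_accumulated instead of living only in a transient local 'offset'.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t cycle_t;

struct fake_clock {
	cycle_t cycle_last;        /* counter value at last accumulation */
	cycle_t cycle_accumulated; /* cycles not yet folded into xtime */
	cycle_t mask;              /* e.g. 0xffffffff for a 32-bit counter */
};

static void accumulate(struct fake_clock *c, cycle_t cycle_now)
{
	/* The mask handles the counter wrapping past cycle_last. */
	cycle_t offset = (cycle_now - c->cycle_last) & c->mask;

	c->cycle_last = cycle_now;
	c->cycle_accumulated += offset;
}

int main(void)
{
	struct fake_clock c = {
		.cycle_last = 0xfffffff0ULL,
		.mask       = 0xffffffffULL,
	};

	/* Counter wrapped from 0xfffffff0 to 0x10: delta is 0x20 cycles. */
	accumulate(&c, 0x10);
	printf("accumulated: %llu cycles\n",
	       (unsigned long long)c.cycle_accumulated);
	return 0;
}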
>
> > + clocksource_accumulate(clock, cycle_now);
> > +
> > clock->xtime_nsec += (s64)xtime.tv_nsec << clock->shift;
> >
> > /* normally this loop will run just once, however in the
> > * case of lost or late ticks, it will accumulate correctly.
> > */
> > - while (offset >= clock->cycle_interval) {
> > + while (clock->cycle_accumulated >= clock->cycle_interval) {
> > /* accumulate one interval */
> > clock->xtime_nsec += clock->xtime_interval;
> > - clock->cycle_last += clock->cycle_interval;
> > - offset -= clock->cycle_interval;
> > + clock->cycle_accumulated -= clock->cycle_interval;
> >
> > if (clock->xtime_nsec >= (u64)NSEC_PER_SEC << clock->shift) {
> > clock->xtime_nsec -= (u64)NSEC_PER_SEC << clock->shift;
> > @@ -482,7 +485,7 @@ void update_wall_time(void)
> > }
> >
> > /* correct the clock when NTP error is too big */
> > - clocksource_adjust(offset);
> > + clocksource_adjust(clock->cycle_accumulated);
>
>
> I suspect the following is needed, but I haven't been able to test it yet.
Thanks, I'll pull it in and start testing it.
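
For anyone following along, a stand-alone sketch of the accumulation loop as
it looks after the patch: whole cycle_intervals are converted to pre-scaled
nanoseconds and drained out of cycle_accumulated, with full seconds carried
into the seconds field, and whatever is left over (less than one interval) is
what clocksource_adjust() now sees. The field names mirror the patch, but this
is only an illustration under those assumptions, not the in-tree code.

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

struct fake_clock {
	uint64_t cycle_accumulated; /* cycles waiting to be folded in */
	uint64_t cycle_interval;    /* cycles per tick */
	uint64_t xtime_interval;    /* ns per tick, pre-shifted by 'shift' */
	uint64_t xtime_nsec;        /* shifted nanoseconds */
	unsigned int shift;
};

struct fake_timespec {
	long long tv_sec;
	long long tv_nsec;
};

/* Drain whole intervals from cycle_accumulated; anything smaller than one
 * interval stays behind, which is why the NTP adjustment is handed
 * cycle_accumulated rather than a local 'offset'. */
static void fold_intervals(struct fake_clock *c, struct fake_timespec *xtime)
{
	while (c->cycle_accumulated >= c->cycle_interval) {
		c->xtime_nsec        += c->xtime_interval;
		c->cycle_accumulated -= c->cycle_interval;

		if (c->xtime_nsec >= (NSEC_PER_SEC << c->shift)) {
			c->xtime_nsec -= (NSEC_PER_SEC << c->shift);
			xtime->tv_sec++;
		}
	}
	xtime->tv_nsec = (long long)(c->xtime_nsec >> c->shift);
}

int main(void)
{
	struct fake_clock c = {
		.cycle_accumulated = 3000000,          /* 3 ticks' worth */
		.cycle_interval    = 1000000,          /* cycles per tick */
		.xtime_interval    = 1000000ULL << 8,  /* 1 ms per tick, shift 8 */
		.shift             = 8,
	};
	struct fake_timespec xtime = { 0, 0 };

	fold_intervals(&c, &xtime);
	/* Expect 3 ms accumulated: tv_sec == 0, tv_nsec == 3000000. */
	printf("tv_sec=%lld tv_nsec=%lld\n", xtime.tv_sec, xtime.tv_nsec);
	return 0;
}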
-- Steve