Message-ID: <1f1b08da1003081925p61a810e4v96be56640287d61@mail.gmail.com>
Date: Mon, 8 Mar 2010 19:25:07 -0800
From: john stultz <johnstul@...ibm.com>
To: Alexander Gordeev <lasaine@....cs.msu.su>
Cc: linux-kernel@...r.kernel.org, linuxpps@...enneenne.com,
"Nikita V. Youshchenko" <yoush@...msu.su>, stas@....cs.msu.su,
Rodolfo Giometti <giometti@...eenne.com>
Subject: Re: [PATCHv2 0/6] pps: time synchronization over LPT

On Wed, Feb 24, 2010 at 4:28 AM, Alexander Gordeev
<lasaine@....cs.msu.su> wrote:
> This patchset is tested against the vanilla 2.6.32.9 kernel, but we are
> actually using it on the 2.6.31.12-rt20 rt-preempt kernel most of the time.
> Also there is a version which should be applied on top of LinuxPPS out
> of tree patches (i.e. all clients and low-level irq timestamps stuff).
> Those who are interested in other versions of the patchset can find
> them in my git repository:
> http://lvk.cs.msu.su/~lasaine/timesync/linux-2.6-timesync.git
>
> There is one problem however: hardpps() works badly when used on top
> of 2.6.33-rc* with CONFIG_NO_HZ enabled. The reason for this is commit
> a092ff0f90cae22b2ac8028ecd2c6f6c1a9e4601. Without it hardpps() is able
> to sync to 1us precision in about 10 seconds. With it

Uh. Not sure I see right off why the logarithmic time accumulation
would give you trouble. It's actually there to try to fix a couple of
NTP issues that cropped up when the accumulation interval was pushed
out to 2HZ with CONFIG_NO_HZ.

Do you have any extra insight here as to what's going on with your
code? The only thing I can guess is that second_overflow() is now
happening closer to the actual overflow, but maybe less regularly? But
again, I'm not sure how this would be drastically different than
before with the 2HZ accumulation period.
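
Just so we're talking about the same thing, here's a rough,
illustrative sketch of what the logarithmic accumulation does (the
names accumulate_chunk, xtime_nsec, and the tick length below are made
up for the example; this is not the actual timekeeping.c code from
that commit): instead of consuming one tick-length interval per loop
iteration, the accumulation loop consumes 2^shift intervals at a time
and shrinks the chunk as the remaining offset gets smaller, with the
second-boundary handling (where the kernel would call
second_overflow()) running inside the bigger step.

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

static uint64_t xtime_nsec;     /* accumulated nanoseconds (stand-in for xtime) */
static int seconds_crossed;     /* where the kernel would call second_overflow() */

/* Consume 2^shift tick intervals from "offset" in a single step. */
static uint64_t accumulate_chunk(uint64_t offset, uint64_t interval, int shift)
{
	if (offset < (interval << shift))
		return offset;          /* chunk doesn't fit; caller lowers shift */

	offset -= interval << shift;
	xtime_nsec += interval << shift;   /* pretend the interval is in ns */

	while (xtime_nsec >= NSEC_PER_SEC) {
		xtime_nsec -= NSEC_PER_SEC;
		seconds_crossed++;      /* second_overflow() would run here */
	}
	return offset;
}

int main(void)
{
	uint64_t interval = 1000000;    /* one "tick" worth of ns, made up */
	uint64_t offset = 5 * interval; /* e.g. woke up 5 ticks late (NO_HZ) */
	int shift = 0;

	/* Start with the largest power-of-two chunk that fits... */
	while ((interval << (shift + 1)) <= offset)
		shift++;

	/* ...then accumulate, shrinking the chunk as the remainder shrinks. */
	while (offset >= interval) {
		offset = accumulate_chunk(offset, interval, shift);
		if (shift > 0)
			shift--;
	}

	printf("leftover: %llu ns, second boundaries crossed: %d\n",
	       (unsigned long long)offset, seconds_crossed);
	return 0;
}

The total amount accumulated per call should come out the same as
before; only the granularity of the inner loop changed, which is why
I'm surprised it hurts hardpps().
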
thanks
-john