Message-ID: <20120720073739.GA2166@netboy.at.omicron.at>
Date: Fri, 20 Jul 2012 09:37:40 +0200
From: Richard Cochran <richardcochran@...il.com>
To: Ben Hutchings <bhutchings@...arflare.com>
Cc: Stuart Hodgson <smhodgson@...arflare.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
linux-net-drivers@...arflare.com,
Andrew Jackson <ajackson@...arflare.com>
Subject: Re: [PATCH net-next 4/7] sfc: Add support for IEEE-1588 PTP
On Thu, Jul 19, 2012 at 04:50:51PM +0100, Ben Hutchings wrote:
> On Thu, 2012-07-19 at 16:29 +0100, Stuart Hodgson wrote:
> > On 19/07/12 15:25, Richard Cochran wrote:
> [...]
> > > I am trying to purge the whole SYS thing (only blackfin is left)
> > > because there is a much better way to go about this, namely
> > > synchronizing the system time to the PHC time via an internal PPS
> > > signal.
> >
> > This may be possible in the future, but it leads us to another
> > problem: the PPS event that the PHC subsystem delivers to the
> > PPS subsystem is stamped with the current system time. That may
> > be fine for a PPS signal generated from an interrupt, but not
> > when the internal PPS event has implicit jitter from the
> > handler/event queue that we have in the driver.
> [...]
>
> We can certainly take a timestamp in the hard interrupt handler; in fact
> that's what I originally expected we would do since we have a separate
> MSI-X vector for PTP. But even hard interrupt handling can be subject
> to substantial jitter.
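For what it's worth, signalling an internal PPS from the hard
interrupt handler would look roughly like the sketch below. This is
a minimal sketch, not the sfc code: the foo_* names are hypothetical,
and it assumes the driver registered its clock with .pps = 1 so that
ptp_clock_event() stamps PTP_CLOCK_PPS events with system time on the
way into the PPS subsystem.

/*
 * Minimal sketch, not sfc code: a PHC driver that advertises an
 * internal PPS (.pps = 1 in its ptp_clock_info) can signal each
 * pulse from its hard interrupt handler.  PTP_CLOCK_PPS events are
 * stamped with system time inside ptp_clock_event(), so raising them
 * from hard-IRQ context keeps the handler/event-queue jitter out of
 * the timestamp.  The foo_* names are hypothetical.
 */
#include <linux/interrupt.h>
#include <linux/ptp_clock_kernel.h>

struct foo_device {
	struct ptp_clock *ptp_clock;	/* registered with .pps = 1 */
};

static irqreturn_t foo_pps_isr(int irq, void *dev_id)
{
	struct foo_device *foo = dev_id;
	struct ptp_clock_event event;

	event.type = PTP_CLOCK_PPS;
	ptp_clock_event(foo->ptp_clock, &event);

	return IRQ_HANDLED;
}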
What kind of jitter do you see or expect?
I did a study of synchronizing the system clock to the PHC on a
PowerPC system, where the PPS timestamps varied from about 10 usec
(on average, under light load) to over 30 usec (under heavy load).
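The user space side of such a study can read the timestamps with the
RFC 2783 PPS API, something like the sketch below. It assumes the
timepps.h header from pps-tools and a /dev/pps0 node, with error
handling abbreviated:

#include <fcntl.h>
#include <stdio.h>
#include <sys/timepps.h>

int main(void)
{
	pps_handle_t handle;
	pps_info_t info;
	struct timespec timeout = { 3, 0 };	/* give up after 3 s */
	int fd = open("/dev/pps0", O_RDWR);

	if (fd < 0 || time_pps_create(fd, &handle) < 0)
		return 1;

	for (;;) {
		/* Blocks until the next pulse; the timestamp is the
		 * system time recorded when the PPS event fired. */
		if (time_pps_fetch(handle, PPS_TSFMT_TSPEC,
				   &info, &timeout) < 0)
			break;
		printf("assert %ld.%09ld seq %lu\n",
		       (long)info.assert_timestamp.tv_sec,
		       info.assert_timestamp.tv_nsec,
		       (unsigned long)info.assert_sequence);
	}
	return 0;
}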
Even so, it was easy to synchronize the system clock to within about
a microsecond under light load, with heavy load adding about another
6 usec of error.
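The correction step itself can be as simple as feeding the measured
offset back as a frequency adjustment through adjtimex(). Here is a
proportional-only sketch; the 0.7 gain is purely illustrative, and a
real servo (like the PI servo in linuxptp) also integrates the error:

#include <stdint.h>
#include <sys/timex.h>

/* offset_ns: measured (system - PHC) offset at the last PPS edge,
 * sampled once per second. */
static int adjust_frequency(int64_t offset_ns)
{
	struct timex tx = { 0 };

	/* 1 ppm of frequency error is 1000 ns of drift per second,
	 * and tx.freq is scaled ppm (left-shifted by 16 bits). */
	double ppm = -(double)offset_ns / 1000.0 * 0.7;	/* kP = 0.7, assumed */

	tx.modes = ADJ_FREQUENCY;
	tx.freq = (long)(ppm * 65536.0);
	return adjtimex(&tx);
}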
Thanks,
Richard
--