Message-ID: <02874ECE860811409154E81DA85FBB5857DDFBB1@ORSMSX115.amr.corp.intel.com>
Date: Fri, 24 Mar 2017 18:52:14 +0000
From: "Keller, Jacob E" <jacob.e.keller@...el.com>
To: Denny Page <dennypage@...com>,
Miroslav Lichvar <mlichvar@...hat.com>
CC: Richard Cochran <richardcochran@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Jiri Benc <jbenc@...hat.com>,
Willem de Bruijn <willemb@...gle.com>
Subject: RE: Extending socket timestamping API for NTP
> -----Original Message-----
> From: Denny Page [mailto:dennypage@...com]
> Sent: Friday, March 24, 2017 10:18 AM
> To: Miroslav Lichvar <mlichvar@...hat.com>
> Cc: Richard Cochran <richardcochran@...il.com>; netdev@...r.kernel.org; Jiri
> Benc <jbenc@...hat.com>; Keller, Jacob E <jacob.e.keller@...el.com>; Willem
> de Bruijn <willemb@...gle.com>
> Subject: Re: Extending socket timestamping API for NTP
>
>
> > On Mar 24, 2017, at 02:45, Miroslav Lichvar <mlichvar@...hat.com> wrote:
> >
> > On Thu, Mar 23, 2017 at 10:08:00AM -0700, Denny Page wrote:
> >>> On Mar 23, 2017, at 09:21, Miroslav Lichvar <mlichvar@...hat.com> wrote:
> >>>
> >>> After becoming a bit more familiar with the code I don't think this is
> >>> a good idea anymore :). I suspect there would be a noticeable
> >>> performance impact if each timestamped packet could trigger reading of
> >>> the current link speed. If the value had to be cached it would make
> >>> more sense to do it in the application.
> >>
> >> I am very surprised at this. The application caching approach requires
> >> that the application retrieve the value via a system call. The system
> >> call overhead is huge in comparison to everything else. More importantly,
> >> the application's cached value may be wrong. If the application takes a
> >> sample every 5 seconds, there are 5 seconds of timestamps that can be
> >> wildly wrong.
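
For reference, the application-side query under discussion boils down to
something like the sketch below (legacy ETHTOOL_GSET shown; the helper name
is just for illustration, and error handling is simplified):

#include <string.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Illustrative sketch of polling the link speed from userspace.
 * fd is any socket, e.g. socket(AF_INET, SOCK_DGRAM, 0).
 */
static int get_link_speed(int fd, const char *ifname)
{
	struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&ecmd;

	/* One system call per sample - the overhead being debated. */
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		return -1;

	return ethtool_cmd_speed(&ecmd);	/* speed in Mb/s */
}
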
> >
> > I'm just trying to be practical and minimize the performance impact
> > and the amount of code that needs to be written, reviewed and
> > maintained.
> >
> > How common is it for the link speed to change in normal operation on a LAN?
>
> In my case, it’s currently every few minutes because I’m doing hw timestamp
> testing. :)
>
> But this does speak to my point. If it’s cached by the application, the application
> has to check it regularly to minimize the possibility of bad timestamps. If the link
> speed doesn’t change, every call by the application is wasted overhead. If it’s
> cached by the driver, there is no waste, and the stamps are always correct.
>
> I should have remembered this yesterday... I went and looked at my favorite
> driver, Intel's igb. Not only is the igb driver already caching link speed, it is also
> performing timestamp correction based on that link speed. It appears that all
> Intel drivers are caching link speed. I looked at a few other popular
> manufacturers, and it appears that caching link speed is common. The only one I
> quickly found that didn’t cache was Realtek.
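
Roughly, the speed-dependent adjustment in igb's PTP path looks like the
sketch below. The latency constants here are illustrative placeholders, not
the actual I210 values:

#include <linux/skbuff.h>
#include <linux/ethtool.h>

/* Placeholder per-speed RX latencies in ns; real hardware values differ. */
#define RX_LATENCY_10	25000
#define RX_LATENCY_100	2500
#define RX_LATENCY_1000	300

/* Kernel-side sketch: subtract the PHY's speed-dependent receive latency
 * from the hardware timestamp. link_speed is the value the driver cached
 * at link-up, so no register read is needed per packet.
 */
static void correct_rx_hwtstamp(struct skb_shared_hwtstamps *hwtstamps,
				u16 link_speed)
{
	unsigned int latency = 0;

	switch (link_speed) {
	case SPEED_10:
		latency = RX_LATENCY_10;
		break;
	case SPEED_100:
		latency = RX_LATENCY_100;
		break;
	case SPEED_1000:
		latency = RX_LATENCY_1000;
		break;
	}

	/* The PHY stamps the packet late; pull the timestamp back. */
	hwtstamps->hwtstamp = ktime_sub_ns(hwtstamps->hwtstamp, latency);
}
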
>
> I believe that timestamp corrections, whether speed-based latency, header ->
> trailer, or whatever else might be needed later down the line, are properly
> done in the driver. It’s a lot for the application to try to figure out
> whether it should be doing corrections and which correction to apply. The
> driver knows.
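
A header -> trailer correction is similarly mechanical once the driver knows
the link speed and the frame length; something like this sketch (the names
are placeholders, not an existing driver function):

/* Sketch: move a trailer (end-of-frame) timestamp back to the start of
 * frame by subtracting the frame's time on the wire. frame_len is in
 * bytes, link_speed in Mb/s; at 1 Mb/s one bit takes 1000 ns. For
 * example, a 100-byte frame at 100 Mb/s occupies 8000 ns on the wire.
 */
static u64 trailer_to_header_ns(u64 trailer_ns, u32 frame_len, u32 link_speed)
{
	u64 wire_time_ns = (u64)frame_len * 8 * 1000 / link_speed;

	return trailer_ns - wire_time_ns;
}
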
I also believe the right place for these corrections is in the driver.
Thanks,
Jake