Message-id: <D61353C1-3CA6-48D5-AAFB-BBD3E95E1A06@me.com>
Date:   Thu, 23 Mar 2017 11:54:12 -0700
From:   Denny Page <dennypage@...com>
To:     Miroslav Lichvar <mlichvar@...hat.com>
Cc:     Richard Cochran <richardcochran@...il.com>, netdev@...r.kernel.org,
        Jiri Benc <jbenc@...hat.com>,
        "Keller, Jacob E" <jacob.e.keller@...el.com>,
        Willem de Bruijn <willemb@...gle.com>
Subject: Re: Extending socket timestamping API for NTP

[Resend as plain text for netdev]


> On Mar 23, 2017, at 09:21, Miroslav Lichvar <mlichvar@...hat.com> wrote:
> 
> After becoming a bit more familiar with the code I don't think this is
> a good idea anymore :). I suspect there would be a noticeable
> performance impact if each timestamped packet could trigger reading of
> the current link speed. If the value had to be cached it would make
> more sense to do it in the application.


I am very surprised at this. The application caching approach requires that the application retrieve the value via a system call, and the system call overhead is huge in comparison to everything else. More importantly, the application's cached value may be wrong: if the application samples the link speed every 5 seconds, there can be up to 5 seconds' worth of timestamps that are wildly wrong.
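
For reference, here is a minimal sketch of what the application-side polling looks like on Linux, using the legacy ETHTOOL_GSET ioctl. This is illustrative only (the interface name "eth0" is a placeholder, not taken from any existing implementation); the point is that every sample costs a socket plus an ioctl, and everything between samples is served from a potentially stale value:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* One system call per sample -- this is the overhead (and the
 * staleness window) discussed above. Returns the speed in Mb/s,
 * or 0 if the ioctl fails. */
static unsigned int get_link_speed_mbps(const char *ifname)
{
    struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
    struct ifreq ifr;
    unsigned int speed = 0;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 0;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&ecmd;

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        speed = ethtool_cmd_speed(&ecmd);  /* SPEED_UNKNOWN if link is down */

    close(fd);
    return speed;
}

int main(void)
{
    printf("link speed: %u Mb/s\n", get_link_speed_mbps("eth0"));
    return 0;
}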

At the driver level, if the speed check is done on packet receive, retrieving the link speed is a single register read, which is a small overhead compared with processing the timestamp. Even the alternative approach of caching makes more sense in the driver than in the application: the driver receives an interrupt when negotiation happens, it is trivial to cache the value at that point, and a value cached by the driver will always be correct.

Implementing it in the driver also allows hardware to provide the functionality where available. Yes, there is currently only one chip that provides this, but if there is sufficient demand others will appear. There is no way to take advantage of this functionality unless it is handled by the driver.
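
To make the contrast concrete, here is a kernel-style sketch of the driver-side idea. Every name in it (struct example_priv and the example_* functions) is invented for illustration and does not refer to any real driver: the link change interrupt refreshes a cached speed, so the per-packet timestamp path never has to touch a register.

#include <linux/types.h>

/* Hypothetical driver private state -- names are illustrative only. */
struct example_priv {
	u32 cached_speed_mbps;	/* written only on link change */
};

/* Stand-in for the driver's real PHY/MAC register read. */
static u32 example_read_phy_speed(struct example_priv *priv)
{
	return 1000;		/* placeholder value */
}

/* Link status change interrupt handler: one register read per
 * renegotiation rather than one per timestamped packet. */
static void example_handle_link_change(struct example_priv *priv)
{
	priv->cached_speed_mbps = example_read_phy_speed(priv);
}

/* Hot path, called when a receive timestamp is processed: returns
 * the cached value, which is always consistent with the link state
 * the packet was received under. */
static u32 example_rx_link_speed(const struct example_priv *priv)
{
	return priv->cached_speed_mbps;
}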

I think it makes a lot of sense to leave this to the driver developer.
