Message-ID: <45E5A8AE.3030606@free.fr>
Date:	Wed, 28 Feb 2007 17:07:10 +0100
From:	John <linux.kernel@...e.fr>
To:	Eric Dumazet <dada1@...mosbay.com>
CC:	linux-net@...r.kernel.org, netdev@...r.kernel.org,
	linux.kernel@...e.fr
Subject: Re: CLOCK_MONOTONIC datagram timestamps by the kernel

Eric Dumazet wrote:
> On Wednesday 28 February 2007 15:23, John wrote:
>> Eric Dumazet wrote:
>>>> John wrote:
>>>>> I know it's possible to have Linux timestamp incoming datagrams as soon
>>>>> as they are received, then for one to retrieve this timestamp later
>>>>> with an ioctl command or a recvmsg call.
>>>> Has it ever been proposed to modify struct skb_timeval to hold
>>>> nanosecond stamps instead of just microsecond stamps? Then make the
>>>> improved precision somehow available to user space.
>>> Most modern NICs are able to delay packet delivery, in order to reduce the
>>> number of interrupts and benefit from better cache hits.
>>
>> You are referring to NAPI interrupt mitigation, right?
> 
> Nope; I am referring to hardware features. NAPI is software.
> 
> See ethtool -c eth0
> 
> # ethtool -c eth0
> Coalesce parameters for eth0:
> Adaptive RX: off  TX: off
> stats-block-usecs: 1000000
> sample-interval: 0
> pkt-rate-low: 0
> pkt-rate-high: 0
> 
> rx-usecs: 300
> rx-frames: 60
> rx-usecs-irq: 300
> rx-frames-irq: 60
> 
> tx-usecs: 200
> tx-frames: 53
> tx-usecs-irq: 200
> tx-frames-irq: 53
> 
> You can see that on this setup rx interrupts can be delayed by up to 300 us
> (up to 60 packets might be delayed).

One can disable interrupt mitigation; the argument that it introduces latency 
is therefore moot.
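
For example, on a driver that honours these coalescing parameters, something 
like

	# ethtool -C eth0 rx-usecs 0 rx-frames 1

should effectively turn RX coalescing off (which knobs are actually supported 
varies per NIC and driver).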

>> POSIX is moving to nanosecond interfaces.
>> http://www.opengroup.org/onlinepubs/009695399/functions/clock_settime.html

You snipped too much. I also wrote:

struct timeval and struct timespec take the same amount of space (64 bits).

If the hardware can indeed manage sub-microsecond accuracy, a struct
timeval forces the kernel to discard valuable information.
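
To make this concrete, here is roughly what an application does today to get 
at the kernel RX stamp (a minimal sketch of mine, error handling omitted, the 
port number is arbitrary). With SO_TIMESTAMP the stamp comes back as an 
SCM_TIMESTAMP control message carrying a struct timeval, so microseconds are 
all userspace ever sees, whatever the clock behind it could deliver:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/time.h>
#include <netinet/in.h>

int main(void)
{
	struct sockaddr_in addr = { .sin_family = AF_INET,
				    .sin_port = htons(12345), /* arbitrary */
				    .sin_addr.s_addr = htonl(INADDR_ANY) };
	char data[2048];
	union {				/* aligned cmsg buffer, as in cmsg(3) */
		char buf[CMSG_SPACE(sizeof(struct timeval))];
		struct cmsghdr align;
	} ctrl;
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
			      .msg_control = ctrl.buf,
			      .msg_controllen = sizeof(ctrl.buf) };
	struct cmsghdr *cm;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int on = 1;

	/* ask the kernel to stamp incoming packets */
	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));

	if (recvmsg(fd, &msg, 0) < 0)
		return 1;

	/* the stamp arrives as ancillary data, as a struct timeval */
	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level == SOL_SOCKET &&
		    cm->cmsg_type == SCM_TIMESTAMP) {
			struct timeval tv;

			memcpy(&tv, CMSG_DATA(cm), sizeof(tv));
			printf("rx stamp: %ld.%06ld\n",
			       (long)tv.tv_sec, (long)tv.tv_usec);
		}
	}
	return 0;
}

The SIOCGSTAMP ioctl hands back the same struct timeval after the fact, so 
both existing paths are capped at microsecond resolution.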

> The fact that you are able to give nanosecond timestamps inside the kernel is 
> not sufficient. It is necessary of course, but not sufficient. This precision 
> is OK for timing locally generated events. The moment you ask for a 
> 'nanosecond' timestamp, it's usually long before/after the real event.
> 
> If you rely on nanosecond precision on network packets, then something is 
> wrong with your algorithm. Even the rt patches won't make sure your CPU caches 
> are pre-filled, or that the routers/links between your machines are not busy.
> A cache miss costs 40 ns, for example. A typical interrupt handler or rx 
> processing can trigger 100 cache misses, or none at all if the cache is hot.

Consider an idle Linux 2.6.20-rt8 system, equipped with a single PCI-E 
gigabit Ethernet NIC, running on a modern CPU (e.g. Core 2 Duo E6700). 
All this system does is timestamp 1000 packets per second.

Are you claiming that this platform *cannot* handle most packets within 
1 microsecond of their arrival?

If there are platforms that can achieve sub-microsecond precision, and 
if it is not more expensive to support nanosecond resolution (I said 
resolution, not precision), then it makes sense to support nanosecond 
resolution in Linux. Right?

> You said that rt gives the highest priority to interrupt handlers:
> If you have several NICs, what will happen if you receive packets on both 
> NICs, or if the NIC interrupt happens at the same time as the timer interrupt? 
> One timestamp will be wrong for sure.

Again, this is irrelevant. We are discussing whether it would make sense 
to support sub-microsecond resolution. If there is one platform that can 
achieve sub-microsecond precision, there is a need for sub-microsecond 
resolution. As long as we are changing the resolution, we might as well 
use something standard like struct timespec.
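
For reference, struct timespec with CLOCK_MONOTONIC is the standard pairing 
userspace already gets from clock_gettime(); the suggestion is merely to stamp 
datagrams with the same type and clock. Trivial sketch (link with -lrt on 
older glibc):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
		return 1;
	printf("monotonic: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
	return 0;
}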

> For sure we could timestamp packets with nanosecond resolution, and possibly 
> with a MONOTONIC value too, but it will give you (and others) false confidence 
> in the real precision. Microsecond timestamps are already wrong...

IMHO, this is not true for all platforms.

Regards.
