Date:	Tue, 20 Jul 2010 08:57:38 -0400
From:	Brian Bloniarz <bmb@...enacr.com>
To:	Tom Herbert <therbert@...gle.com>
CC:	Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: Very low latency TCP for clusters

Tom Herbert wrote:
> On Mon, Jul 19, 2010 at 3:03 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> Le lundi 19 juillet 2010 à 11:44 -0700, Tom Herbert a écrit :
>>
>>> I see about 7 usecs as best number on loopback, so I believe this is
>>> in the ballpark.  As I mentioned above, this about "best case" latency
>>> of a single thread, so we assume any amount of pinning or other
>>> customized configuration to that purpose.
>> Well, given that I get 29 us on a ping between two machines (Gb link,
>> no process involved on the receiver, only softirq), I really doubt we
>> can reach 5 us on a TCP test involving a user process on both sides ;)
>>
> That's pretty pokey ;-) I see numbers around 25 usecs between two
> machines; this is with TCP_NBRR.  With TCP_RR it's more like 35 usecs,
> so eliminating the scheduler is already a big reduction.  That leaves
> 18 usecs in device time, interrupt processing, network, and cache
> misses, and 7 usecs in TCP processing and user space.  While 5 usecs
> is an aggressive goal, I am not ready to concede that there's an
> architectural limit in either NICs, TCP, or sockets that can't be
> overcome.
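For anyone who wants a quick sanity number to compare against, here's a minimal TCP ping-pong that measures mean round-trip time over loopback. This is a rough Python sketch of my own, not the netperf TCP_RR test being discussed, so expect it to be slower than a tuned C benchmark; TCP_NODELAY is set on both ends to keep Nagle out of the measurement:

```python
import socket
import threading
import time

def recv_exactly(sock, n):
    """Read exactly n bytes (recv may return short reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def mean_rtt(iterations=1000, size=64):
    """Mean TCP request/response round-trip time (seconds) over loopback."""
    msg = b"x" * size
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo_server():
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        with conn:
            for _ in range(iterations):
                conn.sendall(recv_exactly(conn, size))

    t = threading.Thread(target=echo_server)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(iterations):
        cli.sendall(msg)
        recv_exactly(cli, size)          # blocks until the echo returns
    elapsed = time.perf_counter() - start

    cli.close()
    t.join()
    srv.close()
    return elapsed / iterations

if __name__ == "__main__":
    print("mean RTT: %.1f usec" % (mean_rtt() * 1e6))
```

Pinning both ends to cores (taskset) and raising iteration counts gives steadier numbers.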

Have you toyed with the NIC's interrupt coalescing settings yet?
I'm wondering whether any part of the 25 usecs is coalescing delay.
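For reference, coalescing can be inspected and pushed to the minimum with ethtool (eth0 is a placeholder; which parameters a driver actually honors varies):

```shell
# Show the NIC's current interrupt coalescing settings
ethtool -c eth0

# Interrupt on every packet with no added delay; some drivers
# additionally require turning off adaptive coalescing first,
# e.g. "ethtool -C eth0 adaptive-rx off"
ethtool -C eth0 rx-usecs 0 rx-frames 1
```

Worth re-running the TCP_RR numbers after that to see how much of the 25 usecs it accounts for.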
