Message-ID: <AANLkTiniRmnwG4vT-MrQQHQjrdQ4sfOn6Uxi42Dsr8nu@mail.gmail.com>
Date:	Mon, 19 Jul 2010 11:44:05 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: Very low latency TCP for clusters

On Mon, Jul 19, 2010 at 10:41 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Monday, 19 July 2010 at 10:05 -0700, Tom Herbert wrote:
>> We have been looking at the best-case TCP latencies that might be
>> achieved within a cluster (low-loss fabric).  The goal is latency
>> numbers roughly comparable to what can be produced using RDMA/IB in a
>> low-latency configuration (<5 usecs round trip on a netperf TCP_RR
>> test with one byte of data between directly connected hosts, as a
>> starting point).  This would be without changing the sockets API or
>> the fabric, and preferably without TCP offload or a user-space stack.
>>
>> I think there are at least two techniques that will drive down TCP
>> latency: per-connection queues and polling of queues.  Per-connection
>> queues (supported by the device) should eliminate the cost of
>> connection lookup and hopefully some locking.  Polling becomes viable
>> as core counts on systems increase, and burning a few CPUs on network
>> polling on behalf of very low-latency threads would be reasonable.
>>
>> Are there any efforts in progress to integrate per-connection queues
>> into the stack, or to integrate polling of queues?
>
> aka "net channel" ;)
>
I don't think this is the same.  I am thinking of a device that
supports multi-queue, where individual queues can be programmed to
accept an exact 4-tuple.  From the device's point of view I don't
think there's much beyond that; it is otherwise treated as just
another packet queue.  However, the kernel may be able to use it to
shortcut some processing.  I believe such functionality is already
supported in Intel's Flow Director and possibly by some other vendors.
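
To make this concrete, programming such a filter through the existing
ethtool RX classifier ioctl might look roughly like the sketch below.
This is only an illustration: the interface name, addresses, ports,
rule slot, and queue number are made-up values, and whether a given
NIC/driver honors exact-match masks like this is hardware dependent.

/* Sketch: steer one TCP/IPv4 4-tuple to a dedicated RX queue via the
 * ethtool flow classification interface (ETHTOOL_SRXCLSRLINS), which
 * is how Flow Director style filters are exposed.  All identifiers
 * below (eth0, 10.0.0.x, ports, queue 7) are illustration values.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	struct ethtool_rxnfc nfc;
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&nfc, 0, sizeof(nfc));
	nfc.cmd = ETHTOOL_SRXCLSRLINS;		/* insert a classification rule */
	nfc.fs.flow_type = TCP_V4_FLOW;		/* match the TCP/IPv4 4-tuple */
	nfc.fs.h_u.tcp_ip4_spec.ip4src = inet_addr("10.0.0.1");
	nfc.fs.h_u.tcp_ip4_spec.ip4dst = inet_addr("10.0.0.2");
	nfc.fs.h_u.tcp_ip4_spec.psrc = htons(5000);
	nfc.fs.h_u.tcp_ip4_spec.pdst = htons(5001);
	/* all-ones masks: every bit of the 4-tuple must match */
	nfc.fs.m_u.tcp_ip4_spec.ip4src = 0xffffffff;
	nfc.fs.m_u.tcp_ip4_spec.ip4dst = 0xffffffff;
	nfc.fs.m_u.tcp_ip4_spec.psrc = 0xffff;
	nfc.fs.m_u.tcp_ip4_spec.pdst = 0xffff;
	nfc.fs.ring_cookie = 7;			/* deliver to RX queue 7 */
	nfc.fs.location = 0;			/* rule slot 0 */

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&nfc;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		perror("SIOCETHTOOL");

	close(fd);
	return 0;
}

Steering is the easy half, of course; the interesting part is what the
stack could then skip (lookup, locking) once it knows that everything
arriving on that queue belongs to one established connection.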

> What a nightmare...
>
I prefer to think of it as a challenge; needing to resort to stateful
offload to get low latency would be the nightmare ;-)

> Anyway, 5 us round-trip TCP_RR (including user thread work) seems a
> bit utopian right now.
>
> Even on loopback
>

I see about 7 usecs as the best number on loopback, so I believe this
is in the ballpark.  As I mentioned above, this is about the "best
case" latency of a single thread, so we assume any amount of pinning
or other configuration customized to that purpose.
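
For the curious, the kind of measurement I mean is essentially the
one-byte ping-pong sketched below against an echo server; the CPU
number, address, port, and iteration count are arbitrary, and the
spin on a nonblocking recv() is only a crude stand-in for the
queue-polling idea (netperf TCP_RR does the real measurement).

/* One-byte TCP ping-pong: pin the thread, disable Nagle, and time N
 * round trips.  Spins on a nonblocking recv() rather than sleeping in
 * the kernel.  All constants are illustration values.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
	/* Pin to one core so we measure the path, not the scheduler. */
	cpu_set_t set;
	CPU_ZERO(&set);
	CPU_SET(2, &set);
	sched_setaffinity(0, sizeof(set), &set);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int one = 1;
	setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(7),		/* echo service */
		.sin_addr.s_addr = inet_addr("127.0.0.1"),
	};
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	enum { ITERS = 100000 };
	char byte = 'x';
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		ssize_t r;
		if (send(fd, &byte, 1, 0) != 1)
			return 1;
		do {				/* burn the CPU polling */
			r = recv(fd, &byte, 1, MSG_DONTWAIT);
		} while (r < 0 && errno == EAGAIN);
		if (r != 1)
			return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.2f usecs per round trip\n", ns / ITERS / 1e3);
	close(fd);
	return 0;
}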
