Message-ID: <5762EB8F.60801@hpe.com>
Date: Thu, 16 Jun 2016 11:10:23 -0700
From: Rick Jones <rick.jones2@....com>
To: Tom Herbert <tom@...bertland.com>, davem@...emloft.net,
netdev@...r.kernel.org
Cc: kernel-team@...com
Subject: Re: [PATCH net-next 0/8] tou: Transports over UDP - part I
On 06/16/2016 10:51 AM, Tom Herbert wrote:
> Note that #1 is really about running a transport stack in userspace
> applications in clients, not necessarily servers. For servers we
> intend to modified the kernel stack in order to leverage existing
> implementation for building scalable serves (hence these patches).
Only if there is a v2 for other reasons... I assume that was meant to
be "scalable servers."
> Tested: Various cases of TOU with IPv4, IPv6 using TCP_STREAM and
> TCP_RR. Also, tested IPIP for comparing TOU encapsulation to IP
> tunneling.
>
> - IPv6 native
> 1 TCP_STREAM
> 8394 tps
TPS for TCP_STREAM? Is that Mbit/s?
> 200 TCP_RR
> 1726825 tps
> 100/177/361 90/95/99% latencies
To enhance the already good comprehensiveness of the numbers, a
single-stream TCP_RR showing the effect on latency rather than aggregate
transactions per second would be goodness, as would a comparison of the
service demands of the different single-stream results.
CPU and NIC models would provide excellent context for the numbers.
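For concreteness, the extra data points being suggested might be gathered with netperf invocations along these lines. This is only a sketch: the hostname is a placeholder, the output selectors assume a netperf version with the omni output-selection support (2.6 or later), and the commands are echoed rather than executed so the snippet stands on its own without netperf installed.

```shell
#!/bin/sh
# Placeholder target host for the hypothetical runs.
HOST=testhost

# Single-stream TCP_RR: -c/-C report local/remote CPU utilization and
# service demand; -O selects latency columns (assumed omni selectors).
CMD_RR="netperf -H $HOST -t TCP_RR -c -C -- -O MIN_LATENCY,MEAN_LATENCY,P90_LATENCY,P99_LATENCY"

# Single TCP_STREAM with service demand, for comparing the
# single-stream results across the native/TOU/IPIP configurations.
CMD_STREAM="netperf -H $HOST -t TCP_STREAM -c -C"

echo "$CMD_RR"
echo "$CMD_STREAM"
```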
happy benchmarking,
rick jones