Message-ID: <8bae2ee1-efcc-1571-2a30-5b7779de2c88@gmail.com>
Date: Wed, 25 Apr 2018 09:29:03 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Toke Høiland-Jørgensen <toke@...e.dk>,
Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc: cake@...ts.bufferbloat.net, Dave Taht <dave.taht@...il.com>
Subject: Re: [PATCH net-next v3] Add Common Applications Kept Enhanced (cake)
qdisc
On 04/25/2018 09:06 AM, Toke Høiland-Jørgensen wrote:
> Eric Dumazet <eric.dumazet@...il.com> writes:
>
>> On 04/25/2018 08:22 AM, Toke Høiland-Jørgensen wrote:
>>> Eric Dumazet <eric.dumazet@...il.com> writes:
>>
>>>> What performance number do you get on a 10Gbit NIC for example ?
>>>
>>> Single-flow throughput through 2 hops on a 40Gbit connection (with CAKE
>>> in unlimited mode vs pfifo_fast on the router):
>>>
>>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to testbed-40g-2 () port 0 AF_INET : demo
>>> Recv   Send    Send
>>> Socket Socket  Message  Elapsed
>>> Size   Size    Size     Time     Throughput
>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>
>>>  87380  16384   16384    10.00    18840.40
>>>
>>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to testbed-40g-2 () port 0 AF_INET : demo
>>> Recv   Send    Send
>>> Socket Socket  Message  Elapsed
>>> Size   Size    Size     Time     Throughput
>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>
>>>  87380  16384   16384    10.00    24804.77
>>
>> CPU performance would be interesting here. (netperf -Cc)
>
>
> $ sudo tc qdisc replace dev ens2 root cake
> $ netperf -cC -H 10.70.2.2
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.70.2.2 () port 0 AF_INET : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
>  87380  16384   16384    10.00    15450.35   13.35    6.68     0.849   0.283
>
> $ sudo tc qdisc del dev ens2 root
> $ netperf -cC -H 10.70.2.2
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.70.2.2 () port 0 AF_INET : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
>  87380  16384   16384    10.00    36414.23   8.20     14.30    0.221   0.257
>
>
> (In this test I'm running netperf on the machine that was a router
> before, which is why the base throughput is higher; the other machine
> runs out of CPU on the sender side).
We can see here the high cost of forcing software GSO segmentation :/
Really, this should be done only:
1) if requested by the admin (tc ... gso ...), or
2) if the packet size is above a threshold.
The threshold could be set by the admin, and/or derived from a fraction of the bandwidth parameter (a sketch of such a policy follows below).
I totally understand why you prefer to segment yourself on < 100 Mbit links.
But this makes no sense on 10Gbit+ links.
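
A minimal stand-alone sketch of such a policy, kept outside the sch_cake internals on purpose. Everything here is an illustrative assumption, not part of the patch or of the tc interface: the names cake_gso_policy and cake_should_segment, the gso_force flag, and the "more than ~1 ms of link time at the configured rate" threshold are just one way to express "a fraction of the bandwidth parameter".

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cake_gso_policy {
	bool gso_force;		/* admin explicitly asked for splitting */
	uint64_t rate_bps;	/* configured shaper rate, 0 = unlimited */
};

/*
 * Split a GSO super-packet only when the admin forces it, or when the
 * super-packet would occupy the link for more than ~1 ms at the
 * configured rate.  On an unshaped (unlimited) or 10Gbit+ link the
 * super-packet is left intact so hardware TSO/GSO keeps doing the work.
 */
static bool cake_should_segment(const struct cake_gso_policy *p,
				unsigned int pkt_len)
{
	uint64_t thresh_bytes;

	if (p->gso_force)
		return true;

	if (!p->rate_bps)	/* unlimited: never segment in software */
		return false;

	thresh_bytes = p->rate_bps / 8 / 1000;	/* bytes sent in 1 ms */
	return (uint64_t)pkt_len > thresh_bytes;
}

int main(void)
{
	struct cake_gso_policy slow = { .gso_force = false,
					.rate_bps = 100ULL * 1000 * 1000 };
	struct cake_gso_policy fast = { .gso_force = false,
					.rate_bps = 10ULL * 1000 * 1000 * 1000 };

	/* 64 KB super-packet: split at 100 Mbit (12.5 KB/ms),
	 * keep intact at 10 Gbit (1.25 MB/ms). */
	printf("100Mbit: %d, 10Gbit: %d\n",
	       cake_should_segment(&slow, 65536),
	       cake_should_segment(&fast, 65536));
	return 0;
}

In the qdisc itself the equivalent check would presumably live in the enqueue path, keyed on skb_is_gso() and the rate the shaper already tracks; the 1 ms figure above is only a placeholder for whatever fraction of the bandwidth parameter the admin (or a default) picks.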