Date:   Wed, 25 Apr 2018 18:55:26 +0200
From:   Toke Høiland-Jørgensen <toke@...e.dk>
To:     Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc:     cake@...ts.bufferbloat.net, Dave Taht <dave.taht@...il.com>
Subject: Re: [PATCH net-next v3] Add Common Applications Kept Enhanced (cake) qdisc

Eric Dumazet <eric.dumazet@...il.com> writes:

> On 04/25/2018 09:06 AM, Toke Høiland-Jørgensen wrote:
>> Eric Dumazet <eric.dumazet@...il.com> writes:
>> 
>>> On 04/25/2018 08:22 AM, Toke Høiland-Jørgensen wrote:
>>>> Eric Dumazet <eric.dumazet@...il.com> writes:
>>>
>>>>> What performance number do you get on a 10Gbit NIC for example ?
>>>>
>>>> Single-flow throughput through 2 hops on a 40Gbit connection (with CAKE
>>>> in unlimited mode vs pfifo_fast on the router):
>>>>
>>>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to testbed-40g-2 () port 0 AF_INET : demo
>>>> Recv   Send    Send                          
>>>> Socket Socket  Message  Elapsed              
>>>> Size   Size    Size     Time     Throughput  
>>>> bytes  bytes   bytes    secs.    10^6bits/sec  
>>>>
>>>>  87380  16384  16384    10.00    18840.40   
>>>>
>>>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to testbed-40g-2 () port 0 AF_INET : demo
>>>> Recv   Send    Send                          
>>>> Socket Socket  Message  Elapsed              
>>>> Size   Size    Size     Time     Throughput  
>>>> bytes  bytes   bytes    secs.    10^6bits/sec  
>>>>
>>>>  87380  16384  16384    10.00    24804.77   
>>>
>>> CPU performance would be interesting here.  (netperf -Cc)
>> 
>> 
>> $ sudo tc qdisc replace dev ens2 root cake
>> $ netperf -cC -H 10.70.2.2
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.70.2.2 () port 0 AF_INET : demo
>> Recv   Send    Send                          Utilization       Service Demand
>> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
>> Size   Size    Size     Time     Throughput  local    remote   local   remote
>> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>> 
>>  87380  16384  16384    10.00      15450.35   13.35    6.68     0.849   0.283  
>> 
>> $ sudo tc qdisc del dev ens2 root 
>> $ netperf -cC -H 10.70.2.2
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.70.2.2 () port 0 AF_INET : demo
>> Recv   Send    Send                          Utilization       Service Demand
>> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
>> Size   Size    Size     Time     Throughput  local    remote   local   remote
>> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>> 
>>  87380  16384  16384    10.00      36414.23   8.20     14.30    0.221   0.257  
>> 
>> 
>> (In this test I'm running netperf on the machine that was a router
>> before, which is why the base throughput is higher; the other machine
>> runs out of CPU on the sender side).
>
> We can see here the high cost of forcing software GSO segmentation :/
>
> Really, this should be done only:
> 1) If requested by the admin (tc .... gso ....)
>
> 2) If packet size is above a threshold.
>   The threshold could be set by the admin, and/or based on a fraction of the bandwidth parameter.
>
> I totally understand why you prefer to segment yourself for < 100 Mbit links.
>
> But this makes no sense on 10Gbit+ links.

Well, as I said, 10Gbit+ links are not really the target audience ;)

We did actually have a threshold at some point, but it was removed
because it didn't work well (I'm not sure of the details, perhaps
someone else will chime in).

However, I'm fine with adding a flag, as long as peeling defaults to on,
at least when the shaper is active (to properly account for packet
overhead, we really need to see every packet that goes out on the wire).
Would that be acceptable?
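
For illustration, a rough sketch of the kind of gate being discussed here,
combining an explicit flag with a bandwidth-derived size threshold. The
struct, the field names and the ~1 ms threshold are placeholders for
discussion, not actual sch_cake code:

#include <linux/skbuff.h>
#include <linux/types.h>

/* Placeholder config -- roughly what the qdisc private data would carry. */
struct cake_gso_cfg {
	u64  rate_bps;	/* configured shaper rate in bits/s, 0 = unlimited */
	bool split_gso;	/* admin explicitly requested peeling */
};

static bool cake_should_peel(const struct cake_gso_cfg *cfg,
			     const struct sk_buff *skb)
{
	u64 thresh;

	if (!skb_is_gso(skb))
		return false;		/* nothing to peel */

	if (cfg->split_gso)
		return true;		/* admin asked for it */

	if (!cfg->rate_bps)
		return false;		/* unlimited mode: let GSO bursts through */

	/* Peel super-packets bigger than roughly 1 ms of wire time at the
	 * configured rate (bits/s -> bytes/ms is /8000, ~= >> 13), but never
	 * set the bar below a single full-size packet. */
	thresh = cfg->rate_bps >> 13;
	if (thresh < 1500)
		thresh = 1500;

	return skb->len > thresh;
}

(Whether peeling should happen for every GSO packet whenever the shaper is
active, or only above such a threshold, is exactly the open question.)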

-Toke
