Message-ID: <5668348.WVIY7FqTii@natalenko.name>
Date: Sat, 17 Feb 2018 11:01:19 +0100
From: Oleksandr Natalenko <oleksandr@...alenko.name>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Neal Cardwell <ncardwell@...gle.com>,
Eric Dumazet <eric.dumazet@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>,
Yuchung Cheng <ycheng@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Jerry Chu <hkchu@...gle.com>, Dave Taht <dave.taht@...il.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Hi.
On Friday, 16 February 2018 at 23:59:52 CET Eric Dumazet wrote:
> Well, no effect here on e1000e (1 Gbit) at least
>
> # ethtool -K eth3 sg off
> Actual changes:
> scatter-gather: off
> tx-scatter-gather: off
> tcp-segmentation-offload: off
> tx-tcp-segmentation: off [requested on]
> tx-tcp6-segmentation: off [requested on]
> generic-segmentation-offload: off [requested on]
>
> # tc qd replace dev eth3 root pfifo_fast
> # ./super_netperf 1 -H 7.7.7.84 -- -K cubic
> 941
> # ./super_netperf 1 -H 7.7.7.84 -- -K bbr
> 941
> # tc qd replace dev eth3 root fq
> # ./super_netperf 1 -H 7.7.7.84 -- -K cubic
> 941
> # ./super_netperf 1 -H 7.7.7.84 -- -K bbr
> 941
> # tc qd replace dev eth3 root fq_codel
> # ./super_netperf 1 -H 7.7.7.84 -- -K cubic
> 941
> # ./super_netperf 1 -H 7.7.7.84 -- -K bbr
> 941
> #
That really looks strange to me. I'm able to reproduce the effect caused by
disabling scatter-gather even on the VM (using iperf3, as usual):
BBR+fq_codel:
  sg on:  4.23 Gbits/sec
  sg off:  121 Mbits/sec
BBR+fq:
  sg on:  6.38 Gbits/sec
  sg off:  437 Mbits/sec
Reno+fq_codel:
  sg on:  6.74 Gbits/sec
  sg off:  1.37 Gbits/sec
Reno+fq:
  sg on:  6.53 Gbits/sec
  sg off:  1.19 Gbits/sec
Regardless of which congestion control algorithm and qdisc are in use, the
throughput drops when scatter-gather is disabled, but with BBR, especially
combined with a non-fq qdisc, the drop is by far the largest.
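For reference, the numbers above come from a test matrix of roughly the
following commands; the interface name, peer address and run length here are
only placeholders, and iperf3's -C option selects the congestion control per
connection:

# ethtool -K eth0 sg off                     # or "sg on" for the baseline
# tc qdisc replace dev eth0 root fq_codel    # or "fq"
# iperf3 -c 192.0.2.1 -t 30 -C bbr           # or "-C reno"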
Oleksandr