Message-ID: <2562547.S27nl9fb2E@natalenko.name>
Date: Sun, 18 Feb 2018 22:49:02 +0100
From: Oleksandr Natalenko <oleksandr@...alenko.name>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Eric Dumazet <edumazet@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>,
Yuchung Cheng <ycheng@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Jerry Chu <hkchu@...gle.com>, Dave Taht <dave.taht@...il.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Hi.
On Sunday, 18 February 2018, 22:04:27 CET, Eric Dumazet wrote:
> I was able to take a look today, and I believe this is the time to
> switch TCP to GSO being always on.
>
> As a bonus, we get speed boost for cubic as well.
>
> Today's high BDP and recent TCP improvements (rtx queue as rb-tree, SACK
> coalescing, TCP pacing...) all were developed/tested/maintained with
> GSO/TSO being the norm.
>
> Can you please test the following patch?
Yes, results below:
                 sg on            sg off
BBR+fq           6.02 Gbits/sec   1.33 Gbits/sec
BBR+pfifo_fast   4.13 Gbits/sec   1.34 Gbits/sec
BBR+fq_codel     4.16 Gbits/sec   1.35 Gbits/sec
Reno+fq          6.44 Gbits/sec   1.39 Gbits/sec
Reno+pfifo_fast  6.36 Gbits/sec   1.39 Gbits/sec
Reno+fq_codel    6.41 Gbits/sec   1.38 Gbits/sec
While BBR still suffers when fq is not used, disabling sg no longer causes a
drastic throughput drop. So this looks good to me, eh?
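(For reference, one combination from the table, say BBR+fq with sg off, can be
set up roughly as follows; eth0, the server address and even the choice of
iperf3 here are illustrative placeholders, not a record of the exact invocation:

  # pick the congestion control (bbr or reno)
  sysctl -w net.ipv4.tcp_congestion_control=bbr
  # pick the root qdisc (fq, pfifo_fast or fq_codel)
  tc qdisc replace dev eth0 root fq
  # toggle scatter-gather on the NIC
  ethtool -K eth0 sg off
  # then measure throughput against the remote box
  iperf3 -c <server>
)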
> Note that some cleanups can be done later in the TCP stack, removing lots
> of legacy stuff.
>
> Also TCP internal-pacing could benefit from something similar to this
> fq patch eventually, although there is no hurry.
> https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=fefa569a9d4bc4b7758c0fddd75bb0382c95da77
Feel free to ping me if there is something else you'd like me to test ;).
> Of course, you have to consider why SG was disabled on your device;
> this looks very pessimistic.
Dunno why that happens, but I've set things up so that it is enabled
automatically when the interface comes up.
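(In case it is useful to anyone, a udev rule along these lines is one way to do
that; the rule file name, the interface name and the ethtool path are just
examples, not necessarily what I ended up using:

  # /etc/udev/rules.d/70-enable-sg.rules
  # re-enable scatter-gather whenever the interface appears
  ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth0", RUN+="/usr/sbin/ethtool -K eth0 sg on"
)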
Thanks.
Oleksandr