Message-ID: <18081951.d6t0IUddpn@natalenko.name>
Date: Fri, 16 Feb 2018 18:37:08 +0100
From: Oleksandr Natalenko <oleksandr@...alenko.name>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Alexey Kuznetsov <kuznet@....inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>,
Yuchung Cheng <ycheng@...gle.com>,
Van Jacobson <vanj@...gle.com>, Jerry Chu <hkchu@...gle.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth
Hi.
On Friday, 16 February 2018, 17:25:58 CET Eric Dumazet wrote:
> The way TCP pacing works, it defaults to internal pacing using a hint
> stored in the socket.
>
> If you change the qdisc while the flow is alive, the result could be unexpected.
I don't change the qdisc while the flow is alive. Either the VM is completely
restarted, or iperf3 is restarted on both sides.
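(For context: the per-socket congestion control, and with it the pacing hint
Eric describes, can be pinned explicitly from the test program. This is only a
minimal sketch, assuming the tcp_bbr module is loaded; whether fq or the
internal pacer then does the pacing is decided by the kernel per socket, as
Eric explains above.)

	/* bbr_sock.c: create a TCP socket and request BBR on it (sketch) */
	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0) {
			perror("socket");
			return 1;
		}

		/* Ask the kernel to attach the BBR congestion control
		 * module; this fails if tcp_bbr is not available. */
		const char cc[] = "bbr";
		if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
			       cc, strlen(cc)) < 0)
			perror("setsockopt(TCP_CONGESTION)");

		/* Read it back to confirm what is actually in use. */
		char buf[16] = { 0 };
		socklen_t len = sizeof(buf);
		if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
			       buf, &len) == 0)
			printf("congestion control: %s\n", buf);

		close(fd);
		return 0;
	}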
> (TCP socket remembers that one FQ was supposed to handle the pacing)
>
> What results do you have if you use standard pfifo_fast ?
Almost the same as with fq_codel (see my previous email with numbers).
> I am asking because TCP pacing relies on high-resolution timers, and
> that might be weak on your VM.
Also, I've switched to measuring things on real HW only (see my previous
email with the numbers).
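(Aside, in case it helps others hitting this: one quick sanity check for timer
granularity in a given environment is clock_getres() on CLOCK_MONOTONIC. With
high-resolution timers active it reports 1 ns, otherwise roughly one tick.
Just a sketch, not something from the measurements above.)

	/* timer_res.c: report CLOCK_MONOTONIC resolution (sketch) */
	#include <stdio.h>
	#include <time.h>

	int main(void)
	{
		struct timespec res;

		/* 1 ns here means hrtimers are active; a value around
		 * 1/HZ means the clock is tick-based. */
		if (clock_getres(CLOCK_MONOTONIC, &res) != 0) {
			perror("clock_getres");
			return 1;
		}
		printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
		       (long)res.tv_sec, res.tv_nsec);
		return 0;
	}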
Thanks.
Regards,
Oleksandr