Message-ID: <CADVnQymrsENk0HUtDw3rrX0+HexhVuM2o7sqU7qcoPid7ehQsg@mail.gmail.com>
Date: Fri, 16 Feb 2018 11:45:56 -0500
From: Neal Cardwell <ncardwell@...gle.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Oleksandr Natalenko <oleksandr@...alenko.name>,
"David S. Miller" <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>,
Yuchung Cheng <ycheng@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Jerry Chu <hkchu@...gle.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth

On Fri, Feb 16, 2018 at 11:43 AM, Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Fri, Feb 16, 2018 at 8:33 AM, Neal Cardwell <ncardwell@...gle.com> wrote:
> > Oleksandr,
> >
> > Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> > have not run into this one in our team, but we will try to work with you to
> > fix this.
> >
> > Would you be able to take a sender-side tcpdump trace of the slow BBR
> > transfer ("v4.13 + BBR + fq_codel == Not OK")? Packet headers only would be
> > fine. Maybe something like:
> >
> > tcpdump -w /tmp/test.pcap -c1000000 -s 100 -i eth0 port $PORT
> >
> > Thanks!
> > neal
>
> On bare metal and using the latest net tree, I get pretty normal
> results at least, on a 40Gbit NIC,
Eric raises a good question: bare metal vs VMs.
Oleksandr, your first email mentioned KVM VMs and virtio NICs. Your
second e-mail did not seem to mention if those results were for bare
metal or a VM scenario: can you please clarify the details on your
second set of tests?
Thanks!
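
A quick way to answer the bare-metal-vs-VM question on the test host is to ask the host itself. This is a sketch, not from the original thread; it assumes a Linux host where either `systemd-detect-virt` is installed or `/proc/cpuinfo` is readable:

```shell
#!/bin/sh
# Report whether this Linux host is a VM guest or bare metal.
if command -v systemd-detect-virt >/dev/null 2>&1; then
    # Prints the hypervisor type (e.g. "kvm") or "none" on bare metal;
    # exits nonzero on bare metal, hence the "|| true".
    virt=$(systemd-detect-virt 2>/dev/null || true)
elif grep -q '\bhypervisor\b' /proc/cpuinfo 2>/dev/null; then
    # On x86, hypervisors set the CPUID "hypervisor" flag, which the
    # kernel exposes in the flags line of /proc/cpuinfo.
    virt="vm (hypervisor flag set)"
fi
virt=${virt:-none}
echo "virtualization: $virt"
```

On a KVM guest like the one described in the first report, `systemd-detect-virt` would typically print `kvm`; on bare metal it prints `none`.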