Date:   Fri, 16 Feb 2018 18:25:51 +0100
From:   Oleksandr Natalenko <oleksandr@...alenko.name>
To:     Neal Cardwell <ncardwell@...gle.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Netdev <netdev@...r.kernel.org>,
        Yuchung Cheng <ycheng@...gle.com>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Jerry Chu <hkchu@...gle.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Dave Taht <dave.taht@...il.com>
Subject: Re: TCP and BBR: reproducibly low cwnd and bandwidth

Hi.

On Friday, 16 February 2018 17:33:48 CET, Neal Cardwell wrote:
> Thanks for the detailed report! Yes, this sounds like an issue in BBR. We
> have not run into this one in our team, but we will try to work with you to
> fix this.
> 
> Would you be able to take a sender-side tcpdump trace of the slow BBR
> transfer ("v4.13 + BBR + fq_codel == Not OK")? Packet headers only would be
> fine. Maybe something like:
> 
>   tcpdump -w /tmp/test.pcap -c1000000 -s 100 -i eth0 port $PORT

So, going on with two real HW hosts. They are both running the latest stock
Arch Linux kernel (4.15.3-1-ARCH, CONFIG_PREEMPT=y, CONFIG_HZ=1000) and are
interconnected with a 1 Gbps link (via a switch, if that matters). The tests
were done with iperf3, running each one for 20 seconds.
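
For reference, the runs looked roughly like this (a sketch, not my exact
invocations; iperf3's default port 5201 matches the tcpdump filter below,
-t 20 gives the 20-second duration, and -R reverses the direction for the
server-to-client case):

  server:  iperf3 -s
  client:  iperf3 -c <server> -t 20        (client-to-server)
  client:  iperf3 -c <server> -t 20 -R     (server-to-client)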

Having BBR+fq_codel (or pfifo_fast, same result) on both hosts:

Client to server: 112 Mbits/sec
Server to client: 96.1 Mbits/sec

Having BBR+fq on both hosts:

Client to server: 347 Mbits/sec
Server to client: 397 Mbits/sec

Having YeAH+fq on both hosts:

Client to server: 928 Mbits/sec
Server to client: 711 Mbits/sec

(When the server generates traffic, the throughput is a little lower, as you
can see, but I assume that's because the server has a low-power Silvermont
CPU, whereas the client has an Ivy Bridge beast.)
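
In case it helps, switching between the setups was done along these lines (an
illustrative sketch rather than the exact commands; enp2s0 is the NIC from the
tcpdump invocation below, and tcp_yeah may need to be loaded with modprobe
first):

  sysctl -w net.ipv4.tcp_congestion_control=bbr   # or yeah
  tc qdisc replace dev enp2s0 root fq             # or fq_codel / pfifo_fast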

Now, to tcpdump. I've captured it twice, once for the client-to-server flow
(c2s) and once for the server-to-client flow (s2c), while using BBR + pfifo_fast:

# tcpdump -w test_XXX.pcap -c1000000 -s 100 -i enp2s0 port 5201
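
A quick sanity check of the captures, if useful (same placeholder file name
as above):

# tcpdump -n -r test_XXX.pcap | head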

I've uploaded both files here [1].

Thanks.

Oleksandr

[1] https://natalenko.name/myfiles/bbr/

