Date:   Mon, 7 Dec 2020 11:33:44 -0500
From:   Neal Cardwell <ncardwell@...gle.com>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     "Mohamed Abuelfotoh, Hazem" <abuehaze@...zon.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "stable@...r.kernel.org" <stable@...r.kernel.org>,
        "ycheng@...gle.com" <ycheng@...gle.com>,
        "weiwan@...gle.com" <weiwan@...gle.com>,
        "Strohman, Andy" <astroh@...zon.com>,
        "Herrenschmidt, Benjamin" <benh@...zon.com>
Subject: Re: [PATCH net-next] tcp: optimise receiver buffer autotuning
 initialisation for high latency connections

On Mon, Dec 7, 2020 at 11:23 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Mon, Dec 7, 2020 at 5:09 PM Mohamed Abuelfotoh, Hazem
> <abuehaze@...zon.com> wrote:
> >
> >     > Since I cannot reproduce this problem with another NIC on x86, I
> >     > really wonder if this is not an issue with the ENA driver on PowerPC
> >     > perhaps?
> >
> >
> > I am able to reproduce it on x86-based EC2 instances using the ENA, Xen netfront, or Intel ixgbevf driver on the receiver, so it's not specific to ENA. We were also able to easily reproduce it between 2 VMs running in VirtualBox on the same physical host, given the environment requirements I mentioned in my first e-mail.
> >
> > What's the RTT between the sender & receiver in your reproduction? Are you using bbr on the sender side?
>
>
> 100ms RTT
>
> Which exact version of the Linux kernel are you using?

Thanks for testing this, Eric. Would you be able to share the MTU
config commands you used, and the tcpdump traces you get? I'm
surprised that receive buffer autotuning would work for an advmss of
around 6500 or higher.
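
For reference, roughly the kind of thing I have in mind (a minimal
sketch; the interface name eth0, the 9001-byte MTU, and the iperf3
transfer are only placeholders for whatever you actually used):

  # Placeholder MTU config on the test interface (name and value assumed)
  ip link set dev eth0 mtu 9001
  ip link show dev eth0            # confirm the MTU took effect

  # Capture TCP headers so the advertised window growth is visible
  tcpdump -i eth0 -s 128 -w autotune.pcap 'tcp port 5201' &

  # Bulk transfer toward the receiver (5201 is iperf3's default port)
  iperf3 -c <receiver-ip> -t 20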

thanks,
neal
