Message-ID: <34197c670230376051d3830704f18e85@natalenko.name>
Date:   Tue, 20 Feb 2018 10:32:58 +0100
From:   Oleksandr Natalenko <oleksandr@...alenko.name>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     "David S . Miller" <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Neal Cardwell <ncardwell@...gle.com>,
        Yuchung Cheng <ycheng@...gle.com>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next 0/6] tcp: remove non GSO code

Hi.

19.02.2018 20:56, Eric Dumazet wrote:
> Switching TCP to GSO mode, relying on core networking layers
> to perform eventual adaptation for dumb devices, was overdue.
> 
> 1) Most TCP development is done with TSO in mind.
> 2) Fewer high-resolution timers need to be armed for TCP pacing
> 3) GSO can benefit from the xmit_more hint
> 4) Receiver GRO is more effective (as if TSO was used for real on the
>    sender) -> fewer ACK packets and overhead.
> 5) Write queues have less overhead (one skb holds about 64KB of
>    payload)
> 6) SACK coalescing just works. (no payload in skb->head)
> 7) rtx rb-tree contains fewer packets, SACK is cheaper.
> 8) Removal of legacy code. Less maintenance hassles.
> 
> Note that I have left the sendpage/zerocopy paths, but they probably
> can benefit from the same strategy.
> 
> Thanks to Oleksandr Natalenko for reporting a performance issue for
> BBR/fq_codel,
> which was the main reason I worked on this patch series.
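
For context, the "dumb devices" mentioned above are NICs without hardware
TSO, for which the core stack performs software GSO segmentation instead.
Whether a given interface currently offloads segmentation can be checked
from user space with the legacy ethtool ioctls; a minimal sketch follows,
where the interface name "eth0" is only a placeholder.

/*
 * Minimal sketch: query whether an interface currently has TSO/GSO
 * enabled, using the legacy ETHTOOL_GTSO/ETHTOOL_GGSO ioctls.
 * "eth0" is a placeholder interface name.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int get_offload(int fd, const char *ifname, __u32 cmd, __u32 *on)
{
    struct ethtool_value eval = { .cmd = cmd };
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&eval;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
        return -1;
    *on = eval.data;
    return 0;
}

int main(void)
{
    const char *ifname = "eth0";   /* placeholder interface name */
    __u32 tso = 0, gso = 0;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;
    if (get_offload(fd, ifname, ETHTOOL_GTSO, &tso) == 0 &&
        get_offload(fd, ifname, ETHTOOL_GGSO, &gso) == 0)
        printf("%s: TSO %s, GSO %s\n", ifname,
               tso ? "on" : "off", gso ? "on" : "off");
    close(fd);
    return 0;
}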

Thanks for dealing with this so fast.

Does this mean that optimising the internal TCP pacing is still an open
question?
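
(For reference, the pacing rate in question can also be capped per socket
from user space with SO_MAX_PACING_RATE, which both fq and TCP internal
pacing honour; a minimal sketch, with the 1 MB/s cap chosen arbitrarily:)

/*
 * Minimal sketch: cap a TCP socket's pacing rate from user space via
 * SO_MAX_PACING_RATE. The 1 MB/s value is purely for illustration.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47  /* asm-generic value, for older headers */
#endif

int main(void)
{
    unsigned int rate = 1 * 1024 * 1024;  /* bytes per second */
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;
    if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &rate, sizeof(rate)) < 0)
        perror("setsockopt(SO_MAX_PACING_RATE)");
    /* connect() and write() as usual; the pacer spreads the packets out */
    close(fd);
    return 0;
}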

Oleksandr
