Message-ID: <d7d32ee0ce420233d641fc9fb7cef27b0ee271c3.camel@gmail.com>
Date: Tue, 10 May 2022 12:49:02 -0700
From: Alexander H Duyck <alexander.duyck@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: netdev <netdev@...r.kernel.org>,
Alexander Duyck <alexanderduyck@...com>,
Coco Li <lixiaoyan@...gle.com>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH v6 net-next 00/13] tcp: BIG TCP implementation
On Mon, 2022-05-09 at 20:32 -0700, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
>
> This series implements BIG TCP as presented in netdev 0x15:
>
> https://netdevconf.info/0x15/session.html?BIG-TCP
>
> Jonathan Corbet made a nice summary: https://lwn.net/Articles/884104/
>
> Standard TSO/GRO packet limit is 64KB
>
> With BIG TCP, we allow bigger TSO/GRO packet sizes for IPv6 traffic.
>
> Note that this feature is not enabled by default, because it might
> break some eBPF programs that assume the TCP header immediately follows
> the IPv6 header.
>
> While tcpdump recognizes the HBH/Jumbo header, standard pcap filters
> are unable to skip over IPv6 extension headers.
>
> Reducing the number of packets traversing the networking stack usually
> improves performance, as shown in this experiment using a 100Gbit NIC
> and a 4K MTU.
>
> 'Standard' performance with current (74KB) limits.
> for i in {1..10}; do ./netperf -t TCP_RR -H iroa23 -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
> 77 138 183 8542.19
> 79 143 178 8215.28
> 70 117 164 9543.39
> 80 144 176 8183.71
> 78 126 155 9108.47
> 80 146 184 8115.19
> 71 113 165 9510.96
> 74 113 164 9518.74
> 79 137 178 8575.04
> 73 111 171 9561.73
>
> Now enable BIG TCP on both hosts.
>
> ip link set dev eth0 gro_max_size 185000 gso_max_size 185000
> for i in {1..10}; do ./netperf -t TCP_RR -H iroa23 -- -r80000,80000 -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
> 57 83 117 13871.38
> 64 118 155 11432.94
> 65 116 148 11507.62
> 60 105 136 12645.15
> 60 103 135 12760.34
> 60 102 134 12832.64
> 62 109 132 10877.68
> 58 82 115 14052.93
> 57 83 124 14212.58
> 57 82 119 14196.01
>
> We see an increase in transactions per second, as well as lower latencies.
>
> v6: fix a compilation error for CONFIG_IPV6=n in
> "net: allow gso_max_size to exceed 65536", reported by kernel bots.
>
> v5: Replaced two patches (that were adding new attributes) with patches
> from Alexander Duyck. The idea is to reuse the existing
> gso_max_size/gro_max_size attributes.
>
> v4: Rebased on top of Jakub series (Merge branch 'tso-gso-limit-split')
> max_tso_size is now family independent.
>
> v3: Fixed a typo in RFC number (Alexander)
> Added Reviewed-by: tags from Tariq on mlx4/mlx5 parts.
>
> v2: Removed the MAX_SKB_FRAGS change, this belongs to a different series.
> Addressed feedback from Alexander and NVIDIA folks.
>
>
> Alexander Duyck (2):
> net: allow gso_max_size to exceed 65536
> net: allow gro_max_size to exceed 65536
>
> Coco Li (2):
> ipv6: Add hop-by-hop header to jumbograms in ip6_output
> mlx5: support BIG TCP packets
>
> Eric Dumazet (9):
> net: add IFLA_TSO_{MAX_SIZE|SEGS} attributes
> net: limit GSO_MAX_SIZE to 524280 bytes
> tcp_cubic: make hystart_ack_delay() aware of BIG TCP
> ipv6: add struct hop_jumbo_hdr definition
> ipv6/gso: remove temporary HBH/jumbo header
> ipv6/gro: insert temporary HBH/jumbo header
> net: loopback: enable BIG TCP packets
> veth: enable BIG TCP packets
> mlx4: support BIG TCP packets
Looked over the changes to my patches and they all look good (sorry for
not catching that myself). This approach addresses all the concerns I
had.
For the series:
Acked-by: Alexander Duyck <alexanderduyck@...com>