Message-ID: <66ea9048-3287-c0d5-6edc-bd4b7ec4bd70@kernel.org>
Date: Sat, 5 Mar 2022 09:36:51 -0700
From: David Ahern <dsahern@...nel.org>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
netdev <netdev@...r.kernel.org>, Coco Li <lixiaoyan@...gle.com>,
Alexander Duyck <alexanderduyck@...com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>
Subject: Re: [PATCH v2 net-next 14/14] mlx5: support BIG TCP packets
On 3/4/22 10:14 AM, Eric Dumazet wrote:
> On Thu, Mar 3, 2022 at 8:43 PM David Ahern <dsahern@...nel.org> wrote:
>>
>> On 3/3/22 11:16 AM, Eric Dumazet wrote:
>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> index b2ed2f6d4a9208aebfd17fd0c503cd1e37c39ee1..1e51ce1d74486392a26568852c5068fe9047296d 100644
>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
>>> @@ -4910,6 +4910,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
>>>
>>> netdev->priv_flags |= IFF_UNICAST_FLT;
>>>
>>> + netif_set_tso_ipv6_max_size(netdev, 512 * 1024);
>>
>>
>> How does the ConnectX hardware handle fairness for such large packet
>> sizes? For 1500 MTU this means a single large TSO can cause the H/W to
>> generate 349 MTU sized packets. Even a 4k MTU means 128 packets. This
>> has an effect on the rate of packets hitting the next hop switch for
>> example.
>
> I think ConnectX cards interleave packets from all TX queues, at least
> old CX3 have a parameter to control that.
>
> Given that we already can send at line rate, from a single TX queue, I
> do not see why presenting larger TSO packets
> would change anything on the wire ?
>
> Do you think ConnectX adds an extra gap on the wire at the end of a TSO train ?
It's not about 1 queue; my question was along several lines, e.g.:
1. the inter-packet gap for TSO-generated packets. With 512kB packets
the burst is 8x what it is today.
2. the fairness within the hardware, as 1 queue can have many 512kB
packets queued, with an impact on other queues (e.g., higher latency?)
since it will take longer to split the larger packets into MTU-sized
packets.
It is really about understanding the effect this new default size will
have on users.