Date:   Fri, 4 Mar 2022 09:14:19 -0800
From:   Eric Dumazet <edumazet@...gle.com>
To:     David Ahern <dsahern@...nel.org>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        "David S . Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        netdev <netdev@...r.kernel.org>, Coco Li <lixiaoyan@...gle.com>,
        Alexander Duyck <alexanderduyck@...com>,
        Saeed Mahameed <saeedm@...dia.com>,
        Leon Romanovsky <leon@...nel.org>
Subject: Re: [PATCH v2 net-next 14/14] mlx5: support BIG TCP packets

On Thu, Mar 3, 2022 at 8:43 PM David Ahern <dsahern@...nel.org> wrote:
>
> On 3/3/22 11:16 AM, Eric Dumazet wrote:
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > index b2ed2f6d4a9208aebfd17fd0c503cd1e37c39ee1..1e51ce1d74486392a26568852c5068fe9047296d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > @@ -4910,6 +4910,7 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
> >
> >       netdev->priv_flags       |= IFF_UNICAST_FLT;
> >
> > +     netif_set_tso_ipv6_max_size(netdev, 512 * 1024);
>
>
> How does the ConnectX hardware handle fairness for such large packet
> sizes? For 1500 MTU this means a single large TSO can cause the H/W to
> generate 349 MTU sized packets. Even a 4k MTU means 128 packets. This
> has an effect on the rate of packets hitting the next hop switch for
> example.
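
For reference, the per-burst packet counts above are simply the 512KB
cap divided by the MTU. A standalone userspace sketch of that
arithmetic (ignoring MSS/header rounding, so the real counts differ
slightly):

#include <stdio.h>

int main(void)
{
	/* 512KB cap from the patch hunk above, divided by each MTU
	 * David mentions; integer division, headers ignored. */
	const unsigned int tso_max = 512 * 1024;
	const unsigned int mtus[] = { 1500, 4096 };
	unsigned int i;

	for (i = 0; i < sizeof(mtus) / sizeof(mtus[0]); i++)
		printf("MTU %u -> ~%u wire packets per TSO\n",
		       mtus[i], tso_max / mtus[i]);
	return 0;
}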

I think ConnectX cards interleave packets from all TX queues; at least
the old CX3 has a parameter to control that.

Given that we can already send at line rate from a single TX queue, I
do not see why presenting larger TSO packets would change anything on
the wire.
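
For scale (assuming a 100 Gbit/s link; the thread does not state the
speed): even a full 512KB train occupies the wire for only ~42 usec,
vs ~5 usec for a 64KB one:

#include <stdio.h>

int main(void)
{
	/* Assumed link speed; not stated anywhere in this thread. */
	const double link_bps = 100e9;
	const unsigned int sizes[] = { 64 * 1024, 512 * 1024 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%3u KB TSO -> ~%.1f usec back-to-back on the wire\n",
		       sizes[i] / 1024, sizes[i] * 8 / link_bps * 1e6);
	return 0;
}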

Do you think ConnectX adds an extra gap on the wire at the end of a TSO train?
