Message-ID: <CANn89i+q5rYm0QgHSHjmEH09DH3XGQ7N9uNvuiV_zu6LsE4m5w@mail.gmail.com>
Date: Thu, 3 Feb 2022 11:12:33 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Coco Li <lixiaoyan@...gle.com>
Subject: Re: [PATCH net-next 01/15] net: add netdev->tso_ipv6_max_size attribute
On Thu, Feb 3, 2022 at 10:58 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Thu, 3 Feb 2022 08:56:56 -0800 Eric Dumazet wrote:
> > On Thu, Feb 3, 2022 at 8:34 AM Jakub Kicinski <kuba@...nel.org> wrote:
> > > On Wed, 2 Feb 2022 17:51:26 -0800 Eric Dumazet wrote:
> > > > From: Eric Dumazet <edumazet@...gle.com>
> > > >
> > > > Some NICs (or virtual devices) are LSOv2 compatible.
> > > >
> > > > BIG TCP plans to use the large LSOv2 feature for IPv6.
> > > >
> > > > A new netlink attribute, IFLA_TSO_IPV6_MAX_SIZE, is defined.
> > > >
> > > > Drivers should use netif_set_tso_ipv6_max_size() to advertise their limit.
> > > >
> > > > Unchanged drivers will not allow big TSO packets to be sent.
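
(Side note for driver authors: something like the following minimal sketch is
what the helper is meant for. The foo_ name and the 512KB value are made up,
and the (dev, size) signature of netif_set_tso_ipv6_max_size() is assumed
from the description above.)

#include <linux/netdevice.h>

/* Hypothetical probe-time setup for a "foo" NIC: advertise that the
 * device accepts IPv6 TSO packets larger than the legacy 64KB limit.
 * The 512KB value is only an example, not a recommendation. */
static void foo_setup_tso_limits(struct net_device *netdev)
{
	netif_set_tso_ipv6_max_size(netdev, 512 * 1024);
}
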
> > >
> > > Many drivers will have a limit on how many buffer descriptors they
> > > can chain, not the size of the super frame, I'd think. Is that not
> > > the case? We can't assume all pages but the first and last are full,
> > > right?
> >
> > In our case, we have a 100Gbit Google NIC which has these limits:
> >
> > - TX descriptor has a 16bit field filled with skb->len
> > - No more than 21 frags per 'packet'
> >
> > In order to support BIG TCP on it, we had to split the bigger TCP packets
> > into smaller chunks to satisfy both constraints (though the second
> > constraint is rarely hit once you chop to ~60KB packets, given our 4K
> > MTU).
> >
> > ndo_features_check() might help to take care of small oddities.
>
> Makes sense, I was curious if we can do more in the core so that fewer
> changes are required in the drivers. Both so that drivers don't have to
> strip the header and so that drivers with limitations can be served
> pre-cooked smaller skbs.
It is on my plate to implement a helper that splits 'big GRO/TSO' packets
into smaller chunks. I have avoided doing this in our Google NIC driver,
because it would add extra sk_buff/skb->head allocations for each BIG TCP packet.
Yes, the core networking stack could use such a helper.
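
Roughly what I mean by handling the oddities in ndo_features_check(), as a
sketch only (the foo_ names are made up; the limits mirror the 16-bit length
field and 21-frag constraint mentioned above):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define FOO_MAX_DESC_LEN	0xffff	/* 16-bit skb->len field in the TX descriptor */
#define FOO_MAX_TX_FRAGS	21	/* max frags the hardware can chain per packet */

static netdev_features_t foo_features_check(struct sk_buff *skb,
					    struct net_device *dev,
					    netdev_features_t features)
{
	/* If one TX descriptor chain cannot describe this packet, clear
	 * the GSO features so the core software-segments it before it
	 * reaches ndo_start_xmit(). */
	if (skb->len > FOO_MAX_DESC_LEN ||
	    skb_shinfo(skb)->nr_frags > FOO_MAX_TX_FRAGS)
		features &= ~NETIF_F_GSO_MASK;

	return features;
}
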
> I wonder how many drivers just assumed MAX_SKB_FRAGS will never
> change :S What do you think about a device-level check in the core
> for number of frags?
I guess we could add such a check if CONFIG_MAX_SKB_FRAGS > 17.
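
Purely as a sketch of that core-side check; neither CONFIG_MAX_SKB_FRAGS nor
a per-device dev->max_tx_frags field exists today, both stand in for what
such a check could look like:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical device-level check the core could run (e.g. from a
 * netif_skb_features()-like path): dev->max_tx_frags does not exist
 * upstream and would have to be filled in by drivers that have a
 * frag-count limit. */
static netdev_features_t dev_tx_frags_check(struct sk_buff *skb,
					    struct net_device *dev,
					    netdev_features_t features)
{
	if (dev->max_tx_frags &&
	    skb_shinfo(skb)->nr_frags > dev->max_tx_frags)
		features &= ~NETIF_F_GSO_MASK;

	return features;
}
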