Message-ID: <0d270dc73553e5deb1a195f4ae84f2795eb1b167.camel@redhat.com>
Date: Thu, 03 Feb 2022 17:01:44 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
Cc: netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Coco Li <lixiaoyan@...gle.com>
Subject: Re: [PATCH net-next 09/15] net: increase MAX_SKB_FRAGS
On Wed, 2022-02-02 at 17:51 -0800, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
>
> Currently, the MAX_SKB_FRAGS value is 17.
>
> For standard tcp sendmsg() traffic, no big deal because tcp_sendmsg()
> attempts order-3 allocations, stuffing 32768 bytes per frag.
>
> But with zero copy, we use order-0 pages.
>
> For BIG TCP to show its full potential, we increase MAX_SKB_FRAGS
> to be able to fit 45 segments per skb.
>
> This is also needed for BIG TCP rx zerocopy, as zerocopy currently
> does not support skbs with frag list.
>
> We have used this MAX_SKB_FRAGS value for years at Google before
> we deployed 4K MTU, with no adverse effect.
> Back then, the goal was to be able to receive full size (64KB) GRO
> packets without the frag_list overhead.
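For reference, a quick sketch of the frag-count arithmetic behind the numbers in the description above (my own back-of-the-envelope figures, assuming 4 KiB pages; not taken from the patch itself):

```python
# Sketch (not kernel code): why 17 frags suffice for regular sendmsg()
# but BIG TCP with zero copy needs MAX_SKB_FRAGS raised to 45.

PAGE_SIZE = 4096                  # assumed 4 KiB pages

# Standard tcp_sendmsg() attempts order-3 allocations: 32768 bytes/frag.
order3_frag = PAGE_SIZE << 3      # 32768 bytes
print(17 * order3_frag)           # 557056 -> far more than a 64 KiB skb

# Zero copy pins user pages directly: order-0, one 4 KiB page per frag.
print(17 * PAGE_SIZE)             # 69632 -> just enough for 64 KiB GRO

# BIG TCP wants 45 segments of ~4 KiB each in one skb, no frag list.
print(45 * PAGE_SIZE)             # 184320 bytes (~180 KiB) per skb
```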
IIRC, while backporting some changes to an older RHEL kernel, we had to
increase the skb overhead due to a kABI issue.
That caused some measurable regressions, because some drivers (e.g.
ixgbe) were no longer able to allocate multiple (skb) heads from
the same page.
All the above is subject to some noise - it's a fading memory.
I'll try to do some tests with the H/W I have handy, but it could take
a little time due to conflicting scheduling here.
Thanks,
Paolo