Message-Id: <20250114212317.26218-1-johndale@cisco.com>
Date: Tue, 14 Jan 2025 13:23:17 -0800
From: John Daley <johndale@...co.com>
To: kuba@...nel.org
Cc: andrew+netdev@...n.ch,
benve@...co.com,
davem@...emloft.net,
edumazet@...gle.com,
johndale@...co.com,
neescoba@...co.com,
netdev@...r.kernel.org,
pabeni@...hat.com,
satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when MTU is less than page size
>On 1/10/25, 4:38 PM, "Jakub Kicinski" kuba@...nel.org wrote:
>
>On Thu, 9 Jan 2025 20:03:02 -0800 John Daley wrote:
>> >>Good point, once fragmentation is no longer possible you can
>> >>set .max_len to the size of the fragment HW may clobber,
>> >>and .offset to the reserved headroom.
>> >
>> >Ok, testing going good so far, but need another day.
>>
>> Testing is OK, but we are concerned about extra memory usage when order
>> is greater than 0. Especially for 9000 MTU where order 2 would mean
>> allocating an extra unused page per buffer. This could impact scaled up
>> installations with memory constraints. For this reason we would like to
>> limit the use of page pool to MTU <= PAGE_SIZE for now so that order is
>> 0.
>
>And if you don't use the page pool what would be the allocation size
>for 9k MTU if you don't have scatter? I think you're allocating linear
>skbs, which IIRC will round up to the next power of 2...
Right, I now realize the linear skb allocation does round up, so it
uses the same amount of memory as the page pool for MTU 9000. I am
spinning a new patch set that uses only the page pool, since the code
will be less complicated. Thanks!
>
>> Our newer hardware supports using multiple 0 order pages for large MTUs
>> and we will submit a patch for that in the future.
>>
>> I will make a v5 patchset with the napi_free_frags and pp_alloc_error
>> changes already discussed. Thanks, John
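For reference, the earlier suggestion in this thread (clamp .max_len to
the span the HW may clobber and put the reserved headroom in .offset,
with order 0 once MTU <= PAGE_SIZE) would look roughly like the sketch
below. The field names are from the in-kernel struct page_pool_params,
but the sizes and the ring_size/headroom/dev identifiers are
illustrative assumptions, not the actual v5 patch:

```c
/* Sketch only: struct page_pool_params fields are real, but the
 * concrete values (ring_size, headroom, dev) are placeholders. */
struct page_pool_params pp = {
	.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
	.order		= 0,			/* MTU <= PAGE_SIZE: single pages */
	.pool_size	= ring_size,		/* assumed RX ring depth */
	.nid		= dev_to_node(dev),
	.dev		= dev,
	.dma_dir	= DMA_FROM_DEVICE,
	.max_len	= PAGE_SIZE - headroom,	/* largest span HW may write */
	.offset		= headroom,		/* reserved skb headroom */
};
struct page_pool *pool = page_pool_create(&pp);
```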