Message-Id: <20250110040302.14891-1-johndale@cisco.com>
Date: Thu,  9 Jan 2025 20:03:02 -0800
From: John Daley <johndale@...co.com>
To: johndale@...co.com
Cc: andrew+netdev@...n.ch,
	benve@...co.com,
	davem@...emloft.net,
	edumazet@...gle.com,
	kuba@...nel.org,
	neescoba@...co.com,
	netdev@...r.kernel.org,
	pabeni@...hat.com,
	satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when MTU is less than page size

On 1/6/25, 7:00 PM, "John Daley" <johndale@...co.com> wrote:
>
>>> >> The Page Pool API improves bandwidth and reduces CPU overhead by
>>> >> recycling pages instead of allocating new buffers in the driver.
>>> >> Make use of page pool fragment allocation for smaller MTUs so that
>>> >> multiple packets can share a page.
>>> >
>>> >Why the MTU limitation? You can set page_pool_params.order
>>> >to an appropriate value and always use the page pool.
>>> 
>>> I thought it might waste memory, e.g. allocating 16K for 9000 MTU.
>>> But now that you mention it, I see that the added code complexity is
>>> probably not worth it. I am unclear on what to set pp_params.max_len
>>> to when MTU > PAGE_SIZE: order * PAGE_SIZE or the MTU size? In this
>>> case the pages won't be fragmented, so isn't it only necessary for
>>> the MTU-sized area to be DMA synced?
>>
>>Good point, once fragmentation is no longer possible you can
>>set .max_len to the size of the fragment HW may clobber,
>>and .offset to the reserved headroom.
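
To make sure I am reading that right, for MTU > PAGE_SIZE the setup
would look something like this (untested sketch, not actual driver
code; 'buf_size', 'ring_size' and 'rx_headroom' are placeholder names):

	struct page_pool_params pp_params = {
		.order     = get_order(buf_size), /* e.g. 2 for 9000 MTU */
		.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size = ring_size,
		.dev       = dev,	/* the PCI function's struct device */
		.dma_dir   = DMA_FROM_DEVICE,
		/* pages are not fragmented, so only the area HW may
		 * clobber needs syncing, not the whole order-N page */
		.max_len   = buf_size,
		.offset    = rx_headroom, /* reserved headroom */
	};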
>
>OK, testing is going well so far, but I need another day.

Testing is OK, but we are concerned about extra memory usage when the
order is greater than 0, especially for 9000 MTU, where order 2 would
mean allocating an extra unused page per buffer. This could impact
scaled-up installations with memory constraints. For this reason we
would like to limit the use of the page pool to MTU <= PAGE_SIZE for
now, so that the order is 0.
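
With order 0, the RX buffers come from page fragments so that several
packets can share one page, roughly like the following (sketch only,
not the actual patch; 'buf_size' and 'ring_size' are placeholders):

	struct page_pool_params pp_params = {
		.order     = 0,
		.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size = ring_size,
		.dev       = dev,
		.dma_dir   = DMA_FROM_DEVICE,
		.max_len   = PAGE_SIZE,
		.offset    = 0,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* Per RX descriptor: carve out an MTU-sized fragment so
	 * multiple buffers can share the same order-0 page. */
	unsigned int offset;
	struct page *page = page_pool_dev_alloc_frag(pool, &offset,
						     buf_size);
	dma_addr_t dma_addr = page_pool_get_dma_addr(page) + offset;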

Our newer hardware supports using multiple order-0 pages for large
MTUs, and we will submit a patch for that in the future.

I will make a v5 patchset with the napi_free_frags and pp_alloc_error
changes already discussed.

Thanks,
John
