Message-ID: <20250110163844.39f8efb3@kernel.org>
Date: Fri, 10 Jan 2025 16:38:44 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: John Daley <johndale@...co.com>
Cc: andrew+netdev@...n.ch, benve@...co.com, davem@...emloft.net,
 edumazet@...gle.com, neescoba@...co.com, netdev@...r.kernel.org,
 pabeni@...hat.com, satishkh@...co.com
Subject: Re: [PATCH net-next v4 4/6] enic: Use the Page Pool API for RX when
 MTU is less than page size

On Thu,  9 Jan 2025 20:03:02 -0800 John Daley wrote:
> >>Good point, once fragmentation is no longer possible you can
> >>set .max_len to the size of the fragment HW may clobber,
> >>and .offset to the reserved headroom.  
> >
> >OK, testing is going well so far, but I need another day.
> 
> Testing is OK, but we are concerned about the extra memory usage when
> order is greater than 0, especially for 9000 MTU, where order 2 would
> mean allocating an extra unused page per buffer. This could impact
> scaled-up installations with memory constraints. For this reason we
> would like to limit the use of the page pool to MTU <= PAGE_SIZE for
> now, so that order is 0.

And without the page pool, what would the allocation size be for a 9k
MTU if you don't have scatter? I think you're allocating linear skbs,
which IIRC will round up to the next power of 2...

> Our newer hardware supports using multiple 0 order pages for large MTUs
> and we will submit a patch for that in the future.
> 
> I will make a v5 patchset with the napi_free_frags and pp_alloc_error
> changes already discussed. Thanks, John
