Message-ID: <20250923172305.0b0a235c@kernel.org>
Date: Tue, 23 Sep 2025 17:23:05 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Dragos Tatulea <dtatulea@...dia.com>
Cc: Tariq Toukan <tariqt@...dia.com>, Eric Dumazet <edumazet@...gle.com>,
 Paolo Abeni <pabeni@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>,
 "David S. Miller" <davem@...emloft.net>, Saeed Mahameed
 <saeedm@...dia.com>, Mark Bloch <mbloch@...dia.com>, Leon Romanovsky
 <leon@...nel.org>, Jesper Dangaard Brouer <hawk@...nel.org>, Ilias
 Apalodimas <ilias.apalodimas@...aro.org>, netdev@...r.kernel.org,
 linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org, Gal Pressman
 <gal@...dia.com>
Subject: Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max

On Tue, 23 Sep 2025 08:23:10 -0700 Jakub Kicinski wrote:
> On Tue, 23 Sep 2025 15:12:33 +0000 Dragos Tatulea wrote:
> > On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:  
> > > Please do some testing. A PP cache of 32k is just silly, you should
> > > probably use a smaller limit.    
> > You mean clamping the pool_size to a certain limit so that the page_pool
> > ring size doesn't cover a full RQ when the RQ ring size is too large?  
> 
> Yes, an 8k ring will take milliseconds to drain. We don't really need
> milliseconds of page cache. By the time the driver has processed the
> full ring we must have gone thru 128 NAPI cycles, and the application
> has most likely already started freeing the pages.
> 
> If my math is right, at 80 Gbps per ring and 9k MTU it takes more than
> 1 usec to receive a frame. So 8 msec to just _receive_ a full ring's
> worth of data. At Meta we mostly use large rings to cover up scheduler
> and IRQ masking latency.
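
Spelling out the arithmetic above (assuming an 8k ring and the default
NAPI budget of 64 packets per poll): 9000 bytes * 8 is ~72 kbit per
frame, and 72 kbit / 80 Gbps is about 0.9 usec, so call it ~1 usec per
frame; 8192 frames * ~1 usec gives roughly 8 msec, and 8192 / 64 = 128
NAPI polls.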

On second thought, let's just clamp it to 16k in the core and remove
the error. Clearly the expectations of the API are too intricate;
most drivers just use the ring size as the cache size.
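
Roughly what I have in mind, as an untested sketch replacing the
existing -E2BIG sanity check in page_pool_init() (net/core/page_pool.c):

	ring_qsize = 1024;
	if (pool->p.pool_size)
		ring_qsize = pool->p.pool_size;

	/* Drivers commonly pass the RX ring size straight through as
	 * pool_size, so clamp to 16k instead of returning -E2BIG.
	 */
	ring_qsize = min(ring_qsize, 16384u);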
