Message-ID: <20251105182210.7630c19e@kernel.org>
Date: Wed, 5 Nov 2025 18:22:10 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Mina Almasry <almasrymina@...gle.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org, Joshua Washington
 <joshwash@...gle.com>, Harshitha Ramamurthy <hramamurthy@...gle.com>,
 Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
 <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Paolo Abeni
 <pabeni@...hat.com>, Jesper Dangaard Brouer <hawk@...nel.org>, Ilias
 Apalodimas <ilias.apalodimas@...aro.org>, Simon Horman <horms@...nel.org>,
 Willem de Bruijn <willemb@...gle.com>, ziweixiao@...gle.com, Vedant Mathur
 <vedantmathur@...gle.com>
Subject: Re: [PATCH net v1 2/2] gve: use max allowed ring size for ZC
 page_pools

On Wed, 5 Nov 2025 17:56:10 -0800 Mina Almasry wrote:
> On Wed, Nov 5, 2025 at 5:11 PM Jakub Kicinski <kuba@...nel.org> wrote:
> > On Wed,  5 Nov 2025 20:07:58 +0000 Mina Almasry wrote:  
> > > NCCL workloads with NCCL_P2P_PXN_LEVEL=2 or 1 are very slow with the
> > > current gve devmem tcp configuration.  
> >
> > Hardcoding the ring size because some other attribute makes you think
> > that a specific application is running is rather unclean IMO.
> 
> I did not see it this way tbh. My thinking is that for devmem tcp to be
> as robust as possible against the burstiness of frag frees, we need a
> fairly generous ring size. The specific application I'm referring to is
> just one example of how this could happen.
> 
> I was thinking maybe binding->dma_buf->size / net_iov_size (so that
> the ring is large enough to hold every single netmem if need be) would
> be the upper bound, but in practice increasing to the current max
> allowed was good enough, so I'm trying that.

Increasing cache sizes to the max seems very hacky at best.
The underlying implementation uses genpool and doesn't even
bother to do batching.
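
If you do want a bound derived from the binding itself, I'd expect it to
look roughly like this (a minimal sketch only; the helper name, the
per-net_iov size and the 32768 ring limit are from memory and not checked
against the tree):

	/* Sketch: one ring slot per net_iov backing the dma-buf binding,
	 * clamped to the largest ring page_pool_init() will accept
	 * (32768, if memory serves).  Names here are illustrative.
	 */
	static u32 gve_zc_pool_size(size_t dmabuf_size)
	{
		size_t niovs = dmabuf_size / PAGE_SIZE;

		return min_t(size_t, niovs, 32768);
	}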

> > Do you want me to respin the per-ring config series? Or you can take it over.
> > IDK where the buffer size config is after recent discussion but IIUC
> > it will not drag in my config infra so it shouldn't conflict.
> 
> You mean this one? "[RFC net-next 00/22] net: per-queue rx-buf-len
> configuration"
> 
> I don't see the connection between rx-buf-len and the ring size,
> unless you're thinking about some netlink-configurable way to set
> the pp->ring size?

The latter. We usually have the opposite problem - drivers configure
the cache far larger than any practical production workload needs and
waste memory.
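
Today that number is just whatever the driver hard-codes into
page_pool_params when the pool is created; a netlink knob would
effectively override something like this (sketch with illustrative
values, not gve's actual setup):

	struct page_pool_params pp = {
		.pool_size = ring_size,        /* the cache/ring size in question */
		.nid       = NUMA_NO_NODE,
		.dev       = &priv->pdev->dev, /* illustrative wiring */
		.napi      = &rx->napi,
		.dma_dir   = DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp);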

> I am hoping for something backportable as a fix, to make this class
> of workloads usable.

Oh, let's be clear, no way this is getting a fixes tag :/
