Date: Thu, 18 Apr 2024 18:37:06 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: "Nambiar, Amritha" <amritha.nambiar@...el.com>
Cc: <netdev@...r.kernel.org>, <davem@...emloft.net>, <edumazet@...gle.com>,
 <pabeni@...hat.com>, <ast@...nel.org>, <sdf@...gle.com>,
 <lorenzo@...nel.org>, <tariqt@...dia.com>, <daniel@...earbox.net>,
 <anthony.l.nguyen@...el.com>, <lucien.xin@...il.com>, <hawk@...nel.org>,
 <sridhar.samudrala@...el.com>
Subject: Re: [net-next,RFC PATCH 0/5] Configuring NAPI instance for a queue

On Thu, 18 Apr 2024 14:23:03 -0700 Nambiar, Amritha wrote:
> >> I am not sure of this. ethtool shows pre-set defaults and current
> >> settings, but in this case, it is tricky :(  
> > 
> > Can you say more about the use case for moving the queues around?
> > If you just want to have fewer NAPI vectors and more queues, but
> > don't care about exact mapping - we could probably come up with
> > a simpler API, no? Are the queues stack queues or also AF_XDP?
> 
> I'll try to explain. The goal is to have fewer NAPI pollers. The number 
> of NAPI pollers is the same as the number of active NAPIs (kthread per 
> NAPI). It is possible to limit the number of pollers by mapping 
> multiple queues onto an interrupt vector (fewer vectors, more queues) 
> implicitly in the driver. But we are looking for a more granular 
> approach: in our case, the queues are grouped into 
> queue-groups/rss-contexts. We would like to reduce the number of pollers 
> within certain selected queue-groups/rss-contexts (not all the 
> queue-groups), hence the need for configurability.
> This would benefit our hyper-threading use case, where a single physical 
> core can be used for both network and application processing. If the 
> NAPI to queue association is known, we can pin the NAPI thread to the 
> logical core and the application thread to the corresponding sibling 
> logical core.

Could you provide a more detailed example of a desired configuration? 
I'm not sure I'm getting the point.

What's the point of having multiple queues if they end up in the same
NAPI?
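
A minimal sketch of the pinning scheme described in the quoted text, assuming threaded NAPI (a real sysfs knob since Linux 5.12); the interface name, core numbers, and the sibling pairing (core N paired with N+4) are illustrative assumptions, not part of the thread:

```shell
# Sketch only: interface name, core numbers, and the (N, N+4)
# hyper-thread sibling layout are assumptions for illustration.
IFACE=eth0

# Switch the device to threaded NAPI so each NAPI instance gets its
# own kthread, named napi/<iface>-<napi_id>.
echo 1 > /sys/class/net/"$IFACE"/threaded

# Pin each NAPI kthread to one logical core; the matching application
# thread would then be pinned to that core's hyper-thread sibling.
CORE=0
for pid in $(pgrep -f "napi/$IFACE"); do
    taskset -cp "$CORE" "$pid"   # poller on logical core $CORE
    # application thread goes on the assumed sibling, core $((CORE + 4))
    CORE=$((CORE + 1))
done
```

With per-queue NAPI configurability as proposed in the patch set, the number of such kthreads within a chosen queue-group could be reduced before pinning.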
