Message-ID: <44c5024a-d533-0ae4-355a-c568b67b1964@intel.com>
Date: Fri, 28 Jul 2023 15:37:14 -0700
From: "Nambiar, Amritha" <amritha.nambiar@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: <netdev@...r.kernel.org>, <davem@...emloft.net>,
	<sridhar.samudrala@...el.com>
Subject: Re: [net-next/RFC PATCH v1 1/4] net: Introduce new napi fields for
 rx/tx queues

On 7/28/2023 2:59 PM, Jakub Kicinski wrote:
> On Wed, 12 Jul 2023 16:53:26 -0700 Jakub Kicinski wrote:
>>> The napi pointer in the queue structs would give the napi<->queue
>>> mapping, I still need to walk the queues of a NAPI (when there are
>>> multiple queues for the NAPI), example:
>>> 'napi-id': 600, 'rx-queues': [7,6,5], 'tx-queues': [7,6,5]
>>>
>>> in which case I would have a list of netdev queue structs within the
>>> napi_struct (instead of the list of queue indices that I currently have)
>>> to avoid memory allocation.
>>>
>>> Does this sound right?
>>
>> yes, I think that's fine.
>>
>> If we store the NAPI pointer in the queue struct, we can still generate
>> the same dump with the time complexity of #napis * (#max_rx + #max_tx).
>> Which I don't think is too bad. Up to you.
> 
> The more I think about it the more I feel like we should dump queues
> and NAPIs separately. And the queue can list the NAPI id of the NAPI
> instance which services it.
> 
> Are you actively working on this or should I take a stab?

Hi Jakub, I have the next version of the patches ready (I'll send it in 
a bit). I suggest you take a look at it and let me know your thoughts, 
and then we can proceed from there.

About dumping queues and NAPIs separately, are you thinking of having 
both per-NAPI and per-queue instances, or do you think only one will 
suffice? The plan was to follow this work with a 'set-napi' series, 
something like
set-napi <napi_id> queues <q_id1, q_id2, ...>
to configure the queue[s] that are to be serviced by the napi instance.
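
(A minimal userspace sketch of the bookkeeping this would rely on, with 
made-up struct and field names rather than the actual patch code: each 
queue holds a pointer to the NAPI servicing it, and the NAPI embeds a 
list of its queues so the mapping can be walked and re-bound without 
allocating memory.)

/* Illustrative model only; names are not from the kernel or the patches. */
#include <stdio.h>

struct napi_model;

struct queue_model {
	int			id;
	struct napi_model	*napi;	/* back-pointer: queue -> NAPI */
	struct queue_model	*next;	/* link in the NAPI's queue list */
};

struct napi_model {
	unsigned int		napi_id;
	struct queue_model	*rx_queues;	/* embedded list, no allocation */
	struct queue_model	*tx_queues;
};

/* "set-napi <napi_id> queues <q_id1, ...>" would effectively do this
 * for each listed queue. */
static void bind_rx_queue(struct napi_model *napi, struct queue_model *q)
{
	q->napi = napi;
	q->next = napi->rx_queues;
	napi->rx_queues = q;
}

int main(void)
{
	struct napi_model napi = { .napi_id = 600 };
	struct queue_model rx[3] = { { .id = 5 }, { .id = 6 }, { .id = 7 } };

	for (int i = 0; i < 3; i++)
		bind_rx_queue(&napi, &rx[i]);

	/* Per-NAPI dump: walk the queues of this NAPI. */
	printf("napi-id %u rx-queues:", napi.napi_id);
	for (struct queue_model *q = napi.rx_queues; q; q = q->next)
		printf(" %d", q->id);
	printf("\n");
	return 0;
}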

In this case, dumping the NAPIs would be beneficial, especially when 
there are multiple queues on the NAPI.

WRT per-queue, is there a set of parameters that needs to be exposed 
besides what's already handled by ethtool? Also, to configure a queue 
on a NAPI (set-queue <qid> <napi_id>), the existing NAPIs would have to 
be looked up from the dumped queue parameters.
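
(Similarly, a rough illustration of the per-queue direction, again with 
hypothetical names: the dump reports, for each queue, the napi-id 
servicing it, and a set-queue operation would look up the existing NAPI 
from exactly that mapping.)

/* Illustrative model only; names are not from the kernel or the patches. */
#include <stdio.h>

struct queue_info {
	int		id;
	unsigned int	napi_id;	/* NAPI instance servicing this queue */
};

static void dump_rx_queues(const struct queue_info *q, int n)
{
	for (int i = 0; i < n; i++)
		printf("rx-queue %d: napi-id %u\n", q[i].id, q[i].napi_id);
}

int main(void)
{
	/* Matches the example mapping from the thread: rx 5,6,7 -> NAPI 600. */
	struct queue_info rx[] = { { 5, 600 }, { 6, 600 }, { 7, 600 } };

	dump_rx_queues(rx, 3);
	return 0;
}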

-Amritha
