Message-ID: <32e32635-ca75-99b8-2285-1d87a29b6d89@intel.com>
Date: Mon, 31 Jul 2023 16:48:27 -0700
From: "Nambiar, Amritha" <amritha.nambiar@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: <netdev@...r.kernel.org>, <davem@...emloft.net>,
	<sridhar.samudrala@...el.com>
Subject: Re: [net-next/RFC PATCH v1 1/4] net: Introduce new napi fields for
 rx/tx queues

On 7/28/2023 4:09 PM, Jakub Kicinski wrote:
> On Fri, 28 Jul 2023 15:37:14 -0700 Nambiar, Amritha wrote:
>> Hi Jakub, I have the next version of the patches ready (I'll send it
>> in a bit). I would suggest you take a look and let me know your
>> thoughts, and then we can proceed from there.
> 
> Great, looking forward.
> 
>> About dumping queues and NAPIs separately, are you thinking about having
>> both per-NAPI and per-queue instances, or do you think only one will
>> suffice? The plan was to follow this work with a 'set-napi' series,
>> something like,
>> set-napi <napi_id> queues <q_id1, q_id2, ...>
>> to configure the queue[s] that are to be serviced by the napi instance.
>>
>> In this case, dumping the NAPIs would be beneficial especially when
>> there are multiple queues on the NAPI.
>>
>> WRT per-queue, is there a set of parameters that needs to be exposed
>> besides what's already handled by ethtool...
> 
> Not much at this point, maybe memory model. Maybe stats if we want to
> put stats in the same command. But the fact that sysfs has a bunch of
> per queue attributes makes me think that sooner or later we'll want
> queue as a full object in netlink. And starting out that way makes
> the whole API cleaner, at least in my opinion.
> 
> If we have another object which wants to refer to queues (e.g. page
> pool) it's easier to express the topology when it's clear what is an
> object and what's just an attribute.
> 
>> Also, to configure a queue
>> on a NAPI via set-queue <qid> <napi_id>, the existing NAPIs would have
>> to be looked up from the dumped queue parameters.
> 
> The lookup should not be much of a problem.
> 
> And don't you think that:
> 
>    set-queue queue 1 napi-id 101
>    set-queue queue 2 napi-id 101
> 
> is more natural than:
> 
>    set-napi napi-id 101 queues [1, 2]
> 
> Especially in presence of conflicts. If user tries:
> 
>    set-napi napi-id 101 queues [1, 2]
>    set-napi napi-id 102 queues [1, 2]
> 
> Do both napis now serve those queues? May seem obvious to us, but
> "philosophically" why does setting an attribute of object 102 change
> attributes of object 101?
> 

Right, I see the point. In the presence of conflicts, when the
napi<->queue[s] mappings are updated, set-napi will impact other
NAPI-IDs, while set-queue would limit the change to just the queue
that is requested.

In both cases, the underlying work remains the same:
1. Remove the queue from the existing napi instance it is associated with.
2. Driver updates queue[s]<->vector mapping and associates with new napi
instance.
3. Report the impacted napi/queue back to the stack.

The 'napi-get' command would list all the napis and the updated
queue[s] list.
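
As a rough illustration of the three steps and the napi-get dump above,
here is a toy user-space model. All names (NapiRegistry, set_queue,
napi_get) are hypothetical stand-ins; the real work would happen in the
driver and the netlink layer, not in Python.

```python
# Hypothetical model of the remap steps; not a real kernel interface.

class NapiRegistry:
    """Toy model mapping napi_id -> set of queue ids."""

    def __init__(self):
        self.napis = {}  # napi_id -> set of queue ids

    def set_queue(self, qid, napi_id):
        # Step 1: remove the queue from the napi it is currently
        # associated with, if any.
        for queues in self.napis.values():
            queues.discard(qid)
        # Step 2: associate the queue with the new napi instance
        # (stands in for the driver updating queue<->vector mapping).
        self.napis.setdefault(napi_id, set()).add(qid)
        # Step 3: report the impacted napi/queue back to the caller.
        return napi_id, qid

    def napi_get(self):
        # Dump all napis with their updated queue lists.
        return {nid: sorted(qs) for nid, qs in self.napis.items()}

reg = NapiRegistry()
reg.set_queue(1, 101)
reg.set_queue(2, 101)
reg.set_queue(2, 102)   # conflict: queue 2 moves from napi 101 to 102
print(reg.napi_get())   # {101: [1], 102: [2]}
```

Note how the conflict case resolves itself naturally: setting queue 2 on
napi 102 only changes queue 2, leaving napi 101 otherwise untouched.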

Now, in use cases where a single poller is set to service multiple
queues (say 8), set-napi can do this with a single command, while
set-queue will result in 8 separate requests to the driver. This is
the trade-off I see if we go with set-queue.

> If we ever gain the ability to create queues it will be:
> 
>    create-queue napi-id xyz
> 
> which also matches set-queue more nicely than napi base API.
