Message-ID: <20230712165326.71c3a8ad@kernel.org>
Date: Wed, 12 Jul 2023 16:53:26 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: "Nambiar, Amritha" <amritha.nambiar@...el.com>
Cc: <netdev@...r.kernel.org>, <davem@...emloft.net>,
<sridhar.samudrala@...el.com>
Subject: Re: [net-next/RFC PATCH v1 1/4] net: Introduce new napi fields for
rx/tx queues
On Wed, 12 Jul 2023 16:11:55 -0700 Nambiar, Amritha wrote:
> >> The idea was for netdev-genl to extract information out of
> >> netdev->napi_list->napi. For tracking queues, we build a linked list
> >> of queues for the napi and store it in the napi_struct. This would
> >> also enable updating the napi<->queue[s] association (later with the
> >> 'set' command), i.e. removing the queue[s] from the napi instance
> >> they are currently associated with and mapping them to the new napi
> >> instance, by simply deleting from one list and adding to the new
> >> list.
> >
> > Right, my point is that each queue can only be serviced by a single
> > NAPI at a time, so we have a 1:N relationship. It's easier to store
> > the state on the side that's the N, rather than 1.
> >
> > You can add a list_head to the queue structs if you prefer to be
> > able to walk the queues of a NAPI more efficiently (that said, the
> > head for the list is in the "control path only" section of
> > napi_struct, so... I think you don't?)
>
> The napi pointer in the queue structs would give the napi<->queue
> mapping, but I still need to walk the queues of a NAPI (when there
> are multiple queues for the NAPI), for example:
> 'napi-id': 600, 'rx-queues': [7,6,5], 'tx-queues': [7,6,5]
>
> in which case I would have a list of netdev queue structs within the
> napi_struct (instead of the list of queue indices that I currently have)
> to avoid memory allocation.
>
> Does this sound right?
Yes, I think that's fine.

If we store the NAPI pointer in the queue struct, we can still generate
the same dump with a time complexity of #napis * (#max_rx + #max_tx),
which I don't think is too bad. Up to you.
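
For concreteness, a rough sketch of the layout being discussed. All
struct and helper names below (queue_sketch, napi_sketch,
queue_move_to_napi) are made up for illustration, not the in-tree
definitions:

#include <linux/list.h>
#include <linux/netdevice.h>

/* N side: each queue records the single NAPI servicing it, plus
 * optional linkage so the NAPI's queues can be walked directly.
 */
struct queue_sketch {
        struct napi_struct *napi;       /* the one NAPI for this queue */
        struct list_head napi_node;     /* on the NAPI's rx/tx queue list */
};

/* 1 side: the list heads would sit in the control-path-only section
 * of napi_struct, so they are only ever walked from the slow path.
 */
struct napi_sketch {
        struct list_head rx_queue_list; /* of queue_sketch.napi_node */
        struct list_head tx_queue_list;
};

/* A future 'set' command then reduces to deleting from one list and
 * adding to another, exactly as described above.
 */
static void queue_move_to_napi(struct queue_sketch *q,
                               struct napi_struct *napi,
                               struct list_head *queue_list)
{
        if (q->napi)
                list_del(&q->napi_node);
        q->napi = napi;
        list_add(&q->napi_node, queue_list);
}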
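And a sketch of the dump itself, assuming only the NAPI pointer is
stored in the queue structs. real_num_rx_queues, real_num_tx_queues,
_rx, netdev_get_tx_queue() and dev->napi_list all exist in net_device
today; the ->napi pointer on the queue structs is the hypothetical
addition:

#include <linux/netdevice.h>
#include <linux/printk.h>

/* For one NAPI, scan every queue of the device and report matches;
 * this is the #max_rx + #max_tx factor of the complexity above.
 */
static void napi_dump_queues(struct net_device *dev,
                             struct napi_struct *napi)
{
        unsigned int i;

        for (i = 0; i < dev->real_num_rx_queues; i++)
                if (dev->_rx[i].napi == napi)   /* hypothetical field */
                        pr_info("napi %u: rx-queue %u\n",
                                napi->napi_id, i);

        for (i = 0; i < dev->real_num_tx_queues; i++)
                if (netdev_get_tx_queue(dev, i)->napi == napi)
                        pr_info("napi %u: tx-queue %u\n",
                                napi->napi_id, i);
}

/* Walking dev->napi_list supplies the #napis factor. */
static void napi_dump_all(struct net_device *dev)
{
        struct napi_struct *napi;

        list_for_each_entry(napi, &dev->napi_list, dev_list)
                napi_dump_queues(dev, napi);
}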