Message-ID: <Z4WKHnDG9VSMe5OD@LQ3V64L9R2>
Date: Mon, 13 Jan 2025 13:48:14 -0800
From: Joe Damato <jdamato@...tly.com>
To: Gerhard Engleder <gerhard@...leder-embedded.com>
Cc: magnus.karlsson@...el.com, andrew@...n.ch, davem@...emloft.net,
edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next] tsnep: Link queues to NAPIs
On Mon, Jan 13, 2025 at 09:32:23PM +0100, Gerhard Engleder wrote:
> On 13.01.25 20:59, Joe Damato wrote:
> > On Fri, Jan 10, 2025 at 11:39:39PM +0100, Gerhard Engleder wrote:
> > > Use netif_queue_set_napi() to link queues to NAPI instances so that they
> > > can be queried with netlink.
> > >
> > > $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> > > --dump queue-get --json='{"ifindex": 11}'
> > > [{'id': 0, 'ifindex': 11, 'napi-id': 9, 'type': 'rx'},
> > > {'id': 1, 'ifindex': 11, 'napi-id': 10, 'type': 'rx'},
> > > {'id': 0, 'ifindex': 11, 'napi-id': 9, 'type': 'tx'},
> > > {'id': 1, 'ifindex': 11, 'napi-id': 10, 'type': 'tx'}]
> > >
> > > Additionally use netif_napi_set_irq() to also provide NAPI interrupt
> > > number to userspace.
> > >
> > > $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> > > --do napi-get --json='{"id": 9}'
> > > {'defer-hard-irqs': 0,
> > > 'gro-flush-timeout': 0,
> > > 'id': 9,
> > > 'ifindex': 11,
> > > 'irq': 42,
> > > 'irq-suspend-timeout': 0}
> > >
> > > Providing information about queues to userspace makes sense as APIs like
> > > XSK provide queue specific access. Also XSK busy polling relies on
> > > queues linked to NAPIs.
> > >
> > > Signed-off-by: Gerhard Engleder <gerhard@...leder-embedded.com>
> > > ---
> > > drivers/net/ethernet/engleder/tsnep_main.c | 28 ++++++++++++++++++----
> > > 1 file changed, 23 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
> > > index 45b9f5780902..71e950e023dc 100644
> > > --- a/drivers/net/ethernet/engleder/tsnep_main.c
> > > +++ b/drivers/net/ethernet/engleder/tsnep_main.c
> > > @@ -1984,23 +1984,41 @@ static int tsnep_queue_open(struct tsnep_adapter *adapter,
> > > static void tsnep_queue_enable(struct tsnep_queue *queue)
> > > {
> > > + struct tsnep_adapter *adapter = queue->adapter;
> > > +
> > > + netif_napi_set_irq(&queue->napi, queue->irq);
> > > napi_enable(&queue->napi);
> > > - tsnep_enable_irq(queue->adapter, queue->irq_mask);
> > > + tsnep_enable_irq(adapter, queue->irq_mask);
> > > - if (queue->tx)
> > > + if (queue->tx) {
> > > + netif_queue_set_napi(adapter->netdev, queue->tx->queue_index,
> > > + NETDEV_QUEUE_TYPE_TX, &queue->napi);
> > > tsnep_tx_enable(queue->tx);
> > > + }
> > > - if (queue->rx)
> > > + if (queue->rx) {
> > > + netif_queue_set_napi(adapter->netdev, queue->rx->queue_index,
> > > + NETDEV_QUEUE_TYPE_RX, &queue->napi);
> > > tsnep_rx_enable(queue->rx);
> > > + }
> > > }
> > > static void tsnep_queue_disable(struct tsnep_queue *queue)
> > > {
> > > - if (queue->tx)
> > > + struct tsnep_adapter *adapter = queue->adapter;
> > > +
> > > + if (queue->rx)
> > > + netif_queue_set_napi(adapter->netdev, queue->rx->queue_index,
> > > + NETDEV_QUEUE_TYPE_RX, NULL);
> > > +
> > > + if (queue->tx) {
> > > tsnep_tx_disable(queue->tx, &queue->napi);
> > > + netif_queue_set_napi(adapter->netdev, queue->tx->queue_index,
> > > + NETDEV_QUEUE_TYPE_TX, NULL);
> > > + }
> > > napi_disable(&queue->napi);
> > > - tsnep_disable_irq(queue->adapter, queue->irq_mask);
> > > + tsnep_disable_irq(adapter, queue->irq_mask);
> > > /* disable RX after NAPI polling has been disabled, because RX can be
> > > * enabled during NAPI polling
> >
> > The changes generally look OK to me (it seems RTNL is held on all
> > paths where this code can be called from as far as I can tell), but
> > there was one thing that stood out to me.
> >
> > AFAIU, drivers avoid marking XDP queues as NETDEV_QUEUE_TYPE_RX
> > or NETDEV_QUEUE_TYPE_TX. I could be wrong, but that was my
> > understanding, and I submitted patches to several drivers with this
> > assumption.
> >
> > For example, in commit b65969856d4f ("igc: Link queues to NAPI
> > instances"), I unlinked/linked the NAPIs and queue IDs when XDP was
> > enabled/disabled. Likewise, in commit 64b62146ba9e ("net/mlx4: link
> > NAPI instances to queues and IRQs"), I avoided the XDP queues.
> >
> > If drivers are to avoid marking XDP queues as NETDEV_QUEUE_TYPE_RX
> > or NETDEV_QUEUE_TYPE_TX, perhaps tsnep needs to be modified
> > similarly?
>
> With 5ef44b3cb4 ("xsk: Bring back busy polling support") the linking of
> the NAPIs is required for XDP/XSK. So it seems strange to me that the
> NAPIs should be unlinked for XDP/XSK. But I'm not an expert, so maybe
> there is a reason why.
>
> I added Magnus; maybe he knows whether XSK queues should still be
> linked to NAPIs.
OK, so I think I was probably just wrong?
I looked at bnxt and it seems to mark XDP queues, which means
probably my patches for igc, ena, and mlx4 need to be fixed and the
proposed patch I have for virtio_net needs to be adjusted.
I can't remember now why I thought XDP queues should be avoided. I
feel like I read that or got it as feedback at some point, but I
can't remember. Maybe it applied to just one driver or something I was
working on, and I mistakenly assumed it should be avoided
everywhere? Not sure.

Hopefully someone can give a definitive answer on this one before I
go through and try to fix all the drivers I modified :|