Message-ID: <ZxKVI_DvFWBvRMaf@LQ3V64L9R2>
Date: Fri, 18 Oct 2024 10:04:35 -0700
From: Joe Damato <jdamato@...tly.com>
To: Kurt Kanzenbach <kurt@...utronix.de>
Cc: netdev@...r.kernel.org, vinicius.gomes@...el.com,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
"moderated list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:XDP (eXpress Data Path)" <bpf@...r.kernel.org>
Subject: Re: [RFC net-next v2 2/2] igc: Link queues to NAPI instances
On Tue, Oct 15, 2024 at 12:27:01PM +0200, Kurt Kanzenbach wrote:
> On Mon Oct 14 2024, Joe Damato wrote:
[...]
> > diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
> > index 7964bbedb16c..59c00acfa0ed 100644
> > --- a/drivers/net/ethernet/intel/igc/igc_main.c
> > +++ b/drivers/net/ethernet/intel/igc/igc_main.c
> > @@ -4948,6 +4948,47 @@ static int igc_sw_init(struct igc_adapter *adapter)
> > return 0;
> > }
> >
> > +void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
> > + struct napi_struct *napi)
> > +{
> > + if (adapter->flags & IGC_FLAG_QUEUE_PAIRS) {
> > + netif_queue_set_napi(adapter->netdev, q_idx,
> > + NETDEV_QUEUE_TYPE_RX, napi);
> > + netif_queue_set_napi(adapter->netdev, q_idx,
> > + NETDEV_QUEUE_TYPE_TX, napi);
> > + } else {
> > + if (q_idx < adapter->num_rx_queues) {
> > + netif_queue_set_napi(adapter->netdev, q_idx,
> > + NETDEV_QUEUE_TYPE_RX, napi);
> > + } else {
> > + q_idx -= adapter->num_rx_queues;
> > + netif_queue_set_napi(adapter->netdev, q_idx,
> > + NETDEV_QUEUE_TYPE_TX, napi);
> > + }
> > + }
> > +}
>
> In addition, to what Vinicius said. I think this can be done
> simpler. Something like this?
>
> void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
> struct napi_struct *napi)
> {
> struct igc_q_vector *q_vector = adapter->q_vector[vector];
>
> if (q_vector->rx.ring)
> netif_queue_set_napi(adapter->netdev, vector, NETDEV_QUEUE_TYPE_RX, napi);
>
> if (q_vector->tx.ring)
> netif_queue_set_napi(adapter->netdev, vector, NETDEV_QUEUE_TYPE_TX, napi);
> }
I tried this suggestion, but it does not produce correct output when
IGC_FLAG_QUEUE_PAIRS is disabled.
The output from netlink:
$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'type': 'tx'}]
Note the missing napi-id for the TX queues. This happens when the
linking is not done correctly: netif_queue_set_napi() expects a queue
index as its second argument, not a vector index. With
IGC_FLAG_QUEUE_PAIRS disabled the TX vectors come after the RX
vectors, so the vector index no longer matches the TX queue index and
the actual TX queues never get linked, as the output above shows.

I believe the suggested code above should be modified as follows, to
use ring->queue_index:
        if (q_vector->rx.ring)
                netif_queue_set_napi(adapter->netdev,
                                     q_vector->rx.ring->queue_index,
                                     NETDEV_QUEUE_TYPE_RX, napi);

        if (q_vector->tx.ring)
                netif_queue_set_napi(adapter->netdev,
                                     q_vector->tx.ring->queue_index,
                                     NETDEV_QUEUE_TYPE_TX, napi);
Which produces correct output:
$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
--dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
{'id': 0, 'ifindex': 2, 'napi-id': 8195, 'type': 'tx'},
{'id': 1, 'ifindex': 2, 'napi-id': 8196, 'type': 'tx'}]
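
Putting your suggestion together with the queue_index change, the
full helper would look roughly like this (just a sketch of the
direction for v3, combining the two snippets above; the call sites
are unchanged from v2):

void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
                        struct napi_struct *napi)
{
        struct igc_q_vector *q_vector = adapter->q_vector[vector];

        /* Link by ring->queue_index rather than by vector index so
         * the mapping stays correct when IGC_FLAG_QUEUE_PAIRS is
         * disabled and the TX vectors follow the RX vectors.
         */
        if (q_vector->rx.ring)
                netif_queue_set_napi(adapter->netdev,
                                     q_vector->rx.ring->queue_index,
                                     NETDEV_QUEUE_TYPE_RX, napi);

        if (q_vector->tx.ring)
                netif_queue_set_napi(adapter->netdev,
                                     q_vector->tx.ring->queue_index,
                                     NETDEV_QUEUE_TYPE_TX, napi);
}

This also drops the IGC_FLAG_QUEUE_PAIRS special-casing from my
original patch, since the ring pointers on the q_vector already
encode whether it services RX, TX, or both.
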
I wanted to send you a note about this before I post the v3 so that,
if/when you review it, you'll have the context for why the v3 code is
slightly different from what was suggested.