Message-ID: <87h69d3bm2.fsf@kurt.kurt.home>
Date: Tue, 15 Oct 2024 12:27:01 +0200
From: Kurt Kanzenbach <kurt@...utronix.de>
To: Joe Damato <jdamato@...tly.com>, netdev@...r.kernel.org
Cc: vinicius.gomes@...el.com, Joe Damato <jdamato@...tly.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>, Przemek Kitszel
<przemyslaw.kitszel@...el.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Jesper Dangaard Brouer
<hawk@...nel.org>, John Fastabend <john.fastabend@...il.com>, "moderated
list:INTEL ETHERNET DRIVERS" <intel-wired-lan@...ts.osuosl.org>, open list
<linux-kernel@...r.kernel.org>, "open list:XDP (eXpress Data Path)"
<bpf@...r.kernel.org>
Subject: Re: [RFC net-next v2 2/2] igc: Link queues to NAPI instances
On Mon Oct 14 2024, Joe Damato wrote:
> Link queues to NAPI instances via the netdev-genl API so that users can
> query this information with netlink. Handle a few cases in the driver:
> 1. Link/unlink the NAPIs when XDP is enabled/disabled
> 2. Handle IGC_FLAG_QUEUE_PAIRS enabled and disabled
>
> Example output when IGC_FLAG_QUEUE_PAIRS is enabled:
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> --dump queue-get --json='{"ifindex": 2}'
>
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'rx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'tx'},
> {'id': 2, 'ifindex': 2, 'napi-id': 8195, 'type': 'tx'},
> {'id': 3, 'ifindex': 2, 'napi-id': 8196, 'type': 'tx'}]
>
> Since IGC_FLAG_QUEUE_PAIRS is enabled, note that the same NAPI ID is
> reported for the rx and tx queue at the same index, because a single
> q_vector (one IRQ, one NAPI instance) services both rings. For example,
> index 0:
>
> {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'tx'},
>
> To test with IGC_FLAG_QUEUE_PAIRS disabled, a test system was booted
> with the kernel command line option "maxcpus=2", which forces
> igc_set_interrupt_capability to disable IGC_FLAG_QUEUE_PAIRS.
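[ Side note for anyone reproducing this: if I read igc_main.c
correctly, the pairing decision boils down to comparing rss_queues
(which is capped at num_online_cpus()) against half of
IGC_MAX_RX_QUEUES (4), roughly:

	if (adapter->rss_queues > (max_rss_queues >> 1))
		adapter->flags |= IGC_FLAG_QUEUE_PAIRS;
	else
		adapter->flags &= ~IGC_FLAG_QUEUE_PAIRS;

so two online CPUs leave the flag cleared. ]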
>
> Example output when IGC_FLAG_QUEUE_PAIRS is disabled:
>
> $ lscpu | grep "On-line CPU"
> On-line CPU(s) list: 0,2
>
> $ ethtool -l enp86s0 | tail -5
> Current hardware settings:
> RX: n/a
> TX: n/a
> Other: 1
> Combined: 2
>
> $ cat /proc/interrupts | grep enp
> 144: [...] enp86s0
> 145: [...] enp86s0-rx-0
> 146: [...] enp86s0-rx-1
> 147: [...] enp86s0-tx-0
> 148: [...] enp86s0-tx-1
>
> 1 "other" IRQ, and 2 IRQs for each of RX and Tx, so we expect netlink to
> report 4 IRQs with unique NAPI IDs:
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> --dump napi-get --json='{"ifindex": 2}'
> [{'id': 8196, 'ifindex': 2, 'irq': 148},
> {'id': 8195, 'ifindex': 2, 'irq': 147},
> {'id': 8194, 'ifindex': 2, 'irq': 146},
> {'id': 8193, 'ifindex': 2, 'irq': 145}]
>
> Now we examine which queues these NAPIs are associated with, expecting
> that, since IGC_FLAG_QUEUE_PAIRS is disabled, each RX and TX queue will
> have its own NAPI instance:
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
> --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
> {'id': 0, 'ifindex': 2, 'napi-id': 8195, 'type': 'tx'},
> {'id': 1, 'ifindex': 2, 'napi-id': 8196, 'type': 'tx'}]
>
> Signed-off-by: Joe Damato <jdamato@...tly.com>
> ---
> v2:
> - Update commit message to include tests for IGC_FLAG_QUEUE_PAIRS
> disabled
> - Refactor code to move NAPI queue mapping and unmapping to helper
> functions igc_set_queue_napi and igc_unset_queue_napi
> - Adjust the code to handle IGC_FLAG_QUEUE_PAIRS disabled
> - Call helpers to map/unmap queues to NAPIs in igc_up, __igc_open,
> igc_xdp_enable_pool, and igc_xdp_disable_pool
>
> drivers/net/ethernet/intel/igc/igc.h | 3 ++
> drivers/net/ethernet/intel/igc/igc_main.c | 58 +++++++++++++++++++++--
> drivers/net/ethernet/intel/igc/igc_xdp.c | 2 +
> 3 files changed, 59 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
> index eac0f966e0e4..7b1c9ea60056 100644
> --- a/drivers/net/ethernet/intel/igc/igc.h
> +++ b/drivers/net/ethernet/intel/igc/igc.h
> @@ -337,6 +337,9 @@ struct igc_adapter {
> struct igc_led_classdev *leds;
> };
>
> +void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
> + struct napi_struct *napi);
> +void igc_unset_queue_napi(struct igc_adapter *adapter, int q_idx);
> void igc_up(struct igc_adapter *adapter);
> void igc_down(struct igc_adapter *adapter);
> int igc_open(struct net_device *netdev);
> diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
> index 7964bbedb16c..59c00acfa0ed 100644
> --- a/drivers/net/ethernet/intel/igc/igc_main.c
> +++ b/drivers/net/ethernet/intel/igc/igc_main.c
> @@ -4948,6 +4948,47 @@ static int igc_sw_init(struct igc_adapter *adapter)
> return 0;
> }
>
> +void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
> + struct napi_struct *napi)
> +{
> + if (adapter->flags & IGC_FLAG_QUEUE_PAIRS) {
> + netif_queue_set_napi(adapter->netdev, q_idx,
> + NETDEV_QUEUE_TYPE_RX, napi);
> + netif_queue_set_napi(adapter->netdev, q_idx,
> + NETDEV_QUEUE_TYPE_TX, napi);
> + } else {
> + if (q_idx < adapter->num_rx_queues) {
> + netif_queue_set_napi(adapter->netdev, q_idx,
> + NETDEV_QUEUE_TYPE_RX, napi);
> + } else {
> + q_idx -= adapter->num_rx_queues;
> + netif_queue_set_napi(adapter->netdev, q_idx,
> + NETDEV_QUEUE_TYPE_TX, napi);
> + }
> + }
> +}
In addition to what Vinicius said: I think this can be done more
simply. Something like this?
void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
			struct napi_struct *napi)
{
	struct igc_q_vector *q_vector = adapter->q_vector[vector];

	/* Map only the rings this vector actually services */
	if (q_vector->rx.ring)
		netif_queue_set_napi(adapter->netdev,
				     q_vector->rx.ring->queue_index,
				     NETDEV_QUEUE_TYPE_RX, napi);

	if (q_vector->tx.ring)
		netif_queue_set_napi(adapter->netdev,
				     q_vector->tx.ring->queue_index,
				     NETDEV_QUEUE_TYPE_TX, napi);
}
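Keying on rx.ring/tx.ring covers both the paired and the unpaired
configuration without any index arithmetic, because a q_vector only
carries the rings it actually services. Taking the queue index from the
ring rather than from the vector also keeps the mapping correct when Rx
and Tx vectors are separate, and the helper can simply be called once
per vector from igc_up(), __igc_open() and the XDP pool paths.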
Thanks,
Kurt