Message-ID: <20250117154149.GQ6206@kernel.org>
Date: Fri, 17 Jan 2025 15:41:49 +0000
From: Simon Horman <horms@...nel.org>
To: Michael Chan <michael.chan@...adcom.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, andrew+netdev@...n.ch,
pavan.chebbi@...adcom.com, andrew.gospodarek@...adcom.com,
michal.swiatkowski@...ux.intel.com, helgaas@...nel.org,
Manoj Panicker <manoj.panicker2@....com>,
Somnath Kotur <somnath.kotur@...adcom.com>,
Wei Huang <wei.huang2@....com>,
Ajit Khaparde <ajit.khaparde@...adcom.com>
Subject: Re: [PATCH net-next v2 10/10] bnxt_en: Add TPH support in BNXT driver
On Thu, Jan 16, 2025 at 11:23:43AM -0800, Michael Chan wrote:
> From: Manoj Panicker <manoj.panicker2@....com>
>
> Add TPH support to the Broadcom BNXT device driver. This allows the
> driver to utilize TPH functions for retrieving and configuring Steering
> Tags when changing interrupt affinity. With compatible NIC firmware,
> network traffic will be tagged correctly with Steering Tags, resulting
> in significant memory bandwidth savings and other advantages as
> demonstrated by real network benchmarks on TPH-capable platforms.
>
> Co-developed-by: Somnath Kotur <somnath.kotur@...adcom.com>
> Signed-off-by: Somnath Kotur <somnath.kotur@...adcom.com>
> Co-developed-by: Wei Huang <wei.huang2@....com>
> Signed-off-by: Wei Huang <wei.huang2@....com>
> Signed-off-by: Manoj Panicker <manoj.panicker2@....com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@...adcom.com>
> Reviewed-by: Andy Gospodarek <andrew.gospodarek@...adcom.com>
> Signed-off-by: Michael Chan <michael.chan@...adcom.com>
> ---
> Cc: Bjorn Helgaas <helgaas@...nel.org>
>
> Previous driver series fixing rtnl_lock and empty release function:
>
> https://lore.kernel.org/netdev/20241115200412.1340286-1-wei.huang2@amd.com/
>
> v5 of the PCI series using netdev_rx_queue_restart():
>
> https://lore.kernel.org/netdev/20240916205103.3882081-5-wei.huang2@amd.com/
>
> v1 of the PCI series using open/close:
>
> https://lore.kernel.org/netdev/20240509162741.1937586-9-wei.huang2@amd.com/
> ---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 105 ++++++++++++++++++++++
> drivers/net/ethernet/broadcom/bnxt/bnxt.h | 7 ++
> 2 files changed, 112 insertions(+)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 0a10a4cffcc8..8c24642b8812 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -55,6 +55,8 @@
> #include <net/page_pool/helpers.h>
> #include <linux/align.h>
> #include <net/netdev_queues.h>
> +#include <net/netdev_rx_queue.h>
> +#include <linux/pci-tph.h>
>
> #include "bnxt_hsi.h"
> #include "bnxt.h"
Hi Manoj, Michael, all,
Modpost complains that:
WARNING: modpost: module bnxt_en uses symbol netdev_rx_queue_restart from namespace NETDEV_INTERNAL, but does not import it.
And looking into this I see:
* netdev: define NETDEV_INTERNAL
https://git.kernel.org/netdev/net-next/c/0b7bdc7fab57
Which adds the following text to Documentation/networking/netdevices.rst
NETDEV_INTERNAL symbol namespace
================================
Symbols exported as NETDEV_INTERNAL can only be used in networking
core and drivers which exclusively flow via the main networking list and trees.
Note that the inverse is not true, most symbols outside of NETDEV_INTERNAL
are not expected to be used by random code outside netdev either.
Symbols may lack the designation because they predate the namespaces,
or simply due to an oversight.
Which I think is satisfied here. So this problem can be
addressed by adding the following around here (completely untested!):
MODULE_IMPORT_NS("NETDEV_INTERNAL");
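
For context, the warning arises because netdev_rx_queue_restart() is exported
into the NETDEV_INTERNAL symbol namespace, and modpost requires any module that
uses such a symbol to import that namespace explicitly. A rough sketch of the
two sides of the pairing (file locations approximate, not a tested patch):

```c
/* Exporting side -- net/core/netdev_rx_queue.c (sketch): the symbol is
 * placed in the NETDEV_INTERNAL namespace rather than exported globally,
 * so only modules that opt in can link against it without a warning.
 */
EXPORT_SYMBOL_NS_GPL(netdev_rx_queue_restart, "NETDEV_INTERNAL");

/* Importing side -- drivers/net/ethernet/broadcom/bnxt/bnxt.c (sketch):
 * without this declaration, modpost emits the warning above even though
 * the symbol still resolves at module load time.
 */
MODULE_IMPORT_NS("NETDEV_INTERNAL");
```

Note the quoted-string form of these macros applies to kernels where namespace
names are string literals (v6.13 onward); older trees used the unquoted token
form, e.g. MODULE_IMPORT_NS(NETDEV_INTERNAL).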
> @@ -11330,6 +11332,83 @@ static int bnxt_tx_queue_start(struct bnxt *bp, int idx)
> return 0;
> }
>
> +static void bnxt_irq_affinity_notify(struct irq_affinity_notify *notify,
> + const cpumask_t *mask)
> +{
> + struct bnxt_irq *irq;
> + u16 tag;
> + int err;
> +
> + irq = container_of(notify, struct bnxt_irq, affinity_notify);
> +
> + if (!irq->bp->tph_mode)
> + return;
> +
> + cpumask_copy(irq->cpu_mask, mask);
> +
> + if (irq->ring_nr >= irq->bp->rx_nr_rings)
> + return;
> +
> + if (pcie_tph_get_cpu_st(irq->bp->pdev, TPH_MEM_TYPE_VM,
> + cpumask_first(irq->cpu_mask), &tag))
> + return;
> +
> + if (pcie_tph_set_st_entry(irq->bp->pdev, irq->msix_nr, tag))
> + return;
> +
> + rtnl_lock();
> + if (netif_running(irq->bp->dev)) {
> + err = netdev_rx_queue_restart(irq->bp->dev, irq->ring_nr);
> + if (err)
> + netdev_err(irq->bp->dev,
> + "RX queue restart failed: err=%d\n", err);
> + }
> + rtnl_unlock();
> +}
...