Message-ID: <CAHS8izMcJ2am+ay1xLxZsZFRpaor-ZKPuVPM+FXdnn4FBpC2Fw@mail.gmail.com>
Date: Thu, 2 May 2024 10:23:58 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: David Wei <dw@...idwei.uk>
Cc: netdev@...r.kernel.org, Michael Chan <michael.chan@...adcom.com>, 
	Pavan Chebbi <pavan.chebbi@...adcom.com>, 
	Andy Gospodarek <andrew.gospodarek@...adcom.com>, 
	Adrian Alvarado <adrian.alvarado@...adcom.com>, Shailend Chand <shailend@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>, 
	Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [RFC PATCH net-next v2 2/9] bnxt: implement queue api

On Wed, May 1, 2024 at 9:54 PM David Wei <dw@...idwei.uk> wrote:
>
> Implement the bare minimum queue API for bnxt. I'm essentially breaking
> up the existing bnxt_rx_ring_reset() into two steps:
>
> 1. bnxt_queue_stop()
> 2. bnxt_queue_start()
>
> The other two ndos are left as no-ops for now, so the queue mem is
> allocated after the queue has been stopped. Doing this before stopping
> the queue is a lot more work, so I'm looking for some feedback first.
>
> Signed-off-by: David Wei <dw@...idwei.uk>
> ---
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 62 +++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 06b7a963bbbd..788c87271eb1 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -14810,6 +14810,67 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
>         .get_base_stats         = bnxt_get_base_stats,
>  };
>
> +static void *bnxt_queue_mem_alloc(struct net_device *dev, int idx)
> +{
> +       struct bnxt *bp = netdev_priv(dev);
> +
> +       return &bp->rx_ring[idx];
> +}
> +
> +static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
> +{
> +}
> +
> +static int bnxt_queue_start(struct net_device *dev, int idx, void *qmem)
> +{
> +       struct bnxt_rx_ring_info *rxr = qmem;
> +       struct bnxt *bp = netdev_priv(dev);
> +
> +       bnxt_alloc_one_rx_ring(bp, idx);
> +
> +       if (bp->flags & BNXT_FLAG_AGG_RINGS)
> +               bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
> +       bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
> +
> +       if (bp->flags & BNXT_FLAG_TPA)
> +               bnxt_set_tpa(bp, true);
> +
> +       return 0;
> +}
> +
> +static int bnxt_queue_stop(struct net_device *dev, int idx, void **out_qmem)
> +{
> +       struct bnxt *bp = netdev_priv(dev);
> +       struct bnxt_rx_ring_info *rxr;
> +       struct bnxt_cp_ring_info *cpr;
> +       int rc;
> +
> +       rc = bnxt_hwrm_rx_ring_reset(bp, idx);
> +       if (rc)
> +               return rc;
> +
> +       bnxt_free_one_rx_ring_skbs(bp, idx);
> +       rxr = &bp->rx_ring[idx];
> +       rxr->rx_prod = 0;
> +       rxr->rx_agg_prod = 0;
> +       rxr->rx_sw_agg_prod = 0;
> +       rxr->rx_next_cons = 0;
> +
> +       cpr = &rxr->bnapi->cp_ring;
> +       cpr->sw_stats.rx.rx_resets++;
> +
> +       *out_qmem = rxr;

Oh, I'm not sure you can do this, no?

rxr is a stack variable, it goes away after the function returns, no?

You have to kzalloc sizeof(struct bnxt_rx_ring_info), no?

Other than that, the code looks very similar to what we do for GVE,
and looks good to me.
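
Roughly what I have in mind, completely untested and only meant to
illustrate the lifetime point (the reset/refill logic from your patch
stays as-is; kmemdup is just one way to make the copy, and where the
copy ultimately gets freed is an assumption on my part):

static int bnxt_queue_stop(struct net_device *dev, int idx, void **out_qmem)
{
	struct bnxt *bp = netdev_priv(dev);
	struct bnxt_rx_ring_info *qmem;

	/* ... bnxt_hwrm_rx_ring_reset(), bnxt_free_one_rx_ring_skbs()
	 * and the producer-index resets exactly as in your patch ...
	 */

	/* Hand the core a heap-allocated copy of the ring state rather
	 * than a pointer tied to this call; it stays valid until it is
	 * explicitly freed.
	 */
	qmem = kmemdup(&bp->rx_ring[idx], sizeof(*qmem), GFP_KERNEL);
	if (!qmem)
		return -ENOMEM;

	*out_qmem = qmem;
	return 0;
}

static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
{
	/* free the copy handed out via *out_qmem above */
	kfree(qmem);
}

Whether mem_free or a later restart path owns that copy depends on how
the queue API ends up defining ownership; the point is just that
whatever you return through *out_qmem has to outlive the function.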

> +
> +       return 0;
> +}
> +
> +static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
> +       .ndo_queue_mem_alloc    = bnxt_queue_mem_alloc,
> +       .ndo_queue_mem_free     = bnxt_queue_mem_free,
> +       .ndo_queue_start        = bnxt_queue_start,
> +       .ndo_queue_stop         = bnxt_queue_stop,
> +};
> +
>  static void bnxt_remove_one(struct pci_dev *pdev)
>  {
>         struct net_device *dev = pci_get_drvdata(pdev);
> @@ -15275,6 +15336,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
>         dev->stat_ops = &bnxt_stat_ops;
>         dev->watchdog_timeo = BNXT_TX_TIMEOUT;
>         dev->ethtool_ops = &bnxt_ethtool_ops;
> +       dev->queue_mgmt_ops = &bnxt_queue_mgmt_ops;
>         pci_set_drvdata(pdev, dev);
>
>         rc = bnxt_alloc_hwrm_resources(bp);
> --
> 2.43.0
>


-- 
Thanks,
Mina
