Message-ID: <20190708184925.GH87269@C02RW35GFVH8.dhcp.broadcom.net>
Date: Mon, 8 Jul 2019 14:49:25 -0400
From: Andy Gospodarek <andrew.gospodarek@...adcom.com>
To: Michael Chan <michael.chan@...adcom.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, hawk@...nel.org,
ast@...nel.org
Subject: Re: [PATCH net-next 4/4] bnxt_en: add page_pool support
On Sat, Jul 06, 2019 at 03:36:18AM -0400, Michael Chan wrote:
> From: Andy Gospodarek <gospo@...adcom.com>
>
> This removes contention over page allocation for XDP_REDIRECT actions by
> adding page_pool support per queue for the driver. The performance for
> XDP_REDIRECT actions scales linearly with the number of cores performing
> redirect actions when using the page pools instead of the standard page
> allocator.
>
> Signed-off-by: Andy Gospodarek <gospo@...adcom.com>
> Signed-off-by: Michael Chan <michael.chan@...adcom.com>
> ---
>  drivers/net/ethernet/broadcom/Kconfig         |  1 +
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 40 +++++++++++++++++++++++----
>  drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  3 ++
>  drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  3 +-
>  4 files changed, 41 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index d8f0846..b6777e5 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
[...]
> @@ -2530,12 +2555,17 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
>  
>  		ring = &rxr->rx_ring_struct;
>  
> +		rc = bnxt_alloc_rx_page_pool(bp, rxr);
> +		if (rc)
> +			return rc;
> +
>  		rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i);
>  		if (rc < 0)
>  			return rc;
>  
>  		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
> -						MEM_TYPE_PAGE_SHARED, NULL);
> +						MEM_TYPE_PAGE_POOL,
> +						rxr->page_pool);
>  		if (rc) {
>  			xdp_rxq_info_unreg(&rxr->xdp_rxq);
>  			return rc;
I think we want to amend the chunk above to be:
@@ -2530,14 +2557,24 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
 
 		ring = &rxr->rx_ring_struct;
 
+		rc = bnxt_alloc_rx_page_pool(bp, rxr);
+		if (rc)
+			return rc;
+
 		rc = xdp_rxq_info_reg(&rxr->xdp_rxq, bp->dev, i);
-		if (rc < 0)
+		if (rc < 0) {
+			page_pool_free(rxr->page_pool);
+			rxr->page_pool = NULL;
 			return rc;
+		}
 
 		rc = xdp_rxq_info_reg_mem_model(&rxr->xdp_rxq,
-						MEM_TYPE_PAGE_SHARED, NULL);
+						MEM_TYPE_PAGE_POOL,
+						rxr->page_pool);
 		if (rc) {
 			xdp_rxq_info_unreg(&rxr->xdp_rxq);
+			page_pool_free(rxr->page_pool);
+			rxr->page_pool = NULL;
 			return rc;
 		}
That should take care of freeing the page_pool that was allocated when
there is a failure in xdp_rxq_info_reg() or
xdp_rxq_info_reg_mem_model().
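
Since the hunk that adds bnxt_alloc_rx_page_pool() is trimmed above,
here is a rough sketch (not the exact code from the patch) of what a
per-ring helper built on the standard page_pool_create() API looks
like, so it is clear what these error paths are freeing:

/* Sketch only; assumes bnxt.c gained #include <net/page_pool.h> */
static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
				   struct bnxt_rx_ring_info *rxr)
{
	struct page_pool_params pp = { 0 };

	/* One pool per RX ring, sized to the ring, allocating pages
	 * from the device's NUMA node.
	 */
	pp.pool_size = bp->rx_ring_size;
	pp.nid = dev_to_node(&bp->pdev->dev);
	pp.dev = &bp->pdev->dev;
	pp.dma_dir = DMA_BIDIRECTIONAL;

	rxr->page_pool = page_pool_create(&pp);
	if (IS_ERR(rxr->page_pool)) {
		int err = PTR_ERR(rxr->page_pool);

		rxr->page_pool = NULL;
		return err;
	}
	return 0;
}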
I agree that we do not need to call page_pool_free in the normal
shutdown case.
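
My understanding (worth double-checking) is that once the pool has been
registered with MEM_TYPE_PAGE_POOL, xdp_rxq_info_unreg() disconnects
the memory model and releases the pool for us, so the teardown in
bnxt_free_rx_rings() only needs something along these lines per ring
(sketch, assuming the existing per-ring loop):

	if (xdp_rxq_info_is_reg(&rxr->xdp_rxq))
		xdp_rxq_info_unreg(&rxr->xdp_rxq);
	rxr->page_pool = NULL;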