Message-ID: <20240625195522.2974466-1-dw@davidwei.uk>
Date: Tue, 25 Jun 2024 12:55:20 -0700
From: David Wei <dw@...idwei.uk>
To: Michael Chan <michael.chan@...adcom.com>,
Andy Gospodarek <andrew.gospodarek@...adcom.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Subject: [PATCH net-next v1 0/2] page_pool: bnxt_en: unlink old page pool in queue api using helper

Commit 56ef27e3 unexported page_pool_unlink_napi() and renamed it to
page_pool_disable_direct_recycling(), since at the time there was no
in-tree user of page_pool_unlink_napi().

Since then, the Rx queue API and an implementation of it in bnxt have
been merged. The bnxt implementation broadly follows these steps:
allocate new queue memory and a new page pool, stop the old Rx queue,
swap, then destroy the old queue memory and page pool. The existing
NAPI instance is re-used.

The page pool to be destroyed is still linked to the re-used NAPI
instance, so freeing it as-is triggers warnings in
page_pool_disable_direct_recycling(). In my initial patches I unlinked
it very directly, by setting pp.napi to NULL.

Instead, bring back page_pool_unlink_napi() and use it, rather than
having a driver touch a core struct directly.
David Wei (2):
page_pool: reintroduce page_pool_unlink_napi()
bnxt_en: unlink page pool when stopping Rx queue
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +-----
include/net/page_pool/types.h | 5 +++++
net/core/page_pool.c | 6 ++++++
3 files changed, 12 insertions(+), 5 deletions(-)
--
2.43.0