Message-ID: <20240625195522.2974466-2-dw@davidwei.uk>
Date: Tue, 25 Jun 2024 12:55:21 -0700
From: David Wei <dw@...idwei.uk>
To: Michael Chan <michael.chan@...adcom.com>,
Andy Gospodarek <andrew.gospodarek@...adcom.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Subject: [PATCH net-next v1 1/2] page_pool: reintroduce page_pool_unlink_napi()

Commit 56ef27e3 unexported page_pool_unlink_napi() and renamed it to
page_pool_disable_direct_recycling(), since at the time there was no
in-tree user of page_pool_unlink_napi().

Since then, the Rx queue API and a bnxt implementation of it have been
merged. The bnxt implementation broadly follows these steps: allocate
new queue memory and a new page pool, stop the old Rx queue, swap,
then destroy the old queue memory and page pool. The existing NAPI
instance is reused, as sketched below.
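
For context, the restart flow looks roughly like the sketch below. It
is illustrative only, written against the queue API in
include/net/netdev_queues.h; the function name is made up, and error
handling and locking are simplified compared to the real bnxt code.

  #include <linux/netdevice.h>
  #include <linux/slab.h>
  #include <net/netdev_queues.h>

  /* Illustrative sketch, not the actual bnxt implementation. */
  static int rx_queue_restart_sketch(struct net_device *dev, int idx)
  {
  	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
  	void *new_mem, *old_mem;
  	int err;

  	new_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
  	old_mem = kzalloc(ops->ndo_queue_mem_size, GFP_KERNEL);
  	if (!new_mem || !old_mem) {
  		err = -ENOMEM;
  		goto out;
  	}

  	/* Allocate new queue memory, including a fresh page pool. */
  	err = ops->ndo_queue_mem_alloc(dev, new_mem, idx);
  	if (err)
  		goto out;

  	/* Stop the old Rx queue; its state is saved into old_mem. */
  	err = ops->ndo_queue_stop(dev, old_mem, idx);
  	if (err)
  		goto out_free_new;

  	/* Swap: start the queue on the new memory. The existing NAPI
  	 * instance is reused, so the old page pool stays linked to it.
  	 */
  	err = ops->ndo_queue_start(dev, new_mem, idx);
  	if (err)
  		goto out_free_new;

  	/* Destroy the old queue memory + page pool. */
  	ops->ndo_queue_mem_free(dev, old_mem);
  	goto out;

  out_free_new:
  	ops->ndo_queue_mem_free(dev, new_mem);
  out:
  	kfree(old_mem);
  	kfree(new_mem);
  	return err;
  }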

The page pool to be destroyed is still linked to the reused NAPI
instance, and freeing it as-is triggers warnings in
page_pool_disable_direct_recycling(). In my initial patches I unlinked
it directly by setting pp.napi to NULL.

Instead, bring back page_pool_unlink_napi() and use it, so that
drivers do not have to touch a core struct directly.
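
With page_pool_unlink_napi() restored, the teardown for the
swapped-out queue can look roughly like this (hypothetical driver
code, not the actual bnxt implementation):

  #include <net/page_pool/types.h>

  /* Unlink the old page pool from the still-live, reused NAPI
   * instance before destroying it, rather than clearing pool->p.napi
   * by hand in the driver.
   */
  static void old_rxq_pool_free_sketch(struct page_pool *old_pool)
  {
  	page_pool_unlink_napi(old_pool);
  	page_pool_destroy(old_pool);
  }
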
Suggested-by: Jakub Kicinski <kuba@...nel.org>
Signed-off-by: David Wei <dw@...idwei.uk>
---
 include/net/page_pool/types.h | 5 +++++
 net/core/page_pool.c          | 6 ++++++
 2 files changed, 11 insertions(+)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 7e8477057f3d..aa3e615f1ae6 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -229,12 +229,17 @@ struct page_pool *page_pool_create_percpu(const struct page_pool_params *params,
 struct xdp_mem_info;
 
 #ifdef CONFIG_PAGE_POOL
+void page_pool_unlink_napi(struct page_pool *pool);
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
 			   const struct xdp_mem_info *mem);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count);
 #else
+static inline void page_pool_unlink_napi(struct page_pool *pool)
+{
+}
+
 static inline void page_pool_destroy(struct page_pool *pool)
 {
 }

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3927a0a7fa9a..ec274dde0e32 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1021,6 +1021,11 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
 	 */
 	WRITE_ONCE(pool->cpuid, -1);
 
+	page_pool_unlink_napi(pool);
+}
+
+void page_pool_unlink_napi(struct page_pool *pool)
+{
 	if (!pool->p.napi)
 		return;
 
@@ -1032,6 +1037,7 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
 
 	WRITE_ONCE(pool->p.napi, NULL);
 }
+EXPORT_SYMBOL(page_pool_unlink_napi);
 
 void page_pool_destroy(struct page_pool *pool)
 {
--
2.43.0