Message-ID: <tencent_76E62F6A47A7C7E818FC7C74A6B02772F308@qq.com>
Date: Thu, 19 Dec 2024 11:11:38 +0800
From: Guowei Dang <guowei.dang@...mail.com>
To: linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
netdev@...r.kernel.org
Cc: Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>,
Guowei Dang <guowei.dang@...mail.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
Furong Xu <0x1207@...il.com>
Subject: [PATCH net-next v1] net: page_pool: add page_pool_put_page_nosync()

Add page_pool_put_page_nosync() to handle the case where dma_sync_size
is 0.

This makes the semantics more obvious and may allow some checks to be
removed in the future.

In the long term, treating the nosync scenario separately gives the
user more flexibility and enables removing the PP_FLAG_DMA_SYNC_DEV
flag.

Since a page_pool_put_full_page() variant already exists, adding one
for the nosync case seems reasonable.

Suggested-by: Yunsheng Lin <linyunsheng@...wei.com>
Acked-by: Furong Xu <0x1207@...il.com>
Signed-off-by: Guowei Dang <guowei.dang@...mail.com>
---
Documentation/networking/page_pool.rst | 5 ++++-
include/net/page_pool/helpers.h | 17 +++++++++++++++++
2 files changed, 21 insertions(+), 1 deletion(-)
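
Example usage (not part of the patch): a minimal sketch of how a driver
might choose between the two helpers; foo_rx_recycle(), frag_len and
napi are hypothetical names used only for this illustration, not taken
from an existing driver.

#include <net/page_pool/types.h>
#include <net/page_pool/helpers.h>

/* Recycle an RX buffer back to its page_pool. */
static void foo_rx_recycle(struct page_pool *pool, struct page *page,
			   unsigned int frag_len, bool napi)
{
	if (!frag_len)
		/* The device never wrote to this buffer: skip the DMA sync. */
		page_pool_put_page_nosync(pool, page, napi);
	else
		/* Sync only the region the device may have written. */
		page_pool_put_page(pool, page, frag_len, napi);
}
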
diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 9d958128a57c..a83f7c071132 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -62,7 +62,8 @@ a page will cause no race conditions is enough.
:identifiers: struct page_pool_params
.. kernel-doc:: include/net/page_pool/helpers.h
- :identifiers: page_pool_put_page page_pool_put_full_page
+ :identifiers: page_pool_put_page
+ page_pool_put_page_nosync page_pool_put_full_page
page_pool_recycle_direct page_pool_free_va
page_pool_dev_alloc_pages page_pool_dev_alloc_frag
page_pool_dev_alloc page_pool_dev_alloc_va
@@ -93,6 +94,8 @@ much of the page needs to be synced (starting at ``offset``).
When directly freeing pages in the driver (page_pool_put_page())
the ``dma_sync_size`` argument specifies how much of the buffer needs
to be synced.
+If the ``dma_sync_size`` argument is 0, page_pool_put_page_nosync() should be
+used instead of page_pool_put_page().
If in doubt set ``offset`` to 0, ``max_len`` to ``PAGE_SIZE`` and
pass -1 as ``dma_sync_size``. That combination of arguments is always
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index e555921e5233..5cc68d48624a 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -340,12 +340,14 @@ static inline void page_pool_put_netmem(struct page_pool *pool,
* the allocator owns the page and will try to recycle it in one of the pool
* caches. If PP_FLAG_DMA_SYNC_DEV is set, the page will be synced for_device
* using dma_sync_single_range_for_device().
+ * page_pool_put_page_nosync() should be used if dma_sync_size is 0.
*/
static inline void page_pool_put_page(struct page_pool *pool,
struct page *page,
unsigned int dma_sync_size,
bool allow_direct)
{
+ DEBUG_NET_WARN_ON_ONCE(!dma_sync_size);
page_pool_put_netmem(pool, page_to_netmem(page), dma_sync_size,
allow_direct);
}
@@ -372,6 +374,21 @@ static inline void page_pool_put_full_page(struct page_pool *pool,
page_pool_put_netmem(pool, page_to_netmem(page), -1, allow_direct);
}
+/**
+ * page_pool_put_page_nosync() - release a reference on a page pool page
+ * @pool: pool from which page was allocated
+ * @page: page to release a reference on
+ * @allow_direct: released by the consumer, allow lockless caching
+ *
+ * Similar to page_pool_put_page(), but will not DMA sync the memory area.
+ */
+static inline void page_pool_put_page_nosync(struct page_pool *pool,
+ struct page *page,
+ bool allow_direct)
+{
+ page_pool_put_netmem(pool, page_to_netmem(page), 0, allow_direct);
+}
+
/**
* page_pool_recycle_direct() - release a reference on a page pool page
* @pool: pool from which page was allocated
--
2.34.1