Message-ID: <20230814125643.59334-6-linyunsheng@huawei.com>
Date: Mon, 14 Aug 2023 20:56:42 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: <davem@...emloft.net>, <kuba@...nel.org>, <pabeni@...hat.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Yunsheng Lin <linyunsheng@...wei.com>,
Lorenzo Bianconi <lorenzo@...nel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Liang Chen <liangchen.linux@...il.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Eric Dumazet <edumazet@...gle.com>,
Jonathan Corbet <corbet@....net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
<linux-doc@...r.kernel.org>, <bpf@...r.kernel.org>
Subject: [PATCH net-next v6 5/6] page_pool: update document about frag API
As more drivers begin to use the frag API, update the
documentation to help driver authors decide which API to
use.
Signed-off-by: Yunsheng Lin <linyunsheng@...wei.com>
CC: Lorenzo Bianconi <lorenzo@...nel.org>
CC: Alexander Duyck <alexander.duyck@...il.com>
CC: Liang Chen <liangchen.linux@...il.com>
CC: Alexander Lobakin <aleksander.lobakin@...el.com>
---
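Note for reviewers, not part of the commit: a minimal sketch of the
"basic use" flow described in the updated DOC block below, replacing a
napi_alloc_frag()/skb_free_frag() pair with the cache helpers added
earlier in this series. The foo_* names are made up for illustration:

#include <net/page_pool/helpers.h>

struct foo_rx_queue {
	struct page_pool *pool;	/* created with page_pool_create() */
};

/* Allocate an rx buffer; stands in for a napi_alloc_frag() call. */
static void *foo_alloc_rx_buf(struct foo_rx_queue *rxq, unsigned int *size)
{
	/* *size: requested size in, allocated size out */
	return page_pool_dev_cache_alloc(rxq->pool, size);
}

/* Return an unconsumed buffer; stands in for skb_free_frag(). */
static void foo_free_rx_buf(struct foo_rx_queue *rxq, void *buf)
{
	/* allow_direct = false: may be called outside NAPI context */
	page_pool_cache_free(rxq->pool, buf, false);
}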
Documentation/networking/page_pool.rst | 4 +-
include/net/page_pool/helpers.h | 58 +++++++++++++++++++++++---
2 files changed, 55 insertions(+), 7 deletions(-)
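Likewise illustrative, the full-page vs. page-frag split that the new
DOC text recommends, assuming a 2048 byte buffer is no more than half
of PAGE_SIZE on the target arch:

/* Rx buffers known to fit in half a page: use page splitting. */
static struct page *foo_alloc_small_buf(struct page_pool *pool,
					unsigned int *offset)
{
	return page_pool_dev_alloc_frag(pool, offset, 2048);
}

/* Rx buffers known to need more than half a page: use whole pages. */
static struct page *foo_alloc_big_buf(struct page_pool *pool)
{
	return page_pool_dev_alloc_pages(pool);
}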
diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 215ebc92752c..0c0705994f51 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -58,7 +58,9 @@ a page will cause no race conditions is enough.
.. kernel-doc:: include/net/page_pool/helpers.h
:identifiers: page_pool_put_page page_pool_put_full_page
- page_pool_recycle_direct page_pool_dev_alloc_pages
+ page_pool_recycle_direct page_pool_cache_free
+ page_pool_dev_alloc_pages page_pool_dev_alloc_frag
+ page_pool_dev_alloc page_pool_dev_cache_alloc
page_pool_get_dma_addr page_pool_get_dma_dir
.. kernel-doc:: net/core/page_pool.c
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index b920224f6584..0f1eaa2986f9 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -8,13 +8,28 @@
/**
* DOC: page_pool allocator
*
- * The page_pool allocator is optimized for the XDP mode that
- * uses one frame per-page, but it can fallback on the
- * regular page allocator APIs.
+ * The page_pool allocator is optimized for recycling the pages or page frags
+ * used by skb packets and xdp frames.
*
- * Basic use involves replacing alloc_pages() calls with the
- * page_pool_alloc_pages() call. Drivers should use
- * page_pool_dev_alloc_pages() replacing dev_alloc_pages().
+ * Basic use involves replacing napi_alloc_frag() and alloc_pages() calls with
+ * page_pool_cache_alloc() and page_pool_alloc(), which allocate memory with or
+ * without page splitting depending on the requested memory size.
+ *
+ * If the driver knows that it always requires full pages or its allocations
+ * are always smaller than half a page, it can use one of the more specific
+ * API calls:
+ *
+ * 1. page_pool_alloc_pages(): allocate memory without page splitting when the
+ * driver knows that the memory it needs is always bigger than half of the
+ * page allocated from the page pool. There is no cache line dirtying for
+ * 'struct page' when a page is recycled back to the page pool.
+ *
+ * 2. page_pool_alloc_frag(): allocate memory with page splitting when the
+ * driver knows that the memory it needs is always smaller than or equal to
+ * half of the page allocated from the page pool. Page splitting enables
+ * memory saving and thus avoids TLB/cache misses for data access, but there
+ * is also some cost to implement page splitting, mainly some cache line
+ * dirtying/bouncing for 'struct page' and atomic operations on pp_frag_count.
*
* API keeps track of in-flight pages, in order to let API user know
* when it is safe to free a page_pool object. Thus, API users
@@ -100,6 +115,14 @@ static inline struct page *page_pool_alloc_frag(struct page_pool *pool,
return __page_pool_alloc_frag(pool, offset, size, gfp);
}
+/**
+ * page_pool_dev_alloc_frag() - allocate a page frag.
+ * @pool[in]: pool from which to allocate
+ * @offset[out]: offset to the allocated page
+ * @size[in]: requested size
+ *
+ * Get a page frag from the page allocator or page_pool caches.
+ */
static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
unsigned int *offset,
unsigned int size)
@@ -143,6 +166,14 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
return page;
}
+/**
+ * page_pool_dev_alloc() - allocate a page or a page frag.
+ * @pool[in]: pool from which to allocate
+ * @offset[out]: offset to the allocated page
+ * @size[in, out]: in as the requested size, out as the allocated size
+ *
+ * Get a page or a page frag from the page allocator or page_pool caches.
+ */
static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
unsigned int *offset,
unsigned int *size)
@@ -165,6 +196,13 @@ static inline void *page_pool_cache_alloc(struct page_pool *pool,
return page_address(page) + offset;
}
+/**
+ * page_pool_dev_cache_alloc() - allocate a cache.
+ * @pool[in]: pool from which to allocate
+ * @size[in, out]: in as the requested size, out as the allocated size
+ *
+ * Get a cache from the page allocator or page_pool caches.
+ */
static inline void *page_pool_dev_cache_alloc(struct page_pool *pool,
unsigned int *size)
{
@@ -316,6 +354,14 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
page_pool_put_full_page(pool, page, true);
}
+/**
+ * page_pool_cache_free() - free a cache into the page_pool
+ * @pool[in]: pool from which cache was allocated
+ * @data[in]: cache to free
+ * @allow_direct[in]: freed by the consumer, allow lockless caching
+ *
+ * Free a cache allocated from page_pool_dev_cache_alloc().
+ */
static inline void page_pool_cache_free(struct page_pool *pool, void *data,
bool allow_direct)
{
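One more reviewer note, again with a made-up foo_alloc() wrapper: the
size-adaptive page_pool_dev_alloc() documented above takes the
requested size in and returns the allocated size out:

static struct page *foo_alloc(struct page_pool *pool, unsigned int len,
			      unsigned int *offset, unsigned int *truesize)
{
	*truesize = len;	/* in: requested size */

	/*
	 * Per the DOC text, the pool decides: page splitting when the
	 * request fits in half a page, a full page otherwise. On
	 * return, *truesize is the size actually allocated.
	 */
	return page_pool_dev_alloc(pool, offset, truesize);
}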
--
2.33.0