Message-ID: <161662169926.940814.10878534922009676003.stgit@firesoul>
Date: Wed, 24 Mar 2021 22:34:59 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org
Cc: Jesper Dangaard Brouer <brouer@...hat.com>, chuck.lever@...cle.com,
Alexander Duyck <alexander.duyck@...il.com>,
netdev@...r.kernel.org, linux-nfs@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH mel-git 3/3] net: page_pool: convert to use alloc_pages_bulk_array variant

Converting page_pool to the alloc_pages_bulk_array API variant is done
in a separate patch to make it easier to benchmark the two variants
independently. Maintainers can squash this patch if preferred.

Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
---
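For reviewers comparing the two call shapes, a minimal sketch (not part
of the patch) of how the list and array variants are used, assuming the
signatures currently in mel-git; the list variant links pages via
page->lru, while the array variant only fills NULL slots in a
caller-provided array and returns the number of populated entries:

	gfp_t gfp = GFP_ATOMIC;
	LIST_HEAD(page_list);
	struct page *array[PP_ALLOC_CACHE_REFILL] = { NULL };
	unsigned long nr;

	/* List variant: pages come back linked through page->lru */
	nr = alloc_pages_bulk_list(gfp, PP_ALLOC_CACHE_REFILL, &page_list);

	/* Array variant: fills only NULL slots; nr counts populated slots */
	nr = alloc_pages_bulk_array(gfp, PP_ALLOC_CACHE_REFILL, array);
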
 include/net/page_pool.h |    2 +-
 net/core/page_pool.c    |   22 ++++++++++++++++------
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b5b195305346..6d517a37c18b 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -65,7 +65,7 @@
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
-	void *cache[PP_ALLOC_CACHE_SIZE];
+	struct page *cache[PP_ALLOC_CACHE_SIZE];
 };
 
 struct page_pool_params {

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3bf6e7f5fc89..9ec1aa9640ad 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -233,24 +233,34 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	const int bulk = PP_ALLOC_CACHE_REFILL;
 	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
-	struct page *page, *next;
-	LIST_HEAD(page_list);
+	struct page *page;
+	int i, nr_pages;
 
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
 		return __page_pool_alloc_page_order(pool, gfp);
 
-	if (unlikely(!alloc_pages_bulk_list(gfp, bulk, &page_list)))
+	/* Unnecessary as alloc cache is empty, but guarantees zero count */
+	if (unlikely(pool->alloc.count > 0))
+		return pool->alloc.cache[--pool->alloc.count];
+
+	/* Mark alloc.cache slots empty (NULL) for alloc_pages_bulk_array */
+	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
+
+	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
+	if (unlikely(!nr_pages))
 		return NULL;
 
-	list_for_each_entry_safe(page, next, &page_list, lru) {
-		list_del(&page->lru);
+	/* Pages have been filled into the alloc.cache array, but count is
+	 * zero and the pages have not yet been DMA mapped (if required).
+	 */
+	for (i = 0; i < nr_pages; i++) {
+		page = pool->alloc.cache[i];
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
 		    unlikely(!page_pool_dma_map(pool, page))) {
			put_page(page);
			continue;
		}
-		/* Alloc cache has room as it is empty on function call */
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
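
A design note on the memset above (sketch only, not part of the patch):
alloc_pages_bulk_array() skips slots that are already non-NULL and its
return value counts every populated slot, so a stale pointer left in
alloc.cache would otherwise be handed out again as if freshly allocated.
Illustrated with a hypothetical stale_page from an earlier refill:

	pool->alloc.cache[0] = stale_page;	/* leftover slot, count == 0 */
	nr_pages = alloc_pages_bulk_array(gfp, bulk, pool->alloc.cache);
	/* Slot 0 is skipped, yet nr_pages counts it, so the refill loop
	 * over [0, nr_pages) would treat stale_page as a page it owns.
	 * Zeroing the first 'bulk' slots up front rules this out.
	 */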