Message-ID: <20230221165400.1595247-1-kbusch@meta.com>
Date: Tue, 21 Feb 2023 08:54:00 -0800
From: Keith Busch <kbusch@...a.com>
To: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
CC: Keith Busch <kbusch@...nel.org>,
Bryan O'Donoghue <bryan.odonoghue@...aro.org>
Subject: [PATCH] dmapool: push new blocks in ascending order

From: Keith Busch <kbusch@...nel.org>

Some users of dmapool need their allocations to come back in ascending
order. The recent optimizations pushed new blocks onto the free list in
reverse order, so restore the previous behavior by linking each new
page's blocks from low to high.

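To illustrate the expectation, a hedged sketch (not part of this patch) of
a caller that depends on ascending order; "my_dev", the sizes, and the error
handling are hypothetical, while dma_pool_create()/dma_pool_alloc() are the
regular <linux/dmapool.h> API:

	struct dma_pool *pool;
	dma_addr_t first_dma, second_dma;
	void *first, *second;

	/* 64-byte blocks, 64-byte aligned, no boundary restriction */
	pool = dma_pool_create("example", my_dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	first = dma_pool_alloc(pool, GFP_KERNEL, &first_dma);
	second = dma_pool_alloc(pool, GFP_KERNEL, &second_dma);

	/*
	 * With this fix, back-to-back allocations from a freshly
	 * initialised page satisfy second_dma > first_dma again.
	 * dma_pool_free() and dma_pool_destroy() omitted for brevity.
	 */
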
Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@...aro.org>
Signed-off-by: Keith Busch <kbusch@...nel.org>
---
mm/dmapool.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 1920890ff8d3d..a151a21e571b7 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -300,7 +300,7 @@ EXPORT_SYMBOL(dma_pool_create);
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int next_boundary = pool->boundary, offset = 0;
-	struct dma_block *block;
+	struct dma_block *block, *first = NULL, *last = NULL;
 
 	pool_init_page(pool, page);
 	while (offset + pool->size <= pool->allocation) {
@@ -311,11 +311,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 		}
 
 		block = page->vaddr + offset;
-		pool_block_push(pool, block, page->dma + offset);
+		block->dma = page->dma + offset;
+		block->next_block = NULL;
+
+		if (last)
+			last->next_block = block;
+		else
+			first = block;
+		last = block;
+
 		offset += pool->size;
 		pool->nr_blocks++;
 	}
 
+	last->next_block = pool->next_block;
+	pool->next_block = first;
+
 	list_add(&page->page_list, &pool->page_list);
 	pool->nr_pages++;
 }
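
For readers without the rest of mm/dmapool.c at hand, here is a stand-alone
sketch (hypothetical struct and function names, not the kernel code)
contrasting the head-push order introduced by the optimization series with
the ascending chaining restored above; it builds as plain C:

#include <stdio.h>

struct blk {
	struct blk *next;
	unsigned int offset;	/* stands in for the block's DMA offset */
};

/* Head push (the post-ced6d06a81fb behavior): list ends up high-to-low. */
static void push_head(struct blk **head, struct blk *b)
{
	b->next = *head;
	*head = b;
}

/*
 * Ascending chaining (what this patch restores): link the page's blocks
 * first-to-last, then splice the whole chain ahead of the existing free
 * list, so allocations come back low-to-high.
 */
static void chain_ascending(struct blk **head, struct blk *blocks, int n)
{
	struct blk *first = NULL, *last = NULL;
	int i;

	for (i = 0; i < n; i++) {
		struct blk *b = &blocks[i];

		b->next = NULL;
		if (last)
			last->next = b;
		else
			first = b;
		last = b;
	}

	if (last) {
		last->next = *head;
		*head = first;
	}
}

int main(void)
{
	struct blk a[4], b[4];
	struct blk *lifo = NULL, *fifo = NULL;
	struct blk *p;
	int i;

	for (i = 0; i < 4; i++) {
		a[i].offset = b[i].offset = i * 64;
		push_head(&lifo, &a[i]);
	}
	chain_ascending(&fifo, b, 4);

	for (p = lifo; p; p = p->next)	/* prints 192 128 64 0 */
		printf("%u ", p->offset);
	printf("\n");
	for (p = fifo; p; p = p->next)	/* prints 0 64 128 192 */
		printf("%u ", p->offset);
	printf("\n");
	return 0;
}

Chaining the page first and splicing it once also keeps the new page's blocks
contiguous and in order at the head of the free list, which is the
caller-visible behavior the commit message describes.
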
--
2.30.2