Message-ID: <20251014144513.445a370d@mordecai.tesarici.cz>
Date: Tue, 14 Oct 2025 14:45:13 +0200
From: Petr Tesarik <ptesarik@...e.com>
To: "zhaoyang.huang" <zhaoyang.huang@...soc.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand
<david@...hat.com>, Matthew Wilcox <willy@...radead.org>, Mel Gorman
<mgorman@...hsingularity.net>, Vlastimil Babka <vbabka@...e.cz>, Sumit
Semwal <sumit.semwal@...aro.org>, Benjamin Gaignard
<benjamin.gaignard@...labora.com>, Brian Starkey <Brian.Starkey@....com>,
John Stultz <jstultz@...gle.com>, "T . J . Mercier" <tjmercier@...gle.com>,
Christian König <christian.koenig@....com>,
<linux-media@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
<linaro-mm-sig@...ts.linaro.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, Zhaoyang Huang <huangzhaoyang@...il.com>,
<steve.kang@...soc.com>
Subject: Re: [PATCH 2/2] driver: dma-buf: use alloc_pages_bulk_list for
order-0 allocation
On Tue, 14 Oct 2025 16:32:30 +0800
"zhaoyang.huang" <zhaoyang.huang@...soc.com> wrote:
> From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>
> A single dma-buf allocation can be dozens of MB or more, which results
> in a loop allocating several thousand order-0 pages. Furthermore,
> concurrent allocations can push the dma-buf allocation into direct
> reclaim during that loop. This commit eliminates both effects by using
> alloc_pages_bulk_list() for dma-buf's order-0 allocation. The patch
> proved conditionally helpful for an 18MB allocation, reducing the time
> from 24604us to 6555us, and does no harm when bulk allocation cannot be
> done (it falls back to single-page allocation).
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> ---
> drivers/dma-buf/heaps/system_heap.c | 36 +++++++++++++++++++----------
> 1 file changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index bbe7881f1360..71b028c63bd8 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -300,8 +300,8 @@ static const struct dma_buf_ops system_heap_buf_ops = {
> .release = system_heap_dma_buf_release,
> };
>
> -static struct page *alloc_largest_available(unsigned long size,
> - unsigned int max_order)
> +static void alloc_largest_available(unsigned long size,
> + unsigned int max_order, unsigned int *num_pages, struct list_head *list)
This interface feels weird. Maybe you could return the number of pages
instead of making this a void function and passing a pointer to get that
number?
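Something like this is what I have in mind (just a rough, untested
sketch; the names follow the existing code):

static unsigned int alloc_largest_available(unsigned long size,
					    unsigned int max_order,
					    struct list_head *list)
{
	/* ... add pages to @list as before ... */
	/* return the number of pages added to @list, 0 on failure */
}

Then the caller no longer needs the extra output parameter:

	num_pages = alloc_largest_available(size_remaining, max_order, &head);
	if (!num_pages)
		goto free_buffer;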
> {
> struct page *page;
> int i;
> @@ -312,12 +312,19 @@ static struct page *alloc_largest_available(unsigned long size,
> if (max_order < orders[i])
> continue;
>
> - page = alloc_pages(order_flags[i], orders[i]);
> - if (!page)
> + if (orders[i]) {
> + page = alloc_pages(order_flags[i], orders[i]);
nitpick: Since the lowest order is special-cased now, you can simply
use HIGH_ORDER_GFP here and remove order_flags[] entirely.
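I.e. roughly (sketch only, assuming HIGH_ORDER_GFP and LOW_ORDER_GFP
keep their current definitions at the top of system_heap.c):

	if (orders[i]) {
		/* the two remaining high orders both use HIGH_ORDER_GFP */
		page = alloc_pages(HIGH_ORDER_GFP, orders[i]);
		if (page) {
			list_add(&page->lru, list);
			*num_pages = 1;
		}
	} else {
		*num_pages = alloc_pages_bulk_list(LOW_ORDER_GFP,
						   size / PAGE_SIZE, list);
	}

With that, order_flags[] has no users left and can go away.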
> + if (page) {
> + list_add(&page->lru, list);
> + *num_pages = 1;
> + }
> + } else
> + *num_pages = alloc_pages_bulk_list(LOW_ORDER_GFP, size / PAGE_SIZE, list);
> +
> + if (list_empty(list))
> continue;
> - return page;
> + return;
> }
> - return NULL;
> }
>
> static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> @@ -335,6 +342,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> struct list_head pages;
> struct page *page, *tmp_page;
> int i, ret = -ENOMEM;
> + unsigned int num_pages;
> + LIST_HEAD(head);
>
> buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> if (!buffer)
> @@ -348,6 +357,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> INIT_LIST_HEAD(&pages);
> i = 0;
> while (size_remaining > 0) {
> + num_pages = 0;
> + INIT_LIST_HEAD(&head);
> /*
> * Avoid trying to allocate memory if the process
> * has been killed by SIGKILL
> @@ -357,14 +368,15 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> goto free_buffer;
> }
>
> - page = alloc_largest_available(size_remaining, max_order);
> - if (!page)
> + alloc_largest_available(size_remaining, max_order, &num_pages, &head);
> + if (!num_pages)
> goto free_buffer;
>
> - list_add_tail(&page->lru, &pages);
> - size_remaining -= page_size(page);
> - max_order = compound_order(page);
> - i++;
> + list_splice_tail(&head, &pages);
> + max_order = folio_order(lru_to_folio(&head));
> + size_remaining -= PAGE_SIZE * (num_pages << max_order);
This looks complicated. What about changing alloc_largest_available()
to return the total number of pages and using PAGE_SIZE * num_pages?
Ah, you still have to look at the folio order to determine the new
value of max_order, so no big win. Hm. You could pass a pointer to
max_order down to alloc_largest_available(), but at that point I think
it's a matter of taste (aka bikeshedding).
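For reference, that variant would make the call site roughly (sketch
only, with num_pages meaning the total number of 0-order pages added to
the list and max_order updated inside the callee):

	num_pages = alloc_largest_available(size_remaining, &max_order, &head);
	if (!num_pages)
		goto free_buffer;

	list_splice_tail(&head, &pages);
	size_remaining -= PAGE_SIZE * num_pages;

(though 'i' counts sg entries, so it could no longer simply be
incremented by num_pages in the high-order case)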
Petr T
> + i += num_pages;
> +
> }
>
> table = &buffer->sg_table;