Message-ID: <bd8c2f5c-464d-44ab-b607-390a87ea4cd5@huawei.com>
Date: Wed, 25 Dec 2024 20:36:04 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Luiz Capitulino <luizcap@...hat.com>, <linux-mm@...ck.org>,
<mgorman@...hsingularity.net>, <willy@...radead.org>
CC: <david@...hat.com>, <linux-kernel@...r.kernel.org>,
<lcapitulino@...il.com>
Subject: Re: [PATCH v2 1/2] mm: alloc_pages_bulk_noprof: drop page_list argument
On 2024/12/24 6:00, Luiz Capitulino wrote:
> /*
> - * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
> + * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
> * @gfp: GFP flags for the allocation
> * @preferred_nid: The preferred NUMA node ID to allocate from
> * @nodemask: Set of nodes to allocate from, may be NULL
> - * @nr_pages: The number of pages desired on the list or array
> - * @page_list: Optional list to store the allocated pages
> - * @page_array: Optional array to store the pages
> + * @nr_pages: The number of pages desired in the array
> + * @page_array: Array to store the pages
> *
> * This is a batched version of the page allocator that attempts to
> - * allocate nr_pages quickly. Pages are added to page_list if page_list
> - * is not NULL, otherwise it is assumed that the page_array is valid.
> + * allocate nr_pages quickly. Pages are added to the page_array.
> *
> - * For lists, nr_pages is the number of pages that should be allocated.
> - *
> - * For arrays, only NULL elements are populated with pages and nr_pages
> + * Note that only NULL elements are populated with pages and nr_pages
This is not really related to this patch, but while we are at it: the above
seems like odd behavior. From a rough look at all the callers of this API,
only the below callers seem to rely on it:
fs/erofs/zutil.c: z_erofs_gbuf_growsize()
fs/xfs/xfs_buf.c: xfs_buf_alloc_pages()
It seems quite straightforward to change the above callers to not rely on
that behavior, and removing it might let us avoid some extra checking in
the allocator?
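For example, a caller's retry loop could track how many entries it has
already filled and pass only the unfilled tail of the array to the
allocator. A rough, untested sketch (assuming the alloc_pages_bulk_array()
wrapper and a memalloc_retry_wait() back-off, which xfs_buf_alloc_pages()
already uses):

	unsigned long filled = 0;

	do {
		/*
		 * Pass only the unfilled tail, so the allocator never
		 * has to scan for NULL entries itself.
		 */
		filled += alloc_pages_bulk_array(gfp, nr_pages - filled,
						 page_array + filled);
		if (filled == nr_pages)
			break;

		/* Partial allocation: back off before retrying. */
		memalloc_retry_wait(gfp);
	} while (1);

Whether this works out for the erofs caller too would need a closer look.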
> * is the maximum number of pages that will be stored in the array.
> *
> - * Returns the number of pages on the list or array.
> + * Returns the number of pages in the array.
> */
> unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> nodemask_t *nodemask, int nr_pages,
> - struct list_head *page_list,
> struct page **page_array)
> {
> struct page *page;
> @@ -4570,7 +4565,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> * Skip populated array elements to determine if any pages need
> * to be allocated before disabling IRQs.
> */
> - while (page_array && nr_populated < nr_pages && page_array[nr_populated])
> + while (nr_populated < nr_pages && page_array[nr_populated])
> nr_populated++;
As mentioned above, this scan could go away if the skip-populated behavior
were removed; nr_populated would then always start at zero here.
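With the skip removed, this scan and the "Already populated array?" check
quoted below could collapse into the nr_pages check alone, roughly
(untested):

	/* No pages requested? */
	if (unlikely(nr_pages <= 0))
		goto out;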
>
> /* No pages requested? */
> @@ -4578,7 +4573,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> goto out;
>
> /* Already populated array? */
> - if (unlikely(page_array && nr_pages - nr_populated == 0))
> + if (unlikely(nr_pages - nr_populated == 0))
> goto out;
>
> /* Bulk allocator does not support memcg accounting. */
> @@ -4660,7 +4655,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
> while (nr_populated < nr_pages) {
>
> /* Skip existing pages */
> - if (page_array && page_array[nr_populated]) {
> + if (page_array[nr_populated]) {
Same here, this check could be dropped as well.
> nr_populated++;
> continue;
> }