Message-ID: <7c520bbb-efd7-7cad-95df-610000832a67@suse.cz>
Date: Fri, 19 Mar 2021 19:18:32 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Chuck Lever <chuck.lever@...cle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Matthew Wilcox <willy@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-Net <netdev@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Linux-NFS <linux-nfs@...r.kernel.org>
Subject: Re: [PATCH 3/7] mm/page_alloc: Add a bulk page allocator
On 3/12/21 4:43 PM, Mel Gorman wrote:
> This patch adds a new page allocator interface via alloc_pages_bulk,
> and __alloc_pages_bulk_nodemask. A caller requests a number of pages
> to be allocated and added to a list. They can be freed in bulk using
> free_pages_bulk().
>
> The API is not guaranteed to return the requested number of pages and
> may fail if the preferred allocation zone has limited free memory, the
> cpuset changes during the allocation or page debugging decides to fail
> an allocation. It's up to the caller to request more pages in batch
> if necessary.
>
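As a usage illustration (hypothetical caller code, not from this patch; the
names and the batch size are made up, and the retry policy is explicitly left
to the caller by the changelog):

	LIST_HEAD(pages);
	int want = 32, got = 0;

	/* The API may return fewer pages than requested, so loop. */
	while (got < want) {
		int n = __alloc_pages_bulk(GFP_KERNEL, numa_mem_id(), NULL,
					   want - got, &pages);
		if (!n)
			break;	/* preferred zone low, cpuset race, etc. */
		got += n;
	}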
> Note that this implementation is not very efficient and could be improved
> but it would require refactoring. The intent is to make it available early
> to determine what semantics are required by different callers. Once the
> full semantics are nailed down, it can be refactored.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Although the ack is maybe premature in case this changes significantly based
on the users' performance feedback; let's see :)
Some nits below:
...
> @@ -4963,6 +4978,107 @@ static inline bool prepare_alloc_pages(gfp_t gfp, unsigned int order,
> return true;
> }
>
> +/*
> + * This is a batched version of the page allocator that attempts to
> + * allocate nr_pages quickly from the preferred zone and add them to list.
> + *
> + * Returns the number of pages allocated.
> + */
> +int __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> + nodemask_t *nodemask, int nr_pages,
> + struct list_head *alloc_list)
> +{
> + struct page *page;
> + unsigned long flags;
> + struct zone *zone;
> + struct zoneref *z;
> + struct per_cpu_pages *pcp;
> + struct list_head *pcp_list;
> + struct alloc_context ac;
> + gfp_t alloc_gfp;
> + unsigned int alloc_flags;
> + int allocated = 0;
> +
> + if (WARN_ON_ONCE(nr_pages <= 0))
> + return 0;
> +
> + if (nr_pages == 1)
> + goto failed;
> +
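The failed: label is not in the quoted hunks; judging from this single-page
punt (and the "not guaranteed to return the requested number" semantics in
the changelog), it presumably falls back to one ordinary allocation attempt,
roughly like this (a sketch reconstructed from context, not the literal
patch code):

	failed:
		page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
		if (page) {
			list_add(&page->lru, alloc_list);
			allocated = 1;
		}
		return allocated;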
> + /* May set ALLOC_NOFRAGMENT, fragmentation will return 1 page. */
> + if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask, &ac,
> + &alloc_gfp, &alloc_flags))
Unusual indentation here; continuation lines are normally aligned with the
opening parenthesis (see the sketch below).
> + return 0;
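I.e. something like (whitespace-only change, behavior identical):

	if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask, &ac,
				 &alloc_gfp, &alloc_flags))
		return 0;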
> + gfp = alloc_gfp;
> +
> + /* Find an allowed local zone that meets the high watermark. */
> + for_each_zone_zonelist_nodemask(zone, z, ac.zonelist, ac.highest_zoneidx, ac.nodemask) {
> + unsigned long mark;
> +
> + if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) &&
> + !__cpuset_zone_allowed(zone, gfp)) {
> + continue;
> + }
> +
> + if (nr_online_nodes > 1 && zone != ac.preferred_zoneref->zone &&
> + zone_to_nid(zone) != zone_to_nid(ac.preferred_zoneref->zone)) {
> + goto failed;
> + }
> +
> + mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK) + nr_pages;
> + if (zone_watermark_fast(zone, 0, mark,
> + zonelist_zone_idx(ac.preferred_zoneref),
> + alloc_flags, gfp)) {
> + break;
> + }
> + }
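(For context on the mark calculation: the check covers the whole batch, so
with e.g. a relevant watermark of 1024 pages and nr_pages = 64, the zone
needs at least 1088 free pages before the batch is attempted; allocating the
full batch then cannot by itself push the zone below its watermark.)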
> + if (!zone)
> + return 0;
Why not also "goto failed;" here? The caller would then at least get one page
from the single-page fallback instead of nothing.