Message-ID: <163184741776.29351.3565418361661850328.stgit@noble.brown>
Date: Fri, 17 Sep 2021 12:56:57 +1000
From: NeilBrown <neilb@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>,
Theodore Ts'o <tytso@....edu>,
Andreas Dilger <adilger.kernel@...ger.ca>,
"Darrick J. Wong" <djwong@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...e.de>, Michal Hocko <mhocko@...e.com>,
"Dave Chinner" <david@...morbit.com>,
Jonathan Corbet <corbet@....net>
Cc: linux-xfs@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-nfs@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org
Subject: [PATCH 1/6] MM: Support __GFP_NOFAIL in alloc_pages_bulk_*() and
improve doco

When alloc_pages_bulk_array() is called on an array that is partially
allocated, the level of effort to get a single page is less than when
the array was completely unallocated.  This behaviour is inconsistent,
so fix it.  One effect of the inconsistency is that __GFP_NOFAIL will
not ensure at least one page is allocated.
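
For illustration only (not part of the patch), a minimal caller sketch
of the affected case; the array size and setup here are hypothetical:

	struct page *pages[16] = { NULL };
	unsigned long filled;

	/* One slot is populated up front, so the array is partially
	 * allocated by the time the bulk allocator is called.
	 */
	pages[0] = alloc_page(GFP_KERNEL);

	filled = alloc_pages_bulk_array(GFP_KERNEL | __GFP_NOFAIL,
					16, pages);

	/* Before the fix, the "try and get at least one page" fallback
	 * tested nr_populated, which is already non-zero here, so under
	 * memory pressure this call could return without allocating
	 * anything new despite __GFP_NOFAIL.  After the fix it tests
	 * nr_account, which counts only pages allocated by this call.
	 */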

Also clarify the expected success rate.  __alloc_pages_bulk() will
allocate one page according to @gfp, and may allocate more if that can
be done cheaply.  It is assumed that the caller values cheap allocation
where possible, and may decide to use what it has got, or to call again
for more.
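
Again purely illustrative (modelled on existing callers such as
sunrpc's svc_alloc_arg(), with hypothetical sizes), the "call again"
pattern assumed above might look like:

	struct page *pages[32] = { NULL };	/* empty slots must be NULL */
	unsigned long populated = 0;

	while (populated < ARRAY_SIZE(pages)) {
		/* Already-populated slots are skipped, and the return
		 * value is the total number of populated slots, so each
		 * pass only tops up whatever is still missing.
		 */
		populated = alloc_pages_bulk_array(GFP_KERNEL,
						   ARRAY_SIZE(pages),
						   pages);
		if (populated < ARRAY_SIZE(pages))
			cond_resched();	/* or use what we have so far */
	}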
Acked-by: Mel Gorman <mgorman@...e.com>
Fixes: 0f87d9d30f21 ("mm/page_alloc: add an array-based interface to the bulk page allocator")
Signed-off-by: NeilBrown <neilb@...e.de>
---
mm/page_alloc.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..aa51016e49c5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5191,6 +5191,11 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
* is the maximum number of pages that will be stored in the array.
*
* Returns the number of pages on the list or array.
+ *
+ * At least one page will be allocated if that is possible while
+ * remaining consistent with @gfp. Extra pages up to the requested
+ * total will be allocated opportunistically when doing so is
+ * significantly cheaper than having the caller repeat the request.
*/
unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
nodemask_t *nodemask, int nr_pages,
@@ -5292,7 +5297,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
pcp, pcp_list);
if (unlikely(!page)) {
/* Try and get at least one page */
- if (!nr_populated)
+ if (!nr_account)
goto failed_irq;
break;
}
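
For context (paraphrased from __alloc_pages_bulk(), not part of this
diff): nr_populated counts array slots that hold a page, including any
that were already populated on entry, because the function starts by
skipping over pre-existing pages:

	/* Skip populated array elements to determine if any pages need
	 * to be allocated before disabling IRQs.
	 */
	while (page_array && nr_populated < nr_pages &&
	       page_array[nr_populated])
		nr_populated++;

nr_account, by contrast, counts only pages allocated during this call,
so it is the right test for whether any progress was made on a
partially populated array.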