Message-Id: <20210121175502.274391-3-minchan@kernel.org>
Date: Thu, 21 Jan 2021 09:55:00 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
hyesoo.yu@...sung.com, david@...hat.com, mhocko@...e.com,
surenb@...gle.com, pullip.cho@...sung.com, joaodias@...gle.com,
hridya@...gle.com, john.stultz@...aro.org, sumit.semwal@...aro.org,
linux-media@...r.kernel.org, devicetree@...r.kernel.org,
hch@...radead.org, robh+dt@...nel.org,
linaro-mm-sig@...ts.linaro.org, Minchan Kim <minchan@...nel.org>
Subject: [PATCH v4 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range

Contiguous memory allocation can stall while waiting on page
writeback and/or page locks, which causes unpredictable delay.
That is an unavoidable cost when the requestor needs a *big*
contiguous area, but it is too expensive for *small* contiguous
memory (e.g., order-4) because the caller could instead retry
the request in a different range that may contain easily
migratable pages, without stalling.

This patch introduces support for __GFP_NORETRY in the gfp_mask
passed to alloc_contig_range: when it is set, migration runs in
MIGRATE_ASYNC mode so the allocation fails fast, without
blocking, when it encounters pages that would require waiting
(e.g., on writeback or a page lock).

Signed-off-by: Minchan Kim <minchan@...nel.org>
---
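A hypothetical caller sketch (illustrative only, not part of this
patch): probe several candidate ranges in failfast mode before
falling back to one blocking attempt. The helper name
alloc_small_contig(), the range selection, and the gfp choices are
assumptions made for illustration.

  #include <linux/gfp.h>
  #include <linux/mmzone.h>

  /*
   * Hypothetical helper (not in this patch): probe candidate
   * ranges with __GFP_NORETRY so a range full of busy pages is
   * skipped quickly, then fall back to one blocking
   * (MIGRATE_SYNC) attempt on the first range.
   */
  static int alloc_small_contig(const unsigned long *starts,
                                int nr_ranges,
                                unsigned long nr_pages)
  {
          int i, ret;

          for (i = 0; i < nr_ranges; i++) {
                  /* Fails fast instead of waiting on
                   * writeback or page lock.
                   */
                  ret = alloc_contig_range(starts[i],
                                           starts[i] + nr_pages,
                                           MIGRATE_MOVABLE,
                                           GFP_KERNEL | __GFP_NORETRY);
                  if (!ret)
                          return 0;
          }

          /* Last resort: allow blocking on the first range. */
          return alloc_contig_range(starts[0],
                                    starts[0] + nr_pages,
                                    MIGRATE_MOVABLE, GFP_KERNEL);
  }
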
mm/page_alloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b031a5ae0bd5..1cdc3ee0b22e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8491,12 +8491,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
unsigned int nr_reclaimed;
unsigned long pfn = start;
unsigned int tries = 0;
+ unsigned int max_tries = 5;
int ret = 0;
struct migration_target_control mtc = {
.nid = zone_to_nid(cc->zone),
.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
};

+ if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+ max_tries = 1;
+
migrate_prep();

while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8513,7 +8517,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
break;
}
tries = 0;
- } else if (++tries == 5) {
+ } else if (++tries == max_tries) {
ret = ret < 0 ? ret : -EBUSY;
break;
}
@@ -8564,7 +8568,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
.nr_migratepages = 0,
.order = -1,
.zone = page_zone(pfn_to_page(start)),
- .mode = MIGRATE_SYNC,
+ .mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
.ignore_skip_hint = true,
.no_set_skip_hint = true,
.gfp_mask = current_gfp_context(gfp_mask),
--
2.30.0.296.g2bfb1c46d8-goog