Message-Id: <20210113012143.1201105-3-minchan@kernel.org>
Date: Tue, 12 Jan 2021 17:21:41 -0800
From: Minchan Kim <minchan@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
hyesoo.yu@...sung.com, david@...hat.com, mhocko@...e.com,
surenb@...gle.com, pullip.cho@...sung.com, joaodias@...gle.com,
hridya@...gle.com, john.stultz@...aro.org, sumit.semwal@...aro.org,
linux-media@...r.kernel.org, devicetree@...r.kernel.org,
hch@...radead.org, robh+dt@...nel.org,
linaro-mm-sig@...ts.linaro.org, Minchan Kim <minchan@...nel.org>
Subject: [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range

Contiguous memory allocation can stall while waiting on page
writeback and/or page locks, which causes unpredictable delay.
That cost is unavoidable for a requestor that needs a *big*
contiguous region, but it is too expensive for *small* contiguous
memory (e.g., order-4) because the caller could simply retry the
request in a different range that may have easily migratable
pages, without stalling.

This patch introduces __GFP_NORETRY as a compaction gfp_mask in
alloc_contig_range so the allocation fails fast, without blocking,
when it encounters pages that need waiting.
Signed-off-by: Minchan Kim <minchan@...nel.org>
---
mm/page_alloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b3923db9158..ff41ceb4db51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
unsigned int nr_reclaimed;
unsigned long pfn = start;
unsigned int tries = 0;
+ unsigned int max_tries = 5;
int ret = 0;
struct migration_target_control mtc = {
.nid = zone_to_nid(cc->zone),
.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
};
+ if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+ max_tries = 1;
+
migrate_prep();
while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
break;
}
tries = 0;
- } else if (++tries == 5) {
+ } else if (++tries == max_tries) {
ret = ret < 0 ? ret : -EBUSY;
break;
}
@@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
.nr_migratepages = 0,
.order = -1,
.zone = page_zone(pfn_to_page(start)),
- .mode = MIGRATE_SYNC,
+ .mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
.ignore_skip_hint = true,
.no_set_skip_hint = true,
.gfp_mask = current_gfp_context(gfp_mask),
--
2.30.0.284.gd98b1dd5eaa7-goog