Message-ID: <58BE8C91.20600@huawei.com>
Date: Tue, 7 Mar 2017 18:33:53 +0800
From: Xishi Qiu <qiuxishi@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Minchan Kim <minchan@...nel.org>,
Michal Hocko <mhocko@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
CC: Yisheng Xie <xieyisheng1@...wei.com>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [RFC][PATCH 1/2] mm: use MIGRATE_HIGHATOMIC as late as possible

MIGRATE_HIGHATOMIC page blocks are reserved for high-order atomic
allocations, so fall back to them as late as possible.

Signed-off-by: Xishi Qiu <qiuxishi@...wei.com>
---
 mm/page_alloc.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 40d79a6..2331840 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2714,14 +2714,12 @@ struct page *rmqueue(struct zone *preferred_zone,
 	spin_lock_irqsave(&zone->lock, flags);
 
 	do {
-		page = NULL;
-		if (alloc_flags & ALLOC_HARDER) {
+		page = __rmqueue(zone, order, migratetype);
+		if (!page && alloc_flags & ALLOC_HARDER) {
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 			if (page)
 				trace_mm_page_alloc_zone_locked(page, order, migratetype);
 		}
-		if (!page)
-			page = __rmqueue(zone, order, migratetype);
 	} while (page && check_new_pages(page, order));
 	spin_unlock(&zone->lock);
 	if (!page)
--
1.8.3.1
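
For readers less familiar with mm/page_alloc.c, below is a toy,
self-contained C sketch of the ordering the patch establishes: the
ordinary free lists are tried first, and the MIGRATE_HIGHATOMIC reserve
is only touched when they come up empty for an ALLOC_HARDER (atomic)
request. This is not kernel code; all identifiers (toy_zone,
toy_rmqueue, the counters) are invented for illustration.

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_zone {
		int normal_pages;      /* pages on the ordinary free lists */
		int highatomic_pages;  /* pages held back in HIGHATOMIC blocks */
	};

	/* Mirror of the post-patch order: ordinary lists first, reserve last. */
	static bool toy_rmqueue(struct toy_zone *z, bool alloc_harder)
	{
		if (z->normal_pages > 0) {          /* normal free lists first */
			z->normal_pages--;
			return true;
		}
		if (alloc_harder && z->highatomic_pages > 0) { /* reserve last */
			z->highatomic_pages--;
			return true;
		}
		return false;                        /* allocation fails */
	}

	int main(void)
	{
		struct toy_zone z = { .normal_pages = 1, .highatomic_pages = 1 };

		/* An atomic request the ordinary lists can still satisfy... */
		printf("first alloc: %s\n", toy_rmqueue(&z, true) ? "ok" : "fail");
		/* ...leaves the HIGHATOMIC reserve intact for a later one. */
		printf("reserve left: %d page(s)\n", z.highatomic_pages);
		return 0;
	}

Built with any C99 compiler, the first allocation is satisfied from the
normal pool and leaves the reserve untouched, which is the behaviour the
reordered __rmqueue()/__rmqueue_smallest() calls aim for.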