Message-ID: <1506415604-4310-3-git-send-email-zhuhui@xiaomi.com>
Date: Tue, 26 Sep 2017 16:46:44 +0800
From: Hui Zhu <zhuhui@...omi.com>
To: <akpm@...ux-foundation.org>, <mhocko@...e.com>, <vbabka@...e.cz>,
<mgorman@...hsingularity.net>, <hillf.zj@...baba-inc.com>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
CC: <teawater@...il.com>, Hui Zhu <zhuhui@...omi.com>
Subject: [RFC 2/2] Change limit of HighAtomic from 1% to 10%
After "Try to use HighAtomic if try to alloc umovable page that order
is not 0". The result is still not very well because the the limit of
HighAtomic make kernel cannot reserve more pageblock to HighAtomic.
The patch change max_managed from 1% to 10% make HighAtomic can get more
pageblocks.
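
For illustration only, here is a minimal userspace sketch of the
arithmetic. The 1 GiB zone and pageblock_nr_pages = 512 are assumed
example values (e.g. x86-64 with 4 KiB pages and 2 MiB pageblocks),
not numbers taken from the patch:

	#include <stdio.h>

	/* Assumed example values, not from the patch: a 1 GiB zone of
	 * 4 KiB pages with 2 MiB pageblocks (512 pages each). */
	#define MANAGED_PAGES		262144UL
	#define PAGEBLOCK_NR_PAGES	512UL

	int main(void)
	{
		/* Old cap: ~1% of the zone plus one pageblock. */
		unsigned long old_max = MANAGED_PAGES / 100 + PAGEBLOCK_NR_PAGES;
		/* New cap: ~10% of the zone plus one pageblock. */
		unsigned long new_max = MANAGED_PAGES / 10 + PAGEBLOCK_NR_PAGES;

		printf("1%%  cap: %lu pages (~%lu pageblocks)\n",
		       old_max, old_max / PAGEBLOCK_NR_PAGES);
		printf("10%% cap: %lu pages (~%lu pageblocks)\n",
		       new_max, new_max / PAGEBLOCK_NR_PAGES);
		return 0;
	}

With these example numbers the cap grows from about 6 pageblocks
(3133 pages) to about 52 pageblocks (26726 pages), which is why this
change lets HighAtomic absorb far more reservations.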
Signed-off-by: Hui Zhu <zhuhui@...omi.com>
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b54e94a..9322458 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2101,7 +2101,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	/*
-	 * Limit the number reserved to 1 pageblock or roughly 1% of a zone.
+	 * Limit the number reserved to 1 pageblock or roughly 10% of a zone.
 	 * Check is race-prone but harmless.
 	 */
-	max_managed = (zone->managed_pages / 100) + pageblock_nr_pages;
+	max_managed = (zone->managed_pages / 10) + pageblock_nr_pages;
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
--
1.9.1