Message-Id: <20210611063834.11871-1-chengkaitao@didiglobal.com>
Date: Fri, 11 Jun 2021 14:38:34 +0800
From: chengkaitao <pilgrimtao@...il.com>
To: akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, smcdef@...il.com,
chengkaitao <pilgrimtao@...il.com>
Subject: [PATCH] mm: delete duplicate order checking, when stealing whole pageblock
From: chengkaitao <pilgrimtao@...il.com>
1. The (order >= pageblock_order / 2) check already covers the case
(order >= pageblock_order), so the latter check is redundant.
2. Mark can_steal_fallback() inline.
Signed-off-by: chengkaitao <pilgrimtao@...il.com>
---
mm/page_alloc.c | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ded02d867491..180081fe711b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
* is worse than movable allocations stealing from unmovable and reclaimable
* pageblocks.
*/
-static bool can_steal_fallback(unsigned int order, int start_mt)
+static inline bool can_steal_fallback(unsigned int order, int start_mt)
{
- /*
- * Leaving this order check is intended, although there is
- * relaxed order check in next check. The reason is that
- * we can actually steal whole pageblock if this condition met,
- * but, below check doesn't guarantee it and that is just heuristic
- * so could be changed anytime.
- */
- if (order >= pageblock_order)
- return true;
-
if (order >= pageblock_order / 2 ||
start_mt == MIGRATE_RECLAIMABLE ||
start_mt == MIGRATE_UNMOVABLE ||
--
2.14.1