Message-Id: <20230912162815.440749-4-zi.yan@sent.com>
Date: Tue, 12 Sep 2023 12:28:14 -0400
From: Zi Yan <zi.yan@...t.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Zi Yan <ziy@...dia.com>, Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"Yin, Fengwei" <fengwei.yin@...el.com>,
Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Rohan Puri <rohan.puri15@...il.com>,
Mcgrof Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
John Hubbard <jhubbard@...dia.com>
Subject: [RFC PATCH 3/4] mm/compaction: optimize >0 order folio compaction by sorting source pages.
From: Zi Yan <ziy@...dia.com>

Sorting the source folios so that the highest orders are migrated first
should maximize high-order free page use and minimize free page splits.
It might be useful before free page merging is implemented.

Signed-off-by: Zi Yan <ziy@...dia.com>
---
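A note on the approach (not part of the patch itself):
sort_folios_by_order() below is essentially a bucket sort keyed on
folio order, with the buckets spliced back highest order first, so
that migrate_pages() consumes the largest source folios before any
free page has to be split.  A minimal userspace sketch of the same
idea, using plain arrays and illustrative names instead of the
kernel's folio/list machinery:

#include <stdio.h>

#define MAX_ORDER 10

int main(void)
{
	/* Folio orders of an example source (migration) list. */
	int orders[] = { 0, 3, 1, 0, 9, 2, 3, 0 };
	int nr = sizeof(orders) / sizeof(orders[0]);
	int bucket[MAX_ORDER + 1] = { 0 };	/* stands in for page_list[].nr_free */
	int i, order;

	/* Pass 1: bucket the entries by order. */
	for (i = 0; i < nr; i++)
		bucket[orders[i]]++;

	/* Pass 2: emit them from MAX_ORDER down to 0. */
	printf("migration order:");
	for (order = MAX_ORDER; order >= 0; order--)
		for (i = 0; i < bucket[order]; i++)
			printf(" %d", order);
	printf("\n");	/* prints: migration order: 9 3 3 2 1 0 0 0 */
	return 0;
}

The patch itself keeps the folios on per-order struct free_list lists
instead of counting them, so nothing is copied; only the ordering of
cc->migratepages changes.
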
 mm/compaction.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index 45747ab5f380..4300d877b824 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -145,6 +145,38 @@ static void sort_free_pages(struct list_head *src, struct free_list *dst)
 	}
 }
 
+static void sort_folios_by_order(struct list_head *pages)
+{
+	struct free_list page_list[MAX_ORDER + 1];
+	int order;
+	struct folio *folio, *next;
+
+	for (order = 0; order <= MAX_ORDER; order++) {
+		INIT_LIST_HEAD(&page_list[order].pages);
+		page_list[order].nr_free = 0;
+	}
+
+	list_for_each_entry_safe(folio, next, pages, lru) {
+		order = folio_order(folio);
+
+		if (order > MAX_ORDER)
+			continue;
+
+		list_move(&folio->lru, &page_list[order].pages);
+		page_list[order].nr_free++;
+	}
+
+	for (order = MAX_ORDER; order >= 0; order--) {
+		if (page_list[order].nr_free) {
+
+			list_for_each_entry_safe(folio, next,
+					&page_list[order].pages, lru) {
+				list_move_tail(&folio->lru, pages);
+			}
+		}
+	}
+}
+
 #ifdef CONFIG_COMPACTION
 bool PageMovable(struct page *page)
 {
@@ -2636,6 +2668,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 				pageblock_start_pfn(cc->migrate_pfn - 1));
 		}
 
+		sort_folios_by_order(&cc->migratepages);
+
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
 				compaction_free, (unsigned long)cc, cc->mode,
 				MR_COMPACTION, &nr_succeeded);
--
2.40.1