Message-ID: <20231113170157.280181-4-zi.yan@sent.com>
Date: Mon, 13 Nov 2023 12:01:56 -0500
From: Zi Yan <zi.yan@...t.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Zi Yan <ziy@...dia.com>, "Huang, Ying" <ying.huang@...el.com>,
Ryan Roberts <ryan.roberts@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"Yin, Fengwei" <fengwei.yin@...el.com>,
Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Rohan Puri <rohan.puri15@...il.com>,
Mcgrof Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: [PATCH v1 3/4] mm/compaction: optimize >0 order folio compaction with free page split.
From: Zi Yan <ziy@...dia.com>
During migration in memory compaction, free pages are placed in an array
of page lists based on their order. But the desired free page order (i.e.,
the order of a source page) might not always be present, thus leading to
migration failures. Split a high order free page when the source migration
page has a lower order to increase the migration success rate.

Note: merging free pages when a migration fails and a lower order free
page is returned via compaction_free() is possible, but it requires too
much work. Since these free pages are not buddy pages, it is hard to
identify them using the existing PFN-based page merging algorithm.
Signed-off-by: Zi Yan <ziy@...dia.com>
---
mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 39 insertions(+), 1 deletion(-)
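Not part of the patch, just for illustration: a minimal userspace sketch
of the split step in compaction_alloc() below, where nr_pages[] is a
stand-in for the per-order cc->freepages lists and the actual page
pointers and offsets are omitted. All names here are illustrative only.

/*
 * Illustrative sketch (not kernel code): split one higher-order free
 * block so the lower-order lists are refilled, mirroring the while
 * loop in compaction_alloc() below.
 */
#include <stdio.h>

#define MAX_ORDER	10

static unsigned long nr_pages[MAX_ORDER + 1];	/* free blocks per order */

/* Split an order-'from' block; an order-'to' block remains for allocation. */
static void split_block(int from, int to)
{
	nr_pages[from]--;		/* take the block off its list */
	while (from > to) {
		from--;
		nr_pages[from]++;	/* upper buddy half joins the lower-order list */
	}
	/* the remaining order-'to' block is handed to the caller */
}

int main(void)
{
	int i;

	nr_pages[3] = 1;	/* start with one order-3 (8-page) free block */
	split_block(3, 0);	/* caller wants an order-0 page */
	for (i = 0; i <= 3; i++)
		printf("order %d: %lu free\n", i, nr_pages[i]);
	/* prints one free block each at orders 0, 1 and 2, none at order 3 */
	return 0;
}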
diff --git a/mm/compaction.c b/mm/compaction.c
index ec6b5cc7e907..9c083e6b399a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1806,9 +1806,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	struct compact_control *cc = (struct compact_control *)data;
 	struct folio *dst;
 	int order = folio_order(src);
+	bool has_isolated_pages = false;
 
+again:
 	if (!cc->freepages[order].nr_pages) {
-		isolate_freepages(cc);
+		int i;
+
+		for (i = order + 1; i <= MAX_ORDER; i++) {
+			if (cc->freepages[i].nr_pages) {
+				struct page *freepage =
+					list_first_entry(&cc->freepages[i].pages,
+							 struct page, lru);
+
+				int start_order = i;
+				unsigned long size = 1 << start_order;
+
+				list_del(&freepage->lru);
+				cc->freepages[i].nr_pages--;
+
+				while (start_order > order) {
+					start_order--;
+					size >>= 1;
+
+					list_add(&freepage[size].lru,
+						&cc->freepages[start_order].pages);
+					cc->freepages[start_order].nr_pages++;
+					set_page_private(&freepage[size], start_order);
+				}
+				post_alloc_hook(freepage, order, __GFP_MOVABLE);
+				if (order)
+					prep_compound_page(freepage, order);
+				dst = page_folio(freepage);
+				goto done;
+			}
+		}
+		if (!has_isolated_pages) {
+			isolate_freepages(cc);
+			has_isolated_pages = true;
+			goto again;
+		}
+
 		if (!cc->freepages[order].nr_pages)
 			return NULL;
 	}
@@ -1819,6 +1856,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
 	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
 	if (order)
 		prep_compound_page(&dst->page, order);
+done:
 	cc->nr_freepages -= 1 << order;
 	return page_rmappable_folio(&dst->page);
 }
--
2.42.0