Message-ID: <84dfedc4-a0a2-4e02-9be4-2cffc6e9fd06@suse.cz>
Date: Fri, 9 Feb 2024 19:43:23 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Zi Yan <ziy@...dia.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: "Huang, Ying" <ying.huang@...el.com>, Ryan Roberts
<ryan.roberts@....com>, Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>, "Yin, Fengwei"
<fengwei.yin@...el.com>, Yu Zhao <yuzhao@...gle.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>, Rohan Puri
<rohan.puri15@...il.com>, Mcgrof Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: Re: [PATCH v3 3/3] mm/compaction: optimize >0 order folio compaction
with free page split.
On 2/2/24 17:15, Zi Yan wrote:
> From: Zi Yan <ziy@...dia.com>
>
> During migration in memory compaction, free pages are placed in an array
> of page lists based on their order. But the desired free page order (i.e.,
> the order of a source page) might not always be present, thus leading to
> migration failures and premature compaction termination. Split a higher
> order free page when the source migration page has a lower order to
> increase the migration success rate.
>
> Note: merging free pages when a migration fails and a lower order free
> page is returned via compaction_free() is possible, but it would be too
> much work. Since the free pages are not buddy pages, it is hard to
> identify them using the existing PFN-based page merging algorithm.
>
> Signed-off-by: Zi Yan <ziy@...dia.com>
> ---
> mm/compaction.c | 37 ++++++++++++++++++++++++++++++++++++-
> 1 file changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 58a4e3fb72ec..fa9993c8a389 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1832,9 +1832,43 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
> struct compact_control *cc = (struct compact_control *)data;
> struct folio *dst;
> int order = folio_order(src);
> + bool has_isolated_pages = false;
>
> +again:
> if (!cc->freepages[order].nr_pages) {
> - isolate_freepages(cc);
> + int i;
> +
> + for (i = order + 1; i < NR_PAGE_ORDERS; i++) {
You could probably just start with a loop that finds the start_order (and do
the isolate_freepages() attempt if there's none) and then handle the rest
outside of the loop. No need to separately handle the case where you have
the exact order available?
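IOW something like this (completely untested sketch, reusing the freepages[]
array, field names and split logic from your patch):

static struct folio *compaction_alloc(struct folio *src, unsigned long data)
{
	struct compact_control *cc = (struct compact_control *)data;
	struct folio *dst;
	int order = folio_order(src);
	bool has_isolated_pages = false;
	struct page *freepage;
	unsigned long size;
	int start_order;

again:
	/* Find the smallest available order >= the one we need. */
	for (start_order = order; start_order < NR_PAGE_ORDERS; start_order++)
		if (cc->freepages[start_order].nr_pages)
			break;

	/* Nothing usable: try to isolate more free pages once, then give up. */
	if (start_order == NR_PAGE_ORDERS) {
		if (has_isolated_pages)
			return NULL;
		isolate_freepages(cc);
		has_isolated_pages = true;
		goto again;
	}

	freepage = list_first_entry(&cc->freepages[start_order].pages,
				    struct page, lru);
	size = 1 << start_order;
	list_del(&freepage->lru);
	cc->freepages[start_order].nr_pages--;

	/* Split down to the requested order; no-op when the order matches. */
	while (start_order > order) {
		start_order--;
		size >>= 1;
		list_add(&freepage[size].lru,
			 &cc->freepages[start_order].pages);
		cc->freepages[start_order].nr_pages++;
		set_page_private(&freepage[size], start_order);
	}

	dst = (struct folio *)freepage;

	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
	if (order)
		prep_compound_page(&dst->page, order);

	/* ... rest of the function as before ... */
}

The exact-order case then just falls through with the split loop doing
nothing.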
> + if (cc->freepages[i].nr_pages) {
> + struct page *freepage =
> + list_first_entry(&cc->freepages[i].pages,
> + struct page, lru);
> +
> + int start_order = i;
> + unsigned long size = 1 << start_order;
> +
> + list_del(&freepage->lru);
> + cc->freepages[i].nr_pages--;
> +
> + while (start_order > order) {
With the exact order available, this while loop will just be skipped, and
that's the only difference?
> + start_order--;
> + size >>= 1;
> +
> + list_add(&freepage[size].lru,
> + &cc->freepages[start_order].pages);
> + cc->freepages[start_order].nr_pages++;
> + set_page_private(&freepage[size], start_order);
> + }
> + dst = (struct folio *)freepage;
> + goto done;
> + }
> + }
> + if (!has_isolated_pages) {
> + isolate_freepages(cc);
> + has_isolated_pages = true;
> + goto again;
> + }
> +
> if (!cc->freepages[order].nr_pages)
> return NULL;
> }
> @@ -1842,6 +1876,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
> dst = list_first_entry(&cc->freepages[order].pages, struct folio, lru);
> cc->freepages[order].nr_pages--;
> list_del(&dst->lru);
> +done:
> post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
> if (order)
> prep_compound_page(&dst->page, order);