Message-ID: <e460ab6f-3ebe-441f-8f55-ce8d50bf078f@arm.com>
Date:   Wed, 22 Nov 2023 10:26:29 +0000
From:   Ryan Roberts <ryan.roberts@....com>
To:     Zi Yan <ziy@...dia.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     "Huang, Ying" <ying.huang@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        David Hildenbrand <david@...hat.com>,
        "Yin, Fengwei" <fengwei.yin@...el.com>,
        Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Kemeng Shi <shikemeng@...weicloud.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Rohan Puri <rohan.puri15@...il.com>,
        Mcgrof Chamberlain <mcgrof@...nel.org>,
        Adam Manzanares <a.manzanares@...sung.com>,
        "Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: Re: [PATCH v1 3/4] mm/compaction: optimize >0 order folio compaction
 with free page split.

On 13/11/2023 17:01, Zi Yan wrote:
> From: Zi Yan <ziy@...dia.com>
> 
> During migration in memory compaction, free pages are placed in an array
> of page lists based on their order. But the desired free page order (i.e.,
> the order of a source page) might not always be present, thus leading to
> migration failures. Split a high order free page when the source migration
> page has a lower order to increase the migration success rate.
> 
> Note: merging free pages when a migration fails and a lower order free
> page is returned via compaction_free() is possible, but it would be too
> much work. Since the free pages are not buddy pages, it is hard to
> identify them using the existing PFN-based page merging algorithm.
> 
> Signed-off-by: Zi Yan <ziy@...dia.com>
> ---
>  mm/compaction.c | 40 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ec6b5cc7e907..9c083e6b399a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1806,9 +1806,46 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  	int order = folio_order(src);
> +	bool has_isolated_pages = false;
>  
> +again:
>  	if (!cc->freepages[order].nr_pages) {
> -		isolate_freepages(cc);
> +		int i;
> +
> +		for (i = order + 1; i <= MAX_ORDER; i++) {
> +			if (cc->freepages[i].nr_pages) {
> +				struct page *freepage =
> +					list_first_entry(&cc->freepages[i].pages,
> +							 struct page, lru);
> +
> +				int start_order = i;
> +				unsigned long size = 1 << start_order;
> +
> +				list_del(&freepage->lru);
> +				cc->freepages[i].nr_pages--;
> +
> +				while (start_order > order) {
> +					start_order--;
> +					size >>= 1;
> +
> +					list_add(&freepage[size].lru,
> +						&cc->freepages[start_order].pages);
> +					cc->freepages[start_order].nr_pages++;
> +					set_page_private(&freepage[size], start_order);
> +				}
> +				post_alloc_hook(freepage, order, __GFP_MOVABLE);
> +				if (order)
> +					prep_compound_page(freepage, order);
> +				dst = page_folio(freepage);
> +				goto done;

Perhaps just do:

dst = (struct folio *)freepage;
goto done;

then move the done: label up a couple of statements, so that post_alloc_hook()
and prep_compound_page() are always done in the common path below? Although
perhaps the cast is frowned upon, you're already assuming that page and folio
are interchangeable, given the way you call list_first_entry().
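
To make the suggestion concrete, roughly the shape I have in mind (completely
untested, and keeping your page_folio() call rather than the bare cast, which
sidesteps that question; the existing code that pulls dst from
cc->freepages[order] between your two hunks stays as it is):

static struct folio *compaction_alloc(struct folio *src, unsigned long data)
{
	struct compact_control *cc = (struct compact_control *)data;
	struct folio *dst;
	int order = folio_order(src);
	bool has_isolated_pages = false;

again:
	if (!cc->freepages[order].nr_pages) {
		int i;

		for (i = order + 1; i <= MAX_ORDER; i++) {
			if (cc->freepages[i].nr_pages) {
				struct page *freepage =
					list_first_entry(&cc->freepages[i].pages,
							 struct page, lru);
				int start_order = i;
				unsigned long size = 1 << start_order;

				list_del(&freepage->lru);
				cc->freepages[i].nr_pages--;

				/* Requeue the unused tail halves on the lower-order lists. */
				while (start_order > order) {
					start_order--;
					size >>= 1;
					list_add(&freepage[size].lru,
						 &cc->freepages[start_order].pages);
					cc->freepages[start_order].nr_pages++;
					set_page_private(&freepage[size], start_order);
				}

				dst = page_folio(freepage);
				goto done;
			}
		}
		if (!has_isolated_pages) {
			isolate_freepages(cc);
			has_isolated_pages = true;
			goto again;
		}
		if (!cc->freepages[order].nr_pages)
			return NULL;
	}

	/* ... existing code that pulls dst from cc->freepages[order] ... */

done:
	/* Both paths now share the same post-allocation prep. */
	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
	if (order)
		prep_compound_page(&dst->page, order);
	cc->nr_freepages -= 1 << order;
	return page_rmappable_folio(&dst->page);
}

That keeps the prep calls and the page-to-folio conversion in exactly one place.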

> +			}
> +		}
> +		if (!has_isolated_pages) {
> +			isolate_freepages(cc);
> +			has_isolated_pages = true;
> +			goto again;
> +		}
> +
>  		if (!cc->freepages[order].nr_pages)
>  			return NULL;
>  	}
> @@ -1819,6 +1856,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>  	if (order)
>  		prep_compound_page(&dst->page, order);
> +done:
>  	cc->nr_freepages -= 1 << order;
>  	return page_rmappable_folio(&dst->page);
>  }
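
While I was at it, I wrote a tiny user-space model of the split bookkeeping to
convince myself that &freepage[size] always points at the head of each tail
half being requeued; sharing it in case it's useful (the struct and the numbers
are obviously made up, it just mirrors the loop's index arithmetic):

/*
 * Toy user-space model of the split loop above -- not kernel code. It only
 * mirrors the index/order bookkeeping to check where the tail halves land.
 */
#include <stdio.h>

#define MAX_ORDER	10

struct fake_page {
	int private_order;	/* stands in for set_page_private() */
};

static struct fake_page block[1 << MAX_ORDER];	/* one isolated high-order block */
static int nr_free[MAX_ORDER + 1];	/* stands in for cc->freepages[i].nr_pages */

/* Split an order-'from' free block down to 'order', requeueing the tails. */
static void split_free_block(struct fake_page *freepage, int from, int order)
{
	int start_order = from;
	unsigned long size = 1UL << start_order;

	nr_free[from]--;	/* the whole block leaves its original list */

	while (start_order > order) {
		start_order--;
		size >>= 1;

		/* The upper half becomes a free block of order 'start_order'. */
		freepage[size].private_order = start_order;
		nr_free[start_order]++;
		printf("requeued offset %lu as an order-%d free block\n",
		       size, start_order);
	}
	/* freepage[0 .. (1 << order) - 1] is what gets handed out as dst. */
}

int main(void)
{
	nr_free[4] = 1;		/* pretend one order-4 block was isolated */
	split_free_block(block, 4, 1);

	for (int i = 0; i <= 4; i++)
		printf("order %d: %d free block(s)\n", i, nr_free[i]);
	return 0;
}

With from=4 and order=1 it requeues the halves at offsets 8, 4 and 2, which is
the buddy-style split I'd expect, so the arithmetic looks right to me.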
