Message-ID: <416d1450-6480-4113-b778-689a8f1d4e42@redhat.com>
Date: Tue, 20 Feb 2024 10:03:13 +0100
From: David Hildenbrand <david@...hat.com>
To: Zi Yan <ziy@...dia.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: "Huang, Ying" <ying.huang@...el.com>, Ryan Roberts
<ryan.roberts@....com>, Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
"Yin, Fengwei" <fengwei.yin@...el.com>, Yu Zhao <yuzhao@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>, Rohan Puri
<rohan.puri15@...il.com>, Luis Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
"Vishal Moola (Oracle)" <vishal.moola@...il.com>
Subject: Re: [PATCH v6 2/4] mm/compaction: enable compacting >0 order folios.
On 16.02.24 18:04, Zi Yan wrote:
> From: Zi Yan <ziy@...dia.com>
>
> migrate_pages() supports >0 order folio migration and during compaction,
> even if compaction_alloc() cannot provide >0 order free pages,
> migrate_pages() can split the source page and try to migrate the base
> pages from the split. It can be a baseline and start point for adding
> support for compacting >0 order folios.
>
> Signed-off-by: Zi Yan <ziy@...dia.com>
> Suggested-by: Huang Ying <ying.huang@...el.com>
> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> Tested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Tested-by: Yu Zhao <yuzhao@...gle.com>
> Cc: Adam Manzanares <a.manzanares@...sung.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Kemeng Shi <shikemeng@...weicloud.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Cc: Luis Chamberlain <mcgrof@...nel.org>
> Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Cc: Ryan Roberts <ryan.roberts@....com>
> Cc: Vishal Moola (Oracle) <vishal.moola@...il.com>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Yin Fengwei <fengwei.yin@...el.com>
> ---
> mm/compaction.c | 66 ++++++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 52 insertions(+), 14 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index cc801ce099b4..aa6aad805c4d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
> return too_many;
> }
>
> +/*
Can't you add these comments to the respective checks? Like
static bool skip_isolation_on_order(int order, int target_order)
{
	/*
	 * Unless we are performing global compaction (target_order <
	 * 0), skip any folios that are larger than the target order: we
	 * wouldn't be here if we'd have a free folio with the desired
	 * target_order, so migrating this folio would likely fail
	 * later.
	 */
	if (target_order != -1 && order >= target_order)
		return true;
	/*
	 * We limit memory compaction to pageblocks and won't try
	 * creating free blocks of memory that are larger than that.
	 */
	return order >= pageblock_order;
}
Then, add simple, expressive function documentation (if really
required) that doesn't contain all these details.
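E.g., just a sketch, exact wording up to you:

/*
 * Determine whether we want to skip isolating a folio of the given
 * order, depending on the order compaction is targeting.
 */
static bool skip_isolation_on_order(int order, int target_order)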
> + * 1. if the page order is larger than or equal to target_order (i.e.,
> + * cc->order and when it is not -1 for global compaction), skip it since
> + * target_order already indicates no free page with larger than target_order
> + * exists and later migrating it will most likely fail;
> + *
> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
I'm pretty sure you meant "reduce" ?
> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> + return (target_order != -1 && order >= target_order) ||
> + order >= pageblock_order;
> +}
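Just to illustrate my reading of the logic (assuming pageblock_order ==
9, as on x86-64):

	skip_isolation_on_order(4, 3)  -> true  (order >= target_order)
	skip_isolation_on_order(2, 3)  -> false (worth isolating)
	skip_isolation_on_order(9, -1) -> true  (order >= pageblock_order,
						 even for global compaction)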
> +
> /**
> * isolate_migratepages_block() - isolate all migrate-able pages within
> * a single pageblock
> @@ -947,7 +962,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> valid_page = page;
> }
>
> - if (PageHuge(page) && cc->alloc_contig) {
> + if (PageHuge(page)) {
> + /*
> + * skip hugetlbfs if we are not compacting for pages
> + * bigger than its order. THPs and other compound pages
> + * are handled below.
> + */
> + if (!cc->alloc_contig) {
> + const unsigned int order = compound_order(page);
> +
> + if (order <= MAX_PAGE_ORDER) {
> + low_pfn += (1UL << order) - 1;
> + nr_scanned += (1UL << order) - 1;
> + }
> + goto isolate_fail;
> + }
> + /* for alloc_contig case */
> if (locked) {
> unlock_page_lruvec_irqrestore(locked, flags);
> locked = NULL;
> @@ -1008,21 +1038,24 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> }
>
> /*
> - * Regardless of being on LRU, compound pages such as THP and
> - * hugetlbfs are not to be compacted unless we are attempting
> - * an allocation much larger than the huge page size (eg CMA).
> - * We can potentially save a lot of iterations if we skip them
> - * at once. The check is racy, but we can consider only valid
> - * values and the only danger is skipping too much.
> + * Regardless of being on LRU, compound pages such as THP
> + * (hugetlbfs is handled above) are not to be compacted unless
> + * we are attempting an allocation larger than the compound
> + * page size. We can potentially save a lot of iterations if we
> + * skip them at once. The check is racy, but we can consider
> + * only valid values and the only danger is skipping too much.
> */
> if (PageCompound(page) && !cc->alloc_contig) {
> const unsigned int order = compound_order(page);
>
> - if (likely(order <= MAX_PAGE_ORDER)) {
> - low_pfn += (1UL << order) - 1;
> - nr_scanned += (1UL << order) - 1;
> + /* Skip based on page order and compaction target order. */
> + if (skip_isolation_on_order(order, cc->order)) {
> + if (order <= MAX_PAGE_ORDER) {
> + low_pfn += (1UL << order) - 1;
> + nr_scanned += (1UL << order) - 1;
> + }
> + goto isolate_fail;
> }
> - goto isolate_fail;
> }
>
> /*
> @@ -1165,10 +1198,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> }
>
> /*
> - * folio become large since the non-locked check,
> - * and it's on LRU.
> + * Check LRU folio order under the lock
> */
> - if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
> + if (unlikely(skip_isolation_on_order(folio_order(folio),
> + cc->order) &&
> + !cc->alloc_contig)) {
> low_pfn += folio_nr_pages(folio) - 1;
> nr_scanned += folio_nr_pages(folio) - 1;
> folio_set_lru(folio);
> @@ -1788,6 +1822,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
> struct compact_control *cc = (struct compact_control *)data;
> struct folio *dst;
>
> + /* this makes migrate_pages() split the source page and retry */
> + if (folio_test_large(src) > 0)
> + return NULL;
Why the "> 0 " check ? Either it's large or it isn't.
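IOW, a plain

	if (folio_test_large(src))
		return NULL;

should do, as folio_test_large() returns a bool.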
Apart from that LGTM, but I am no compaction expert.
--
Cheers,
David / dhildenb