Message-ID: <258932e0-a2a6-7f17-014c-05676bfad456@suse.cz>
Date: Wed, 15 Mar 2023 16:54:31 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>,
akpm@...ux-foundation.org
Cc: mgorman@...hsingularity.net, osalvador@...e.de,
william.lam@...edance.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: compaction: consider the number of scanning
compound pages in isolate fail path
On 3/13/23 11:37, Baolin Wang wrote:
> The commit b717d6b93b54 ("mm: compaction: include compound page count
> for scanning in pageblock isolation") added compound page statistics
> for scanning in pageblock isolation, to make sure the number of scanned
> pages is always larger than the number of isolated pages when isolating
> a migratable or free pageblock.
>
> However, when isolation fails while scanning a migratable or free
> pageblock, the failure path does not account for the scanned compound
> pages. This can report an incorrect number of scanned pages in
> tracepoints or vmstats, misleading people about the page scanning
> pressure in memory compaction.
>
> Thus we should account for the number of scanned pages when we fail to
> isolate compound pages, to make the statistics accurate.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/compaction.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 5a9501e0ae01..c9d9ad958e2a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -587,6 +587,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> blockpfn += (1UL << order) - 1;
> cursor += (1UL << order) - 1;
> }
> + nr_scanned += (1UL << order) - 1;
I'd rather put it in the block above that tests order < MAX_ORDER.
Otherwise, as the comments there say, the value can be bogus since it's
read racily.
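
I.e. roughly something like this (just a sketch against the current code,
not tested):

```
 		if (likely(order < MAX_ORDER)) {
 			blockpfn += (1UL << order) - 1;
 			cursor += (1UL << order) - 1;
+			nr_scanned += (1UL << order) - 1;
 		}
 		goto isolate_fail;
```

That way nr_scanned only grows by an order we have sanity checked.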
> goto isolate_fail;
> }
>
> @@ -873,9 +874,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> cond_resched();
> }
>
> - nr_scanned++;
> -
> page = pfn_to_page(low_pfn);
> + nr_scanned += compound_nr(page);
For the same reason, I'd rather leave the nr_scanned adjustment by order
in the specific code blocks where we know/think we have a compound or huge
page and have sanity-checked the order/nr_pages, and not add an unchecked
compound_nr() here.
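
I.e. keep the existing per-page increment here and do the compound
adjustment only in the already-checked blocks, as the hunk below does
(untested sketch):

```
 	nr_scanned++;
 
 	page = pfn_to_page(low_pfn);
```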
Thanks.
>
> /*
> * Check if the pageblock has already been marked skipped.
> @@ -1077,6 +1077,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> */
> if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
> low_pfn += compound_nr(page) - 1;
> + nr_scanned += compound_nr(page) - 1;
> SetPageLRU(page);
> goto isolate_fail_put;
> }
> @@ -1097,7 +1098,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> isolate_success_no_list:
> cc->nr_migratepages += compound_nr(page);
> nr_isolated += compound_nr(page);
> - nr_scanned += compound_nr(page) - 1;
>
> /*
> * Avoid isolating too much unless this block is being