Message-ID: <ecb315f9-a5cd-4fb3-bae6-eb94a08eccb3@linux.alibaba.com>
Date: Tue, 15 Aug 2023 16:38:43 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Kemeng Shi <shikemeng@...weicloud.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
mgorman@...hsingularity.net, david@...hat.com
Subject: Re: [PATCH 4/9] mm/compaction: simplify pfn iteration in
isolate_freepages_range
On 8/5/2023 7:07 PM, Kemeng Shi wrote:
> We call isolate_freepages_block in strict mode, so all continuous pages
> in the pageblock are isolated if isolate_freepages_block succeeds. Then
> pfn + isolated points to the start of the next pageblock to scan, no
> matter how many pageblocks were isolated in isolate_freepages_block.
> Use pfn + isolated as the start of the next pageblock to scan to
> simplify the iteration.

IIUC, isolate_freepages_block() can isolate high-order free pages, which
means pfn + isolated can be larger than block_end_pfn. So with your
patch, 'block_start_pfn' and 'block_end_pfn' can end up in different
pageblocks, which will break pageblock_pfn_to_page().
>
> Signed-off-by: Kemeng Shi <shikemeng@...weicloud.com>
> ---
> mm/compaction.c | 14 ++------------
> 1 file changed, 2 insertions(+), 12 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 684f6e6cd8bc..8d7d38073d30 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -733,21 +733,11 @@ isolate_freepages_range(struct compact_control *cc,
> block_end_pfn = pageblock_end_pfn(pfn);
>
> for (; pfn < end_pfn; pfn += isolated,
> - block_start_pfn = block_end_pfn,
> - block_end_pfn += pageblock_nr_pages) {
> + block_start_pfn = pfn,
> + block_end_pfn = pfn + pageblock_nr_pages) {
> /* Protect pfn from changing by isolate_freepages_block */
> unsigned long isolate_start_pfn = pfn;
>
> - /*
> - * pfn could pass the block_end_pfn if isolated freepage
> - * is more than pageblock order. In this case, we adjust
> - * scanning range to right one.
> - */
> - if (pfn >= block_end_pfn) {
> - block_start_pfn = pageblock_start_pfn(pfn);
> - block_end_pfn = pageblock_end_pfn(pfn);
> - }
> -
> block_end_pfn = min(block_end_pfn, end_pfn);
>
> if (!pageblock_pfn_to_page(block_start_pfn,