Message-ID: <33e6c72e-5479-e503-99ec-5025fafd46f6@suse.cz>
Date: Mon, 29 Mar 2021 17:31:09 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Liu Shixin <liushixin2@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Michal Hocko <mhocko@...e.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH -next] mm, page_alloc: avoid page_to_pfn() in
move_freepages()
On 3/23/21 2:12 PM, Liu Shixin wrote:
> From: Kefeng Wang <wangkefeng.wang@...wei.com>
>
> The start_pfn and end_pfn are already available in move_freepages_block(),
> so there is no need to go back and forth between page and pfn in
> move_freepages() and move_freepages_block(). Also, pfn_valid_within()
> should validate the pfn before the page is touched.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
> Signed-off-by: Liu Shixin <liushixin2@...wei.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Agreed with Matthew's suggestion. Also:
> @@ -2468,25 +2469,22 @@ static int move_freepages(struct zone *zone,
> int move_freepages_block(struct zone *zone, struct page *page,
> int migratetype, int *num_movable)
> {
> - unsigned long start_pfn, end_pfn;
> - struct page *start_page, *end_page;
> + unsigned long start_pfn, end_pfn, pfn;
>
> if (num_movable)
> *num_movable = 0;
>
> - start_pfn = page_to_pfn(page);
> - start_pfn = start_pfn & ~(pageblock_nr_pages-1);
> - start_page = pfn_to_page(start_pfn);
> - end_page = start_page + pageblock_nr_pages - 1;
> + pfn = page_to_pfn(page);
> + start_pfn = pfn & ~(pageblock_nr_pages - 1);
Since you're touching this already, consider using pageblock_start_pfn().
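That would read as follows (just a sketch, not compile-tested; IIRC
pageblock_start_pfn() is currently defined locally in mm/compaction.c, so it
would first need to move to a shared header):

	start_pfn = pageblock_start_pfn(pfn);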
> end_pfn = start_pfn + pageblock_nr_pages - 1;
I'd also drop the "- 1" here and instead adjust the for loop's ending condition
from "pfn <= end_pfn" to "pfn < end_pfn", as that's the more common way of doing it.
Thanks.
>
> /* Do not cross zone boundaries */
> if (!zone_spans_pfn(zone, start_pfn))
> - start_page = page;
> + start_pfn = pfn;
> if (!zone_spans_pfn(zone, end_pfn))
> return 0;
>
> - return move_freepages(zone, start_page, end_page, migratetype,
> + return move_freepages(zone, start_pfn, end_pfn, migratetype,
> num_movable);
> }
>
>