Message-ID: <2ab0ddbb-f9a4-8963-b066-d9a93b5f01b3@huawei.com>
Date: Mon, 8 May 2023 15:27:42 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: "Huang, Ying" <ying.huang@...el.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Mike Rapoport <rppt@...nel.org>, <linux-mm@...ck.org>,
David Hildenbrand <david@...hat.com>,
Oscar Salvador <osalvador@...e.de>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Pavel Machek <pavel@....cz>, Len Brown <len.brown@...el.com>,
Luis Chamberlain <mcgrof@...nel.org>,
Kees Cook <keescook@...omium.org>,
Iurii Zaikin <yzaikin@...gle.com>,
<linux-kernel@...r.kernel.org>, <linux-pm@...r.kernel.org>,
<linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 03/12] mm: page_alloc: move set_zone_contiguous() into
mm_init.c
On 2023/5/8 15:12, Huang, Ying wrote:
> Kefeng Wang <wangkefeng.wang@...wei.com> writes:
>
>> set_zone_contiguous() is only used in mm init/hotplug, and
>> clear_zone_contiguous() is only used in hotplug, so move them from
>> page_alloc.c to a more appropriate file.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
>> ---
>> include/linux/memory_hotplug.h |  3 --
>> mm/internal.h                  |  7 +++
>> mm/mm_init.c                   | 74 +++++++++++++++++++++++++++++++
>> mm/page_alloc.c                | 79 ----------------------------------
>> 4 files changed, 81 insertions(+), 82 deletions(-)
>>
...
>>
>> +/*
>> + * Check that the whole (or subset of) a pageblock given by the interval of
>> + * [start_pfn, end_pfn) is valid and within the same zone, before scanning it
>> + * with the migration or free compaction scanner.
>> + *
>> + * Return struct page pointer of start_pfn, or NULL if checks were not passed.
>> + *
>> + * It's possible on some configurations to have a setup like node0 node1 node0
>> + * i.e. it's possible that not all pages within a zone's pfn range belong
>> + * to a single zone. We assume that a border between node0 and node1
>> + * can occur within a single pageblock, but not a node0 node1 node0
>> + * interleaving within a single pageblock. It is therefore sufficient to check
>> + * the first and last page of a pageblock and avoid checking each individual
>> + * page in a pageblock.
>> + *
>> + * Note: the function may return non-NULL struct page even for a page block
>> + * which contains a memory hole (i.e. there is no physical memory for a subset
>> + * of the pfn range). For example, if the pageblock order is MAX_ORDER, the
>> + * pageblock may span 2 sub-sections, and its end pfn may fall in a hole even
>> + * though the start pfn is online and valid. This should be safe most of
>> + * the time because struct pages are still initialized via init_unavailable_range()
>> + * and pfn walkers shouldn't touch any physical memory range for which they do
>> + * not recognize any specific metadata in struct pages.
>> + */
>> +struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
>> + unsigned long end_pfn, struct zone *zone)
>
> __pageblock_pfn_to_page() is also called by compaction code too (e.g.,
> isolate_freepages_range() -> pageblock_pfn_to_page() ->
> __pageblock_pfn_to_page()).
>
> So, it is used not only by initialization and hotplug?
>
I should drop the move of this function, thanks for the reminder.
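
For anyone following along: compaction reaches it through the
pageblock_pfn_to_page() wrapper, which only falls back to the slow
per-pageblock check when the zone is not known to be contiguous.
Roughly (a simplified sketch written from memory, not the exact
mainline body):

static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
		unsigned long end_pfn, struct zone *zone)
{
	/*
	 * zone->contiguous is what set_zone_contiguous() precomputes by
	 * running the pageblock check over the whole zone; when it is
	 * true, the per-pageblock check can be skipped entirely.
	 */
	if (zone->contiguous)
		return pfn_to_page(start_pfn);

	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
}

So the slow path is used at runtime by compaction as well, not only
during init/hotplug, which is why moving it into mm_init.c is not a
good fit.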
> Best Regards,
> Huang, Ying
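
P.S. For reference, the check that the quoted comment describes amounts
to validating the first and the last pfn of the range and making sure
both pages sit in the expected zone. A simplified sketch (from memory,
not the exact body in mm/page_alloc.c):

struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
		unsigned long end_pfn, struct zone *zone)
{
	struct page *start_page, *end_page;

	/* end_pfn is exclusive; check the last pfn actually in the range */
	end_pfn--;

	if (!pfn_valid(start_pfn) || !pfn_valid(end_pfn))
		return NULL;

	start_page = pfn_to_online_page(start_pfn);
	if (!start_page)
		return NULL;

	if (page_zone(start_page) != zone)
		return NULL;

	end_page = pfn_to_page(end_pfn);

	/* comparing zone ids is cheaper than deriving page_zone(end_page) */
	if (page_zone_id(start_page) != page_zone_id(end_page))
		return NULL;

	return start_page;
}

Only the first and last page are looked at, which is sufficient under
the "no node0-node1-node0 interleaving inside one pageblock" assumption
stated in the comment.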