Message-ID: <8473ed80-8a1b-45f6-9950-29e6d960f2b3@intel.com>
Date: Tue, 18 Nov 2025 18:31:35 +0800
From: "Li, Tianyou" <tianyou.li@...el.com>
To: Mike Rapoport <rppt@...nel.org>
CC: David Hildenbrand <david@...hat.com>, Oscar Salvador <osalvador@...e.de>,
<linux-mm@...ck.org>, Yong Hu <yong.hu@...el.com>, Nanhai Zou
<nanhai.zou@...el.com>, Yuan Liu <yuan1.liu@...el.com>, Tim Chen
<tim.c.chen@...ux.intel.com>, Qiuxu Zhuo <qiuxu.zhuo@...el.com>, Yu C Chen
<yu.c.chen@...el.com>, Pan Deng <pan.deng@...el.com>, Chen Zhang
<zhangchen.kidd@...com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/memory hotplug/unplug: Optimize zone->contiguous
update when move pfn range
On 11/18/2025 5:35 PM, Li, Tianyou wrote:
> Thanks for your comments, Mike. Appreciated.
>
>
> On 11/18/2025 1:13 PM, Mike Rapoport wrote:
>> On Mon, Nov 17, 2025 at 11:30:52AM +0800, Tianyou Li wrote:
>>> When move_pfn_range_to_zone() is invoked, it updates zone->contiguous
>>> by checking the new zone's pfn range from beginning to end, regardless
>>> of the previous state of the old zone. When the zone's pfn range is
>>> large, the cost of traversing the pfn range to update zone->contiguous
>>> can be significant.
>>>
>>> Add fast paths to quickly detect cases where the zone is definitely not
>>> contiguous, without scanning the new zone. The cases are: if the new
>>> range does not overlap with the previous range, contiguous must be
>>> false; if the new range is adjacent to the previous range, only the new
>>> range needs to be checked; if the newly added pages cannot fill the
>>> holes of the previous zone, contiguous must be false.
>>>
>>> The following memory hotplug test cases for a VM [1], run in the
>>> environment [2], show that this optimization can significantly reduce
>>> the memory hotplug time [3].
>>>
>>> +----------------+------+---------------+--------------+----------------+
>>> |                | Size | Time (before) | Time (after) | Time Reduction |
>>> |                +------+---------------+--------------+----------------+
>>> | Memory Hotplug | 256G | 10s           | 3s           | 70%            |
>>> |                +------+---------------+--------------+----------------+
>>> |                | 512G | 33s           | 8s           | 76%            |
>>> +----------------+------+---------------+--------------+----------------+
>>>
>>> [1] QEMU commands to hotplug 512G memory for a VM:
>>> object_add memory-backend-ram,id=hotmem0,size=512G,share=on
>>> device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>>> qom-set vmem1 requested-size 512G
>>>
>>> [2] Hardware     : Intel Icelake server
>>>     Guest Kernel : v6.18-rc2
>>>     QEMU         : v9.0.0
>>>
>>> Launch VM :
>>> qemu-system-x86_64 -accel kvm -cpu host \
>>> -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>>> -drive file=./seed.img,format=raw,if=virtio \
>>> -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>>> -m 2G,slots=10,maxmem=2052472M \
-device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>>> -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>>> -nographic -machine q35 \
>>> -nic user,hostfwd=tcp::3000-:22
>>>
>>> Guest kernel auto-onlines newly added memory blocks:
>>> echo online > /sys/devices/system/memory/auto_online_blocks
>>>
>>> [3] The time from typing the QEMU commands in [1] to when the output of
>>>     'grep MemTotal /proc/meminfo' on the guest reflects that all
>>>     hotplugged memory is recognized.
>>>
>>> Reported-by: Nanhai Zou <nanhai.zou@...el.com>
>>> Reported-by: Chen Zhang <zhangchen.kidd@...com>
>>> Tested-by: Yuan Liu <yuan1.liu@...el.com>
>>> Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
>>> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
>>> Reviewed-by: Yu C Chen <yu.c.chen@...el.com>
>>> Reviewed-by: Pan Deng <pan.deng@...el.com>
>>> Reviewed-by: Nanhai Zou <nanhai.zou@...el.com>
>>> Signed-off-by: Tianyou Li <tianyou.li@...el.com>
>>> ---
>>>  mm/internal.h       |  3 +++
>>>  mm/memory_hotplug.c | 48 ++++++++++++++++++++++++++++++++++++++++++++-
>>>  mm/mm_init.c        | 31 ++++++++++++++++++++++-------
>>>  3 files changed, 74 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index 1561fc2ff5b8..734caae6873c 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -734,6 +734,9 @@ void set_zone_contiguous(struct zone *zone);
>>>  bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
>>>  			unsigned long nr_pages);
>>>
>>> +bool check_zone_contiguous(struct zone *zone, unsigned long start_pfn,
>>> +			unsigned long nr_pages);
>>> +
>>>  static inline void clear_zone_contiguous(struct zone *zone)
>>>  {
>>>  	zone->contiguous = false;
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 0be83039c3b5..96c003271b8e 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -723,6 +723,47 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
>>>  }
>>>
>>> +static void __meminit update_zone_contiguous(struct zone *zone,
>>> +		bool old_contiguous, unsigned long old_start_pfn,
>>> +		unsigned long old_nr_pages, unsigned long old_absent_pages,
>>> +		unsigned long new_start_pfn, unsigned long new_nr_pages)
>>> +{
>>> +	unsigned long old_end_pfn = old_start_pfn + old_nr_pages;
>>> +	unsigned long new_end_pfn = new_start_pfn + new_nr_pages;
>>> +	unsigned long new_filled_pages = 0;
>>> +
>>> +	/*
>>> +	 * If the moved pfn range does not intersect with the old zone span,
>>> +	 * the contiguous property is surely false.
>>> +	 */
>>> +	if (new_end_pfn < old_start_pfn || new_start_pfn > old_end_pfn)
>>> +		return;
>>> +
>>> +	/*
>>> +	 * If the moved pfn range is adjacent to the old zone span,
>>> +	 * check the range to the left or to the right.
>>> +	 */
>>> +	if (new_end_pfn == old_start_pfn || new_start_pfn == old_end_pfn) {
>>> +		zone->contiguous = old_contiguous &&
>>> +			check_zone_contiguous(zone, new_start_pfn, new_nr_pages);
>>> +		return;
>> The check for adjacency of the new range to the zone can be moved to the
>> beginning of move_pfn_range_to_zone() and it will already optimize the
>> common case when we hotplug memory to a contiguous zone.
>
>
> Do you mean we can separate the update_zone_contiguous logic into two
> parts, one for the fast path at the beginning of move_pfn_range_to_zone,
> and the other for the slow path after memmap_init_range?
>
Re-thinking your idea, it is doable, considering that the full
check_zone_contiguous scan is not always necessary. We can have a
function check_zone_contiguous_fast, which takes the zone, start_pfn and
nr_pages, and returns a boolean indicating whether the fast path was
taken. That keeps the code changes minimal. Will send the patch v2 soon.
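
For reference, a rough and untested sketch of the idea (the helper name
is tentative; it assumes it runs at the beginning of
move_pfn_range_to_zone(), before the zone span is resized and before
clear_zone_contiguous(), so the old span and zone->contiguous are still
valid; it also approximates the old holes as spanned_pages -
present_pages and ignores the empty-zone corner case):

static bool __meminit check_zone_contiguous_fast(struct zone *zone,
		unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long old_start_pfn = zone->zone_start_pfn;
	unsigned long old_end_pfn = zone_end_pfn(zone);
	unsigned long end_pfn = start_pfn + nr_pages;

	/* The new range does not touch the old span: a hole appears. */
	if (end_pfn < old_start_pfn || start_pfn > old_end_pfn) {
		zone->contiguous = false;
		return true;
	}

	/*
	 * The new range is adjacent to the old span: the old part keeps
	 * its state, only the new range itself needs checking.
	 */
	if (end_pfn == old_start_pfn || start_pfn == old_end_pfn) {
		zone->contiguous = zone->contiguous &&
			check_zone_contiguous(zone, start_pfn, nr_pages);
		return true;
	}

	/* Too few new pages to fill the old holes. */
	if (nr_pages < zone->spanned_pages - zone->present_pages) {
		zone->contiguous = false;
		return true;
	}

	/* Cannot decide cheaply; caller falls back to the full scan. */
	return false;
}

The caller would then only invoke set_zone_contiguous() when this
returns false. One caveat: in the adjacent case the new range's memmap
must be initialized before check_zone_contiguous() scans it, so that
part may still have to run after memmap_init_range().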
>
>>> +	}
>>> +
>>> +	/*
>>> +	 * If the old zone's hole is larger than the newly filled pages,
>>> +	 * the contiguous property is surely false.
>>> +	 */
>>> +	new_filled_pages = new_end_pfn - old_start_pfn;
>>> +	if (new_start_pfn > old_start_pfn)
>>> +		new_filled_pages -= new_start_pfn - old_start_pfn;
>>> +	if (new_end_pfn > old_end_pfn)
>>> +		new_filled_pages -= new_end_pfn - old_end_pfn;
>>> +	if (new_filled_pages < old_absent_pages)
>>> +		return;
>> Let's just check that we don't add enough pages to cover the hole:
>>
>> 	if (nr_new_pages < old_absent_pages)
>> 		return;
>>
>> and if we do, go to the slow path and walk the pageblocks.
>
>
> I'd like to avoid the slow path as much as possible. The
> new_filled_pages check is stricter than 'if (nr_new_pages <
> old_absent_pages)' when the new range overlaps the old span, since only
> the pages that land inside the old span can fill its holes. I am OK to
> simplify it if overlapping ranges cannot happen, or to reduce the
> maintenance effort.
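
To illustrate the difference with made-up numbers: if the old span is
[0, 100) with a 30-page hole and we hotplug [90, 140), then
nr_new_pages = 50 >= 30 and the simplified check falls through to the
slow path, while

	new_filled_pages = min(140, 100) - max(90, 0) = 10

is below 30, so the stricter check rules out contiguity immediately.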
>
>
> Thanks & Regards,
>
> Tianyou
>
>
>>> +
>>> +	set_zone_contiguous(zone);
>>> +}
>>> +