Message-ID: <79801dc3-234f-4de1-bd26-56d3cd7b731c@intel.com>
Date: Tue, 18 Nov 2025 17:07:09 +0800
From: "Li, Tianyou" <tianyou.li@...el.com>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>, Oscar Salvador
	<osalvador@...e.de>
CC: <linux-mm@...ck.org>, Yong Hu <yong.hu@...el.com>, Nanhai Zou
	<nanhai.zou@...el.com>, Yuan Liu <yuan1.liu@...el.com>, Tim Chen
	<tim.c.chen@...ux.intel.com>, Qiuxu Zhuo <qiuxu.zhuo@...el.com>, Yu C Chen
	<yu.c.chen@...el.com>, Pan Deng <pan.deng@...el.com>, Chen Zhang
	<zhangchen.kidd@...com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/memory hotplug/unplug: Optimize zone->contiguous
 update when move pfn range

Thank you very much for your timely review and insightful comments, David.


On 11/17/2025 7:57 PM, David Hildenbrand (Red Hat) wrote:
> On 17.11.25 04:30, Tianyou Li wrote:
>
> Sorry for the late review!
>
>> When move_pfn_range_to_zone() is invoked, it updates zone->contiguous
>> by checking the new zone's pfn range from beginning to end, regardless
>> of the previous state of the old zone. When the zone's pfn range is
>> large, the cost of traversing that range to update zone->contiguous
>> can be significant.
>
> Right, unfortunately we have to iterate pageblocks.
>
> We know that hotplugged sections always belong to the same zone, so we
> could optimize for them as well. Only early sections have to walk
> pageblocks.
>
>     if (early_section(__pfn_to_section(pfn)))
>
> We could also walk memory blocks I guess (for_each_memory_block). If 
> mem->zone != NULL, we know the whole block spans a single zone.
>
>
> Memory blocks are as small as 128 MiB on x86-64, with pageblocks being 
> 2 MiB we would walk 64 pageblocks.
>
> (I think we can also walk MAX_PAGE_ORDER chunks instead of pageblock 
> chunks)
>

This actually points to another optimization opportunity that reduces
the memory accesses even further and gets better performance beyond this
patch. This patch avoids the contiguity scan whenever the result can be
deduced from the given conditions. I must confess I did not know that we
can walk memory blocks in some situations. Allow me to think the idea
through and create another patch to optimize the slow path. In the
meantime, would you mind reviewing patch v2 for the fast-path cases
separately?
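
To make sure I understand the direction, here is a rough sketch of the
memory-block walk (illustrative only; the callback name and the use of
walk_memory_blocks() over the zone span are my assumptions, and the
early-section pageblock fallback is left out):

static int zone_contig_block_cb(struct memory_block *mem, void *arg)
{
	struct zone *zone = arg;

	/* mem->zone set: the whole block spans a single zone already. */
	if (mem->zone)
		return mem->zone == zone ? 0 : 1;

	/* Early sections: would still have to walk pageblocks here. */
	return 0;
}

/* e.g. walk_memory_blocks(PFN_PHYS(zone->zone_start_pfn),
 *                         PFN_PHYS(zone->spanned_pages), zone,
 *                         zone_contig_block_cb);
 * a non-zero return from the callback stops the walk early.
 */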


>
>>
>> Add fast paths to quickly detect cases where the zone is definitely
>> not contiguous, without scanning the new zone. The cases are: if the
>> new range does not overlap the previous range, contiguous must be
>> false; if the new range is adjacent to the previous range, only the
>> new range needs to be checked; if the newly added pages cannot fill
>> the hole in the previous zone, contiguous must be false.
>>
>> The following test cases of memory hotplug for a VM [1], tested in the
>> environment [2], show that this optimization can significantly reduce
>> the memory hotplug time [3].
>>
>> +----------------+------+---------------+--------------+----------------+
>> |                | Size | Time (before) | Time (after) | Time Reduction |
>> |                +------+---------------+--------------+----------------+
>> | Memory Hotplug | 256G |      10s      |      3s      |      70%       |
>> |                +------+---------------+--------------+----------------+
>> |                | 512G |      33s      |      8s      |      76%       |
>> +----------------+------+---------------+--------------+----------------+
>
> Did not expect that to be the most expensive part, nice!
>

Thanks.


>>
>> [1] Qemu commands to hotplug 512G memory for a VM:
>>      object_add memory-backend-ram,id=hotmem0,size=512G,share=on
>>      device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>>      qom-set vmem1 requested-size 512G
>>
>> [2] Hardware     : Intel Icelake server
>>      Guest Kernel : v6.18-rc2
>>      Qemu         : v9.0.0
>>
>>      Launch VM    :
>>      qemu-system-x86_64 -accel kvm -cpu host \
>>      -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>>      -drive file=./seed.img,format=raw,if=virtio \
>>      -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>>      -m 2G,slots=10,maxmem=2052472M \
>>      -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>>      -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>>      -nographic -machine q35 \
>>      -nic user,hostfwd=tcp::3000-:22
>>
>>      Guest kernel auto-onlines newly added memory blocks:
>>      echo online > /sys/devices/system/memory/auto_online_blocks
>>
>> [3] The time from typing the QEMU commands in [1] to when the output of
>>      'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
>>      memory is recognized.
>>
>> Reported-by: Nanhai Zou <nanhai.zou@...el.com>
>> Reported-by: Chen Zhang <zhangchen.kidd@...com>
>> Tested-by: Yuan Liu <yuan1.liu@...el.com>
>> Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
>> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
>> Reviewed-by: Yu C Chen <yu.c.chen@...el.com>
>> Reviewed-by: Pan Deng <pan.deng@...el.com>
>> Reviewed-by: Nanhai Zou <nanhai.zou@...el.com>
>> Signed-off-by: Tianyou Li <tianyou.li@...el.com>
>> ---
>>   mm/internal.h       |  3 +++
>>   mm/memory_hotplug.c | 48 ++++++++++++++++++++++++++++++++++++++++++++-
>>   mm/mm_init.c        | 31 ++++++++++++++++++++++-------
>>   3 files changed, 74 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 1561fc2ff5b8..734caae6873c 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -734,6 +734,9 @@ void set_zone_contiguous(struct zone *zone);
>>   bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
>>                  unsigned long nr_pages);
>>
>> +bool check_zone_contiguous(struct zone *zone, unsigned long start_pfn,
>> +               unsigned long nr_pages);
>> +
>>   static inline void clear_zone_contiguous(struct zone *zone)
>>   {
>>       zone->contiguous = false;
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 0be83039c3b5..96c003271b8e 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -723,6 +723,47 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
>>     }
>>   +static void __meminit update_zone_contiguous(struct zone *zone,
>> +            bool old_contiguous, unsigned long old_start_pfn,
>> +            unsigned long old_nr_pages, unsigned long old_absent_pages,
>> +            unsigned long new_start_pfn, unsigned long new_nr_pages)
>
> Is "old" the old zone range and "new", the new part we are adding?
>
> In that case, old vs. new is misleading, could be interpreted as "old 
> spanned zone range" and "new spanned zone range".
>
>

Agreed, the naming is awkward. Could I rename 'old' to 'origin', to
indicate the original spanned zone range, and drop the 'new_' prefix for
the added/moved pfn range? Would that be more descriptive for this
situation?


> Why are we passing in old_absent_pages and not simply calculating it
> based on zone->present_pages in here?


Will do in patch v2. Previously I was a bit conservative about using
zone->present_pages directly in update_zone_contiguous(), because it
implicitly creates a dependency on zone->present_pages not having been
updated by any of the functions called before it. Allow me to send
patch v2 for your review.
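
For instance, a minimal sketch of what v2 could do instead (assuming
zone->present_pages has not yet been updated for the moved range at the
point update_zone_contiguous() runs):

	/* Derive the old hole size locally rather than passing it in. */
	const unsigned long old_absent_pages =
			old_nr_pages - zone->present_pages;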


>
>> +{
>> +    unsigned long old_end_pfn = old_start_pfn + old_nr_pages;
>> +    unsigned long new_end_pfn = new_start_pfn + new_nr_pages;
>
> Can both be const.
>

Thanks, will do in patch v2.


>> +    unsigned long new_filled_pages = 0;
>> +
>> +    /*
>> +     * If the moved pfn range does not intersect with the old zone span,
>> +     * the contiguous property is surely false.
>> +     */
>> +    if (new_end_pfn < old_start_pfn || new_start_pfn > old_end_pfn)
>> +        return;
>> +
>> +    /*
>> +     * If the moved pfn range is adjacent to the old zone span,
>> +     * check the range to the left or to the right
>> +     */
>> +    if (new_end_pfn == old_start_pfn || new_start_pfn == old_end_pfn) {
>> +        zone->contiguous = old_contiguous &&
>> +            check_zone_contiguous(zone, new_start_pfn, new_nr_pages);
>
> It's sufficient to check that a single pageblock at the old start/end
> (depending on where we're adding) has the same zone already.
>
> Why are we checking the new range we are adding? That doesn't make
> sense unless I am missing something. We know that that one is
> contiguous.
>

You are right. memmap_init_range() makes the new range contiguous and in
the same zone as the original zone span, because it is passed the nid
and zone_idx of the original zone. In that case, should we just inherit
the original contiguous property, probably without even checking
additional pageblocks? (A minimal sketch of that variant follows the
updated numbers below.)

A quick test with your idea shows significant improvements: for the 256G
configuration the time dropped from 3s to 2s, and for the 512G
configuration from 8s to 6s. The new results are below:

+----------------+------+---------------+--------------+----------------+
|                | Size | Time (before) | Time (after) | Time Reduction |
|                +------+---------------+--------------+----------------+
| Memory Hotplug | 256G |      10s      |      2s      |      80%       |
|                +------+---------------+--------------+----------------+
|                | 512G |      33s      |      6s      |      81%       |
+----------------+------+---------------+--------------+----------------+
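
For reference, a minimal sketch of that "just inherit" variant
(illustrative only; whether an extra pageblock check is still wanted is
open):

	if (new_end_pfn == old_start_pfn || new_start_pfn == old_end_pfn) {
		/*
		 * memmap_init_range() already placed the moved range in
		 * this zone, so an adjacent range cannot open a new hole;
		 * carry the previous contiguous state over.
		 */
		zone->contiguous = old_contiguous;
		return;
	}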


>> +        return;
>> +    }
>> +
>> +    /*
>> +     * If the old zone's hole is larger than the newly filled pages,
>> +     * the contiguous property is surely false.
>> +     */
>> +    new_filled_pages = new_end_pfn - old_start_pfn;
>> +    if (new_start_pfn > old_start_pfn)
>> +        new_filled_pages -= new_start_pfn - old_start_pfn;
>> +    if (new_end_pfn > old_end_pfn)
>> +        new_filled_pages -= new_end_pfn - old_end_pfn;
>> +    if (new_filled_pages < old_absent_pages)
>> +        return;
>
> I don't quite like the dependence on present pages here. But I guess 
> there is no other simple way to just detect that there is a large hole 
> in there that cannot possibly get closed.
>

Yes, I did not find a simpler solution that covers this situation, and I
would like to cover most of the cases, including ranges inside,
overlapping, or spanning the original zone.
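
To make the check concrete with made-up numbers: if the original span is
[1000, 2000) with old_absent_pages = 300 and we move [1800, 2400) into
the zone, then new_filled_pages = (2400 - 1000) - (1800 - 1000) -
(2400 - 2000) = 200, which is smaller than 300, so the hole cannot have
been closed and we return early with contiguous still false.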


>> +
>> +    set_zone_contiguous(zone);
>> +}
>> +
>>   #ifdef CONFIG_ZONE_DEVICE
>>   static void section_taint_zone_device(unsigned long pfn)
>>   {
>> @@ -752,6 +793,10 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>>   {
>>       struct pglist_data *pgdat = zone->zone_pgdat;
>>       int nid = pgdat->node_id;
>> +    bool old_contiguous = zone->contiguous;
>> +    unsigned long old_start_pfn = zone->zone_start_pfn;
>> +    unsigned long old_nr_pages = zone->spanned_pages;
>> +    unsigned long old_absent_pages = zone->spanned_pages - zone->present_pages;
>>         clear_zone_contiguous(zone);
>>
>> @@ -783,7 +828,8 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>>                MEMINIT_HOTPLUG, altmap, migratetype,
>>                isolate_pageblock);
>>
>> -    set_zone_contiguous(zone);
>> +    update_zone_contiguous(zone, old_contiguous, old_start_pfn, old_nr_pages,
>> +                old_absent_pages, start_pfn, nr_pages);
>>   }
>>
>>   struct auto_movable_stats {
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 7712d887b696..04fdd949fe49 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -2263,26 +2263,43 @@ void __init init_cma_pageblock(struct page *page)
>>   }
>>   #endif
>>
>> -void set_zone_contiguous(struct zone *zone)
>> +/*
>> + * Check if all pageblocks in the given PFN range belong to the given zone.
>> + * The given range is expected to be within the zone's pfn range, otherwise
>> + * false is returned.
>> + */
>> +bool check_zone_contiguous(struct zone *zone, unsigned long start_pfn,
>> +                unsigned long nr_pages)
>>   {
>> -    unsigned long block_start_pfn = zone->zone_start_pfn;
>> +    unsigned long end_pfn = start_pfn + nr_pages;
>> +    unsigned long block_start_pfn = start_pfn;
>
> Can be const.
>
Yes, will do in patch v2.


Thanks & Regards,

Tianyou


