Message-ID: <20251119114252.oykrczprf3ecd7ak@master>
Date: Wed, 19 Nov 2025 11:42:52 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: Tianyou Li <tianyou.li@...el.com>
Cc: David Hildenbrand <david@...hat.com>,
	Oscar Salvador <osalvador@...e.de>, Mike Rapoport <rppt@...nel.org>,
	linux-mm@...ck.org, Yong Hu <yong.hu@...el.com>,
	Nanhai Zou <nanhai.zou@...el.com>, Yuan Liu <yuan1.liu@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Qiuxu Zhuo <qiuxu.zhuo@...el.com>, Yu C Chen <yu.c.chen@...el.com>,
	Pan Deng <pan.deng@...el.com>, Chen Zhang <zhangchen.kidd@...com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/memory hotplug/unplug: Optimize zone->contiguous
 update when moving pfn range

On Wed, Nov 19, 2025 at 12:07:18PM +0800, Tianyou Li wrote:
>When move_pfn_range_to_zone() is invoked, it updates zone->contiguous by
>checking the new zone's pfn range from beginning to end, regardless of the
>previous state of the old zone. When the zone's pfn range is large, the
>cost of traversing the pfn range to update zone->contiguous can be
>significant.
>
>Add fast paths to quickly detect cases where the zone's contiguous state
>can be decided without scanning the new zone: if the new range does not
>overlap the previous range, contiguous must be false; if the new range is
>adjacent to the previous range, only the new range needs to be checked;
>and if the newly added pages cannot fill the hole of the previous zone
>(e.g. a 1G hole but only 512M of new pages landing inside the old span),
>contiguous must be false.
>
>The following memory hotplug test cases for a VM [1], run in the
>environment [2], show that this optimization can significantly reduce the
>memory hotplug time [3].
>
>+----------------+------+---------------+--------------+----------------+
>|                | Size | Time (before) | Time (after) | Time Reduction |
>|                +------+---------------+--------------+----------------+
>| Memory Hotplug | 256G |      10s      |      2s      |       80%      |
>|                +------+---------------+--------------+----------------+
>|                | 512G |      33s      |      6s      |       81%      |
>+----------------+------+---------------+--------------+----------------+
>

Nice

>[1] QEMU monitor (HMP) commands to hotplug 512G of memory into a VM:
>    object_add memory-backend-ram,id=hotmem0,size=512G,share=on
>    device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>    qom-set vmem1 requested-size 512G
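
For reproducing [1] from the host, a minimal sketch, assuming the VM in
[2] is additionally started with '-monitor unix:/tmp/hmp.sock,server,nowait'
(the socket path and nc flags are placeholders for your setup):

    MON=/tmp/hmp.sock   # HMP monitor socket, hypothetical path
    hmp() { printf '%s\n' "$1" | nc -U -w1 "$MON" >/dev/null; }
    hmp "object_add memory-backend-ram,id=hotmem0,size=512G,share=on"
    hmp "device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1"
    hmp "qom-set vmem1 requested-size 512G"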
>
>[2] Hardware     : Intel Ice Lake server
>    Guest Kernel : v6.18-rc2
>    Qemu         : v9.0.0
>
>    Launch VM    :
>    qemu-system-x86_64 -accel kvm -cpu host \
>    -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>    -drive file=./seed.img,format=raw,if=virtio \
>    -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>    -m 2G,slots=10,maxmem=2052472M \
>    -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>    -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>    -nographic -machine q35 \
>    -nic user,hostfwd=tcp::3000-:22
>
>    Guest kernel auto-onlines newly added memory blocks:
>    echo online > /sys/devices/system/memory/auto_online_blocks
>
>[3] The time from issuing the QEMU commands in [1] until the output of
>    'grep MemTotal /proc/meminfo' on the guest shows that all hotplugged
>    memory has been recognized.
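
That interval in [3] can be measured mechanically; a sketch, assuming the
hostfwd ssh port from [2] (GUEST and the threshold are placeholders):

    GUEST='ssh -p 3000 root@localhost'   # hostfwd port from [2], placeholder
    TARGET_KB=$(( 512 * 1024 * 1024 ))   # MemTotal must exceed the 512G hotplug
    start=$(date +%s)
    # ... issue the qom-set from [1] here ...
    while [ "$($GUEST grep MemTotal /proc/meminfo | awk '{print $2}')" -lt "$TARGET_KB" ]; do
        sleep 1
    done
    echo "hotplug completed in $(( $(date +%s) - start ))s"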
>
>Reported-by: Nanhai Zou <nanhai.zou@...el.com>
>Reported-by: Chen Zhang <zhangchen.kidd@...com>
>Tested-by: Yuan Liu <yuan1.liu@...el.com>
>Reviewed-by: Tim Chen <tim.c.chen@...ux.intel.com>
>Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
>Reviewed-by: Yu C Chen <yu.c.chen@...el.com>
>Reviewed-by: Pan Deng <pan.deng@...el.com>
>Reviewed-by: Nanhai Zou <nanhai.zou@...el.com>
>Reviewed-by: Yuan Liu <yuan1.liu@...el.com>
>Signed-off-by: Tianyou Li <tianyou.li@...el.com>
>---
> mm/memory_hotplug.c | 57 ++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 54 insertions(+), 3 deletions(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 0be83039c3b5..8f126f20ca47 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -723,6 +723,57 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
> 
> }
> 
>+static bool __meminit check_zone_contiguous_fast(struct zone *zone,
>+			unsigned long start_pfn, unsigned long nr_pages)
>+{
>+	const unsigned long end_pfn = start_pfn + nr_pages;
>+	unsigned long nr_filled_pages;
>+
>+	/*
>+	 * The moved pfn range is contiguous by construction, so if the
>+	 * zone was empty before, the resulting zone's contiguous property
>+	 * should be true.
>+	 */
>+	if (zone_is_empty(zone)) {
>+		zone->contiguous = true;
>+		return true;
>+	}
>+
>+	/*
>+	 * If the moved pfn range does not intersect with the original zone span,
>+	 * the contiguous property is surely false.
>+	 */
>+	if (end_pfn < zone->zone_start_pfn || start_pfn > zone_end_pfn(zone)) {
>+		zone->contiguous = false;
>+		return true;
>+	}
>+
>+	/*
>+	 * If the moved pfn range is adjacent to the original zone span, the
>+	 * moved range itself is contiguous, so the zone's contiguous
>+	 * property is inherited from its previous value.
>+	 */
>+	if (end_pfn == zone->zone_start_pfn || start_pfn == zone_end_pfn(zone))
>+		return true;
>+
>+	/*
>+	 * If the original zone's hole is larger than the newly filled
>+	 * pages, the contiguous property is surely false.
>+	 */
>+	nr_filled_pages = end_pfn - zone->zone_start_pfn;
>+	if (start_pfn > zone->zone_start_pfn)
>+		nr_filled_pages -= start_pfn - zone->zone_start_pfn;
>+	if (end_pfn > zone_end_pfn(zone))
>+		nr_filled_pages -= end_pfn - zone_end_pfn(zone);
>+	if (nr_filled_pages < (zone->spanned_pages - zone->present_pages)) {
>+		zone->contiguous = false;
>+		return true;
>+	}
>+

Mike's suggestion is easier for me to understand :-)

>+	clear_zone_contiguous(zone);
>+	return false;
>+}
>+
> #ifdef CONFIG_ZONE_DEVICE
> static void section_taint_zone_device(unsigned long pfn)
> {
>@@ -752,8 +803,7 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
> {
> 	struct pglist_data *pgdat = zone->zone_pgdat;
> 	int nid = pgdat->node_id;
>-
>-	clear_zone_contiguous(zone);
>+	const bool fast_path = check_zone_contiguous_fast(zone, start_pfn, nr_pages);
> 
> 	if (zone_is_empty(zone))
> 		init_currently_empty_zone(zone, start_pfn, nr_pages);
>@@ -783,7 +833,8 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
> 			 MEMINIT_HOTPLUG, altmap, migratetype,
> 			 isolate_pageblock);
> 
>-	set_zone_contiguous(zone);
>+	if (!fast_path)
>+		set_zone_contiguous(zone);
> }
> 
> struct auto_movable_stats {
>-- 
>2.47.1
>

-- 
Wei Yang
Help you, Help me
