Message-ID: <a7da047e-0064-4f21-9f8b-fa4bcf40dca3@kernel.org>
Date: Wed, 14 Jan 2026 12:13:14 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: "Li, Tianyou" <tianyou.li@...el.com>, Oscar Salvador <osalvador@...e.de>,
 Mike Rapoport <rppt@...nel.org>, Wei Yang <richard.weiyang@...il.com>
Cc: linux-mm@...ck.org, Yong Hu <yong.hu@...el.com>,
 Nanhai Zou <nanhai.zou@...el.com>, Yuan Liu <yuan1.liu@...el.com>,
 Tim Chen <tim.c.chen@...ux.intel.com>, Qiuxu Zhuo <qiuxu.zhuo@...el.com>,
 Yu C Chen <yu.c.chen@...el.com>, Pan Deng <pan.deng@...el.com>,
 Chen Zhang <zhangchen.kidd@...com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 2/2] mm/memory hotplug/unplug: Optimize
 zone->contiguous update when changes pfn range

>>
>> This is nasty. I wish we could just leave that code path alone.
>>
>> In particular: I am 99% sure that we never ever run into this case in
>> practice.
>>
>> E.g., on x86, we can have up to 2 GiB memory blocks. But the memmap for
>> that is 64/4096 * 2 GiB == 32 MiB ... and a memory section is 128 MiB.
>>
>>
>> As commented on patch #1, we should drop the set_zone_contiguous() in
>> this function either way and let online_pages() deal with it.
>>
>> We just have to make sure that we don't create some inconsistencies by
>> doing that.
>>
>> Can you double-check?

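For reference, the numbers quoted above can be sanity-checked with a quick
standalone sketch (not kernel code; the 64-byte struct page, 4 KiB base page,
2 GiB memory block and 128 MiB section size are the x86-64 values from the
quote):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long page_size = 4096;            /* 4 KiB base page */
		unsigned long long struct_page_size = 64;       /* bytes per struct page */
		unsigned long long memory_block = 2ULL << 30;   /* 2 GiB memory block */
		unsigned long long section_size = 128ULL << 20; /* 128 MiB memory section */

		/* memmap needed to describe one 2 GiB memory block */
		unsigned long long memmap = memory_block / page_size * struct_page_size;

		printf("memmap for 2 GiB block: %llu MiB\n", memmap >> 20);       /* 32 */
		printf("memory section size:    %llu MiB\n", section_size >> 20); /* 128 */
		/* 32 MiB < 128 MiB: the memmap can never cover a full section */
		return 0;
	}
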
I thought about this some more, and it's all a bit nasty. We have to get this right.

Losing the optimization for memmap_on_memory users indicates that we are doing the wrong thing.

You could introduce the set_zone_contiguous() call in this patch. But then, I think, instead of

+	/*
+	 * If the allocated memmap pages are not in a full section, keep the
+	 * contiguous state as ZONE_CONTIG_NO.
+	 */
+	if (IS_ALIGNED(end_pfn, PAGES_PER_SECTION))
+		new_contiguous_state = zone_contig_state_after_growing(zone,
+								pfn, nr_pages);
+

we'd actually have to do that unconditionally, no?
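
For concreteness, the unconditional variant would simply drop the alignment
check (a sketch only, reusing the helper from the quoted hunk above, untested):

	/*
	 * Recompute the contiguous state for the grown range in all cases,
	 * not only when end_pfn happens to be section-aligned.
	 */
	new_contiguous_state = zone_contig_state_after_growing(zone, pfn,
							       nr_pages);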

-- 
Cheers

David
