Message-ID: <20250423081828.608422-1-dwmw2@infradead.org>
Date: Wed, 23 Apr 2025 08:52:42 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	"Sauerwein, David" <dssauerw@...zon.de>,
	Anshuman Khandual <anshuman.khandual@....com>,
	Ard Biesheuvel <ardb@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	David Hildenbrand <david@...hat.com>,
	Marc Zyngier <maz@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Mike Rapoport <rppt@...ux.ibm.com>,
	Will Deacon <will@...nel.org>,
	kvmarm@...ts.cs.columbia.edu,
	linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	Ruihan Li <lrh2000@....edu.cn>
Subject: [PATCH v3 0/7] mm: Introduce for_each_valid_pfn()

There are cases where a naïve loop over a PFN range, calling pfn_valid() on
each one, is horribly inefficient. Ruihan Li reported the case where
memmap_init() iterates all the way from zero to a potentially large value
of ARCH_PFN_OFFSET, and we at Amazon found the reserve_bootmem_region()
case, which affects hypervisor live update. Others are more cosmetic.

Introducing a for_each_valid_pfn() helper optimises away a lot of
pointless calls to pfn_valid(), skipping immediately to the next valid
PFN and also skipping *all* checks within a valid (sub)region, according
to the granularity of the memory model in use.

https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/for_each_valid_pfn

v3:
 • Fold the 'optimised' SPARSEMEM implementation into the original patch
 • Drop the use of (-1) as end marker, and use end_pfn instead
 • Drop unused first_valid_pfn() helper for FLATMEM implementation
 • Add use case in memmap_init() from discussion at
   https://lore.kernel.org/linux-mm/20250419122801.1752234-1-lrh2000@pku.edu.cn/

v2 [RFC]: https://lore.kernel.org/linux-mm/20250404155959.3442111-1-dwmw2@infradead.org/
 • Revised implementations with feedback from Mike
 • Add a few more use cases

v1 [RFC]: https://lore.kernel.org/linux-mm/20250402201841.3245371-1-dwmw2@infradead.org/
 • First proof of concept

David Woodhouse (7):
      mm: Introduce for_each_valid_pfn() and use it from reserve_bootmem_region()
      mm: Implement for_each_valid_pfn() for CONFIG_FLATMEM
      mm: Implement for_each_valid_pfn() for CONFIG_SPARSEMEM
      mm, PM: Use for_each_valid_pfn() in kernel/power/snapshot.c
      mm, x86: Use for_each_valid_pfn() from __ioremap_check_ram()
      mm: Use for_each_valid_pfn() in memory_hotplug
      mm/mm_init: Use for_each_valid_pfn() in init_unavailable_range()

 arch/x86/mm/ioremap.c              |  7 ++-
 include/asm-generic/memory_model.h | 26 ++++++++++-
 include/linux/mmzone.h             | 88 ++++++++++++++++++++++++++++++++++++++
 kernel/power/snapshot.c            | 42 +++++++++---------
 mm/memory_hotplug.c                |  8 +---
 mm/mm_init.c                       | 29 +++++--------
 6 files changed, 149 insertions(+), 51 deletions(-)

