Message-ID: <aAjK8Yq3OJH5hP12@kernel.org>
Date: Wed, 23 Apr 2025 14:11:45 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Woodhouse <dwmw2@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Sauerwein, David" <dssauerw@...zon.de>,
Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...ux.ibm.com>, Will Deacon <will@...nel.org>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Ruihan Li <lrh2000@....edu.cn>
Subject: Re: [PATCH v3 3/7] mm: Implement for_each_valid_pfn() for
CONFIG_SPARSEMEM
On Wed, Apr 23, 2025 at 08:52:45AM +0100, David Woodhouse wrote:
> From: David Woodhouse <dwmw@...zon.co.uk>
>
> Implement for_each_valid_pfn() based on two helper functions.
>
> The first_valid_pfn() function largely mirrors pfn_valid(), calling into
> a pfn_section_first_valid() helper which is trivial for the !VMEMMAP case,
> and in the VMEMMAP case will skip to the next subsection as needed.
>
> Since next_valid_pfn() knows that its argument *is* a valid PFN, it
> doesn't need to do any checking at all while iterating over the low bits
> within a (sub)section mask; the whole (sub)section is either present or
> not.
>
> Note that the VMEMMAP version of pfn_section_first_valid() may return a
> value *higher* than end_pfn when skipping to the next subsection, and
> first_valid_pfn() happily returns that higher value. This is fine.
>
> Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
> Previous-revision-reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> ---
> include/asm-generic/memory_model.h | 26 ++++++++--
> include/linux/mmzone.h | 78 ++++++++++++++++++++++++++++++
> 2 files changed, 99 insertions(+), 5 deletions(-)
>
> diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
> index 74d0077cc5fa..044536da3390 100644
> --- a/include/asm-generic/memory_model.h
> +++ b/include/asm-generic/memory_model.h
> @@ -31,12 +31,28 @@ static inline int pfn_valid(unsigned long pfn)
> }
> #define pfn_valid pfn_valid
>
> +static inline bool first_valid_pfn(unsigned long *pfn)
> +{
> + /* avoid <linux/mm.h> include hell */
> + extern unsigned long max_mapnr;
> + unsigned long pfn_offset = ARCH_PFN_OFFSET;
> +
> + if (*pfn < pfn_offset) {
> + *pfn = pfn_offset;
> + return true;
> + }
> +
> + if ((*pfn - pfn_offset) < max_mapnr)
> + return true;
> +
> + return false;
> +}
> +
Looks like this hunk is a leftover from one of the previous versions.
> #ifndef for_each_valid_pfn
> -#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
> - for ((pfn) = max_t(unsigned long, (start_pfn), ARCH_PFN_OFFSET); \
> - (pfn) < min_t(unsigned long, (end_pfn), \
> - ARCH_PFN_OFFSET + max_mapnr); \
> - (pfn)++)
> +#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
> + for (pfn = max_t(unsigned long, start_pfn, ARCH_PFN_OFFSET); \
> + pfn < min_t(unsigned long, end_pfn, ARCH_PFN_OFFSET + max_mapnr); \
> + pfn++)
And this one is probably a rebase artifact?
With the FLATMEM changes dropped:

This-revision-also-reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
> #endif /* for_each_valid_pfn */
> #endif /* valid_pfn */
>
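FWIW, a quick sketch of how I'd expect callers to use the new iterator
(hypothetical example, not from this patch: count_reserved_pages() is made
up; for_each_valid_pfn() comes from this series, while pfn_to_page(),
PageReserved() and pr_info() are existing helpers):

#include <linux/mm.h>

/*
 * Count reserved pages in a PFN range. Every pfn the iterator hands us
 * already has a valid struct page, so no per-pfn pfn_valid() check is
 * needed inside the loop body.
 */
static void count_reserved_pages(unsigned long start_pfn,
				 unsigned long end_pfn)
{
	unsigned long pfn, nr_reserved = 0;

	for_each_valid_pfn(pfn, start_pfn, end_pfn) {
		if (PageReserved(pfn_to_page(pfn)))
			nr_reserved++;
	}

	pr_info("%lu reserved pages in [%#lx, %#lx)\n",
		nr_reserved, start_pfn, end_pfn);
}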
--
Sincerely yours,
Mike.