[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Z-vn-sMtNfwyJ9VW@kernel.org>
Date: Tue, 1 Apr 2025 16:19:54 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Woodhouse <dwmw2@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Sauerwein, David" <dssauerw@...zon.de>,
Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...ux.ibm.com>, Will Deacon <will@...nel.org>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 2/4] memblock: update initialization of reserved pages
On Tue, Apr 01, 2025 at 12:50:33PM +0100, David Woodhouse wrote:
> On Tue, 2025-04-01 at 14:33 +0300, Mike Rapoport wrote:
> > On Mon, Mar 31, 2025 at 04:13:56PM +0100, David Woodhouse wrote:
> > > On Mon, 2025-03-31 at 17:50 +0300, Mike Rapoport wrote:
> > > > On Mon, Mar 31, 2025 at 01:50:33PM +0100, David Woodhouse wrote:
> > > > > On Tue, 2021-05-11 at 13:05 +0300, Mike Rapoport wrote:
> > > > >
> > > > > On platforms with large NOMAP regions (e.g. which are actually reserved
> > > > > for guest memory to keep it out of the Linux address map and allow for
> > > > > kexec-based live update of the hypervisor), this pointless loop ends up
> > > > > taking a significant amount of time which is visible as guest steal
> > > > > time during the live update.
> > > > >
> > > > > Can reserve_bootmem_region() skip the loop *completely* if no PFN in
> > > > > the range from start to end is valid? Or tweak the loop itself to have
> > > > > an 'else' case which skips to the next valid PFN? Something like
> > > > >
> > > > > for(...) {
> > > > > if (pfn_valid(start_pfn)) {
> > > > > ...
> > > > > } else {
> > > > > start_pfn = next_valid_pfn(start_pfn);
> > > > > }
> > > > > }
> > > >
> > > > My understanding is that you have large reserved NOMAP ranges that don't
> > > > appear as memory at all, so no memory map for them is created and so
> > > > pfn_valid() is false for pfns in those ranges.
> > > >
> > > > If this is the case one way indeed would be to make
> > > > reserve_bootmem_region() skip ranges with no valid pfns.
> > > >
> > > > Another way could be to memblock_reserved_mark_noinit() such ranges and
> > > > then reserve_bootmem_region() won't even get called, but that would require
> > > > firmware to pass that information somehow.
> > >
> > > I was thinking along these lines (not even build tested)...
> > >
> > > I don't much like the (unsigned long)-1 part. I might make the helper
> > > 'static inline bool first_valid_pfn (unsigned long *pfn)' and return
> > > success or failure. But that's an implementation detail.
> > >
> > > index 6d1fb6162ac1..edd27ba3e908 100644
> > > --- a/include/asm-generic/memory_model.h
> > > +++ b/include/asm-generic/memory_model.h
> > > @@ -29,8 +29,43 @@ static inline int pfn_valid(unsigned long pfn)
> > > return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
> > > }
> > > #define pfn_valid pfn_valid
> > > +
> > > +static inline unsigned long first_valid_pfn(unsigned long pfn)
> > > +{
> > > + /* avoid <linux/mm.h> include hell */
> > > + extern unsigned long max_mapnr;
> > > + unsigned long pfn_offset = ARCH_PFN_OFFSET;
> > > +
> > > + if (pfn < pfn_offset)
> > > + return pfn_offset;
> > > +
> > > + if ((pfn - pfn_offset) < max_mapnr)
> > > + return pfn;
> > > +
> > > + return (unsigned long)(-1);
> > > +}
> >
> > This seems about right for FLATMEM. For SPARSEMEM it would be something
> > along these lines (I kept dubious -1):
>
> Thanks. Is that right even with CONFIG_SPARSEMEM_VMEMMAP? It seems that
> it's possible for pfn_valid() to be false for a given *page*, but there
> may still be valid pages in the remainder of the same section in that
> case?
Right, it might after memory hot-remove. At boot the entire section either
valid or not.
> I think it should only skip to the next section if the current section
> doesn't exist at all, not just when pfn_section_valid() return false?
Yeah, when pfn_section_valid() returns false it should itereate pfns until
the end of the section and check if they are valid.
> I also wasn't sure how to cope with the rcu_read_lock_sched() that
> happens in pfn_valid(). What's that protecting against? Does it mean
> that by the time pfn_valid() returns true, that might not be the
> correct answer any more?
That's protecting against kfree_rcu() in section_deactivate() so even if
the answer is still correct, later access to apparently valid struct page
may blow up :/
> > static inline unsigned long first_valid_pfn(unsigned long pfn)
> > {
> > unsigned long nr = pfn_to_section_nr(pfn);
> >
> > do {
> > if (pfn_valid(pfn))
> > return pfn;
> > pfn = section_nr_to_pfn(nr++);
> > } while (nr < NR_MEM_SECTIONS);
> >
> > return (unsigned long)-1;
> > }
--
Sincerely yours,
Mike.
Powered by blists - more mailing lists