Message-ID: <b47d5f5602573bd082be3729ceddb3d1dc374ef1.camel@infradead.org>
Date: Mon, 31 Mar 2025 16:13:56 +0100
From: David Woodhouse <dwmw2@...radead.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, "Sauerwein, David"
<dssauerw@...zon.de>, Anshuman Khandual <anshuman.khandual@....com>, Ard
Biesheuvel <ardb@...nel.org>, Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>, Marc Zyngier <maz@...nel.org>, Mark
Rutland <mark.rutland@....com>, Mike Rapoport <rppt@...ux.ibm.com>, Will
Deacon <will@...nel.org>, kvmarm@...ts.cs.columbia.edu,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v4 2/4] memblock: update initialization of reserved pages

On Mon, 2025-03-31 at 17:50 +0300, Mike Rapoport wrote:
> On Mon, Mar 31, 2025 at 01:50:33PM +0100, David Woodhouse wrote:
> > On Tue, 2021-05-11 at 13:05 +0300, Mike Rapoport wrote:
> > >
> > > +static void __init memmap_init_reserved_pages(void)
> > > +{
> > > +        struct memblock_region *region;
> > > +        phys_addr_t start, end;
> > > +        u64 i;
> > > +
> > > +        /* initialize struct pages for the reserved regions */
> > > +        for_each_reserved_mem_range(i, &start, &end)
> > > +                reserve_bootmem_region(start, end);
> > > +
> > > +        /* and also treat struct pages for the NOMAP regions as PageReserved */
> > > +        for_each_mem_region(region) {
> > > +                if (memblock_is_nomap(region)) {
> > > +                        start = region->base;
> > > +                        end = start + region->size;
> > > +                        reserve_bootmem_region(start, end);
> > > +                }
> > > +        }
> > > +}
> > > +
> >
> > In some cases, that whole call to reserve_bootmem_region() may be a no-
> > op because pfn_valid() is not true for *any* address in that range.
> >
> > But reserve_bootmem_region() spends a long time iterating over them all,
> > and eventually doing nothing:
> >
> > void __meminit reserve_bootmem_region(phys_addr_t start,
> >                                       phys_addr_t end, int nid)
> > {
> >         unsigned long start_pfn = PFN_DOWN(start);
> >         unsigned long end_pfn = PFN_UP(end);
> >
> >         for (; start_pfn < end_pfn; start_pfn++) {
> >                 if (pfn_valid(start_pfn)) {
> >                         struct page *page = pfn_to_page(start_pfn);
> >
> >                         init_reserved_page(start_pfn, nid);
> >
> >                         /*
> >                          * no need for atomic set_bit because the struct
> >                          * page is not visible yet so nobody should
> >                          * access it yet.
> >                          */
> >                         __SetPageReserved(page);
> >                 }
> >         }
> > }
> >
> > On platforms with large NOMAP regions (e.g. which are actually reserved
> > for guest memory to keep it out of the Linux address map and allow for
> > kexec-based live update of the hypervisor), this pointless loop ends up
> > taking a significant amount of time which is visible as guest steal
> > time during the live update.
> >
> > Can reserve_bootmem_region() skip the loop *completely* if no PFN in
> > the range from start to end is valid? Or tweak the loop itself to have
> > an 'else' case which skips to the next valid PFN? Something like
> >
> >         for (...) {
> >                 if (pfn_valid(start_pfn)) {
> >                         ...
> >                 } else {
> >                         start_pfn = next_valid_pfn(start_pfn);
> >                 }
> >         }
>
> My understanding is that you have large reserved NOMAP ranges that don't
> appear as memory at all, so no memory map for them is created and so
> pfn_valid() is false for pfns in those ranges.
>
> If this is the case one way indeed would be to make
> reserve_bootmem_region() skip ranges with no valid pfns.
>
> Another way could be to memblock_reserved_mark_noinit() such ranges and
> then reserve_bootmem_region() won't even get called, but that would require
> firmware to pass that information somehow.

I was thinking along these lines (not even build tested)...

I don't much like the (unsigned long)-1 part. I might make the helper
'static inline bool first_valid_pfn(unsigned long *pfn)' and return
success or failure. But that's an implementation detail.
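
Purely for illustration, and equally untested, that bool-returning
variant might look something like this:

static inline bool first_valid_pfn(unsigned long *pfn)
{
        /* avoid <linux/mm.h> include hell */
        extern unsigned long max_mapnr;
        unsigned long pfn_offset = ARCH_PFN_OFFSET;

        /* Round the caller's PFN up to the first one covered by the map */
        if (*pfn < pfn_offset)
                *pfn = pfn_offset;

        /* Valid as long as we are still below the end of the memory map */
        return (*pfn - pfn_offset) < max_mapnr;
}

with the FLATMEM loop macro then becoming

        for (pfn = start_pfn; first_valid_pfn(&pfn) && pfn < end_pfn; pfn++)

so neither the (unsigned long)-1 sentinel nor the BUG_ON would be
needed. Anyway, the patch itself:
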
index 6d1fb6162ac1..edd27ba3e908 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -29,8 +29,43 @@ static inline int pfn_valid(unsigned long pfn)
return pfn >= pfn_offset && (pfn - pfn_offset) < max_mapnr;
}
#define pfn_valid pfn_valid
+
+static inline unsigned long first_valid_pfn(unsigned long pfn)
+{
+        /* avoid <linux/mm.h> include hell */
+        extern unsigned long max_mapnr;
+        unsigned long pfn_offset = ARCH_PFN_OFFSET;
+
+        if (pfn < pfn_offset)
+                return pfn_offset;
+
+        if ((pfn - pfn_offset) < max_mapnr)
+                return pfn;
+
+        return (unsigned long)(-1);
+}
+
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+        /* Sanity check on the end condition */ \
+        BUG_ON(end_pfn == (unsigned long)-1); \
+        for (pfn = first_valid_pfn(start_pfn); pfn < end_pfn; \
+             pfn = first_valid_pfn(pfn + 1))
+#endif
+#endif
+
+/*
+ * If the architecture provides its own pfn_valid(), it can either
+ * provide a matching for_each_valid_pfn() or use the fallback which
+ * just iterates over them *all*, calling pfn_valid() for each.
+ */
+#ifndef for_each_valid_pfn
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+        for (pfn = start_pfn; pfn < end_pfn; pfn++) \
+                if (pfn_valid(pfn))
#endif
+
#elif defined(CONFIG_SPARSEMEM_VMEMMAP)
/* memmap is virtually contiguous. */
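
The caller side in reserve_bootmem_region() would then become something
like this (again just a sketch, not even compile tested):

void __meminit reserve_bootmem_region(phys_addr_t start,
                                      phys_addr_t end, int nid)
{
        unsigned long start_pfn = PFN_DOWN(start);
        unsigned long end_pfn = PFN_UP(end);
        unsigned long pfn;

        for_each_valid_pfn(pfn, start_pfn, end_pfn) {
                struct page *page = pfn_to_page(pfn);

                init_reserved_page(pfn, nid);

                /*
                 * no need for atomic set_bit because the struct
                 * page is not visible yet so nobody should
                 * access it yet.
                 */
                __SetPageReserved(page);
        }
}

so a memory model which knows the layout of its memory map can skip the
wholly-invalid NOMAP ranges in one go instead of calling pfn_valid() on
every single PFN in them.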