Message-ID: <Z-qrtJ6cs-kXpepR@kernel.org>
Date: Mon, 31 Mar 2025 17:50:28 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Woodhouse <dwmw2@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Sauerwein, David" <dssauerw@...zon.de>,
Anshuman Khandual <anshuman.khandual@....com>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
David Hildenbrand <david@...hat.com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Mike Rapoport <rppt@...ux.ibm.com>, Will Deacon <will@...nel.org>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 2/4] memblock: update initialization of reserved pages
On Mon, Mar 31, 2025 at 01:50:33PM +0100, David Woodhouse wrote:
> On Tue, 2021-05-11 at 13:05 +0300, Mike Rapoport wrote:
> >
> > +static void __init memmap_init_reserved_pages(void)
> > +{
> > + struct memblock_region *region;
> > + phys_addr_t start, end;
> > + u64 i;
> > +
> > + /* initialize struct pages for the reserved regions */
> > + for_each_reserved_mem_range(i, &start, &end)
> > + reserve_bootmem_region(start, end);
> > +
> > + /* and also treat struct pages for the NOMAP regions as PageReserved */
> > + for_each_mem_region(region) {
> > + if (memblock_is_nomap(region)) {
> > + start = region->base;
> > + end = start + region->size;
> > + reserve_bootmem_region(start, end);
> > + }
> > + }
> > +}
> > +
>
> In some cases, that whole call to reserve_bootmem_region() may be a no-
> op because pfn_valid() is not true for *any* address in that range.
>
> But reserve_bootmem_region() spends a long time iterating over them all,
> eventually doing nothing:
>
> void __meminit reserve_bootmem_region(phys_addr_t start,
> phys_addr_t end, int nid)
> {
> unsigned long start_pfn = PFN_DOWN(start);
> unsigned long end_pfn = PFN_UP(end);
>
> for (; start_pfn < end_pfn; start_pfn++) {
> if (pfn_valid(start_pfn)) {
> struct page *page = pfn_to_page(start_pfn);
>
> init_reserved_page(start_pfn, nid);
>
> /*
> * no need for atomic set_bit because the struct
> * page is not visible yet so nobody should
> * access it yet.
> */
> __SetPageReserved(page);
> }
> }
> }
>
> On platforms with large NOMAP regions (e.g. which are actually reserved
> for guest memory to keep it out of the Linux address map and allow for
> kexec-based live update of the hypervisor), this pointless loop ends up
> taking a significant amount of time which is visible as guest steal
> time during the live update.
>
> Can reserve_bootmem_region() skip the loop *completely* if no PFN in
> the range from start to end is valid? Or tweak the loop itself to have
> an 'else' case which skips to the next valid PFN? Something like
>
> for(...) {
> if (pfn_valid(start_pfn)) {
> ...
> } else {
> start_pfn = next_valid_pfn(start_pfn);
> }
> }
My understanding is that you have large reserved NOMAP ranges that don't
appear as memory at all, so no memory map is created for them and
pfn_valid() is false for pfns in those ranges.
If this is the case, one way indeed would be to make
reserve_bootmem_region() skip ranges that contain no valid pfns.
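As a rough userspace model of that idea (not kernel code: pfn_valid_model()
and next_valid_pfn_model() here are hypothetical stand-ins backed by a
couple of hand-picked valid ranges, standing in for the sparse memory map),
the loop with a skip-ahead arm could look like this:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Model: valid PFNs live in a few [start, end) ranges. */
struct pfn_range { unsigned long start, end; };

static const struct pfn_range valid[] = {
	{ 0x100,   0x200   },
	{ 0x10000, 0x10100 },
};

#define NR_RANGES (sizeof(valid) / sizeof(valid[0]))

static bool pfn_valid_model(unsigned long pfn)
{
	for (size_t i = 0; i < NR_RANGES; i++)
		if (pfn >= valid[i].start && pfn < valid[i].end)
			return true;
	return false;
}

/* Smallest valid PFN >= pfn, clamped to end_pfn if the rest is a hole. */
static unsigned long next_valid_pfn_model(unsigned long pfn,
					  unsigned long end_pfn)
{
	for (size_t i = 0; i < NR_RANGES; i++) {
		if (pfn < valid[i].start)
			return valid[i].start < end_pfn ? valid[i].start
							: end_pfn;
		if (pfn < valid[i].end)
			return pfn;
	}
	return end_pfn;
}

/*
 * The reserve loop with the proposed 'else' arm: invalid spans are
 * jumped over in one step instead of being walked pfn by pfn.
 * Returns the number of pfns "reserved" (stand-in for __SetPageReserved()).
 */
static unsigned long reserve_count(unsigned long start_pfn,
				   unsigned long end_pfn)
{
	unsigned long reserved = 0;

	while (start_pfn < end_pfn) {
		if (pfn_valid_model(start_pfn)) {
			reserved++;
			start_pfn++;
		} else {
			start_pfn = next_valid_pfn_model(start_pfn, end_pfn);
		}
	}
	return reserved;
}
```

With this shape, a range that is entirely a hole costs one lookup per
memblock range rather than one pfn_valid() call per pfn; the real kernel
version would derive next_valid_pfn() from the memory map (e.g. section
boundaries) rather than from a table like this.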
Another way could be to mark such ranges with
memblock_reserved_mark_noinit(); then reserve_bootmem_region() won't even
get called for them, but that would require the firmware to pass that
information somehow.
--
Sincerely yours,
Mike.