Message-ID: <CAKgT0UeGZcUfC1m+uFtLgZWKEiRK_CkCvauZYB0VaeVc6uN50Q@mail.gmail.com>
Date: Mon, 3 Feb 2020 15:17:30 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Pavel Tatashin <pasha.tatashin@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
Oscar Salvador <osalvador@...e.de>,
"Kirill A . Shutemov" <kirill@...temov.name>
Subject: Re: [PATCH v1 1/2] mm/page_alloc: fix and rework pfn handling in memmap_init_zone()
On Mon, Feb 3, 2020 at 1:44 PM David Hildenbrand <david@...hat.com> wrote:
>
>
>
> > Am 03.02.2020 um 22:35 schrieb Alexander Duyck <alexander.duyck@...il.com>:
> >
> > On Mon, Jan 13, 2020 at 6:40 AM David Hildenbrand <david@...hat.com> wrote:
> >>
> >> Let's update the pfn manually whenever we continue the loop. This makes
> >> the code easier to read but also less error prone (and we can directly
> >> fix one issue).
> >>
> >> When overlap_memmap_init() returns true, pfn is updated to
> >> "memblock_region_memory_end_pfn(r)". So it already points at the *next*
> >> pfn to process. Incrementing the pfn another time is wrong, we might
> >> leave one uninitialized. I spotted this by inspecting the code, so I have
> >> no idea if this is relevant in practice (with kernelcore=mirror).
> >>
> >> Fixes: a9a9e77fbf27 ("mm: move mirrored memory specific code outside of memmap_init_zone")
> >> Cc: Pavel Tatashin <pasha.tatashin@...cle.com>
> >> Cc: Andrew Morton <akpm@...ux-foundation.org>
> >> Cc: Michal Hocko <mhocko@...nel.org>
> >> Cc: Oscar Salvador <osalvador@...e.de>
> >> Cc: Kirill A. Shutemov <kirill@...temov.name>
> >> Signed-off-by: David Hildenbrand <david@...hat.com>
> >> ---
> >> mm/page_alloc.c | 9 ++++++---
> >> 1 file changed, 6 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index a41bd7341de1..a92791512077 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -5905,18 +5905,20 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> >> }
> >> #endif
> >>
> >> - for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> >> + for (pfn = start_pfn; pfn < end_pfn; ) {
> >> /*
> >> * There can be holes in boot-time mem_map[]s handed to this
> >> * function. They do not exist on hotplugged memory.
> >> */
> >> if (context == MEMMAP_EARLY) {
> >> if (!early_pfn_valid(pfn)) {
> >> - pfn = next_pfn(pfn) - 1;
> >> + pfn = next_pfn(pfn);
> >> continue;
> >> }
> >> - if (!early_pfn_in_nid(pfn, nid))
> >> + if (!early_pfn_in_nid(pfn, nid)) {
> >> + pfn++;
> >> continue;
> >> + }
> >> if (overlap_memmap_init(zone, &pfn))
> >> continue;
> >> if (defer_init(nid, pfn, end_pfn))
> >
> > I'm pretty sure this is a bit broken. The overlap_memmap_init is going
> > to return memblock_region_memory_end_pfn instead of the start of the
> > next region. I think that is going to stick you in a mirrored region
> > without advancing in that case. You would also need to have that case
> > do a pfn++ before the continue;
>
> Thanks for having a look.
>
> Did you read the description regarding this change?
Actually I hadn't read it all that closely, so my bad on that. The
part that had caught my attention, though, was that
memblock_region_memory_end_pfn() uses PFN_DOWN to identify the end of
the memory region. Given that we probably shouldn't be messing with
the PFNs that may contain any of this mirrored memory, it might make
more sense to use memblock_region_reserved_end_pfn(), which uses
PFN_UP, so that we exclude all memory that is in the mirrored region
in case something doesn't end on a PFN-aligned boundary.
If we know that the mirrored region is always going to be page-size
aligned, then I guess you are good to go. That was the only thing I
wasn't sure about.
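Just to spell out why the rounding matters: the mirror handling in
overlap_memmap_init() boils down to roughly the following (a
paraphrased sketch, not the exact source):

    if (*pfn >= memblock_region_memory_base_pfn(r) &&
        memblock_is_mirror(r)) {
            /*
             * Skip the whole mirrored region. With PFN_DOWN, a
             * region that does not end on a page boundary leaves
             * its last, partially mirrored page to be initialized
             * by the normal path on the next loop iteration.
             */
            *pfn = memblock_region_memory_end_pfn(r);
            return true;
    }
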
Reviewed-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>