Message-Id: <20210222105400.28583-1-rppt@kernel.org>
Date: Mon, 22 Feb 2021 12:54:00 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andrea Arcangeli <aarcange@...hat.com>,
Baoquan He <bhe@...hat.com>, Borislav Petkov <bp@...en8.de>,
Chris Wilson <chris@...is-wilson.co.uk>,
David Hildenbrand <david@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Łukasz Majczak <lma@...ihalf.com>,
Mel Gorman <mgorman@...e.de>, Michal Hocko <mhocko@...nel.org>,
Mike Rapoport <rppt@...nel.org>,
Mike Rapoport <rppt@...ux.ibm.com>, Qian Cai <cai@....pw>,
"Sarvela, Tomi P" <tomi.p.sarvela@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, stable@...r.kernel.org, x86@...nel.org
Subject: [PATCH v6 0/1] mm: fix initialization of struct page for holes in memory layout
From: Mike Rapoport <rppt@...ux.ibm.com>
Hi,
@Andrew, this is based on v5.11-mmotm-2021-02-18-18-29 with the previous
version reverted.
Commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather
that check each PFN") exposed several issues with the memory map
initialization and these patches fix those issues.
Initially there were crashes during compaction that Qian Cai reported back
in April [1]. It seemed back then that the problem was fixed, but a few
weeks ago Andrea Arcangeli hit the same bug [2] and there was an additional
discussion at [3].
I didn't appreciate the variety of ways BIOSes can report memory in the first
megabyte, so previous versions of this set caused all kinds of trouble.
The last version, which implicitly extended the node/zone to cover complete
sections, might also have unexpected side effects, so this time I'm trying to
move forward in baby steps.
This is mostly a return to the first version that simply merges
init_unavailable_pages() into memmap_init(), so that the only effective
change is more sensible zone/node links in unavailable struct pages.
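To make the intent concrete, here is a simplified, userspace-only sketch of
the idea; it is not the code from this patch, and all names in it
(struct page_info, demo_memmap_init(), init_range(), ...) are invented for
the illustration. It shows a single pass over a zone's memory regions that
initializes pages backed by real memory and, in the same pass, marks pages
falling into holes as reserved while still giving them the zone/node links
of the enclosing zone:

	/* Illustration only; not mm/page_alloc.c. */
	#include <stdbool.h>
	#include <stdio.h>

	struct page_info {
		int node;
		int zone;
		bool reserved;
	};

	struct region {
		unsigned long start_pfn;
		unsigned long end_pfn;	/* exclusive */
	};

	static struct page_info pages[32];

	static void init_range(unsigned long start, unsigned long end,
			       int node, int zone, bool reserved)
	{
		for (unsigned long pfn = start; pfn < end; pfn++) {
			pages[pfn].node = node;
			pages[pfn].zone = zone;
			pages[pfn].reserved = reserved;
		}
	}

	/* One pass over the zone: real memory and the holes in between. */
	static void demo_memmap_init(const struct region *regions, int nr,
				     unsigned long zone_start,
				     unsigned long zone_end,
				     int node, int zone)
	{
		unsigned long hole_start = zone_start;

		for (int i = 0; i < nr; i++) {
			/* Hole preceding this region: reserved pages that
			 * still get sensible zone/node links. */
			if (regions[i].start_pfn > hole_start)
				init_range(hole_start, regions[i].start_pfn,
					   node, zone, true);
			/* Pages backed by actual memory. */
			init_range(regions[i].start_pfn, regions[i].end_pfn,
				   node, zone, false);
			hole_start = regions[i].end_pfn;
		}
		/* Trailing hole up to the end of the zone, if any. */
		if (hole_start < zone_end)
			init_range(hole_start, zone_end, node, zone, true);
	}

	int main(void)
	{
		/* Two regions with a hole at pfns 8..11 and a trailing hole. */
		struct region regions[] = {
			{ .start_pfn = 0,  .end_pfn = 8 },
			{ .start_pfn = 12, .end_pfn = 24 },
		};

		demo_memmap_init(regions, 2, 0, 32, 0, 0);

		for (unsigned long pfn = 0; pfn < 32; pfn++)
			printf("pfn %2lu: node=%d zone=%d %s\n", pfn,
			       pages[pfn].node, pages[pfn].zone,
			       pages[pfn].reserved ? "reserved (hole)" : "present");
		return 0;
	}
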
For now, I've dropped the patch that tried to make ZONE_DMA span pfn 0,
because the current behaviour hasn't caused any issues for a really long time
and there are way too many hidden mines around this.
I have an ugly workaround for the "pfn 0" issue that IMHO is the safest way to
deal with it until it can be gradually fixed properly:
https://git.kernel.org/pub/scm/linux/kernel/git/rppt/linux.git/commit/?id=a1b6e4d7e4a6d893caeda9a7f3800766243a02fe
v6:
* only interleave initialization of unavailable pages in memmap_init(), so
that it essentially includes init_unavailable_pages().
v5: https://lore.kernel.org/lkml/20210208110820.6269-1-rppt@kernel.org
* extend node/zone spans to cover complete sections; this allows interleaving
the initialization of unavailable pages with "normal" memory map init.
* drop modifications to x86 early setup
v4: https://lore.kernel.org/lkml/20210130221035.4169-1-rppt@kernel.org/
* make sure pages in the range 0 - start_pfn_of_lowest_zone are initialized
even if an architecture hides them from the generic mm
* finally make pfn 0 on x86 a part of memory visible to the generic
mm as reserved memory.
v3: https://lore.kernel.org/lkml/20210111194017.22696-1-rppt@kernel.org
* use architectural zone constraints to set zone links for struct pages
corresponding to the holes
* drop implicit update of memblock.memory
* add a patch that sets pfn 0 to E820_TYPE_RAM on x86
v2: https://lore.kernel.org/lkml/20201209214304.6812-1-rppt@kernel.org/
* added patch that adds all regions in memblock.reserved that do not
overlap with memblock.memory to memblock.memory in the beginning of
free_area_init()
[1] https://lore.kernel.org/lkml/8C537EB7-85EE-4DCF-943E-3CC0ED0DF56D@lca.pw
[2] https://lore.kernel.org/lkml/20201121194506.13464-1-aarcange@redhat.com
[3] https://lore.kernel.org/mm-commits/20201206005401.qKuAVgOXr%akpm@linux-foundation.org
Mike Rapoport (1):
mm/page_alloc.c: refactor initialization of struct page for holes in
memory layout
mm/page_alloc.c | 144 ++++++++++++++++++++----------------------------
1 file changed, 61 insertions(+), 83 deletions(-)
--
2.28.0