Message-Id: <20200729033424.2629-5-justin.he@arm.com>
Date: Wed, 29 Jul 2020 11:34:22 +0800
From: Jia He <justin.he@....com>
To: Dan Williams <dan.j.williams@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>,
Mike Rapoport <rppt@...ux.ibm.com>,
David Hildenbrand <david@...hat.com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Dave Jiang <dave.jiang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Steve Capper <steve.capper@....com>,
Mark Rutland <mark.rutland@....com>,
Logan Gunthorpe <logang@...tatee.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Hsin-Yi Wang <hsinyi@...omium.org>,
Jason Gunthorpe <jgg@...pe.ca>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Kees Cook <keescook@...omium.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-nvdimm@...ts.01.org, linux-mm@...ck.org,
Wei Yang <richardw.yang@...ux.intel.com>,
Pankaj Gupta <pankaj.gupta.linux@...il.com>,
Ira Weiny <ira.weiny@...el.com>, Kaly Xin <Kaly.Xin@....com>,
Jia He <justin.he@....com>
Subject: [RFC PATCH 4/6] mm/page_alloc: adjust the start,end in dax pmem kmem case
There are three cases when onlining pages:
- normal RAM, which must be aligned with the memory block size
- persistent memory in ZONE_DEVICE
- persistent memory used as normal RAM (kmem) in ZONE_NORMAL; this patch
  adjusts start_pfn/end_pfn after finding the corresponding resource
  range.

Without this patch, the check in __init_single_page() fails when onlining
such memory, because those pages have not been mapped in the MMU (i.e.
they are not present from the MMU's point of view).
Signed-off-by: Jia He <justin.he@....com>
---
mm/page_alloc.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e028b87ce294..13216ab3623f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5971,6 +5971,20 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
if (start_pfn == altmap->base_pfn)
start_pfn += altmap->reserve;
end_pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
+ } else {
+ struct resource res;
+ int ret;
+
+ /* adjust the start,end in dax pmem kmem case */
+ ret = find_next_iomem_res(start_pfn << PAGE_SHIFT,
+ (end_pfn << PAGE_SHIFT) - 1,
+ IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY,
+ IORES_DESC_PERSISTENT_MEMORY,
+ false, &res);
+ if (!ret) {
+ start_pfn = PFN_UP(res.start);
+ end_pfn = PFN_DOWN(res.end + 1);
+ }
}
#endif
--
2.17.1