Message-ID: <153176043742.12695.12733023097134464039.stgit@dwillia2-desk3.amr.corp.intel.com>
Date: Mon, 16 Jul 2018 10:00:37 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: akpm@...ux-foundation.org
Cc: Logan Gunthorpe <logang@...tatee.com>,
Jérôme Glisse <jglisse@...hat.com>,
Christoph Hellwig <hch@....de>, Michal Hocko <mhocko@...e.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Pavel Tatashin <pasha.tatashin@...cle.com>,
vishal.l.verma@...el.com, linux-mm@...ck.org, jack@...e.cz,
linux-nvdimm@...ts.01.org, linux-kernel@...r.kernel.org
Subject: [PATCH v2 03/14] mm: Teach memmap_init_zone() to initialize
ZONE_DEVICE pages

Rather than run a loop over the freshly initialized pages in
devm_memremap_pages() *after* arch_add_memory() returns, teach
memmap_init_zone() to return the pages fully initialized. This is in
preparation for multi-threading the page initialization work, but it
also has a straight-line performance benefit: we avoid a second walk,
and its attendant cache misses, across a large (100s of GBs to TBs)
address range.
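
For reference, a minimal sketch of the flow change (illustrative only;
init_zone_device_page() is a hypothetical stand-in for the per-page
work, not a helper this patch adds):

	/* before: two passes over the device pfn range */
	arch_add_memory(nid, align_start, align_size, altmap, false);
	for_each_device_pfn(pfn, pgmap)		/* second pass */
		init_zone_device_page(pfn_to_page(pfn));

	/*
	 * after: one pass; memmap_init_zone(), reached via
	 * arch_add_memory(), seeds ->pgmap, poisons ->lru, and takes
	 * the percpu ref while each page is still cache hot from
	 * __init_single_page().
	 */
	arch_add_memory(nid, align_start, align_size, altmap, false);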
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Logan Gunthorpe <logang@...tatee.com>
Cc: "Jérôme Glisse" <jglisse@...hat.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Daniel Jordan <daniel.m.jordan@...cle.com>
Cc: Pavel Tatashin <pasha.tatashin@...cle.com>
Signed-off-by: Dan Williams <dan.j.williams@...el.com>
---
 kernel/memremap.c |   16 +---------------
 mm/page_alloc.c   |   19 +++++++++++++++++++
 2 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index b861fe909932..85e4a7c576b2 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -173,8 +173,8 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
 	struct vmem_altmap *altmap = pgmap->altmap_valid ?
 			&pgmap->altmap : NULL;
 	struct resource *res = &pgmap->res;
-	unsigned long pfn, pgoff, order;
 	pgprot_t pgprot = PAGE_KERNEL;
+	unsigned long pgoff, order;
 	int error, nid, is_ram;
 
 	if (!pgmap->ref || !kill)
@@ -251,20 +251,6 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap,
 	if (error)
 		goto err_add_memory;
 
-	for_each_device_pfn(pfn, pgmap) {
-		struct page *page = pfn_to_page(pfn);
-
-		/*
-		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
-		 * pointer. It is a bug if a ZONE_DEVICE page is ever
-		 * freed or placed on a driver-private list. Seed the
-		 * storage with LIST_POISON* values.
-		 */
-		list_del(&page->lru);
-		page->pgmap = pgmap;
-		percpu_ref_get(pgmap->ref);
-	}
-
 	pgmap->kill = kill;
 	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
 			pgmap);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f83682ef006e..fb45cfeb4a50 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5548,6 +5548,25 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 			cond_resched();
 		}
+
+		if (is_zone_device_page(page)) {
+			if (WARN_ON_ONCE(!pgmap))
+				continue;
+
+			/* skip invalid device pages */
+			if (altmap && (pfn < (altmap->base_pfn
+					+ vmem_altmap_offset(altmap))))
+				continue;
+			/*
+			 * ZONE_DEVICE pages union ->lru with a ->pgmap back
+			 * pointer. It is a bug if a ZONE_DEVICE page is ever
+			 * freed or placed on a driver-private list. Seed the
+			 * storage with poison.
+			 */
+			page->lru.prev = LIST_POISON2;
+			page->pgmap = pgmap;
+			percpu_ref_get(pgmap->ref);
+		}
 	}
 }
 
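
For context, the ->lru/->pgmap aliasing that both the removed loop and
the new memmap_init_zone() code rely on looks roughly like this
(simplified sketch, not the real struct page; the padding field name is
illustrative):

	struct demo_page {
		union {
			struct list_head lru;	/* lru.next, lru.prev */
			struct {	/* ZONE_DEVICE pages */
				struct dev_pagemap *pgmap; /* overlays lru.next */
				unsigned long _zd_pad;	   /* overlays lru.prev */
			};
		};
	};

Writing page->pgmap claims the first word, and page->lru.prev =
LIST_POISON2 poisons the second, so a stray list_del() on a live
ZONE_DEVICE page dereferences poison; this preserves the invariant the
old list_del() in devm_memremap_pages() established.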