Message-Id: <20190916060544.21824-3-alastair@au1.ibm.com>
Date: Mon, 16 Sep 2019 16:05:40 +1000
From: "Alastair D'Silva" <alastair@....ibm.com>
To: alastair@...ilva.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Oscar Salvador <osalvador@...e.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Wei Yang <richard.weiyang@...il.com>,
Dan Williams <dan.j.williams@...el.com>, Qian Cai <cai@....pw>,
Jason Gunthorpe <jgg@...pe.ca>,
Logan Gunthorpe <logang@...tatee.com>,
Ira Weiny <ira.weiny@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v2 2/2] mm: Add a bounds check in devm_memremap_pages()
From: Alastair D'Silva <alastair@...ilva.org>

The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.

Without this call, we may remap pages that are not physically
addressable, resulting in bogus section numbers being returned from
__section_nr().
Signed-off-by: Alastair D'Silva <alastair@...ilva.org>
---
 mm/memremap.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memremap.c b/mm/memremap.c
index 86432650f829..fd00993caa3e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
 	mem_hotplug_begin();
 
+	error = check_hotplug_memory_addressable(res->start,
+						 resource_size(res));
+	if (error) {
+		mem_hotplug_done();
+		goto err_checkrange;
+	}
+
 	/*
 	 * For device private memory we call add_pages() as we only need to
 	 * allocate and initialize struct page for the device memory. More-
@@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 
  err_add_memory:
 	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
+ err_checkrange:
  err_kasan:
 	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
  err_pfn_remap:
--
2.21.0