Message-Id: <20210621123419.2976-2-yaohuiwang@linux.alibaba.com>
Date: Mon, 21 Jun 2021 20:34:18 +0800
From: Yaohui Wang <yaohuiwang@...ux.alibaba.com>
To: dave.hansen@...ux.intel.com, tglx@...utronix.de
Cc: luto@...nel.org, peterz@...radead.org, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, linux-kernel@...r.kernel.org,
luoben@...ux.alibaba.com, yaohuiwang@...ux.alibaba.com
Subject: [PATCH v3 1/2] x86/ioremap: fix the pfn calculation mistake in __ioremap_check_ram()
In __ioremap_check_ram(), the pfn range calculation assumes that res->start
is page-aligned and that res->end ends one byte before a page boundary
(i.e. res->end + 1 is page-aligned). When res->start and res->end do not
meet this alignment, the RAM check may be skipped for the first or the last
page of the memory range, which can cause ioremap_xxx() to succeed on
normal RAM by mistake.
For example, suppose the memory range [phys_addr ~ phys_addr + PAGE_SIZE - 1]
is a normal RAM page. ioremap(phys_addr, PAGE_SIZE - 1) will succeed (but it
should not), because the pfn rounding prevents this page from being checked
for overlap with non-ioremappable resources.
The new pfn range calculation makes sure the resulting pfn range covers
[res->start, res->end] completely.
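
To make the failure mode concrete, here is a small userspace sketch (not
kernel code and not part of this patch) that reproduces both calculations
for the example above; PAGE_SHIFT, PAGE_SIZE and PFN_DOWN are redefined
locally and the sample address is arbitrary. The old calculation ends up
checking zero pfns, the new one checks the single pfn the request touches:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

int main(void)
{
	/* A page-aligned RAM address (value chosen only for illustration) */
	unsigned long start = 0x100000UL;
	/* res->end for an ioremap(start, PAGE_SIZE - 1) request */
	unsigned long end = start + PAGE_SIZE - 2;

	/* Old calculation: round start up, round (end + 1) down */
	unsigned long old_start_pfn = (start + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned long old_stop_pfn = (end + 1) >> PAGE_SHIFT;
	unsigned long old_npages =
		old_stop_pfn > old_start_pfn ? old_stop_pfn - old_start_pfn : 0;

	/* New calculation: round both ends down, treat the range as inclusive */
	unsigned long new_start_pfn = PFN_DOWN(start);
	unsigned long new_stop_pfn = PFN_DOWN(end);
	unsigned long new_npages = new_stop_pfn - new_start_pfn + 1;

	printf("old: %lu pfn(s) checked\n", old_npages);	/* prints 0 */
	printf("new: %lu pfn(s) checked\n", new_npages);	/* prints 1 */
	return 0;
}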
Fixes: 0e4c12b45aa8 ("x86/mm, resource: Use PAGE_KERNEL protection for ioremap of memory pages")
Signed-off-by: Yaohui Wang <yaohuiwang@...ux.alibaba.com>
Signed-off-by: Ben Luo <luoben@...ux.alibaba.com>
---
arch/x86/mm/ioremap.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..609a8bd6f680 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -68,19 +68,19 @@ int ioremap_change_attr(unsigned long vaddr, unsigned long size,
 /* Does the range (or a subset of) contain normal RAM? */
 static unsigned int __ioremap_check_ram(struct resource *res)
 {
-	unsigned long start_pfn, stop_pfn;
+	unsigned long start_pfn, stop_pfn, npages;
 	unsigned long i;
 
 	if ((res->flags & IORESOURCE_SYSTEM_RAM) != IORESOURCE_SYSTEM_RAM)
 		return 0;
 
-	start_pfn = (res->start + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	stop_pfn = (res->end + 1) >> PAGE_SHIFT;
-	if (stop_pfn > start_pfn) {
-		for (i = 0; i < (stop_pfn - start_pfn); ++i)
-			if (pfn_valid(start_pfn + i) &&
-			    !PageReserved(pfn_to_page(start_pfn + i)))
-				return IORES_MAP_SYSTEM_RAM;
+	start_pfn = PFN_DOWN(res->start);
+	stop_pfn = PFN_DOWN(res->end);
+	npages = stop_pfn - start_pfn + 1;
+	for (i = 0; i < npages; ++i) {
+		if (pfn_valid(start_pfn + i) &&
+		    !PageReserved(pfn_to_page(start_pfn + i)))
+			return IORES_MAP_SYSTEM_RAM;
 	}
 
 	return 0;
--
2.25.1