Message-ID: <tip-d6ef1f194b7569af8b8397876dc9ab07649d63cb@git.kernel.org>
Date: Tue, 17 Apr 2018 06:46:45 -0700
From: tip-bot for Joerg Roedel <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: hpa@...or.com, mingo@...nel.org, jroedel@...e.de,
tglx@...utronix.de, linux-kernel@...r.kernel.org
Subject: [tip:x86/urgent] x86/mm: Prevent kernel Oops in PTDUMP code with
HIGHPTE=y
Commit-ID: d6ef1f194b7569af8b8397876dc9ab07649d63cb
Gitweb: https://git.kernel.org/tip/d6ef1f194b7569af8b8397876dc9ab07649d63cb
Author: Joerg Roedel <jroedel@...e.de>
AuthorDate: Tue, 17 Apr 2018 15:27:16 +0200
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitDate: Tue, 17 Apr 2018 15:43:01 +0200

x86/mm: Prevent kernel Oops in PTDUMP code with HIGHPTE=y

The walk_pte_level() function uses __va() (via pmd_page_vaddr()) to get the
virtual address of the PTE page, but that breaks when the PTE page is not in
the direct mapping, which can happen with HIGHPTE=y.

The result is an unhandled kernel paging request at some random address when
reading the current_kernel or current_user file under
/sys/kernel/debug/page_tables/.

Use the correct API, pte_offset_map()/pte_unmap(), to access PTE pages.
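
For illustration, here is a minimal sketch of the two access patterns. It is
not part of the patch, and the helper name dump_one_pte_page() is made up for
this example:

#include <linux/highmem.h>	/* pte_offset_map()/pte_unmap() */
#include <linux/printk.h>
#include <asm/pgtable.h>

/* Illustrative only: walk the PTE page that pmd points to. */
static void dump_one_pte_page(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte;
	int i;

	/*
	 * Broken with HIGHPTE=y: pmd_page_vaddr() is effectively a __va()
	 * lookup and only works while the PTE page is in the direct mapping:
	 *
	 *	pte = (pte_t *)pmd_page_vaddr(*pmd);
	 */

	/* Correct: temporarily map the PTE page, wherever it lives. */
	for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
		pte = pte_offset_map(pmd, addr);
		pr_info("pte %d: flags=%llx\n", i,
			(unsigned long long)pte_flags(*pte));
		pte_unmap(pte);
	}
}
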
Fixes: fe770bf0310d ('x86: clean up the page table dumper and add 32-bit support')
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: stable@...r.kernel.org
Cc: jgross@...e.com
Cc: JBeulich@...e.com
Cc: hpa@...or.com
Cc: aryabinin@...tuozzo.com
Cc: kirill.shutemov@...ux.intel.com
Link: https://lkml.kernel.org/r/1523971636-4137-1-git-send-email-joro@8bytes.org
---
arch/x86/mm/dump_pagetables.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 62a7e9f65dec..cc7ff5957194 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -18,6 +18,7 @@
 #include <linux/init.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
+#include <linux/highmem.h>
 #include <asm/pgtable.h>
@@ -334,16 +335,16 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
 			   pgprotval_t eff_in, unsigned long P)
 {
 	int i;
-	pte_t *start;
+	pte_t *pte;
 	pgprotval_t prot, eff;
-	start = (pte_t *)pmd_page_vaddr(addr);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		prot = pte_flags(*start);
-		eff = effective_prot(eff_in, prot);
 		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
+		pte = pte_offset_map(&addr, st->current_address);
+		prot = pte_flags(*pte);
+		eff = effective_prot(eff_in, prot);
 		note_page(m, st, __pgprot(prot), eff, 5);
-		start++;
+		pte_unmap(pte);
 	}
 }
 #ifdef CONFIG_KASAN
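
For reference, the reason pte_offset_map()/pte_unmap() is the right API here:
with CONFIG_HIGHPTE the PTE pages may be allocated in highmem, so the 32-bit
implementation maps the PTE page through a temporary kmap_atomic() mapping
and drops it again in pte_unmap(). A simplified sketch of that variant
(approximate, not quoted verbatim from the kernel headers):

#define pte_offset_map(dir, address)				\
	((pte_t *)kmap_atomic(pmd_page(*(dir))) +		\
	 pte_index((address)))
#define pte_unmap(pte)	kunmap_atomic((pte))

Mapping and unmapping inside the loop keeps each kmap_atomic() window short;
such mappings disable preemption and must be released in LIFO order, so they
should not be held across more work than necessary.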