Message-ID: <lsq.1539530741.828213442@decadent.org.uk>
Date: Sun, 14 Oct 2018 16:25:41 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, aryabinin@...tuozzo.com, hpa@...or.com,
"Thomas Gleixner" <tglx@...utronix.de>,
"Joerg Roedel" <jroedel@...e.de>, jgross@...e.com,
JBeulich@...e.com, kirill.shutemov@...ux.intel.com
Subject: [PATCH 3.16 165/366] x86/mm: Prevent kernel Oops in PTDUMP code with HIGHPTE=y
3.16.60-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Joerg Roedel <jroedel@...e.de>
commit d6ef1f194b7569af8b8397876dc9ab07649d63cb upstream.
The walk_pte_level() function just uses __va to get the virtual address of
the PTE page, but that breaks when the PTE page is not in the direct
mapping with HIGHPTE=y.
The result is an unhandled kernel paging request at some random address
when accessing the current_kernel or current_user file.
Use the correct API to access PTE pages.
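
As background for reviewers, here is a minimal sketch of the failure and the fix. It is not the patched dump_pagetables.c itself; walk_one_pte_page() and its pmd/addr parameters are made up for illustration. With CONFIG_HIGHPTE=y a PTE page may live in highmem and so has no direct-mapping address, meaning pointers derived from __va()/pmd_page_vaddr() can fault, while pte_offset_map()/pte_unmap() create and drop a temporary mapping and work in either configuration.

/* Illustrative sketch only -- not the code touched by this patch. */
#include <linux/mm.h>
#include <linux/highmem.h>	/* pte_offset_map()/pte_unmap() */

static void walk_one_pte_page(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte;
	pgprot_t prot;
	int i;

	/*
	 * Broken with CONFIG_HIGHPTE=y: pmd_page_vaddr() computes a
	 * direct-mapping (__va()) address, but a highmem PTE page has
	 * none, so dereferencing the result oopses:
	 *
	 *	pte = (pte_t *)pmd_page_vaddr(*pmd);
	 */

	for (i = 0; i < PTRS_PER_PTE; i++, addr += PAGE_SIZE) {
		/* Correct: temporarily map the PTE page (kmap_atomic on highmem). */
		pte = pte_offset_map(pmd, addr);
		prot = pte_pgprot(*pte);
		pte_unmap(pte);		/* drop the temporary mapping */

		(void)prot;	/* a real walker would hand this to note_page() */
	}
}

With CONFIG_HIGHPTE unset, pte_offset_map() on 3.16-era x86 reduces to plain direct-mapping arithmetic and pte_unmap() is a no-op, so the fixed loop costs nothing in that configuration.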
Fixes: fe770bf0310d ('x86: clean up the page table dumper and add 32-bit support')
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: jgross@...e.com
Cc: JBeulich@...e.com
Cc: hpa@...or.com
Cc: aryabinin@...tuozzo.com
Cc: kirill.shutemov@...ux.intel.com
Link: https://lkml.kernel.org/r/1523971636-4137-1-git-send-email-joro@8bytes.org
[bwh: Backported to 3.16:
- Keep using pte_pgprot() to get protection flags
- Adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -16,6 +16,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/highmem.h>
 
 #include <asm/pgtable.h>
 
@@ -263,15 +264,16 @@ static void walk_pte_level(struct seq_fi
 							unsigned long P)
 {
 	int i;
-	pte_t *start;
+	pte_t *pte;
 
-	start = (pte_t *) pmd_page_vaddr(addr);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pgprot_t prot = pte_pgprot(*start);
+		pgprot_t prot;
 
 		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
+		pte = pte_offset_map(&addr, st->current_address);
+		prot = pte_pgprot(*pte);
 		note_page(m, st, prot, 4);
-		start++;
+		pte_unmap(pte);
 	}
 }
 