Message-Id: <20201210143527.2398-1-jiangshanlai@gmail.com>
Date: Thu, 10 Dec 2020 22:35:24 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <laijs@...ux.alibaba.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Subject: [PATCH V2 1/3] x86/mm/pti: handle unaligned address for pmd clone in pti_clone_pagetable()
From: Lai Jiangshan <laijs@...ux.alibaba.com>
Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
in pti_clone_pagetable()") handles unaligned addresses for unmapped
PUDs/PMDs etc., but an unaligned address for a mapped pmd also needs
to be handled.
For a mapped pmd, if @addr is not aligned to PMD_SIZE, then with the
current logic the next pmd (when level == PTI_CLONE_PMD, or when the
next pmd is large) or the last ptes in the next pmd (when level ==
PTI_CLONE_PTE) will not be cloned when @end < @addr + PMD_SIZE.
Forcing alignment in the caller is not a good idea because of one of
the cases (see the comments in the code), so handle the alignment in
pti_clone_pagetable() itself.
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
arch/x86/mm/pti.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 1aab92930569..7ee99ef13a99 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -342,6 +342,21 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
}
if (pmd_large(*pmd) || level == PTI_CLONE_PMD) {
+ /*
+ * pti_clone_kernel_text() might be called with
+ * @start not aligned to PMD_SIZE. We need to make
+ * it aligned, otherwise the next pmd or last ptes
+ * are not cloned when @end < @addr + PMD_SIZE.
+ *
+ * We can't force pti_clone_kernel_text() to align
+ * the @addr to PMD_SIZE when level == PTI_CLONE_PTE.
+ * But the problem can still exist when the
+ * first pmd is large, and it is not a good idea to
+ * check whether the first pmd is large in the
+ * caller, so simply align it here.
+ */
+ addr = round_down(addr, PMD_SIZE);
+
target_pmd = pti_user_pagetable_walk_pmd(addr);
if (WARN_ON(!target_pmd))
return;
--
2.19.1.6.gb485710b