Open Source and information security mailing list archives
Message-Id: <20201130152516.2387-1-jiangshanlai@gmail.com>
Date:   Mon, 30 Nov 2020 23:25:15 +0800
From:   Lai Jiangshan <jiangshanlai@...il.com>
To:     linux-kernel@...r.kernel.org
Cc:     Lai Jiangshan <laijs@...ux.alibaba.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>
Subject: [PATCH 1/2] x86/mm/pti: Check unaligned address for pmd clone in pti_clone_pagetable()

From: Lai Jiangshan <laijs@...ux.alibaba.com>

Commit 825d0b73cd752 ("x86/mm/pti: Handle unaligned address gracefully
in pti_clone_pagetable()") handles unaligned addresses well for unmapped
PUD/PMD entries, but unaligned addresses also need the same care in the
pmd_large() and PTI_CLONE_PMD paths.

For example, suppose pti_clone_pagetable(start, end, PTI_CLONE_PTE) is
called with @start = @pmd_aligned_addr + 100*PAGE_SIZE, and let
@bug_addr = @pmd_aligned_addr + x*PMD_SIZE, with @end larger than
@bug_addr + PMD_SIZE + PAGE_SIZE.

@bug_addr is then pmd aligned. If @bug_addr is mapped as a large page
while @bug_addr + PMD_SIZE is not, it is easy to see that
[@bug_addr + PMD_SIZE, @bug_addr + PMD_SIZE + PAGE_SIZE) is not cloned.
(In the code, @addr = @bug_addr + 100*PAGE_SIZE is handled as a large
page and is advanced to @bug_addr + 100*PAGE_SIZE + PMD_SIZE, which is
not large-page mapped, so 100 pages are skipped without being cloned.)

The situation is similar for PTI_CLONE_PMD when
@bug_addr + 100*PAGE_SIZE + PMD_SIZE is larger than @end, even if
@bug_addr is not mapped as a large page. In that case several pages
after @bug_addr + PMD_SIZE are not cloned.

We also use addr = round_up(addr + 1, PAGE_SIZE) in the PTE branch for
coding consistency; it fixes nothing there, since the addresses are
always at least PAGE_ALIGNED.

No real bug has been found; this patch is just for the sake of robustness.

Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
 arch/x86/mm/pti.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 1aab92930569..a229320515da 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -374,7 +374,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			 */
 			*target_pmd = *pmd;
 
-			addr += PMD_SIZE;
+			addr = round_up(addr + 1, PMD_SIZE);
 
 		} else if (level == PTI_CLONE_PTE) {
 
@@ -401,7 +401,7 @@ pti_clone_pgtable(unsigned long start, unsigned long end,
 			/* Clone the PTE */
 			*target_pte = *pte;
 
-			addr += PAGE_SIZE;
+			addr = round_up(addr + 1, PAGE_SIZE);
 
 		} else {
 			BUG();
-- 
2.19.1.6.gb485710b
