Message-Id: <1480429817-16163-1-git-send-email-yuriy.kolerov@synopsys.com>
Date:   Tue, 29 Nov 2016 17:30:17 +0300
From:   Yuriy Kolerov <yuriy.kolerov@...opsys.com>
To:     linux-snps-arc@...ts.infradead.org
Cc:     Vineet.Gupta1@...opsys.com, Alexey.Brodkin@...opsys.com,
        linux-kernel@...r.kernel.org,
        Yuriy Kolerov <yuriy.kolerov@...opsys.com>
Subject: [PATCH v2] ARC: mm: Fix invalid page mapping in kernel with PAE40

The pfn_pte(pfn, prot) macro is implemented incorrectly: the pfn is an
unsigned long (32-bit on ARC), so the shift by PAGE_SHIFT is performed
in 32-bit arithmetic and the most significant byte of the PTE (Page
Table Entry) value is truncated. This leads to the creation of invalid
page mappings in a kernel with PAE40 whenever the physical page frame
resides above the 4 GB boundary.
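The fix below switches to the generic __pfn_to_phys() helper, which
widens the pfn to phys_addr_t (64-bit with PAE40) before shifting. A
sketch of the relevant generic definitions (paraphrased from
include/linux/pfn.h and include/asm-generic/memory_model.h, so treat
the exact form as approximate):

    /* include/linux/pfn.h (approximate) */
    #define PFN_PHYS(x)        ((phys_addr_t)(x) << PAGE_SHIFT)

    /* include/asm-generic/memory_model.h (approximate) */
    #define __pfn_to_phys(pfn) PFN_PHYS(pfn)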

The behaviour of a system with such corrupted mappings is undefined.
The kernel may crash when such pages are unmapped, because it may end
up accessing a bad address.

For example, if a kernel with 8 KB pages (PAGE_SHIFT = 13) tries to map
a virtual page to the physical frame at pfn 0x110000, the corresponding
physical address 0x220000000 lies above 4 GB; the 32-bit shift
truncates the PTE value to 0x20000000 and an invalid mapping is
created.
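The truncation can be reproduced outside the kernel. Below is a minimal
userspace sketch (not kernel code): uint32_t stands in for ARC's 32-bit
unsigned long, uint64_t for the 64-bit PTE value under PAE40, and a
prot value of 0 is assumed. It prints 0x20000000 for the broken variant
and 0x220000000 for the fixed one.

    /* Standalone demo of the pfn_pte truncation; build with any C compiler. */
    #include <stdint.h>
    #include <stdio.h>

    #define DEMO_PAGE_SHIFT 13                     /* 8 KB pages */

    /* Broken: the shift happens in 32-bit arithmetic, high bits are lost. */
    static uint64_t pfn_pte_broken(uint32_t pfn, uint32_t prot)
    {
            return (pfn << DEMO_PAGE_SHIFT) | prot;
    }

    /* Fixed: widen the pfn first, as __pfn_to_phys() does via phys_addr_t. */
    static uint64_t pfn_pte_fixed(uint32_t pfn, uint32_t prot)
    {
            return ((uint64_t)pfn << DEMO_PAGE_SHIFT) | prot;
    }

    int main(void)
    {
            uint32_t pfn = 0x110000;       /* physical 0x220000000, above 4 GB */

            printf("broken pte: 0x%llx\n",
                   (unsigned long long)pfn_pte_broken(pfn, 0));
            printf("fixed pte:  0x%llx\n",
                   (unsigned long long)pfn_pte_fixed(pfn, 0));
            return 0;
    }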

Signed-off-by: Yuriy Kolerov <yuriy.kolerov@...opsys.com>
---
 arch/arc/include/asm/pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 89eeb37..e94ca72 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -280,7 +280,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
 
 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
-- 
2.7.4

