Message-ID: <lsq.1539530741.922183139@decadent.org.uk>
Date:   Sun, 14 Oct 2018 16:25:41 +0100
From:   Ben Hutchings <ben@...adent.org.uk>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC:     akpm@...ux-foundation.org, "Michael Ellerman" <mpe@...erman.id.au>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Subject: [PATCH 3.16 095/366] powerpc/mm/hugetlb: initialize the pagetable
 cache correctly for hugetlb

3.16.60-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>

commit 6fa504835d6969144b2bd3699684dd447c789ba2 upstream.

With a 64k page size, we have hugetlb pte entries at the pmd and pud levels for
book3s64, so we don't need to create a separate page table cache for them. With
4k we need to make sure the hugepd page table cache for 16M is placed at the PUD
level and the one for 16G at the PGD level.

Simplify all this by not using the HUGEPD_PGD_SHIFT/HUGEPD_PUD_SHIFT macros,
which are confusing for book3s64.

Without this patch, with a 64k page size we create pagetable caches with shift
values 10 and 7, which are not used at all.

Fixes: 419df06eea5b ("powerpc: Reduce the PTE_INDEX_SIZE")

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
[bwh: Backported to 3.16: Don't use an #ifdef because this implementation of
 hugetlbpage_init() is only used if CONFIG_PPC_BOOK3S_64 is enabled.]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -175,9 +175,6 @@ static int __hugepte_alloc(struct mm_str
 #ifdef CONFIG_PPC_FSL_BOOK3E
 #define HUGEPD_PGD_SHIFT PGDIR_SHIFT
 #define HUGEPD_PUD_SHIFT PUD_SHIFT
-#else
-#define HUGEPD_PGD_SHIFT PUD_SHIFT
-#define HUGEPD_PUD_SHIFT PMD_SHIFT
 #endif
 
 #ifdef CONFIG_PPC_BOOK3S_64
@@ -871,15 +868,17 @@ static int __init hugetlbpage_init(void)
 
 		shift = mmu_psize_to_shift(psize);
 
-		if (add_huge_page_size(1ULL << shift) < 0)
+		if (shift > PGDIR_SHIFT)
 			continue;
-
-		if (shift < PMD_SHIFT)
-			pdshift = PMD_SHIFT;
-		else if (shift < PUD_SHIFT)
+		else if (shift > PUD_SHIFT)
+			pdshift = PGDIR_SHIFT;
+		else if (shift > PMD_SHIFT)
 			pdshift = PUD_SHIFT;
 		else
-			pdshift = PGDIR_SHIFT;
+			pdshift = PMD_SHIFT;
+
+		if (add_huge_page_size(1ULL << shift) < 0)
+			continue;
 		/*
 		 * if we have pdshift and shift value same, we don't
 		 * use pgt cache for hugepd.

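For illustration, here is a minimal user-space sketch of the reordered pdshift
selection in hugetlbpage_init() above. The PMD_SHIFT/PUD_SHIFT/PGDIR_SHIFT
values below are assumed placeholders for a 4k book3s64 layout, not copied from
the kernel headers, and pdshift_for() is just a stand-in for the branch inside
the loop.

#include <stdio.h>

/* Assumed, illustrative shift values for a 4k book3s64 layout. */
#define PMD_SHIFT   21
#define PUD_SHIFT   28
#define PGDIR_SHIFT 37

/* Mirrors the reordered branch in hugetlbpage_init() after this patch. */
static int pdshift_for(unsigned int shift)
{
	if (shift > PGDIR_SHIFT)
		return -1;		/* too large: the loop would just continue */
	else if (shift > PUD_SHIFT)
		return PGDIR_SHIFT;	/* e.g. 16G pages hang off the PGD */
	else if (shift > PMD_SHIFT)
		return PUD_SHIFT;	/* e.g. 16M pages hang off the PUD */
	else
		return PMD_SHIFT;
}

int main(void)
{
	unsigned int shifts[] = { 24 /* 16M */, 34 /* 16G */ };

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++)
		printf("page shift %u -> pdshift %d\n",
		       shifts[i], pdshift_for(shifts[i]));
	return 0;
}

With these assumed values it prints pdshift 28 (PUD level) for 16M pages and
37 (PGD level) for 16G pages, matching the placement described in the
changelog; with the old ordering, the pdshift computation only ran after
add_huge_page_size() had already accepted the size.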