Message-Id: <20200227132236.315442234@linuxfoundation.org>
Date: Thu, 27 Feb 2020 14:36:28 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Christophe Leroy <christophe.leroy@....fr>,
Michael Ellerman <mpe@...erman.id.au>
Subject: [PATCH 5.4 048/135] powerpc/hugetlb: Fix 8M hugepages on 8xx
From: Christophe Leroy <christophe.leroy@....fr>
commit 50a175dd18de7a647e72aca7daf4744e3a5a81e3 upstream.
With HW assistance, all page tables must be 4k aligned; the 8xx drops
the last 12 bits during the walk.
Redefine HUGEPD_SHIFT_MASK to mask the last 12 bits out. HUGEPD_SHIFT_MASK
is used for the alignment of the page table cache.
Fixes: 22569b881d37 ("powerpc/8xx: Enable 8M hugepage support with HW assistance")
Cc: stable@...r.kernel.org # v5.0+
Signed-off-by: Christophe Leroy <christophe.leroy@....fr>
Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
Link: https://lore.kernel.org/r/778b1a248c4c7ca79640eeff7740044da6a220a0.1581264115.git.christophe.leroy@c-s.fr
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/powerpc/include/asm/page.h | 5 +++++
1 file changed, 5 insertions(+)
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -295,8 +295,13 @@ static inline bool pfn_valid(unsigned lo
/*
* Some number of bits at the level of the page table that points to
* a hugepte are used to encode the size. This masks those bits.
+ * On 8xx, HW assistance requires 4k alignment for the hugepte.
*/
+#ifdef CONFIG_PPC_8xx
+#define HUGEPD_SHIFT_MASK 0xfff
+#else
#define HUGEPD_SHIFT_MASK 0x3f
+#endif
#ifndef __ASSEMBLY__
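
[Editor's note] To illustrate why the new value gives the required alignment:
the page table cache is aligned to HUGEPD_SHIFT_MASK + 1, so 0xfff yields a
4k-aligned hugepte table on the 8xx (the low 12 bits the HW walker drops are
guaranteed to be zero), while the generic 0x3f only guarantees 64-byte
alignment. The standalone sketch below is illustrative only; the helper
hugepd_min_align() is hypothetical and not a kernel function.

    /*
     * Illustrative sketch (not kernel code): how HUGEPD_SHIFT_MASK
     * translates into the minimum alignment of the page table cache.
     */
    #include <stdio.h>

    #define HUGEPD_SHIFT_MASK_8XX     0xfff  /* new 8xx value from this patch */
    #define HUGEPD_SHIFT_MASK_GENERIC 0x3f   /* value kept for other platforms */

    /* Hypothetical helper: tables must be aligned so the masked low bits
     * of a hugepd pointer stay free to encode the page size. */
    static unsigned long hugepd_min_align(unsigned long shift_mask)
    {
            return shift_mask + 1;
    }

    int main(void)
    {
            /* 0x1000 (4k), matching the 8xx HW walker's requirement */
            printf("8xx alignment:     0x%lx\n",
                   hugepd_min_align(HUGEPD_SHIFT_MASK_8XX));
            /* 0x40 (64 bytes), sufficient elsewhere */
            printf("generic alignment: 0x%lx\n",
                   hugepd_min_align(HUGEPD_SHIFT_MASK_GENERIC));
            return 0;
    }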