Message-Id: <4ad03047ac61bfbdad3edb92542dedc807fc3cf4.1581011735.git.christophe.leroy@c-s.fr>
Date: Thu, 6 Feb 2020 19:21:54 +0000 (UTC)
From: Christophe Leroy <christophe.leroy@....fr>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
aneesh.kumar@...ux.ibm.com
Cc: linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org
Subject: [PATCH 1/2] powerpc/8xx: Merge 8M hugepage slice and basepage slice
On 8xx, slices are used because hugepages (512k or 8M) and small
pages (4k or 16k) cannot share the same PGD entry. However, as an 8M
page entirely covers two PGD entries (one PGD entry covers 4M), there
is implicitly no conflict between 8M pages and any other page size.
So 8M is compatible with the basepage size as well.
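A minimal sketch of that size arithmetic, with made-up constant names
(not the kernel's), for readers who don't have the 8xx geometry in mind:

	/* Illustrative only: one PGD entry maps 1024 4k PTEs, i.e. 4M. */
	#define EXAMPLE_PGDIR_SHIFT	22				/* log2(4M) */
	#define EXAMPLE_PGDIR_SIZE	(1UL << EXAMPLE_PGDIR_SHIFT)	/* 4M */
	#define EXAMPLE_HUGE_8M		(8UL << 20)			/* 8M */

	/*
	 * EXAMPLE_HUGE_8M / EXAMPLE_PGDIR_SIZE == 2: an 8M page always owns
	 * two whole PGD entries, so it can never end up sharing one with
	 * 4k/16k base pages.
	 */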
Remove the struct slice_mask mask_8m from mm_context_t and make
vma_mmu_pagesize() rely on vma_kernel_pagesize(), as the base
slice mask can now cover several page sizes.
Signed-off-by: Christophe Leroy <christophe.leroy@....fr>
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 7 ++-----
arch/powerpc/mm/hugetlbpage.c | 3 ++-
2 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index 76af5b0cb16e..54f7f3362edb 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -215,9 +215,8 @@ typedef struct {
unsigned char low_slices_psize[SLICE_ARRAY_SIZE];
unsigned char high_slices_psize[0];
unsigned long slb_addr_limit;
- struct slice_mask mask_base_psize; /* 4k or 16k */
+ struct slice_mask mask_base_psize; /* 4k or 16k or 8M */
struct slice_mask mask_512k;
- struct slice_mask mask_8m;
#endif
void *pte_frag;
} mm_context_t;
@@ -257,10 +256,8 @@ static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psiz
{
if (psize == MMU_PAGE_512K)
return &ctx->mask_512k;
- if (psize == MMU_PAGE_8M)
- return &ctx->mask_8m;
- BUG_ON(psize != mmu_virtual_psize);
+ BUG_ON(psize != mmu_virtual_psize && psize != MMU_PAGE_8M);
return &ctx->mask_base_psize;
}
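For reference, slice_mask_for_size() as it reads after this hunk,
reassembled from the diff context above: 8M now shares mask_base_psize
with the base page size, and only 512k keeps a dedicated mask.

	static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
	{
		if (psize == MMU_PAGE_512K)
			return &ctx->mask_512k;
		BUG_ON(psize != mmu_virtual_psize && psize != MMU_PAGE_8M);
		return &ctx->mask_base_psize;
	}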
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index edf511c2a30a..0b4ab741bf09 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -551,7 +551,8 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
{
/* With radix we don't use slice, so derive it from vma*/
- if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) {
+ if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !IS_ENABLED(CONFIG_PPC_8xx) &&
+ !radix_enabled()) {
unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
return 1UL << mmu_psize_to_shift(psize);
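The rest of vma_mmu_pagesize() sits outside the hunk; the tail shown
below is a sketch inferred from the commit message (8xx now skips the
slice branch and falls through to vma_kernel_pagesize()), not quoted
from the file:

	unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
	{
		/* With radix we don't use slices, and on 8xx the slice masks
		 * no longer encode the 8M size, so derive it from the vma.
		 */
		if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !IS_ENABLED(CONFIG_PPC_8xx) &&
		    !radix_enabled()) {
			unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);

			return 1UL << mmu_psize_to_shift(psize);
		}

		return vma_kernel_pagesize(vma);
	}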
--
2.25.0