Message-ID: <20191204164143.GB2810@hirez.programming.kicks-ass.net>
Date: Wed, 4 Dec 2019 17:41:43 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
Cc: Will Deacon <will.deacon@....com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Nicholas Piggin <npiggin@...il.com>,
Linux-Arch <linux-arch@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Russell King <linux@...linux.org.uk>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Rik van Riel <riel@...riel.com>,
Yoshinori Sato <ysato@...rs.sourceforge.jp>,
Rich Felker <dalias@...c.org>,
Linux-sh list <linux-sh@...r.kernel.org>,
Guenter Roeck <linux@...ck-us.net>
Subject: Re: [PATCH v6 10/18] sh/tlb: Convert SH to generic mmu_gather
On Wed, Dec 04, 2019 at 04:07:53PM +0100, Geert Uytterhoeven wrote:
> On Wed, Dec 4, 2019 at 2:35 PM Peter Zijlstra <peterz@...radead.org> wrote:
> > Does this fare better?
>
> Yes. Migo-R is happy again.
> Tested-by: Geert Uytterhoeven <geert+renesas@...der.be>
>
> > --- a/arch/sh/include/asm/pgalloc.h
> > +++ b/arch/sh/include/asm/pgalloc.h
> > @@ -36,9 +36,7 @@ do { \
> > #if CONFIG_PGTABLE_LEVELS > 2
> > #define __pmd_free_tlb(tlb, pmdp, addr) \
> > do { \
> > - struct page *page = virt_to_page(pmdp); \
> > - pgtable_pmd_page_dtor(page); \
> > - tlb_remove_page((tlb), page); \
> > + pmd_free((tlb)->mm, (pmdp)); \
> > } while (0);
> > #endif
OK, so I was going to write a Changelog to go with that, but then I
realized that while this works and matches what was there before the
patch, I'm not sure it is in fact correct.
With this applied (and also before the patch) we're freeing the PMD
before we've done the TLB invalidate, which seems wrong!
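To spell out the ordering concern: pages queued on the gather (through
tlb_remove_page()) are only released after the invalidate, whereas
pmd_free() releases the table right away. Roughly what mm/mmu_gather.c
does:

	void tlb_flush_mmu(struct mmu_gather *tlb)
	{
		tlb_flush_mmu_tlbonly(tlb);	/* TLB invalidate first */
		tlb_flush_mmu_free(tlb);	/* only then free the queued pages */
	}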
Looking at the size of that pmd_cache, it works out to
2^(30-(12+12-3)+3) == 2^12 bytes, which is exactly 1 page for
PAGE_SIZE_4K, and less for the larger page sizes.
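That is, reading 12 as PAGE_SHIFT, 3 as PTE_MAGNITUDE and 2^30 as the
range covered by a PGD entry (my reading of the sh headers, so take it
with a grain of salt), the allocation size expands as:

	/*
	 * PTRS_PER_PMD * (1 << PTE_MAGNITUDE)
	 *	= 2^(30 - PMD_SHIFT) * 2^3	(PMD_SHIFT = 12 + 12 - 3 = 21)
	 *	= 2^9 * 2^3 = 2^12 = 4096	== PAGE_SIZE for PAGE_SIZE_4K
	 */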
I'm thinking perhaps we should do something like the below instead?
---
arch/sh/mm/pgtable.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/arch/sh/mm/pgtable.c b/arch/sh/mm/pgtable.c
index 5c8f9247c3c2..fac7e822fd0c 100644
--- a/arch/sh/mm/pgtable.c
+++ b/arch/sh/mm/pgtable.c
@@ -5,9 +5,6 @@
#define PGALLOC_GFP GFP_KERNEL | __GFP_ZERO
static struct kmem_cache *pgd_cachep;
-#if PAGETABLE_LEVELS > 2
-static struct kmem_cache *pmd_cachep;
-#endif
void pgd_ctor(void *x)
{
@@ -23,11 +20,6 @@ void pgtable_cache_init(void)
pgd_cachep = kmem_cache_create("pgd_cache",
PTRS_PER_PGD * (1<<PTE_MAGNITUDE),
PAGE_SIZE, SLAB_PANIC, pgd_ctor);
-#if PAGETABLE_LEVELS > 2
- pmd_cachep = kmem_cache_create("pmd_cache",
- PTRS_PER_PMD * (1<<PTE_MAGNITUDE),
- PAGE_SIZE, SLAB_PANIC, NULL);
-#endif
}
pgd_t *pgd_alloc(struct mm_struct *mm)
@@ -48,11 +40,7 @@ void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
- return kmem_cache_alloc(pmd_cachep, PGALLOC_GFP);
-}
-
-void pmd_free(struct mm_struct *mm, pmd_t *pmd)
-{
- kmem_cache_free(pmd_cachep, pmd);
+ BUILD_BUG_ON(PTRS_PER_PMD * (1<<PTE_MAGNITUDE) > PAGE_SIZE);
+ return (pmd_t *)__get_free_page(PGALLOC_GFP);
}
#endif /* PAGETABLE_LEVELS > 2 */
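For completeness: with the pmd backed by a whole page again, the
pgalloc.h side could then go back to queueing it on the gather, and
pmd_free() needs a page-based replacement for the kmem_cache one
removed above. Rough sketch only, not compile-tested:

	/* arch/sh/include/asm/pgalloc.h -- sketch */
	#define __pmd_free_tlb(tlb, pmdp, addr)	\
		tlb_remove_page((tlb), virt_to_page(pmdp))

	/* arch/sh/mm/pgtable.c -- sketch */
	void pmd_free(struct mm_struct *mm, pmd_t *pmd)
	{
		free_page((unsigned long)pmd);
	}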