Message-ID: <20180523140157.GG26965@arm.com>
Date: Wed, 23 May 2018 15:01:58 +0100
From: Will Deacon <will.deacon@....com>
To: Chintan Pandya <cpandya@...eaurora.org>
Cc: Arnd Bergmann <arnd@...db.de>, Mark Rutland <mark.rutland@....com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Marc Zyngier <marc.zyngier@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Philip Elcan <pelcan@...eaurora.org>,
James Morse <james.morse@....com>,
Kristina Martsenko <kristina.martsenko@....com>,
Toshi Kani <toshi.kani@....com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Joerg Roedel <joro@...tes.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org
Subject: Re: [PATCH v9 3/4] arm64: Implement page table free interfaces
Hi Chintan,
[as a side note: I'm confused about the status of this patch series, as part
of it was reposted separately by Toshi. Please can you work together?]
On Mon, Apr 30, 2018 at 01:11:33PM +0530, Chintan Pandya wrote:
> Implement pud_free_pmd_page() and pmd_free_pte_page().
>
> Implementation requires:
> 1) Clearing the current pud/pmd entry
> 2) Invalidating the TLB, which could still hold the
>    previously valid (now stale) entry
> 3) Freeing the unused next-level page tables
>
> Signed-off-by: Chintan Pandya <cpandya@...eaurora.org>
> ---
> arch/arm64/mm/mmu.c | 29 +++++++++++++++++++++++++----
> 1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index da98828..0f651db 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -45,6 +45,7 @@
> #include <asm/memblock.h>
> #include <asm/mmu_context.h>
> #include <asm/ptdump.h>
> +#include <asm/tlbflush.h>
>
> #define NO_BLOCK_MAPPINGS BIT(0)
> #define NO_CONT_MAPPINGS BIT(1)
> @@ -973,12 +974,32 @@ int pmd_clear_huge(pmd_t *pmdp)
> return 1;
> }
>
> -int pud_free_pmd_page(pud_t *pud, unsigned long addr)
> +int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
> {
> - return pud_none(*pud);
> + pmd_t *table;
> +
> + if (pmd_present(READ_ONCE(*pmdp))) {
Might also be worth checking pmd_table here, just in case. (same for pud)
> + table = __va(pmd_val(*pmdp));
Can you avoid dereferencing *pmdp twice, and instead READ_ONCE into a local
variable, please? (same for pud)
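i.e. something along these lines, covering both points (just a sketch, untested):

	pmd_t pmd = READ_ONCE(*pmdp);

	if (pmd_present(pmd) && pmd_table(pmd)) {
		table = __va(pmd_val(pmd));
		...
	}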
> + pmd_clear(pmdp);
> + __flush_tlb_kernel_pgtable(addr);
> + free_page((unsigned long) table);
Shouldn't this be pte_free_kernel, to pair with pte_alloc_kernel which
was used to allocate the page in the first place? (similarly for pud)
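i.e. something like the below, on the assumption that passing a NULL mm is
fine here (I haven't double-checked):

	pte_free_kernel(NULL, (pte_t *)table);

with pmd_free(NULL, ...) on the pud side.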
> + }
> + return 1;
> }
>
> -int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
> +int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
> {
> - return pmd_none(*pmd);
> + pmd_t *table;
> + int i;
> +
> + if (pud_present(READ_ONCE(*pudp))) {
> + table = __va(pud_val(*pudp));
> + for (i = 0; i < PTRS_PER_PMD; i++)
> + pmd_free_pte_page(&table[i], addr + (i * PMD_SIZE));
I think it would be cleaner to write this as a do { ... } while, for
consistency with the ioremap and vmalloc code.
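e.g. a rough, untested sketch (pmdp, next and end being new locals):

	pmdp = table;
	next = addr;
	end = addr + PUD_SIZE;

	do {
		pmd_free_pte_page(pmdp, next);
	} while (pmdp++, next += PMD_SIZE, next != end);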
Will