Message-ID: <1520540118.2693.103.camel@hpe.com>
Date: Thu, 8 Mar 2018 19:30:23 +0000
From: "Kani, Toshi" <toshi.kani@....com>
To: "will.deacon@....com" <will.deacon@....com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bp@...e.de" <bp@...e.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"guohanjun@...wei.com" <guohanjun@...wei.com>,
"wxf.wang@...ilicon.com" <wxf.wang@...ilicon.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"x86@...nel.org" <x86@...nel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hpa@...or.com" <hpa@...or.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"mingo@...hat.com" <mingo@...hat.com>,
"Hocko, Michal" <mhocko@...e.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 1/2] mm/vmalloc: Add interfaces to free unused page table
On Thu, 2018-03-08 at 18:04 +0000, Will Deacon wrote:
:
> > diff --git a/lib/ioremap.c b/lib/ioremap.c
> > index b808a390e4c3..54e5bbaa3200 100644
> > --- a/lib/ioremap.c
> > +++ b/lib/ioremap.c
> > @@ -91,7 +91,8 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
> >
> > if (ioremap_pmd_enabled() &&
> > ((next - addr) == PMD_SIZE) &&
> > - IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
> > + IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
> > + pmd_free_pte_page(pmd)) {
>
> I find it a bit weird that we're postponing this to the subsequent map. If
> we want to address the break-before-make issue that was causing a panic on
> arm64, then I think it would be better to do this on the unmap path to avoid
> duplicating TLB invalidation.
Hi Will,
Yes, I started looking into doing it in the unmap path, but found the
following issues:
- The iounmap() path is shared with vunmap(). Since vmap() only
supports pte mappings, making vunmap() free pte pages adds overhead
for regular vmap() users, who do not need the pte pages freed.
- Checking in the unmap path whether all entries in a pte page have
been cleared is racy, and serializing that check is expensive (see the
sketch after this list).
- The unmap path calls free_vmap_area_noflush() to do lazy TLB purges.
Clearing a pud/pmd entry before the lazy TLB purge would require an
extra TLB purge.
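
To illustrate the second point, here is a hypothetical unmap-side
check (pte_page_is_empty() is my own name for illustration, not part
of the patch):

/*
 * Hypothetical helper, not in this patch: scan the pte page under a
 * pmd entry and report whether every entry has been cleared.  Without
 * extra locking, a concurrent ioremap()/vmap() can install a new pte
 * between this scan and a later pmd_clear(), which is the race
 * mentioned above.
 */
static bool pte_page_is_empty(pmd_t *pmd)
{
	pte_t *pte = (pte_t *)pmd_page_vaddr(*pmd);
	int i;

	for (i = 0; i < PTRS_PER_PTE; i++)
		if (!pte_none(pte[i]))
			return false;

	return true;
}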
Hence, I decided to postpone the freeing and do it in the ioremap path
when a pud/pmd mapping is set. The "break" on arm64 happens when a pmd
entry is updated without being purged first, so the unmap path itself
is not broken. I understand that arm64 may need an extra TLB purge in
pmd_free_pte_page(), but this limits the overhead to the case where a
pud/pmd mapping is being set up.
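
To make the approach concrete, below is a rough sketch of an x86-style
pmd_free_pte_page() along the lines of this series (a sketch, not the
exact patch code; the arm64 version would add the extra TLB purge
discussed above):

/*
 * Sketch: if the pmd entry points to a pte page, clear the entry and
 * free the page so that ioremap_pmd_range() can install a huge pmd
 * mapping in its place.  Returns 1 on success, so the caller proceeds
 * with the pmd mapping.
 */
int pmd_free_pte_page(pmd_t *pmd)
{
	pte_t *pte;

	if (pmd_none(*pmd))
		return 1;

	pte = (pte_t *)pmd_page_vaddr(*pmd);
	pmd_clear(pmd);
	/* arm64 would need its extra TLB purge here, e.g. flush_tlb_all() */
	free_page((unsigned long)pte);

	return 1;
}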
Thanks,
-Toshi