Message-ID: <20180919115158.GD24124@hirez.programming.kicks-ass.net>
Date: Wed, 19 Sep 2018 13:51:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Will Deacon <will.deacon@....com>
Cc: aneesh.kumar@...ux.vnet.ibm.com, akpm@...ux-foundation.org,
npiggin@...il.com, linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux@...linux.org.uk,
heiko.carstens@...ibm.com
Subject: Re: [RFC][PATCH 01/11] asm-generic/tlb: Provide a comment
On Fri, Sep 14, 2018 at 05:48:57PM +0100, Will Deacon wrote:
> > + * - mmu_gather::fullmm
> > + *
> > + * A flag set by tlb_gather_mmu() to indicate we're going to free
> > + * the entire mm; this allows a number of optimizations.
> > + *
> > + * XXX list optimizations
>
> On arm64, we can elide the invalidation altogether because we won't
> re-allocate the ASID. We also have an invalidate-by-ASID (mm) instruction,
> which we could use if we needed to.
Right, but I was also struggling to put into words the normal fullmm
case.
I now ended up with:
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -82,7 +82,11 @@
* A flag set by tlb_gather_mmu() to indicate we're going to free
* the entire mm; this allows a number of optimizations.
*
- * XXX list optimizations
+ * - We can ignore tlb_{start,end}_vma(); because we don't
+ * care about ranges. Everything will be shot down.
+ *
+ * - (RISC) architectures that use ASIDs can cycle to a new ASID
+ * and delay the invalidation until ASID space runs out.
*
* - mmu_gather::need_flush_all
*
Does that about cover things, or do we need more?
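
To make that concrete, here is a stand-alone toy model (illustration only, not
the kernel code; asid_cycle() and example_tlb_finish() are made-up helpers) of
what the fullmm case buys an ASID-based architecture: no range tracking, no
flush, just never reuse the old ASID:

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the fields relevant to the fullmm fast path. */
struct mmu_gather {
	bool fullmm;            /* set by tlb_gather_mmu() when freeing the whole mm */
	unsigned long start;
	unsigned long end;
};

static unsigned int current_asid = 1;

/* Made-up helper: hand out a fresh ASID instead of invalidating anything. */
static unsigned int asid_cycle(void)
{
	return ++current_asid;
}

static void example_tlb_finish(struct mmu_gather *tlb, unsigned int *mm_asid)
{
	if (tlb->fullmm) {
		/*
		 * The whole address space is going away: ranges don't matter,
		 * so skip the flush and simply retire the old ASID.
		 */
		*mm_asid = asid_cycle();
		return;
	}
	printf("ranged flush: %#lx-%#lx\n", tlb->start, tlb->end);
}

int main(void)
{
	unsigned int mm_asid = 1;
	struct mmu_gather tlb = { .fullmm = true };

	example_tlb_finish(&tlb, &mm_asid);
	printf("mm now uses ASID %u; stale entries age out when ASID space runs dry\n",
	       mm_asid);
	return 0;
}
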
> > + *
> > + * - mmu_gather::need_flush_all
> > + *
> > + * A flag that can be set by the arch code if it wants to force
> > + * flush the entire TLB irrespective of the range. For instance
> > + * x86-PAE needs this when changing top-level entries.
> > + *
> > + * And requires the architecture to provide and implement tlb_flush().
> > + *
> > + * tlb_flush() may, in addition to the above mentioned mmu_gather fields, make
> > + * use of:
> > + *
> > + * - mmu_gather::start / mmu_gather::end
> > + *
> > + * which (when !need_flush_all; fullmm will have start = end = ~0UL) provides
> > + * the range that needs to be flushed to cover the pages to be freed.
>
> I don't understand the mention of need_flush_all here -- I didn't think it
> was used by the core code at all.
The core indeed does not use that flag; but if the architecture sets it,
the range is still ignored.
Can you suggest clearer wording?
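
Maybe an example makes it clearer; a stand-alone toy model (not the real arch
code; flush_all(), flush_range() and example_tlb_flush() are made-up stand-ins)
of the decision tlb_flush() ends up making, showing that start/end only matter
when neither fullmm nor need_flush_all is set:

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the mmu_gather fields tlb_flush() may look at. */
struct mmu_gather {
	bool fullmm;            /* freeing the entire mm; start == end == ~0UL */
	bool need_flush_all;    /* arch forced a full flush regardless of range */
	unsigned long start;
	unsigned long end;
};

/* Made-up stand-ins for the architecture's flush primitives. */
static void flush_all(void)
{
	puts("flush entire TLB");
}

static void flush_range(unsigned long start, unsigned long end)
{
	printf("flush range %#lx-%#lx\n", start, end);
}

static void example_tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || tlb->need_flush_all) {
		/* In both cases the accumulated range is ignored. */
		flush_all();
		return;
	}
	flush_range(tlb->start, tlb->end);
}

int main(void)
{
	struct mmu_gather pae    = { .need_flush_all = true,
				     .start = 0x1000, .end = 0x2000 };
	struct mmu_gather ranged = { .start = 0x1000, .end = 0x2000 };

	example_tlb_flush(&pae);	/* range present but ignored: full flush */
	example_tlb_flush(&ranged);	/* range honoured */
	return 0;
}
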