Message-ID: <20191217085854.GW2844@hirez.programming.kicks-ass.net>
Date: Tue, 17 Dec 2019 09:58:54 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
Cc: akpm@...ux-foundation.org, npiggin@...il.com, mpe@...erman.id.au,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [RFC PATCH 2/2] mm/mmu_gather: Avoid multiple page walk cache
flush
On Tue, Dec 17, 2019 at 12:47:13PM +0530, Aneesh Kumar K.V wrote:
> On tlb_finish_mmu() the kernel does a TLB flush before the mmu gather
> table invalidate. The mmu gather table invalidate, depending on kernel
> config, also does another TLBI. Avoid the latter on tlb_finish_mmu().
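For context, the double flush being described is roughly the following
call chain; this is a sketch of the generic mmu_gather code around
v5.5-rc with CONFIG_HAVE_RCU_TABLE_FREE enabled, so check the tree you
are on:

	tlb_finish_mmu()
	  tlb_flush_mmu()
	    tlb_flush_mmu_tlbonly()       <-- TLB flush #1, then resets range
	    tlb_flush_mmu_free()
	      tlb_table_flush()
	        tlb_table_invalidate()
	          tlb_flush_mmu_tlbonly() <-- would be TLB flush #2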
That is already avoided; if you look at tlb_flush_mmu_tlbonly() you'll
see it does __tlb_reset_range(), which results in ->end = 0, which then
triggers the early exit on the next invocation:
	if (!tlb->end)
		return;
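For reference, the whole function looks roughly like this (paraphrased
from include/asm-generic/tlb.h around v5.5-rc; verify against your
tree):

	static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		/* Nothing accumulated since the last flush: bail out. */
		if (!tlb->end)
			return;

		tlb_flush(tlb);
		mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
		__tlb_reset_range(tlb);	/* sets ->end = 0, arming the early exit */
	}

So the second tlb_flush_mmu_tlbonly() call, reached through
tlb_table_invalidate() on tlb_finish_mmu(), hits that early return and
does not issue another TLBI.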