Message-ID: <32404765-ad4f-6612-d1a9-43f9acdc8a62@linux.ibm.com>
Date: Tue, 17 Dec 2019 15:45:36 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: akpm@...ux-foundation.org, npiggin@...il.com, mpe@...erman.id.au,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org
Subject: Re: [RFC PATCH 2/2] mm/mmu_gather: Avoid multiple page walk cache
flush
On 12/17/19 2:28 PM, Peter Zijlstra wrote:
> On Tue, Dec 17, 2019 at 12:47:13PM +0530, Aneesh Kumar K.V wrote:
>> On tlb_finish_mmu() the kernel does a TLB flush before the mmu gather table
>> invalidate. Depending on the kernel config, the mmu gather table invalidate
>> also does another TLBI. Avoid the latter on tlb_finish_mmu().
>
> That is already avoided: if you look at tlb_flush_mmu_tlbonly(), it does
> __tlb_reset_range(), which results in ->end = 0, which then triggers the
> early exit on the next invocation:
>
>	if (!tlb->end)
>		return;
>
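For reference, tlb_flush_mmu_tlbonly() in mm/mmu_gather.c is roughly the
below (quoting from memory of a ~v5.4 tree, so details may differ):

	static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		/* Nothing accumulated since the last reset: skip the flush. */
		if (!tlb->end)
			return;

		tlb_flush(tlb);
		mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
		__tlb_reset_range(tlb);
	}
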
Is that true for a tlb->fullmm flush?
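
The reason I ask: __tlb_reset_range() special-cases fullmm, roughly
(same caveat, quoting from memory of a ~v5.4 tree):

	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			/* end stays non-zero here, so the !tlb->end early
			 * exit above never fires for a fullmm gather. */
			tlb->start = tlb->end = ~0;
		} else {
			tlb->start = TASK_SIZE;
			tlb->end = 0;
		}
		tlb->freed_tables = 0;
		tlb->cleared_ptes = 0;
		tlb->cleared_pmds = 0;
		tlb->cleared_puds = 0;
		tlb->cleared_p4ds = 0;
	}

So for a fullmm teardown it looks like we can still end up doing the
second flush on tlb_finish_mmu().
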
-aneesh