Message-Id: <AFCF5AC6-EBA9-4F5B-9E05-C5CBF9B3EDC7@gmail.com>
Date: Tue, 2 Aug 2016 17:46:40 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Rafael Aquini <aquini@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
X86 ML <x86@...nel.org>, Andy Lutomirski <luto@...nel.org>,
Andrea Arcangeli <aarcange@...hat.com>, lwoodman@...hat.com,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
akpm@...ux-foundation.org
Subject: Re: [PATCH] x86/mm: Add barriers and document switch_mm()-vs-flush synchronization follow-up

Rafael Aquini <aquini@...hat.com> wrote:
> On Tue, Aug 02, 2016 at 03:27:06PM -0700, Nadav Amit wrote:
>> Rafael Aquini <aquini@...hat.com> wrote:
>>
>>> While backporting 71b3c126e611 ("x86/mm: Add barriers and document switch_mm()-vs-flush synchronization")
>>> we stumbled across a possibly missing barrier at flush_tlb_page().
>>
>> I too noticed it and submitted a similar patch that never got a response [1].
>
> As far as I understood Andy's rationale for the original patch, you need
> a full memory barrier there in flush_tlb_page() to get that cache-eviction
> race sorted out.

I am completely OK with your fix (except for the missing barrier in
set_tlb_ubc_flush_pending()). However, I think mine should suffice: as far
as I could see, an atomic operation precedes every invocation of
flush_tlb_page(), so an smp_mb__after_atomic() there already gives full
ordering. I was afraid someone would ask me to measure the patch's
performance impact, so I looked for the variant with the least overhead.
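
For concreteness, the change I have in mind is roughly the following
one-liner (an illustrative sketch against arch/x86/mm/tlb.c; the body of
the function is elided, not the actual patch text):

	void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
	{
		struct mm_struct *mm = vma->vm_mm;

		preempt_disable();

		/*
		 * Order the preceding PTE update (an atomic RMW on every
		 * path that reaches this point) against the active_mm and
		 * mm_cpumask() reads below, analogously to the smp_mb()
		 * that 71b3c126e611 documented for flush_tlb_mm_range().
		 */
		smp_mb__after_atomic();

		if (current->active_mm == mm) {
			/* local flush, elided */
		}
		/* remote flush via flush_tlb_others(), elided */

		preempt_enable();
	}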
See Intel SDM 8.2.2, "Memory Ordering in P6 and More Recent Processor
Families", for the reasoning behind smp_mb__after_atomic(). The result of
an atomic read-modify-write operation followed by smp_mb__after_atomic()
should be identical to a full smp_mb().
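
On x86 the pair is cheap because every LOCK-prefixed read-modify-write is
already fully ordered, so smp_mb__after_atomic() only has to prevent
compiler reordering. Sketching what arch/x86/include/asm/barrier.h does,
as I read it:

	#define smp_mb__before_atomic()	barrier()
	#define smp_mb__after_atomic()	barrier()

	/* Hence this pair ... */
	atomic_dec(&pending);		/* LOCK-prefixed RMW: full fence */
	smp_mb__after_atomic();		/* compiler barrier only on x86 */
	/* ... orders as strongly as an explicit smp_mb(), without the
	 * cost of an extra fence instruction. */
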
Regards,
Nadav