Message-ID: <20170725100234.qbsuphozotivan3c@hirez.programming.kicks-ass.net>
Date: Tue, 25 Jul 2017 12:02:34 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
Borislav Petkov <bp@...en8.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Nadav Amit <nadav.amit@...il.com>,
Rik van Riel <riel@...hat.com>,
Dave Hansen <dave.hansen@...el.com>,
Arjan van de Ven <arjan@...ux.intel.com>
Subject: Re: [PATCH v5 2/2] x86/mm: Improve TLB flush documentation
On Mon, Jul 24, 2017 at 09:41:39PM -0700, Andy Lutomirski wrote:
> + /*
> + * Resume remote flushes and then read tlb_gen. The
> + * implied barrier in atomic64_read() synchronizes
There is no barrier in atomic64_read(); on x86 it is a plain load. The
ordering you rely on here comes from the LOCK-prefixed RMW in
cpumask_set_cpu(), not from the read.
> + * with inc_mm_tlb_gen() like this:
> + *
> + * switch_mm_irqs_off(): flush request:
> + * cpumask_set_cpu(...); inc_mm_tlb_gen();
> + * MB MB
> + * atomic64_read(.tlb_gen); flush_tlb_others(mm_cpumask());
> + */
> cpumask_set_cpu(cpu, mm_cpumask(next));
> next_tlb_gen = atomic64_read(&next->context.tlb_gen);
>