Message-ID: <1502473549.2047.36.camel@codethink.co.uk>
Date: Fri, 11 Aug 2017 18:45:49 +0100
From: Ben Hutchings <ben.hutchings@...ethink.co.uk>
To: Mel Gorman <mgorman@...e.de>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
Nadav Amit <nadav.amit@...il.com>,
Andy Lutomirski <luto@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH 4.4 18/58] mm, mprotect: flush TLB if potentially racing
with a parallel reclaim leaving stale TLB entries
On Wed, 2017-08-09 at 12:41 -0700, Greg Kroah-Hartman wrote:
> 4.4-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Mel Gorman <mgorman@...e.de>
>
> commit 3ea277194daaeaa84ce75180ec7c7a2075027a68 upstream.
[...]
> +/*
> + * Reclaim unmaps pages under the PTL but does not flush the TLB prior to
> + * releasing the PTL if TLB flushes are batched. It's possible for a parallel
> + * operation such as mprotect or munmap to race between reclaim unmapping
> + * the page and flushing its TLB entry. If this race occurs, it potentially allows
> + * access to data via a stale TLB entry. Tracking all mm's that have TLB
> + * batching in flight would be expensive during reclaim so instead track
> + * whether TLB batching occurred in the past and if so then do a flush here
> + * if required. This will cost one additional flush per reclaim cycle paid
> + * by the first operation at risk such as mprotect and munmap.
> + *
> + * This must be called under the PTL so that an access to tlb_flush_batched
> + * that is potentially a "reclaim vs mprotect/munmap/etc" race will synchronise
> + * via the PTL.
What about USE_SPLIT_PTE_PTLOCKS? I don't see how you can use "the PTL"
to synchronise access to a per-mm flag.
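For illustration, a simplified paraphrase of pte_lockptr() from
include/linux/mm.h (exact details depend on ALLOC_SPLIT_PTLOCKS and
the config):

/*
 * With split PTE ptlocks, each page-table page carries its own
 * spinlock, so two tasks touching different page tables of the
 * same mm take different "PTL"s.
 */
#if USE_SPLIT_PTE_PTLOCKS
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
	return ptlock_ptr(pmd_page(*pmd));	/* per page-table page */
}
#else
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
	return &mm->page_table_lock;		/* one lock per mm */
}
#endif

So the PTL that mprotect holds for one page table need not be the
lock that reclaim held when it set the per-mm flag while unmapping a
page covered by a different page table.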
Ben.
> + */
> +void flush_tlb_batched_pending(struct mm_struct *mm)
> +{
> + if (mm->tlb_flush_batched) {
> + flush_tlb_mm(mm);
> +
> + /*
> + * Do not allow the compiler to re-order the clearing of
> + * tlb_flush_batched before the tlb is flushed.
> + */
> + barrier();
> + mm->tlb_flush_batched = false;
> + }
> +}
> #else
> static void set_tlb_ubc_flush_pending(struct mm_struct *mm,
> struct page *page, bool writable)
>
>
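(For reference, barrier() is only a compiler barrier; in the kernel it
expands to roughly the following, so it prevents the compiler from
reordering the clear before the flush but emits no instruction and
orders nothing between CPUs:)

/* include/linux/compiler-gcc.h: forbids the compiler from moving or
 * caching memory accesses across this point; no CPU barrier is issued. */
#define barrier() __asm__ __volatile__("" : : : "memory")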
--
Ben Hutchings
Software Developer, Codethink Ltd.