Message-ID: <52A72463.9080108@redhat.com>
Date: Tue, 10 Dec 2013 09:25:39 -0500
From: Rik van Riel <riel@...hat.com>
To: Mel Gorman <mgorman@...e.de>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Alex Thorlton <athorlton@....com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 11/18] mm: fix TLB flush race between migration, and change_protection_range

On 12/09/2013 02:09 AM, Mel Gorman wrote:

After reading the locking thread that Paul McKenney started,
I wonder if I got the barriers wrong in these functions...

> +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
> +/*
> + * Memory barriers to keep this state in sync are graciously provided by
> + * the page table locks, outside of which no page table modifications happen.
> + * The barriers below prevent the compiler from re-ordering the instructions
> + * around the memory barriers that are already present in the code.
> + */
> +static inline bool tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();
Should this be smp_mb__after_unlock_lock()?
> +	return mm->tlb_flush_pending;
> +}
> +static inline void set_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	mm->tlb_flush_pending = true;
> +	barrier();
> +}
> +/* Clearing is done after a TLB flush, which also provides a barrier. */
> +static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> +{
> +	barrier();
> +	mm->tlb_flush_pending = false;
> +}

And should these be smp_mb__before_spinlock()?
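
To make the question concrete, here is a minimal sketch of the
variant I am wondering about (untested, and assuming the readers
of tlb_flush_pending hold the page table lock):

static inline bool tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * The caller is assumed to hold the page table lock;
	 * smp_mb__after_unlock_lock() promotes that acquisition
	 * to a full barrier on architectures where UNLOCK+LOCK
	 * alone is not one, keeping this read from being
	 * reordered before the lock.
	 */
	smp_mb__after_unlock_lock();
	return mm->tlb_flush_pending;
}

static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	mm->tlb_flush_pending = true;
	/*
	 * Keep the store from leaking into the critical section
	 * behind the page table lock the caller takes next, so
	 * anyone who sees the PTE changes also sees the flag.
	 */
	smp_mb__before_spinlock();
}

clear_tlb_flush_pending() would stay as is, since the TLB flush
that precedes it already acts as the barrier.
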
Paul? Peter?
--
All rights reversed