Message-ID: <20150220202319.GA21132@redhat.com>
Date: Fri, 20 Feb 2015 21:23:19 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Manfred Spraul <manfred@...orfullife.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Kirill Tkhai <ktkhai@...allels.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles

On 02/20, Peter Zijlstra wrote:
>
> I think I agree with Oleg in that we only need the smp_rmb(); of course
> that wants a somewhat elaborate comment to go along with it. How about
> something like so:
>
> spin_unlock_wait(&local);
> /*
> * The above spin_unlock_wait() forms a control dependency with
> * any following stores; because we must first observe the lock
> * unlocked and we cannot speculate stores.
> *
> * Subsequent loads however can easily pass through the loads
> * represented by spin_unlock_wait() and therefore we need the
> * read barrier.
> *
> * This together is stronger than ACQUIRE for @local and
> * therefore we will observe the complete prior critical section
> * of @local.
> */
> smp_rmb();
>
> The obvious alternative is using spin_unlock_wait() with an
> smp_load_acquire(), but that might be more expensive on some archs due
> to repeated issuing of memory barriers.
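
[Editorial illustration, not part of the original mail: a minimal, self-contained sketch of the quoted pattern. All names (example_lock, example_data, example_writer, example_reader) are made up and not taken from the patch under discussion.]

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);
	static int example_data;

	/* Lock holder: all stores happen inside the critical section. */
	static void example_writer(void)
	{
		spin_lock(&example_lock);
		example_data = 1;
		spin_unlock(&example_lock);
	}

	/* Lockless observer using spin_unlock_wait() + smp_rmb(). */
	static int example_reader(void)
	{
		spin_unlock_wait(&example_lock);
		/*
		 * The control dependency from spin_unlock_wait() orders
		 * subsequent stores; smp_rmb() orders subsequent loads.
		 * Together this is stronger than ACQUIRE on example_lock,
		 * so any critical section we waited for is fully visible.
		 */
		smp_rmb();
		return example_data;
	}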
Yes, yes, thanks!
But note that we need the same comment after sem_lock()->spin_is_locked().
So perhaps we can add this comment into include/linux/spinlock.h? In this
case perhaps it makes sense to add, say,

	#define smp_mb__after_unlock_wait() smp_rmb()

with this comment above? Another potential user is task_work_run(). It could
use rmb() too, but this again needs the same fat comment.
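
[Editorial illustration, not part of the original mail: a sketch of what such a helper might look like if added to include/linux/spinlock.h. The name comes from the suggestion above and the comment paraphrases Peter's; this is not an existing kernel interface.]

	/*
	 * spin_unlock_wait() and spin_is_locked() only form a control
	 * dependency with subsequent stores: we must first observe the
	 * lock unlocked, and stores cannot be speculated.  Subsequent
	 * loads, however, can pass the loads done on the lock word, so
	 * callers that must observe the complete prior critical section
	 * need a read barrier as well.  The combination is stronger than
	 * ACQUIRE for the lock in question.
	 */
	#define smp_mb__after_unlock_wait()	smp_rmb()

A caller such as the hypothetical example_reader() above would then do:

	spin_unlock_wait(&example_lock);
	smp_mb__after_unlock_wait();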
What do you think?
Oleg.