Message-ID: <20131210171522.GR4208@linux.vnet.ibm.com>
Date: Tue, 10 Dec 2013 09:15:22 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, dhowells@...hat.com,
edumazet@...gle.com, darren@...art.com, fweisbec@...il.com,
sbw@....edu, Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Will Deacon <will.deacon@....com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Waiman Long <waiman.long@...com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andi Kleen <andi@...stfloor.org>,
Michel Lespinasse <walken@...gle.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Rik van Riel <riel@...hat.com>,
Peter Hurley <peter@...leysoftware.com>,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [PATCH v5 tip/core/locking 5/7] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

On Tue, Dec 10, 2013 at 05:44:37PM +0100, Oleg Nesterov wrote:
> On 12/09, Paul E. McKenney wrote:
> >
> > @@ -1626,7 +1626,10 @@ for each construct. These operations all imply certain barriers:
> > operation has completed.
> >
> > Memory operations issued before the LOCK may be completed after the LOCK
> > - operation has completed.
> > + operation has completed. An smp_mb__before_spinlock(), combined
> > + with a following LOCK, acts as an smp_wmb(). Note the "w",
> > + this is smp_wmb(), not smp_mb().
>
> Well, but smp_mb__before_spinlock() + LOCK is not an smp_wmb()... though
> neither is it a full barrier. It should guarantee that, say,
>
> CONDITION = true; // 1
>
> // try_to_wake_up
> smp_mb__before_spinlock();
> spin_lock(&task->pi_lock);
>
> if (!(p->state & state)) // 2
> return;
>
> can't race with set_current_state() + check(CONDITION); this means
> that 1 and 2 above must not be reordered.
>
> But a LOAD before spin_lock() can leak into the critical section.
>
> Perhaps this should be clarified somehow, or perhaps it should actually
> imply mb (if combined with LOCK).
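
For concreteness, here is the sleeper side that the above waker must
pair with (a sketch only; CONDITION stands in for whatever wakeup
condition the code checks, and error handling is elided):

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE); /* implies smp_mb() */
		if (CONDITION)		/* reads the store marked 1 above */
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);

Because set_current_state() implies a full barrier on the sleeper side,
ordering 1 against 2 on the waker side suffices: either the waker sees
the sleeper's ->state update or the sleeper sees CONDITION, so the
wakeup cannot be lost.
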
If we leave the implementation the same, does the following capture the
constraints?
	Memory operations issued before the LOCK may be completed after
	the LOCK operation has completed.  An smp_mb__before_spinlock(),
	combined with a following LOCK, orders prior loads against
	subsequent loads and stores, and also orders prior stores against
	subsequent stores.  Note that this is weaker than smp_mb()!  The
	smp_mb__before_spinlock() primitive is free on many architectures.
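
Expressed as a code fragment, the proposed wording allows and forbids
the following (a sketch; lock, X, W, Z, r0, and r1 are illustrative):

	X = 1;				/* prior store */
	r0 = Z;				/* prior load */
	smp_mb__before_spinlock();
	spin_lock(&lock);
	r1 = Z;			/* cannot move before "r0 = Z" */
	W = 1;			/* cannot move before "X = 1" or "r0 = Z" */

The one reordering still permitted, and the reason this is weaker than
smp_mb(), is prior stores against subsequent loads: "X = 1" may still
be reordered with "r1 = Z".
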
							Thanx, Paul