Message-ID: <1385409830.12342.41.camel@dvhart-mobl4.amr.corp.intel.com>
Date: Mon, 25 Nov 2013 12:03:50 -0800
From: Darren Hart <dvhart@...ux.intel.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Davidlohr Bueso <davidlohr@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Mike Galbraith <efault@....de>, jeffm@...e.com,
"Norton, Scott J" <scott.norton@...com>, tom.vaden@...com,
"Chandramouleeswaran, Aswin" <aswin@...com>,
Waiman Long <Waiman.Long@...com>,
Jason Low <jason.low2@...com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 4/5] futex: Avoid taking hb lock if nothing to wakeup
On Mon, 2013-11-25 at 20:47 +0100, Thomas Gleixner wrote:
> On Sat, 23 Nov 2013, Thomas Gleixner wrote:
> > On Fri, 22 Nov 2013, Davidlohr Bueso wrote:
> > So with the atomic ops you are changing that to:
> >
> >    CPU 0                            CPU 1
> >
> >    val = *futex;
> >    futex_wait(futex, val);
> >
> >    spin_lock(&hb->lock);
> >
> >    atomic_inc(&hb->waiters);
> >    uval = *futex;
> >                                     *futex = newval;
> >
> >    if (uval != val) {               futex_wake();
> >       spin_unlock(&hb->lock);       if (!atomic_read(&hb->waiters))
> >          return;                       return;
> >    }
> >                                     spin_lock(&hb->lock);
> >    plist_add(hb, self);
> >    spin_unlock(&hb->lock);
> >    schedule();                      wakeup_waiters(hb);
> >                                     ...
> >
> > which restores the ordering guarantee that the hash bucket lock
> > has provided so far.
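
To make the claimed guarantee concrete, here is a sketch with the
barriers written out explicitly (hypothetical; this is not the code in
the patch, which relies on the atomics alone):

	/* waiter */
	atomic_inc(&hb->waiters);
	smp_mb__after_atomic_inc();	/* inc visible before futex load */
	uval = *futex;

	/* waker */
	*futex = newval;
	smp_mb();			/* store visible before waiters load */
	if (!atomic_read(&hb->waiters))
		return;

With both barriers present, at least one side must observe the other's
store: either the waiter reads newval, or the waker reads a non-zero
waiter count.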
>
> Actually that's not true by design, it just happens to work.
>
> atomic_inc() on x86 is a "lock incl".
>
> The LOCK prefix just guarantees that the cache line which is affected
> by the INCL is locked. And it guarantees that locked operations
> serialize all outstanding load and store operations.
>
> But Documentation/atomic_ops.txt says about atomic_inc():
>
> "One very important aspect of these two routines is that they DO NOT
> require any explicit memory barriers. They need only perform the
> atomic_t counter update in an SMP safe manner."
>
> So while this has a barrier on x86, it's not guaranteed.
But it is guaranteed to be "in an SMP safe manner"... which I guess
just means that concurrent updates will not corrupt the value, but
there is no guarantee about when the new value becomes visible to
other CPUs relative to the surrounding accesses unless some kind of
barrier is explicitly imposed.
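I.e., on a weakly ordered architecture I would expect something like
this to be legal (hypothetical illustration):

	atomic_inc(&hb->waiters);	/* the RMW itself is indivisible;
					 * no concurrent increment is lost */
	uval = *futex;			/* but this load may complete before
					 * the incremented waiters value is
					 * visible to other CPUs */
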
Correct?
> atomic_read() is a simple read.
>
> This does not guarantee anything. And if you read
> Documentation/atomic_ops.txt it's well documented:
>
> "*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***"
>
>
> So now your code melts down to:
>
>    CPU 0 (waiter)        | CPU 1 (waker)
>
>    write(hb->waiters)    | write(uaddr)
>    mb                    | read(hb->waiters)
>    read(uaddr)           |
>
> I fear you simply managed to make the window small enough that your
> testing was no longer able to expose it.
Does seem to be the case.
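
Spelling that out, the losing interleaving would be something like the
following (a sketch; even on x86, the waker's later load is allowed to
pass its earlier store while the store sits in the store buffer):

	CPU 0 (waiter)                  CPU 1 (waker)

	                                read hb->waiters -> 0
	                                   (load passes the store below)
	atomic_inc(&hb->waiters);
	/* implicit full barrier (x86) */
	uval = *futex;  -> old val
	plist_add(hb, self);
	schedule();
	                                *futex = newval;  (store drains)
	                                return;  /* wakeup skipped */

Result: the waiter is queued and asleep, and nobody is left to wake it.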
--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel