Message-ID: <20170317003505.GA13135@fury>
Date:   Thu, 16 Mar 2017 17:35:05 -0700
From:   Darren Hart <dvhart@...radead.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     tglx@...utronix.de, mingo@...nel.org, juri.lelli@....com,
        rostedt@...dmis.org, xlpang@...hat.com, bigeasy@...utronix.de,
        linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
        jdesfossez@...icios.com, bristot@...hat.com,
        paulmck@...ux.vnet.ibm.com
Subject: Re: [PATCH -v4 04/10] futex: Use smp_store_release() in
 mark_wake_futex()

On Wed, Feb 22, 2017 at 03:03:16PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 16, 2016 at 04:50:45PM -0800, Darren Hart wrote:
> > On Tue, Dec 13, 2016 at 09:36:42AM +0100, Peter Zijlstra wrote:
> > > Since the futex_q can disappear the instruction after assigning NULL,
> > > this really should be a RELEASE barrier. That stops loads from hitting
> > > dead memory too.
> > > 
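For readers without the patch at hand, the change under discussion amounts to
the following (a sketch from memory, not the verbatim diff; the rest of
mark_wake_futex() is elided):

	/* Before (since f1a11e0 "futex: remove the wait queue"): */
	__unqueue_futex(q);
	smp_wmb();
	q->lock_ptr = NULL;

	/*
	 * After this patch: the release store keeps the earlier loads and
	 * stores done inside __unqueue_futex() from being reordered past
	 * the NULLing of lock_ptr, which smp_wmb() alone does not do for
	 * the loads.
	 */
	__unqueue_futex(q);
	smp_store_release(&q->lock_ptr, NULL);
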
> > 
> > +Paul McKenney
> > 
> > Per the introduction of the comment below from:
> > 
> > 	f1a11e0 futex: remove the wait queue
> > 
> > I believe the intent was to keep the plist_del inside the preceding
> > __unqueue_futex(q) from getting ahead of the smp_store_release added here,
> > which could result in q being destroyed by the woken task before plist_del
> > can act on it. Is that right?
> > 
> > The comment below predates the refactoring that moved plist_del under
> > __unqueue_futex(), making the associated plist_del a bit less obvious:
> > 
> > However, since that comment was written, we have moved the wake-up out of
> > wake_futex() through the use of wake queues (wake_up_q), so it now happens
> > after the hb lock is released (see futex_wake, futex_wake_op, and
> > futex_requeue). Is this race still a valid concern?
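The wake-queue pattern referred to here looks roughly like this in
futex_wake() (a simplified sketch from memory; key handling, error paths,
and the bitset check are elided):

	static int futex_wake(u32 __user *uaddr, unsigned int flags,
			      int nr_wake, u32 bitset)
	{
		struct futex_hash_bucket *hb;
		struct futex_q *this, *next;
		union futex_key key;
		DEFINE_WAKE_Q(wake_q);
		int ret = 0;

		/* ... resolve uaddr to key, hash key to hb ... */

		spin_lock(&hb->lock);
		plist_for_each_entry_safe(this, next, &hb->chain, list) {
			if (match_futex(&this->key, &key)) {
				/* Unqueues this and NULLs this->lock_ptr. */
				mark_wake_futex(&wake_q, this);
				if (++ret >= nr_wake)
					break;
			}
		}
		spin_unlock(&hb->lock);

		/* The wake-up itself happens after the hb lock is dropped. */
		wake_up_q(&wake_q);
		return ret;
	}
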
> 
> Yes, I think so. Since __unqueue_futex() dereferences lock_ptr and does
> stores to the memory it points to, those stores must not happen _after_ we
> NULL lock_ptr itself.
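
For reference, __unqueue_futex() at this point is roughly the following
(again a sketch from memory, not a verbatim copy):

	static void __unqueue_futex(struct futex_q *q)
	{
		struct futex_hash_bucket *hb;

		if (WARN_ON_ONCE(!q->lock_ptr || !spin_is_locked(q->lock_ptr)))
			return;

		/*
		 * plist_del() stores into memory reached through lock_ptr;
		 * those stores must complete before lock_ptr is NULLed.
		 */
		hb = container_of(q->lock_ptr, struct futex_hash_bucket, lock);
		plist_del(&q->list, &hb->chain);
	}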

Are you referring to the q->lock_ptr = NULL in mark_wake_futex()?
So the concern is parallel mark_wake_futex() calls on the same futex? But that
can't happen, because the call is serialized by the hb lock. In what scenario
can this occur?

> futex_wait(), which calls unqueue_me(), could have had a spurious wakeup,
> observed our NULL store, and 'freed' the futex_q.

Urg. Spurious wakeups... yes... OK, still necessary. Gah. :-(
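
For completeness, the racing path is the lockless lock_ptr read in
unqueue_me(), roughly (a sketch from memory; the real function carries
more commentary):

	static int unqueue_me(struct futex_q *q)
	{
		spinlock_t *lock_ptr;
		int ret = 0;

	retry:
		/*
		 * Lockless read: a spuriously woken waiter can observe the
		 * waker's NULL store here, skip the unqueue, return, and
		 * free its (stack-allocated) futex_q.
		 */
		lock_ptr = READ_ONCE(q->lock_ptr);
		if (lock_ptr != NULL) {
			spin_lock(lock_ptr);
			/* lock_ptr can change under us; revalidate. */
			if (unlikely(lock_ptr != q->lock_ptr)) {
				spin_unlock(lock_ptr);
				goto retry;
			}
			__unqueue_futex(q);
			spin_unlock(lock_ptr);
			ret = 1;
		}
		return ret;
	}

Hence the release: everything __unqueue_futex() stored has to be visible
before the waiter can observe NULL and free q.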

-- 
Darren Hart
VMware Open Source Technology Center
