[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20161008165540.GI3568@worktop.programming.kicks-ass.net>
Date: Sat, 8 Oct 2016 18:55:40 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: mingo@...nel.org, juri.lelli@....com, rostedt@...dmis.org,
xlpang@...hat.com, bigeasy@...utronix.de,
linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
jdesfossez@...icios.com, bristot@...hat.com
Subject: Re: [RFC][PATCH 4/4] futex: Rewrite FUTEX_UNLOCK_PI
On Sat, Oct 08, 2016 at 05:53:49PM +0200, Thomas Gleixner wrote:
> On Fri, 7 Oct 2016, Peter Zijlstra wrote:
> > Solve all that by:
> >
> > - using futex specific rt_mutex calls that lack the fastpath, futexes
> > have their own fastpath anyway. This makes that
> > rt_mutex_futex_unlock() doesn't need to drop rt_mutex::wait_lock
> > and the unlock is guaranteed if we manage to update user state.
> >
> > - make futex_unlock_pi() drop hb->lock early and only use
> > rt_mutex::wait_lock to serialize against rt_mutex waiters
> > update the futex value and unlock.
> >
> > - in case futex and rt_mutex disagree on waiters, side with rt_mutex
> > and simply clear the user value. This works because either there
> > really are no waiters left, or futex_lock_pi() triggers the
> > lock-steal path and fixes up the WAITERS flag.
>
> I stared at this for a few hours and while I'm not yet done analyzing all
> possible combinations I found at least one thing which is broken:
>
> CPU 0 CPU 1
>
> unlock_pi(f)
> ....
> unlock(hb->lock)
> *f = new_owner_tid | WAITERS;
>
> lock_pi(f)
> lock(hb->lock)
> uval = *f;
> topwaiter = futex_top_waiter();
> attach_to_pi_state(uval, topwaiter->pistate);
> pid = uval & TID_MASK;
> if (pid != task_pid_vnr(pistate->owner))
> return -EINVAL;
> ....
> pistate->owner = newowner;
>
> So in this case we tell the caller on CPU 1 that the futex is in
> inconsistent state, because pistate->owner still points to the unlocking
> task while the user space value alread shows the new owner. So this sanity
> check triggers and we simply fail while we should not. It's [10] in the
> state matrix above attach_to_pi_state().
Urgh, yes. I think I can cure that, by taking
pi_state->pi_mutex.wait_lock in attach_to_pi_state(), but blergh.
> I suspect that there are more issues like this, especially since I did not
> look at requeue_pi yet, but by now my brain is completely fried.
Yes, I know about fried brains :-( This stuff has far too many moving
parts. I've been staring at this stuff far too long.
Also, we need better tools to stress this stuff.
Powered by blists - more mailing lists