Message-ID: <20090922091045.GB7755@elte.hu>
Date: Tue, 22 Sep 2009 11:10:45 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Darren Hart <dvhltc@...ibm.com>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Dinakar Guniguntala <dino@...ibm.com>,
John Stultz <johnstul@...ibm.com>
Subject: Re: [PATCH 5/5] futex: fix wakeup race by setting TASK_INTERRUPTIBLE before queue_me

* Eric Dumazet <eric.dumazet@...il.com> wrote:
> Darren Hart wrote:
> > PI futexes do not use the same plist_node_empty() test for wakeup. It was
> > possible for the waiter (in futex_wait_requeue_pi()) to set TASK_INTERRUPTIBLE
> > after the waker assigned the rtmutex to the waiter. The waiter would then note
> > the plist was not empty and call schedule(). The task would not be found by any
> > subsequent futex wakeups, resulting in a userspace hang. By moving the
> > setting of TASK_INTERRUPTIBLE to before the call to queue_me(), the race with
> > the waker is eliminated. Since we no longer call get_user() from within
> > queue_me(), there is no need to delay the setting of TASK_INTERRUPTIBLE until
> > after the call to queue_me().
> >
> > The FUTEX_LOCK_PI operation is not affected as futex_lock_pi() relies entirely
> > on the rtmutex code to handle schedule() and wakeup. The requeue PI code is
> > affected because the waiter starts as a non-PI waiter and is woken on a PI
> > futex.
> >
> > Remove the crusty old comment about holding spinlocks across get_user() as we
> > no longer do that. Correct the locking statement with a description of why the
> > test is performed.
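
For reference, the lost wakeup described above is, roughly, this interleaving
(a simplified sketch, not the exact code paths; the placement of the waker's
steps is illustrative):

  waiter (futex_wait_requeue_pi)         waker (requeue PI path)
  ------------------------------         -----------------------
  queue_me(q, hb);
                                         /* requeues the waiter, assigns
                                            the rtmutex to it and wakes
                                            it: a no-op, because ->state
                                            is still TASK_RUNNING */
  set_current_state(TASK_INTERRUPTIBLE);
  if (!plist_node_empty(&q->list))
          schedule();                    /* sleeps; no subsequent futex
                                            wakeup will find this task */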
>
> I am very confused by this ChangeLog...
>
> >
> > Signed-off-by: Darren Hart <dvhltc@...ibm.com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Steven Rostedt <rostedt@...dmis.org>
> > Cc: Ingo Molnar <mingo@...e.hu>
> > CC: Eric Dumazet <eric.dumazet@...il.com>
> > CC: Dinakar Guniguntala <dino@...ibm.com>
> > CC: John Stultz <johnstul@...ibm.com>
> > ---
> >
> > kernel/futex.c | 15 +++------------
> > 1 files changed, 3 insertions(+), 12 deletions(-)
> >
> > diff --git a/kernel/futex.c b/kernel/futex.c
> > index f92afbe..463af2e 100644
> > --- a/kernel/futex.c
> > +++ b/kernel/futex.c
> > @@ -1656,17 +1656,8 @@ out:
> >  static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q,
> >  				struct hrtimer_sleeper *timeout)
> >  {
> > -	queue_me(q, hb);
> > -
> > -	/*
> > -	 * There might have been scheduling since the queue_me(), as we
> > -	 * cannot hold a spinlock across the get_user() in case it
> > -	 * faults, and we cannot just set TASK_INTERRUPTIBLE state when
> > -	 * queueing ourselves into the futex hash. This code thus has to
> > -	 * rely on the futex_wake() code removing us from hash when it
> > -	 * wakes us up.
> > -	 */
> >  	set_current_state(TASK_INTERRUPTIBLE);
>
> Hmm, you missed the smp_mb() properties here...
>
> Before :
>     queue_me()
>     set_mb(current->state, TASK_INTERRUPTIBLE);
>     if (timeout) {...}
>     if (likely(!plist_node_empty(&q->list))) {
>             ...
>     }
>
> After :
>     set_mb(current->state, TASK_INTERRUPTIBLE);
>     queue_me();
>     if (timeout) {...}
>     // no barrier... why are we still testing q->list
>     // since there is no synchronization between queue_me() and the test?
>     if (likely(!plist_node_empty(&q->list))) {
>             ...
>     }
queue_me() itself does a spin_unlock(), so at least for the bits
protected by hb->lock it should be half-serializing.
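
Roughly, with the new ordering (a simplified sketch, not the exact code;
the waker's calls and the lock placement are illustrative):

  waiter                                 waker
  ------                                 -----
  set_current_state(TASK_INTERRUPTIBLE); /* set_mb(): store + smp_mb() */
  queue_me(q, hb);
      plist_add(&q->list, &hb->chain);
      spin_unlock(&hb->lock);            /* release: the ->state store and
                                            the plist insert are visible to
                                            the next owner of hb->lock */
                                         spin_lock(&hb->lock);
                                         plist_del(&q->list, &hb->chain);
                                         wake_up_state(q->task, TASK_NORMAL);
                                         spin_unlock(&hb->lock);
  if (likely(!plist_node_empty(&q->list)))
          schedule();                    /* even if this lockless test
                                            still sees a non-empty list,
                                            the wakeup above already put
                                            us back to TASK_RUNNING, so
                                            schedule() returns */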
Ingo