Message-ID: <alpine.LFD.2.00.0903071645480.29264@localhost.localdomain>
Date: Sat, 7 Mar 2009 16:50:56 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Darren Hart <dvhltc@...ibm.com>
cc: Steven Rostedt <rostedt@...dmis.org>,
"lkml, " <linux-kernel@...r.kernel.org>,
Sripathi Kodi <sripathik@...ibm.com>,
John Stultz <johnstul@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [TIP][RFC 6/7] futex: add requeue_pi calls
On Thu, 5 Mar 2009, Darren Hart wrote:
> int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
>                               struct rt_mutex_waiter *waiter,
>                               struct task_struct *task, int detect_deadlock)
> {
>         int ret;
>
>         spin_lock(&lock->wait_lock);
>         ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
>
>
> I added the following line to fix the bug. The question is, should I use
> this atomic optimization here (under lock->wait_lock) or should I just do
> "lock->owner |= RT_MUTEX_HAS_WAITERS"?
>
> =====> mark_rt_mutex_waiters(lock);
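For reference, the atomic helper does roughly the following (a sketch of
mark_rt_mutex_waiters() as it lives in kernel/rtmutex.c, quoted from memory,
so double check it against your tree):

        static void mark_rt_mutex_waiters(struct rt_mutex *lock)
        {
                unsigned long owner, *p = (unsigned long *) &lock->owner;

                /*
                 * Set RT_MUTEX_HAS_WAITERS in lock->owner without losing a
                 * concurrent cmpxchg based fastpath update of the owner word.
                 */
                do {
                        owner = *p;
                } while (cmpxchg(p, owner, owner | RT_MUTEX_HAS_WAITERS) != owner);
        }

The loop is there because the lockless fastpath can update lock->owner
without holding wait_lock.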
This is still not enough as I explained in the review of the original
patch. What you need to do is:

        if (try_to_take_rt_mutex(lock, task)) {
                spin_unlock(&lock->wait_lock);
                /* The caller needs to wake up task, as it is now the owner */
                return WAKEIT;
        }

        ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);
>         if (ret && !waiter->task) {
>                 /*
>                  * Reset the return value. We might have
>                  * returned with -EDEADLK and the owner
>                  * released the lock while we were walking the
>                  * pi chain. Let the waiter sort it out.
>                  */
>                 ret = 0;
>         }
>         spin_unlock(&lock->wait_lock);
>
>         debug_rt_mutex_print_deadlock(waiter);
>
>         return ret;
> }
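
Putting the pieces together, the function would then look roughly like the
sketch below. Assumptions: try_to_take_rt_mutex() has been extended to take
the task argument as in the snippet above, and WAKEIT is just a placeholder
for whatever "caller must wake the task" return value you end up defining:

        int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
                                      struct rt_mutex_waiter *waiter,
                                      struct task_struct *task,
                                      int detect_deadlock)
        {
                int ret;

                spin_lock(&lock->wait_lock);

                /*
                 * Fast path: the lock is free or can be taken right away.
                 * Take it on behalf of task and tell the caller to wake
                 * task up, as it is now the owner.
                 */
                if (try_to_take_rt_mutex(lock, task)) {
                        spin_unlock(&lock->wait_lock);
                        return WAKEIT;
                }

                /* Slow path: enqueue the waiter for task and walk the PI chain. */
                ret = task_blocks_on_rt_mutex(lock, waiter, task, detect_deadlock);

                if (ret && !waiter->task) {
                        /*
                         * Reset the return value. We might have returned with
                         * -EDEADLK and the owner released the lock while we
                         * were walking the pi chain. Let the waiter sort it out.
                         */
                        ret = 0;
                }
                spin_unlock(&lock->wait_lock);

                debug_rt_mutex_print_deadlock(waiter);

                return ret;
        }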
Thanks,
tglx