Message-ID: <4E749937.5090803@colorfullife.com>
Date: Sat, 17 Sep 2011 14:57:27 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>,
Darren Hart <dvhart@...ux.intel.com>,
David Miller <davem@...emloft.net>,
Eric Dumazet <eric.dumazet@...il.com>,
Mike Galbraith <efault@....de>
Subject: Re: [RFC][PATCH 2/3] futex: Reduce hash bucket lock contention
On 09/16/2011 02:34 PM, Peter Zijlstra wrote:
> So while initially I thought the sem patch was busted, it turns out this
> one is.
>
> Thomas managed to spot the race:
>
> Task-0                                 Task-1
>
> futex_wait()
>   queue_me()
>
>                                        futex_wake()
>                                          wake_list_add();
>                                          __unqueue_futex();
>                                            plist_del();
>
>   if (!plist_node_empty())
>     __set_current_state(TASK_RUNNING);
>
>                                          wake_up_list();
>                                          /* waking an already running task-0 */
>
>
> I guess the biggest question is, do we care? Ideally everything should
> be able to deal with spurious wakeups, although we generally try to
> avoid them.
>
>
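If I read futex_wait_queue_me() correctly, the waiter side of that race
looks roughly like this (a simplified sketch only; timer handling and the
hash bucket locking details are omitted):

	set_current_state(TASK_INTERRUPTIBLE);
	queue_me(q, hb);		/* also drops the hash bucket lock */

	/*
	 * If task-1's __unqueue_futex() -> plist_del() already ran, the
	 * node is empty, schedule() is skipped and task-0 keeps running...
	 */
	if (likely(!plist_node_empty(&q->list)))
		schedule();

	/* ...so the later wake_up_list() hits an already running task. */
	__set_current_state(TASK_RUNNING);
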
The sem patch also causes such wakeups:
Task-0                                 Task-1

semtimedop()
  schedule_timeout()
                                       semtimedop()
                                         wake_list_add();
                                         q->status = 0;
  <Timeout>
  schedule_timeout() returns
  if (q->status == 0)
    return;
semtimedop() returns
random user space/kernel space code
                                         spin_unlock();
                                         wake_up_list();
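
Spelled out (a schematic sketch only - the function names are the ones from
the diagram above, the locking details are simplified):

	/* Task-1 (waker), inside semtimedop(), semaphore array lock held */
	wake_list_add(&wake_list, q->sleeper);	/* remember which task to wake later  */
	q->status = 0;				/* lockless exit path: mark q as done */
	/* ... possibly more queue entries are processed ... */
	spin_unlock(...);			/* drop the array lock ...            */
	wake_up_list(&wake_list);		/* ... and only now issue the wakeups */

	/* Task-0 (sleeper), inside semtimedop() */
	schedule_timeout(timeout);		/* here: returns due to the timeout      */
	if (q->status == 0)			/* task-1 already marked us as done ...  */
		return 0;			/* ... so we leave without retaking the lock */

Between the "return 0" and task-1's wake_up_list(), task-0 can be anywhere -
back in user space or in unrelated kernel code - so the wakeup it finally
receives is spurious.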
It's a rare event, but it does happen.
Which raises the question:
How do we verify that everything is able to deal with spurious wakeups?
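
(Just to spell out what "able to deal with spurious wakeups" means: every
sleeper would have to re-check its wait condition in a loop instead of
assuming that a wakeup implies the condition is true. Roughly:)

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (condition)		/* whatever the task is waiting for */
			break;
		schedule();		/* a spurious wakeup just loops again */
	}
	__set_current_state(TASK_RUNNING);

Verifying that every sleeping path follows that pattern is the hard part.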
--
Manfred