Message-ID: <alpine.LFD.1.10.0805242031560.3295@apollo.tec.linutronix.de>
Date: Sat, 24 May 2008 20:35:40 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Daniel Walker <dwalker@...sta.com>
cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] futex: fix miss ordered wakeups
On Sat, 24 May 2008, Daniel Walker wrote:
> On Sat, 2008-05-24 at 19:03 +0200, Thomas Gleixner wrote:
> > On Sat, 24 May 2008, Daniel Walker wrote:
> > > On Sat, 2008-05-24 at 10:55 +0200, Thomas Gleixner wrote:
> > >
> > > > Normal futexes have no ordering guarantees at all. There is no
> > > > mechanism to prevent lock stealing from lower priority tasks. So why
> > > > should we care about the once-a-year case where a sleeper's priority
> > > > is modified?
> > >
> > > Lock stealing?
> >
> > Do you have the faintest idea how the futex code works at all? There
> > is no guarantee that the task which is woken up first gets the futex.
>
> Thomas if you want to be abusive, talk to someone else.
See below.
> > A) A task on another CPU can get it independent of its priority
> > B) In case of multiple waiters wakeup there is no guarantee either
>
> This is how I would imagine the pre-plist code would work.
And it works this way even after the plist code.
May I politely suggest that you carefully read futex_wake() and the
corresponding libc implementation and figure out why there is no
guarantee and why there can't be one?
Sorry, I'm not abusive. You make claims about correctness and you seem
to believe that the plist code gives guarantees except for the
setscheduler corner case, but your hypothesis is simply wrong:
There is no kernel-side controlled handover of a normal futex. The
woken-up waiters race for it, and a low-prio thread on another CPU can
steal it even if a high-prio waiter has just been woken up.
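To make that concrete, here is a minimal userspace sketch in the style
of the glibc lowlevellock (illustrative only, written with C11 atomics
for brevity; this is not the actual NPTL code):

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* futex word: 0 = unlocked, 1 = locked, 2 = locked with waiters */

static long sys_futex(atomic_int *uaddr, int op, int val)
{
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void lock(atomic_int *f)
{
        int c = 0;

        /* Fast path: 0 -> 1. Any runnable thread can win this race,
         * including one which never slept on the futex at all. */
        if (atomic_compare_exchange_strong(f, &c, 1))
                return;

        /* Slow path: mark the lock contended and sleep in the kernel. */
        if (c != 2)
                c = atomic_exchange(f, 2);
        while (c != 0) {
                sys_futex(f, FUTEX_WAIT, 2);
                c = atomic_exchange(f, 2);
        }
}

static void unlock(atomic_int *f)
{
        /* No handover: drop the lock, then wake one waiter. Between the
         * store of 0 and the moment the woken waiter actually runs, the
         * lock looks free to everybody on every CPU. */
        if (atomic_exchange(f, 0) == 2)
                sys_futex(f, FUTEX_WAKE, 1);
}

unlock() never names a successor. It stores 0 and wakes somebody; then
whoever executes the cmpxchg first wins, waiter or not, whatever its
priority. The plist only decides whom futex_wake() kicks first, not who
gets the lock.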
So you are trying to tell me about the correctness of code whose
behaviour you merely imagine.
The plist add-on works correctly in most cases, nothing more. To
achieve full correctness, much more is necessary than fixing this
setscheduler issue. The plist changes were accepted because their
overhead is really minimal, but achieving full correctness would hurt
performance badly.
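Purely as an illustration of what a strict handover costs (nothing like
this exists for normal futexes, and all the names below are made up),
the unlock side would have to pick a successor under an internal lock
and transfer ownership directly, so that the lock is never observable
as free:

#include <linux/futex.h>
#include <pthread.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

struct waiter {
        atomic_int      granted;        /* futex word: 0 = wait, 1 = yours */
        struct waiter   *next;
};

struct handover_lock {
        pthread_mutex_t internal;       /* protects the waiter list */
        int             locked;
        struct waiter   *head, *tail;   /* FIFO here; a real version would
                                         * keep this priority sorted */
};

#define HANDOVER_LOCK_INIT \
        { .internal = PTHREAD_MUTEX_INITIALIZER, .locked = 0, \
          .head = NULL, .tail = NULL }

static void ho_lock(struct handover_lock *l)
{
        struct waiter self = { .granted = 0, .next = NULL };

        pthread_mutex_lock(&l->internal);
        if (!l->locked) {
                l->locked = 1;
                pthread_mutex_unlock(&l->internal);
                return;
        }
        /* Queue up and sleep until the current owner hands over. */
        if (l->tail)
                l->tail->next = &self;
        else
                l->head = &self;
        l->tail = &self;
        pthread_mutex_unlock(&l->internal);

        while (!atomic_load(&self.granted))
                syscall(SYS_futex, &self.granted, FUTEX_WAIT, 0,
                        NULL, NULL, 0);
        /* Ownership was transferred to us while the lock stayed held,
         * so nobody on another CPU could slip in. */
}

static void ho_unlock(struct handover_lock *l)
{
        struct waiter *next;

        pthread_mutex_lock(&l->internal);
        next = l->head;
        if (next) {
                l->head = next->next;
                if (!l->head)
                        l->tail = NULL;
        } else {
                l->locked = 0;
        }
        pthread_mutex_unlock(&l->internal);

        if (next) {
                atomic_store(&next->granted, 1);
                syscall(SYS_futex, &next->granted, FUTEX_WAKE, 1,
                        NULL, NULL, 0);
        }
}

Every unlock now serializes on the internal lock and does list
bookkeeping, and under contention the lock throughput is bound by the
wakeup latency of the chosen waiter. That is exactly the kind of
overhead the normal futex fast path avoids.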
> > > > There are more issues vs. pi futexes as well. The simple case of
> > > > futex_wait() vs. futex_adjust_waiters will just upset lockdep, but
> > > > there are real deadlocks vs. unqueue_me_pi waiting.
> > >
> > > You mean the lock ordering would cause the deadlock vs. unqueue_me_pi,
> > > or are you talking about something else?
> >
> > Do I write Chinese or what?
>
> I guess so ..
So maybe you should take some private lessons in Chinese. Then we
could communicate more easily.
Thanks,
tglx