Message-ID: <20170313092542.GJ3343@twins.programming.kicks-ass.net>
Date: Mon, 13 Mar 2017 10:25:42 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: mingo@...nel.org, juri.lelli@....com, rostedt@...dmis.org,
xlpang@...hat.com, bigeasy@...utronix.de,
linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
jdesfossez@...icios.com, bristot@...hat.com, dvhart@...radead.org
Subject: Re: [PATCH -v5 14/14] futex: futex_unlock_pi() determinism
On Tue, Mar 07, 2017 at 03:31:50PM +0100, Thomas Gleixner wrote:
> On Sat, 4 Mar 2017, Peter Zijlstra wrote:
>
> > The problem with returning -EAGAIN when the waiter state mismatches is
> > that it becomes very hard to prove a bounded execution time on the
> > operation. And seeing that this is an RT operation, that is somewhat
> > important.
> >
> > While in practice it will be very unlikely to ever really take more
> > than one or two rounds, proving so becomes rather hard.
>
> Oh no. Assume the following:
>
> T1 and T2 are both pinned to CPU0. prio(T2) > prio(T1)
>
> CPU0
>
> T1
> lock_pi()
> queue_me() <- Waiter is visible
>
> preemption
>
> T2
> unlock_pi()
> loops with -EAGAIN forever
So this is true before the last patch; but look at the locking changes
that patch brings (pasting its changelog here):
Before:

  futex_lock_pi()               futex_unlock_pi()
    unlock hb->lock

                                  lock hb->lock
                                  unlock hb->lock

                                  lock rt_mutex->wait_lock
                                  unlock rt_mutex->wait_lock
                                    -EAGAIN

    lock rt_mutex->wait_lock
    list_add
    unlock rt_mutex->wait_lock

    schedule()

    lock rt_mutex->wait_lock
    list_del
    unlock rt_mutex->wait_lock

                                  <idem>
                                    -EAGAIN

    lock hb->lock

After:

  futex_lock_pi()               futex_unlock_pi()

    lock hb->lock
    lock rt_mutex->wait_lock
    list_add
    unlock rt_mutex->wait_lock
    unlock hb->lock

    schedule()
                                  lock hb->lock
                                  unlock hb->lock
    lock hb->lock
    lock rt_mutex->wait_lock
    list_del
    unlock rt_mutex->wait_lock

                                  lock rt_mutex->wait_lock
                                  unlock rt_mutex->wait_lock
                                    -EAGAIN

    unlock hb->lock
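
Rendering the lock_pi() side of the After column as code makes the
invariant explicit. A schematic sketch only, not the actual kernel
code; the names (futex_lock_pi_sketch, hb, pi, waiter) are made up for
illustration:

  /*
   * Schematic of the "After" futex_lock_pi() ordering above; the point
   * is that the wait_list is only ever modified while holding _both_
   * hb->lock and rt_mutex->wait_lock.
   */
  static void futex_lock_pi_sketch(struct hb *hb, struct rt_mutex *pi,
                                   struct rt_mutex_waiter *waiter)
  {
          spin_lock(&hb->lock);
          raw_spin_lock(&pi->wait_lock);
          list_add(&waiter->list, &pi->wait_list); /* waiter becomes visible */
          raw_spin_unlock(&pi->wait_lock);
          spin_unlock(&hb->lock);

          schedule();                 /* until futex_unlock_pi() wakes us */

          spin_lock(&hb->lock);
          raw_spin_lock(&pi->wait_lock);
          list_del(&waiter->list);
          raw_spin_unlock(&pi->wait_lock);
          spin_unlock(&hb->lock);
  }

Note the queue and dequeue now happen entirely under hb->lock.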
Your T2 (of higher prio) would block on T1's hb->lock and boost T1
(since on RT, hb->lock is an rt_mutex).
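
For the boosting case, a minimal userspace analogue (illustrative
sketch only; assumes root for SCHED_FIFO, and hb_lock and the thread
names are made up) where a PTHREAD_PRIO_INHERIT mutex plays the part of
the rt_mutex-based hb->lock:

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <unistd.h>

  static pthread_mutex_t hb_lock;         /* stand-in for hb->lock */

  static void pin_cpu0_set_prio(int prio)
  {
          struct sched_param sp = { .sched_priority = prio };
          cpu_set_t set;

          CPU_ZERO(&set);
          CPU_SET(0, &set);               /* both tasks pinned to CPU0 */
          pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
          pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
  }

  static void *t1_fn(void *arg)           /* lower prio, holds the lock */
  {
          pin_cpu0_set_prio(10);
          pthread_mutex_lock(&hb_lock);
          sleep(1);                       /* T2 blocks meanwhile; PI boosts us to 20 */
          pthread_mutex_unlock(&hb_lock);
          return NULL;
  }

  static void *t2_fn(void *arg)           /* higher prio, blocks and boosts */
  {
          pin_cpu0_set_prio(20);
          pthread_mutex_lock(&hb_lock);   /* sleeps here, boosting T1 */
          pthread_mutex_unlock(&hb_lock);
          return NULL;
  }

  int main(void)
  {
          pthread_mutexattr_t attr;
          pthread_t t1, t2;

          pthread_mutexattr_init(&attr);
          pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
          pthread_mutex_init(&hb_lock, &attr);

          pthread_create(&t1, NULL, t1_fn, NULL);
          usleep(100 * 1000);             /* let T1 take the lock first */
          pthread_create(&t2, NULL, t2_fn, NULL);

          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }

Instead of spinning on -EAGAIN, T2 sleeps on the lock and lends its
priority to T1, so T1 gets to run and release it.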
Alternatively (!PREEMPT_FULL), the interleave cannot happen at all when
both tasks are pinned to a single CPU, because hb->lock, being a
spinlock, then disables preemption.
Unless I need to go drink more wake-up-juice..