Message-ID: <alpine.DEB.2.20.1703071419290.3584@nanos>
Date: Tue, 7 Mar 2017 14:22:14 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: mingo@...nel.org, juri.lelli@....com, rostedt@...dmis.org,
xlpang@...hat.com, bigeasy@...utronix.de,
linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
jdesfossez@...icios.com, bristot@...hat.com, dvhart@...radead.org
Subject: Re: [PATCH -v5 07/14] futex: Change locking rules
On Sat, 4 Mar 2017, Peter Zijlstra wrote:
> @@ -2166,36 +2252,43 @@ static int fixup_pi_state_owner(u32 __us
>  	/*
> -	 * To handle the page fault we need to drop the hash bucket
> -	 * lock here. That gives the other task (either the highest priority
> -	 * waiter itself or the task which stole the rtmutex) the
> -	 * chance to try the fixup of the pi_state. So once we are
> -	 * back from handling the fault we need to check the pi_state
> -	 * after reacquiring the hash bucket lock and before trying to
> -	 * do another fixup. When the fixup has been done already we
> -	 * simply return.
> +	 * To handle the page fault we need to drop the locks here. That gives
> +	 * the other task (either the highest priority waiter itself or the
> +	 * task which stole the rtmutex) the chance to try the fixup of the
> +	 * pi_state. So once we are back from handling the fault we need to
> +	 * check the pi_state after reacquiring the locks and before trying to
> +	 * do another fixup. When the fixup has been done already we simply
> +	 * return.
> +	 *
> +	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
> +	 * drop hb->lock since the caller owns the hb -> futex_q relation.
> +	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
>  	 */
>  handle_fault:
> +	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
>  	spin_unlock(q->lock_ptr);
> 
>  	ret = fault_in_user_writeable(uaddr);
> 
>  	spin_lock(q->lock_ptr);
> +	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
> 
>  	/*
>  	 * Check if someone else fixed it for us:
Adding context:
	 */
	if (pi_state->owner != oldowner)
		return 0;
	if (ret)
		return ret;
	goto retry;
Both 'return' statements leak &pi_state->pi_mutex.wait_lock ....
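
A minimal, untested sketch of one way to plug that (the 'out_unlock' label
is purely illustrative here, not taken from the patch):

	/*
	 * Check if someone else fixed it for us:
	 */
	if (pi_state->owner != oldowner) {
		ret = 0;
		goto out_unlock;	/* drop wait_lock before returning */
	}

	if (ret)
		goto out_unlock;	/* same for the fault error path */

	goto retry;

out_unlock:
	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
	return ret;

That leaves q->lock_ptr held on return, which the callers expect, and only
drops the wait_lock which this function acquired itself.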
Thanks,
tglx