Message-ID: <20161003113624.04f1f9f2@gandalf.local.home>
Date: Mon, 3 Oct 2016 11:36:24 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, tglx@...utronix.de, juri.lelli@....com,
xlpang@...hat.com, bigeasy@...utronix.de,
linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
jdesfossez@...icios.com, bristot@...hat.com
Subject: Re: [RFC][PATCH 4/4] futex: Rewrite FUTEX_UNLOCK_PI
On Mon, 03 Oct 2016 11:12:38 +0200
Peter Zijlstra <peterz@...radead.org> wrote:
> There's a number of 'interesting' problems with FUTEX_UNLOCK_PI, all
> caused by holding hb->lock while doing the rt_mutex_unlock()
> equivalent.
>
> This patch doesn't attempt to fix any of the actual problems, but
> instead reworks the code to not hold hb->lock across the unlock,
> paving the way to actually fix the problems later.
>
> The current reason we hold hb->lock over unlock is that it serializes
> against FUTEX_LOCK_PI and prevents new waiters from coming in. This
> then ensures the rt_mutex_next_owner() value is stable and can be
> written into the user-space futex value before doing the unlock, such
> that the unlock will indeed end up at new_owner.
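For reference, the pre-patch ordering being described is roughly the
following (a simplified sketch, not the actual code):

	spin_lock(&hb->lock);			/* blocks FUTEX_LOCK_PI */
	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
	newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
	cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
	/* ... the rt_mutex_unlock() equivalent, waking new_owner ... */
	spin_unlock(&hb->lock);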
>
> This patch recognises that holding rt_mutex::wait_lock results in the
> very same guarantee: no new waiters can come in while we hold that
> lock -- after all, waiters would need this lock to queue themselves.
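That guarantee falls out of the rt_mutex slowpath itself; roughly, from
kernel/locking/rtmutex.c (heavily trimmed):

	static int rt_mutex_slowlock(struct rt_mutex *lock, ...)
	{
		raw_spin_lock_irq(&lock->wait_lock);
		...
		/* the waiter enqueues itself here, which cannot happen
		 * while the unlock side holds wait_lock */
		ret = task_blocks_on_rt_mutex(lock, &waiter, current, chwalk);
		...
		raw_spin_unlock_irq(&lock->wait_lock);
	}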
>
> It therefore restructures the code to keep rt_mutex::wait_lock held.
>
> This (of course) is not entirely straightforward either; see the
> comment in rt_mutex_slowunlock(): doing the unlock itself might drop
> wait_lock, letting new waiters in. To cure this,
> rt_mutex_futex_unlock() becomes a variant of rt_mutex_slowunlock()
> that returns -EAGAIN instead. This ensures the FUTEX_UNLOCK_PI code
> aborts and restarts the entire operation.
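So the FUTEX_UNLOCK_PI side grows a retry along these lines (a sketch,
not the exact hunk):

	ret = wake_futex_pi(uaddr, uval, top_waiter, hb);
	...
	if (ret == -EAGAIN) {
		/*
		 * rt_mutex_futex_unlock() dropped wait_lock; new waiters
		 * may have come in, redo the whole operation.
		 */
		put_futex_key(&key);
		goto retry;
	}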
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/futex.c | 63 +++++++++++++++++--------------
> kernel/locking/rtmutex.c | 81 ++++++++++++++++++++++++++++++++++++----
> kernel/locking/rtmutex_common.h | 4 -
> 3 files changed, 110 insertions(+), 38 deletions(-)
>
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -1294,24 +1294,21 @@ static void mark_wake_futex(struct wake_
> static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *top_waiter,
> struct futex_hash_bucket *hb)
> {
> - struct task_struct *new_owner;
> struct futex_pi_state *pi_state = top_waiter->pi_state;
> u32 uninitialized_var(curval), newval;
> + struct task_struct *new_owner;
> WAKE_Q(wake_q);
> - bool deboost;
> int ret = 0;
>
> - if (!pi_state)
> - return -EINVAL;
> + raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
>
> + WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
Don't we have a rule that WARN_ON() and BUG_ON() should never have
"side effects"? That is, they should only check values; the expression
inside them should not update state.

Hence have:

	ret = atomic_inc_not_zero(&pi_state->refcount);
	WARN_ON_ONCE(!ret);
> /*
> - * If current does not own the pi_state then the futex is
> - * inconsistent and user space fiddled with the futex value.
> + * Now that we hold wait_lock, no new waiters can happen on the
> + * rt_mutex and new owner is stable. Drop hb->lock.
> */
> - if (pi_state->owner != current)
> - return -EINVAL;
> + spin_unlock(&hb->lock);
>
Also, as Sebastian has said before, I believe this breaks rt's migrate
disable code. migrate_disable() and migrate_enable() are nops when
preemption is disabled, so if you call spin_unlock() while holding a
raw_spin_lock (i.e. with preemption disabled), the migrate_enable()
inside that spin_unlock() will be a nop, and the migrate_disable() done
by the matching spin_lock() will never be undone.
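To illustrate: on RT, spin locks are sleeping locks wrapped in
migrate_disable()/migrate_enable(), so the sequence in this patch
becomes:

	spin_lock(&hb->lock);		/* migrate_disable() */
	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
					/* preemption now disabled */
	spin_unlock(&hb->lock);		/* migrate_enable() is a nop here,
					 * so the task stays pinned */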
-- Steve
> - raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
> new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
>
> /*