Date:   Mon, 3 Oct 2016 17:44:22 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     mingo@...nel.org, tglx@...utronix.de, juri.lelli@....com,
        xlpang@...hat.com, bigeasy@...utronix.de,
        linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
        jdesfossez@...icios.com, bristot@...hat.com
Subject: Re: [RFC][PATCH 4/4] futex: Rewrite FUTEX_UNLOCK_PI

On Mon, Oct 03, 2016 at 11:36:24AM -0400, Steven Rostedt wrote:
> >  	/*
> > -	 * If current does not own the pi_state then the futex is
> > -	 * inconsistent and user space fiddled with the futex value.
> > +	 * Now that we hold wait_lock, no new waiters can happen on the
> > +	 * rt_mutex and new owner is stable. Drop hb->lock.
> >  	 */
> > -	if (pi_state->owner != current)
> > -		return -EINVAL;
> > +	spin_unlock(&hb->lock);
> >  
> 
> Also, as Sebastian has said before, I believe this breaks rt's migrate
> disable code. migrate_disable() and migrate_enable() are nops if
> preemption is disabled, so if you hold a raw_spin_lock across a
> spin_unlock(), the migrate_enable() inside that spin_unlock() becomes a
> nop and migration is never re-enabled.

It's been too long since I looked at that trainwreck, but yuck, that
would make lock-unlock order important :-(
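For illustration, the problematic nesting looks roughly like this (a
hedged sketch, not the actual futex code; it assumes the usual RT
mapping where spin_lock() becomes an rt_mutex acquisition plus
migrate_disable(), and raw_spin_lock_irq() disables preemption):

```c
/*
 * Sketch of the broken ordering on PREEMPT_RT -- illustrative only.
 */
spin_lock(&hb->lock);           /* rt_mutex lock + migrate_disable() */
raw_spin_lock_irq(&wait_lock);  /* disables preemption (and IRQs)    */

spin_unlock(&hb->lock);         /* the migrate_enable() in here is a
                                 * nop because preemption is disabled,
                                 * so the migrate-disable count is
                                 * never dropped                      */

raw_spin_unlock_irq(&wait_lock);/* too late: the matching enable was
                                 * already skipped, the task stays
                                 * pinned to its CPU                  */
```

Dropping hb->lock before taking wait_lock, as in the patch below,
avoids ever running spin_unlock() inside the raw-spinlocked region.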

Now I think we could do something like so... but I'm not entirely sure
of the various lifetime rules here; it's not overly documented.

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1300,15 +1300,14 @@ static int wake_futex_pi(u32 __user *uad
 	WAKE_Q(wake_q);
 	int ret = 0;
 
-	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-
 	WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount));
 	/*
-	 * Now that we hold wait_lock, no new waiters can happen on the
-	 * rt_mutex and new owner is stable. Drop hb->lock.
+	 * XXX
 	 */
 	spin_unlock(&hb->lock);
 
+	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+
 	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
 
 	/*
