Message-ID: <20170222110244.GP6536@twins.programming.kicks-ass.net>
Date:   Wed, 22 Feb 2017 12:02:44 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     tglx@...utronix.de
Cc:     mingo@...nel.org, juri.lelli@....com, rostedt@...dmis.org,
        xlpang@...hat.com, bigeasy@...utronix.de,
        linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
        jdesfossez@...icios.com, bristot@...hat.com, dvhart@...radead.org
Subject: Re: [PATCH -v4 00/10] FUTEX_UNLOCK_PI wobbles

On Tue, Dec 13, 2016 at 05:07:14PM +0100, Peter Zijlstra wrote:
> On Tue, Dec 13, 2016 at 09:36:38AM +0100, Peter Zijlstra wrote:
> 
> > The basic idea is to, like requeue PI, break the rt_mutex_lock() function into
> > pieces, such that we can enqueue the waiter while holding hb->lock, wait for
> > acquisition without hb->lock and can remove the waiter, on failure, while
> > holding hb->lock again.
> > 
> > That way, when we drop hb->lock to wait, futex and rt_mutex wait state is
> > consistent.
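
For reference, the shape of that split in futex_lock_pi() is roughly the
following (hand-wavy sketch only; the trylock fast path and the error
handling around rt_mutex_start_proxy_lock() are left out):

	spin_lock(q.lock_ptr);			/* hb->lock held: enqueue waiter */
	ret = rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex,
					&rt_waiter, current);
	spin_unlock(q.lock_ptr);		/* drop hb->lock while we block */

	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);

	spin_lock(q.lock_ptr);			/* hb->lock held again */
	if (ret)				/* didn't get the lock */
		rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter);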
> 
> And of course, there's a hole in...
> 
> There is a point in futex_unlock_pi() where we hold neither hb->lock nor
> wait_lock, at that point a futex_lock_pi() that had failed its
> rt_mutex_wait_proxy_lock() can sneak in and remove itself, even though
> we saw its waiter, recreating a variant of the initial problem.
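
Roughly, the window looks like this (simplified interleaving; the unlock
path has, per this series, already dropped hb->lock before wake_futex_pi()
takes wait_lock):

	futex_unlock_pi()                       futex_lock_pi()
	  spin_lock(&hb->lock)
	  top_waiter = futex_top_waiter()       /* sees the waiter */
	  spin_unlock(&hb->lock)
	                                          rt_mutex_wait_proxy_lock()
	                                            fails (timeout/signal)
	                                          spin_lock(&hb->lock)
	                                          rt_mutex_cleanup_proxy_lock()
	                                            removes the rt_waiter
	                                          spin_unlock(&hb->lock)
	  wake_futex_pi()
	    raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock)
	    rt_mutex_next_owner() == NULL
	    BUG_ON(!new_owner)                  /* old code explodes here */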
> 
> The below plugs the hole, but it's rather fragile in that it relies on
> overlapping critical sections and the specific detail that we call
> rt_mutex_cleanup_proxy_lock() immediately after (re)acquiring hb->lock.
> 
> There is another solution, but that's more involved and uglier still.
> 
> I'll give it a bit more thought.
> 

OK, so after having not thought about this, and then spent the last two
days trying to cram all this nonsense back into my head, I think I have
a slightly simpler option.

In any case, I'll go respin the patch-set and repost.


--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1395,7 +1395,18 @@ static int wake_futex_pi(u32 __user *uad
 
 	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
 	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
-	BUG_ON(!new_owner);
+	if (!new_owner) {
+		/*
+		 * Since we held neither hb->lock nor wait_lock when coming
+		 * into this function, we could have raced with futex_lock_pi()
+		 * such that it will have removed the waiter that brought us
+		 * here.
+		 *
+		 * In this case, retry the entire operation.
+		 */
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
 
 	/*
 	 * We pass it to the next owner. The WAITERS bit is always kept
@@ -2657,8 +2668,8 @@ static int futex_lock_pi(u32 __user *uad
 	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex
 	 * wait lists consistent.
 	 */
-	if (ret)
-		rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter);
+	if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
+		ret = 0;
 
 did_trylock:
 	/*
@@ -3043,8 +3054,9 @@ static int futex_wait_requeue_pi(u32 __u
 		debug_rt_mutex_free_waiter(&rt_waiter);
 
 		spin_lock(q.lock_ptr);
-		if (ret)
-			rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter);
+		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
+			ret = 0;
+
 		/*
 		 * Fixup the pi_state owner and possibly acquire the lock if we
 		 * haven't already.
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1781,16 +1781,29 @@ int rt_mutex_wait_proxy_lock(struct rt_m
  *
  * Clean up the failed lock acquisition as per rt_mutex_wait_proxy_lock().
  *
+ * Returns:
+ *  true  - did the cleanup, we're done.
+ *  false - we acquired the lock after rt_mutex_wait_proxy_lock() returned,
+ *          caller should disregard its return value.
+ *
  * Special API call for PI-futex support
  */
-void rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
+bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
 				 struct rt_mutex_waiter *waiter)
 {
-	raw_spin_lock_irq(&lock->wait_lock);
-
-	remove_waiter(lock, waiter);
-	fixup_rt_mutex_waiters(lock);
+	bool cleanup = false;
 
+	raw_spin_lock_irq(&lock->wait_lock);
+	/*
+	 * If we acquired the lock, no cleanup required.
+	 */
+	if (rt_mutex_owner(lock) != current) {
+		remove_waiter(lock, waiter);
+		fixup_rt_mutex_waiters(lock);
+		cleanup = true;
+	}
 	raw_spin_unlock_irq(&lock->wait_lock);
+
+	return cleanup;
 }
 
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -106,11 +106,10 @@ extern void rt_mutex_proxy_unlock(struct
 extern int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 				     struct rt_mutex_waiter *waiter,
 				     struct task_struct *task);
-
 extern int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
 			       struct hrtimer_sleeper *to,
 			       struct rt_mutex_waiter *waiter);
-extern void rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
+extern bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
 				 struct rt_mutex_waiter *waiter);
 
 extern int rt_mutex_futex_trylock(struct rt_mutex *l);
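
The unlock side is then expected to treat the new -EAGAIN as "a waiter
vanished between hb->lock and wait_lock, start over". Hypothetical sketch
of that caller; the actual retry plumbing is elsewhere in the series, not
in this delta:

	ret = wake_futex_pi(uaddr, uval, pi_state);
	if (ret == -EAGAIN) {
		put_futex_key(&key);
		goto retry;		/* re-read the futex value and redo */
	}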
