Date:   Sun, 8 Aug 2021 10:05:35 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Mike Galbraith <efault@....de>
Subject: Re: [patch V3 56/64] futex: Correct the number of requeued waiters
 for PI

On Thu, 05 Aug 2021, Thomas Gleixner wrote:

>From: Thomas Gleixner <tglx@...utronix.de>
>
>The accounting is wrong when either the PI sanity check or the
>requeue PI operation fails. Adjust it in the failure path.

Ok, fortunately these accounting errors are benign considering they
are in error paths. This also made me wonder about the requeue PI
top-waiter wakeup from futex_proxy_trylock_atomic(), which is always
required with nr_wake == 1. We account for it in the successful case
where we acquired the lock on its behalf (and thus requeue_pi_wake_futex()
was called), but if the corresponding lookup_pi_state() fails, we'll retry.
So, shouldn't the task_count++ only happen when we know the requeueing
is next (a successful top waiter acquiring the lock + pi state)?

@@ -2260,7 +2260,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
		 */
		if (ret > 0) {
			WARN_ON(pi_state);
-			task_count++;
			/*
			 * If we acquired the lock, then the user space value
			 * of uaddr2 should be vpid. It cannot be changed by
@@ -2275,6 +2274,8 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
			 */
			ret = lookup_pi_state(uaddr2, ret, hb2, &key2,
					      &pi_state, &exiting);
+			if (!ret)
+				task_count++;
		}

		switch (ret) {
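
To make the ordering question concrete, here is a small self-contained
toy (not the kernel code, only the pattern; the stub that fails once and
then succeeds stands in for a retried lookup_pi_state(), and whether the
real retry path can re-enter the ret > 0 branch is exactly what I'm
asking above):

/*
 * Toy only: counts the "requeued top waiter" either before or after the
 * pi state lookup is known to succeed.  The stub fails on the first
 * attempt and succeeds on the second, mimicking a retried lookup.
 */
#include <stdio.h>

static int lookup_pi_state_stub(int attempt)
{
	return attempt == 0 ? -1 : 0;	/* fail once, then succeed */
}

static int count_waiters(int count_early)
{
	int task_count = 0;

	for (int attempt = 0; attempt < 2; attempt++) {
		/* stand-in for futex_proxy_trylock_atomic() returning > 0 */
		if (count_early)
			task_count++;
		if (lookup_pi_state_stub(attempt))
			continue;	/* retry the whole requeue */
		if (!count_early)
			task_count++;
		break;
	}
	return task_count;
}

int main(void)
{
	printf("count before lookup succeeds: %d\n", count_waiters(1));	/* 2 */
	printf("count after lookup succeeds:  %d\n", count_waiters(0));	/* 1 */
	return 0;
}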

Also, looking at the code, I think we can use the goto retry_private
optimization for private futexes upon futex_proxy_trylock_atomic()/
lookup_pi_state() -EFAULT errors:

@@ -2290,8 +2290,11 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
			double_unlock_hb(hb1, hb2);
			hb_waiters_dec(hb2);
			ret = fault_in_user_writeable(uaddr2);
-			if (!ret)
+			if (!ret) {
+				if (!(flags & FLAGS_SHARED))
+					goto retry_private;
				goto retry;
+			}
			return ret;
		case -EBUSY:
		case -EAGAIN:

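The rationale, roughly: the retry label redoes the full futex key
lookups, which is only needed when the futex is shared and its mapping
may have changed across the fault, while retry_private jumps straight
back to re-locking the hash buckets, which is enough for process-private
futexes whose keys stay valid. A standalone toy of that control flow
(not the kernel code, just the two-label retry pattern):

/*
 * Toy only: "retry" re-resolves the futex keys (stands in for the key
 * lookups on both uaddrs), "retry_private" skips that and goes straight
 * back to the relock step.  Private futexes can safely take the shorter
 * path because their keys stay valid across the fault fixup.
 */
#include <stdio.h>

#define FLAGS_SHARED	0x01	/* same meaning as the kernel flag in the diff */

static int requeue_like(unsigned int flags)
{
	int key_lookups = 0;
	int attempt = 0;

retry:
	key_lookups++;			/* full key lookup, as after "goto retry" */

retry_private:
	attempt++;
	if (attempt == 1) {		/* pretend the first pass faulted and was fixed up */
		if (!(flags & FLAGS_SHARED))
			goto retry_private;
		goto retry;
	}
	return key_lookups;
}

int main(void)
{
	printf("shared futex:  %d key lookups\n", requeue_like(FLAGS_SHARED));	/* 2 */
	printf("private futex: %d key lookups\n", requeue_like(0));		/* 1 */
	return 0;
}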

Thanks,
Davidlohr
