Date:	Sat, 26 Mar 2016 10:56:52 -0500
From:	Petros Koutoupis <petros@...roskoutoupis.com>
To:	linux-kernel@...r.kernel.org
Cc:	"petros@...roskoutoupis.com" <petros@...roskoutoupis.com>
Subject: futex: clarification needed with drop_futex_key_refs and memory
 barriers

I stumbled on an interesting scenario which I am unable to fully explain and I
was hoping to get some other opinions on why this would or wouldn't work.

In recent testing on a 48-core Haswell arch server, our multi-threaded user space
application was utilizing 60% to 100% more CPU than on our smaller 24-core servers
(running an identical load). After spending a considerable amount of time analyzing
stack dumps and straces, it became apparent that the threads showing the higher CPU
utilization were off in futex land.

Shortly afterward I stumbled on commit 76835b0ebf8a7fe85beb03c75121419a7dec52f0
(futex: Ensure get_futex_key_refs() always implies a barrier), which addressed the
handling of private futexes and prevented a race condition by completing the
function with a memory barrier. Now, I fully understand why this patch was implemented:
to have a memory barrier before checking the "waiters." It makes sense. What doesn't
make sense (so far) is why applying the same change to the drop counterpart,
drop_futex_key_refs(), makes the problem go away. See the change and my notes below.
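
For comparison, the get side after that commit looks roughly like this (paraphrased
from memory, so the exact helpers and comments may differ slightly):

static void get_futex_key_refs(union futex_key *key)
{
	if (!key->both.ptr)
		return;

	switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {
	case FUT_OFF_INODE:
		ihold(key->shared.inode); /* implies smp_mb(); (B) */
		break;
	case FUT_OFF_MMSHARED:
		futex_get_mm(key); /* implies smp_mb(); (B) */
		break;
	default:
		/* private futexes take neither branch above */
		smp_mb(); /* explicit smp_mb(); (B) */
	}
}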


--- linux/kernel/futex.c.orig   2016-03-25 19:45:08.169563263 -0500
+++ linux/kernel/futex.c        2016-03-25 19:46:06.901562211 -0500
@@ -438,11 +438,13 @@ static void drop_futex_key_refs(union fu

        switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {
        case FUT_OFF_INODE:
-               iput(key->shared.inode);
+               iput(key->shared.inode); /* implies smp_mb(); (B) */
                break;
        case FUT_OFF_MMSHARED:
-               mmdrop(key->private.mm);
+               mmdrop(key->private.mm); /* implies smp_mb(); (B) */
                break;
+       default:
+               smp_mb(); /* explicit smp_mb(); (B) */
        }
 }


The iput() and mmdrop() routines in the switch statement eventually use
atomic_dec_and_test(), which according to Documentation/memory-barriers.txt
implies an smp_mb() on each side of the actual operation. Notice that private
futexes aren't handled by this switch (read below).
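
For example, mmdrop() is essentially just a thin wrapper around that atomic
operation (simplified and paraphrased from memory):

static inline void mmdrop(struct mm_struct *mm)
{
	/* atomic_dec_and_test() acts as a full barrier on each side */
	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
		__mmdrop(mm);
}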

Now there is a wrapper, put_futex_key(), which is called in a few functions as a
way to clean up before retrying, but in every case, before it is called, a check
is made to see whether the futex is private and, if so, the code retries at a more
appropriate point of its respective function instead (see the paraphrased pattern below).
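
The pattern I mean looks roughly like this (paraphrased from futex_wait_setup();
the details are from memory and may not match the source exactly):

	ret = get_futex_value_locked(&uval, uaddr);
	if (ret) {
		queue_unlock(*hb);

		ret = get_user(uval, uaddr);
		if (ret)
			goto out;

		/* private futex: skip put_futex_key() and retry further in */
		if (!(flags & FLAGS_SHARED))
			goto retry_private;

		/* shared futex: drop the key reference, then retry from the top */
		put_futex_key(&q->key);
		goto retry;
	}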

Now I have found two paths where this type of check/protection isn't made, and I
am curious whether I have stumbled on something that could potentially lead to a
race condition in a large SMP environment. Please refer to futex_wait() (which
reaches drop_futex_key_refs() indirectly via unqueue_me()) and futex_requeue().
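
In the futex_wait() case the drop happens at the tail of unqueue_me(), with no
FLAGS_SHARED check in front of it (again paraphrased from memory):

	/* tail of unqueue_me() */
	drop_futex_key_refs(&q->key);	/* a private key falls through the switch with no barrier */

	return ret;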

Any thoughts or opinions would be greatly appreciated. Thank you in advance.

--
Petros
