Date:   Mon, 14 Jan 2019 09:27:15 +0100
From:   Hugo Lefeuvre <hle@....eu.com>
To:     Greg Kroah-Hartman <greg@...ah.com>
Cc:     Arve Hjønnevåg <arve@...roid.com>,
        Riley Andrews <riandrews@...roid.com>,
        Todd Kjos <tkjos@...roid.com>,
        Martijn Coenen <maco@...roid.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Christian Brauner <christian@...uner.io>,
        devel@...verdev.osuosl.org, linux-kernel@...r.kernel.org
Subject: staging/android: questions regarding TODO entries

Hi,

This TODO entry from staging/android/TODO intrigues me:

    vsoc.c, uapi/vsoc_shm.h
     - The current driver uses the same wait queue for all of the futexes in a
       region. This will cause false wakeups in regions with a large number of
       waiting threads. We should eventually use multiple queues and select the
       queue based on the region.

I am not sure I understand it correctly.

What does "select the queue based on the region" mean here ? We already
have one queue per region, right ?

What I understand: there is one wait queue per region, meaning that if
threads T1 to Tn are waiting at offsets O1 to On (same region), then a
wakeup at offset Om will wake them all. That is a performance issue,
because only Tm (waiting for changes at offset Om) actually wants to be
woken up here; for all the others these are spurious wakeups.

Does the TODO suggest having one queue per offset?
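
If so, just to check I am reading it right: something hash-based,
roughly like the hashed bucket selection in the core futex code? A
minimal sketch of what I have in mind (the table size, the names and
the hash scheme are purely my own illustration, nothing of this is in
vsoc.c):

    #include <linux/hash.h>
    #include <linux/types.h>
    #include <linux/wait.h>

    /*
     * Illustration only: pick the wait queue by hashing the futex
     * offset into a small table, so that a wakeup at offset Om only
     * scans the bucket Om hashes to instead of every waiter in the
     * region. Each entry would be initialised with
     * init_waitqueue_head() at probe time.
     */
    #define VSOC_WQ_HASH_BITS 6

    static wait_queue_head_t vsoc_futex_wq[1 << VSOC_WQ_HASH_BITS];

    static wait_queue_head_t *vsoc_futex_wq_for(u32 offset)
    {
            return &vsoc_futex_wq[hash_32(offset, VSOC_WQ_HASH_BITS)];
    }

Even then, two offsets hashing to the same bucket would still produce
some spurious wakeups, so I guess the TODO's "multiple queues" is about
reducing them rather than eliminating them entirely?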

Also, this comment (drivers/staging/android/vsoc.c) mentions a worst case
of ten threads:

    /*
     * TODO(b/73664181): Use multiple futex wait queues.
     * We need to wake every sleeper when the condition changes. Typically
     * only a single thread will be waiting on the condition, but there
     * are exceptions. The worst case is about 10 threads.
     */

It is not clear to me how this value was obtained, nor under which
conditions it holds. There is no limit on the number of threads that
can sit in the wait queue, so how can we be sure that at most ten
threads will wait at the same offset?

Second, an unrelated question:

In the VSOC_SELF_INTERRUPT ioctl (which might be removed in the future
if VSOC_WAIT_FOR_INCOMING_INTERRUPT disappears, right?),
incoming_signalled is set to 1, but nowhere else in the driver is it
reset to zero. So, once VSOC_SELF_INTERRUPT has been executed once,
VSOC_WAIT_FOR_INCOMING_INTERRUPT no longer works?
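
To make the concern concrete, here is how I read the two paths. The
struct and function names below are hypothetical stand-ins of mine;
only incoming_signalled and the ioctl names come from the driver:

    #include <linux/wait.h>

    /* Hypothetical stand-in for the per-region state in vsoc.c. */
    struct region_state {
            wait_queue_head_t interrupt_wait_queue;
            int incoming_signalled;
    };

    /* VSOC_SELF_INTERRUPT as I read it: latch the flag, wake waiters. */
    static void self_interrupt(struct region_state *r)
    {
            r->incoming_signalled = 1;
            wake_up_interruptible(&r->interrupt_wait_queue);
    }

    /*
     * VSOC_WAIT_FOR_INCOMING_INTERRUPT as I read it: since nothing
     * here (or anywhere else in the driver) resets incoming_signalled
     * to 0, the condition stays true forever once set, and every
     * later wait returns immediately instead of sleeping.
     */
    static int wait_for_incoming_interrupt(struct region_state *r)
    {
            return wait_event_interruptible(r->interrupt_wait_queue,
                                            r->incoming_signalled);
    }

Is that reading correct, or is the flag reset somewhere I missed?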

Thanks for your work!

cheers,
Hugo

PS: CC-ing the output of get_maintainer.pl plus the contacts from the
TODO file. Please tell me if this is not the right way to go.

-- 
                Hugo Lefeuvre (hle)    |    www.owl.eu.com
RSA4096_ 360B 03B3 BF27 4F4D 7A3F D5E8 14AA 1EB8 A247 3DFD
ed25519_ 37B2 6D38 0B25 B8A2 6B9F 3A65 A36F 5357 5F2D DC4C

