Message-ID: <57E43EF9.8000400@hpe.com>
Date:   Thu, 22 Sep 2016 16:28:41 -0400
From:   Waiman Long <waiman.long@....com>
To:     Davidlohr Bueso <dave@...olabs.net>
CC:     Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Mike Galbraith <umgwanakikbuti@...il.com>,
        Ingo Molnar <mingo@...nel.org>,
        Jonathan Corbet <corbet@....net>,
        <linux-kernel@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        Jason Low <jason.low2@....com>,
        Scott J Norton <scott.norton@....com>,
        Douglas Hatch <doug.hatch@....com>
Subject: Re: [RFC PATCH v2 3/5] futex: Throughput-optimized (TO) futexes

On 09/22/2016 04:08 PM, Waiman Long wrote:
> On 09/22/2016 11:11 AM, Davidlohr Bueso wrote:
>> On Thu, 22 Sep 2016, Thomas Gleixner wrote:
>>
>>> On Thu, 22 Sep 2016, Davidlohr Bueso wrote:
>>>> On Thu, 22 Sep 2016, Thomas Gleixner wrote:
>>>> > Also what's the reason that we can't do probabilistic spinning for
>>>> > FUTEX_WAIT and have to add yet another specialized variant of
>>>> > futexes?
>>>>
>>>> Where would this leave the respective FUTEX_WAKE? A nop? We would
>>>> probably have to differentiate the fact that the queue was empty but
>>>> there was a spinner, instead of straightforwardly returning 0.
>>>
>>> Sorry, but I really can't parse this answer.
>>>
>>> Can you folks please communicate with proper and coherent explanations
>>> instead of throwing a few gnawed off bones in my direction?
>>
>> I actually think that FUTEX_WAIT is the better/nicer approach. But my
>> immediate question above was how to handle the FUTEX_WAKE counterpart.
>> If we want to maintain the current FIFO ordering for wakeups, WAIT
>> spinners will now create lock stealing scenarios (even if we guard
>> against starvation). Or we could reduce the scope of spinning, due to
>> the restrictions, similar to only the top waiter being able to spin for
>> rtmutexes. This of course will hurt the effectiveness of spinning in
>> FUTEX_WAIT in the first place.
>
> Actually, there can be a lot of lock stealing going on with the
> wait-wake futexes. If the critical section is short enough, many of the
> lock waiters can be waiting in the hash bucket spinlock queue, not yet
> sleeping, while the futex value changes. As a result, they will exit
> the futex syscall with EAGAIN and return to user space, where one of
> them may get the lock. So we can't assume that they will get the lock
> in FIFO order anyway.
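
To make that lock-stealing window concrete, here is a minimal sketch of a
wait-wake futex lock in user space (just an illustration, not code from
the patch; the 0/1 encoding and the helper names are made up):

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Thin wrapper around the 6-argument futex syscall. */
static long futex(atomic_int *uaddr, int op, int val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* Futex word: 0 = unlocked, 1 = locked (no waiter tracking). */
static void ww_lock(atomic_int *f)
{
	int zero = 0;

	/* Userspace fast path: 0 -> 1 means we own the lock. */
	while (!atomic_compare_exchange_strong(f, &zero, 1)) {
		/*
		 * Sleep only while the value is still 1. If the holder has
		 * already released the lock, FUTEX_WAIT fails with EAGAIN
		 * and we retry the cmpxchg -- that retry is exactly where
		 * a late comer can steal the lock ahead of waiters still
		 * queued in the kernel.
		 */
		futex(f, FUTEX_WAIT, 1);
		zero = 0;
	}
}

static void ww_unlock(atomic_int *f)
{
	atomic_store(f, 0);		/* release the lock ... */
	futex(f, FUTEX_WAKE, 1);	/* ... and wake one waiter */
}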

BTW, my initial attempt for the new futex was to use the same workflow
as the PI futexes, but with a mutex, which has optimistic spinning,
instead of an rt_mutex. That version could double the throughput
compared with PI futexes, but it still fell far short of what can be
achieved with wait-wake futexes. Looking at the performance figures from
the patch:

                 wait-wake futex     PI futex        TO futex
                 ---------------     --------        --------
max time            3.49s            50.91s          2.65s
min time            3.24s            50.84s          0.07s
average time        3.41s            50.90s          1.84s
sys time          7m22.4s            55.73s        2m32.9s
lock count       3,090,294          9,999,813       698,318
unlock count     3,268,896          9,999,814           134

The problem with a PI-futex-like version is that almost all the
lock/unlock operations were done in the kernel, which added overhead and
latency. Looking at the numbers for the TO futexes, less than 1/10 of
the lock operations were done in the kernel, and the number of unlocks
done there was insignificant. Locking was done mostly by lock stealing.
This is where most of the performance benefit comes from, not optimistic
spinning.
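
For contrast, the PI-futex protocol (as described in futex(2)) only has a
userspace fast path for the completely uncontended case; any contention
goes straight into the kernel. A rough sketch, reusing the headers from
the earlier snippet (again just an illustration, not the actual glibc or
patch code):

/* Futex word: 0 = unlocked, owner TID = locked (per futex(2)). */
static void pi_lock(atomic_int *f, pid_t tid)
{
	int zero = 0;

	if (atomic_compare_exchange_strong(f, &zero, tid))
		return;				/* uncontended fast path */
	/* Contended: the kernel queues us and assigns ownership. */
	syscall(SYS_futex, f, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
}

static void pi_unlock(atomic_int *f, pid_t tid)
{
	int expected = tid;

	if (atomic_compare_exchange_strong(f, &expected, 0))
		return;				/* no waiters, done */
	/* FUTEX_WAITERS is set: the kernel hands the lock to a waiter. */
	syscall(SYS_futex, f, FUTEX_UNLOCK_PI, 0, NULL, NULL, 0);
}

Under heavy contention the cmpxchg fast paths almost never succeed, so
nearly every lock and unlock ends up as a FUTEX_LOCK_PI/FUTEX_UNLOCK_PI
syscall, which is why the in-kernel lock and unlock counts in the table
above are both close to 10 million.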

This is also the reason a lock handoff mechanism is implemented: to
prevent the lock starvation that is likely to happen without one.
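
One way such a handoff can be pictured (purely illustrative, reusing the
earlier headers plus <stdbool.h>; the actual mechanism in the patch may
differ and lives largely in the kernel) is for the unlocker to write the
chosen waiter's TID into the futex word instead of clearing it to 0, so
that every stealer's 0 -> TID cmpxchg fails and only the designated
waiter can proceed:

/* Lock stealing only succeeds while the futex word is 0. */
static bool to_trylock(atomic_int *f, pid_t tid)
{
	int zero = 0;

	return atomic_compare_exchange_strong(f, &zero, tid);
}

/* Unlock, optionally handing the lock to a starved waiter. */
static void to_unlock(atomic_int *f, pid_t starved_waiter_tid)
{
	if (starved_waiter_tid)
		atomic_store(f, starved_waiter_tid); /* direct handoff */
	else
		atomic_store(f, 0);	/* normal unlock, stealing allowed */
	/* Waking the chosen waiter is left to the kernel side. */
}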

Cheers,
Longman

