Date:   Sat, 21 Apr 2018 19:39:35 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Mike Galbraith <efault@....de>
Cc:     Peter Zijlstra <peterz@...radead.org>, tglx@...utronix.de,
        mingo@...nel.org, longman@...hat.com, linux-kernel@...r.kernel.org,
        Davidlohr Bueso <dbues@...e.de>
Subject: Re: [PATCH 2/2] rtmutex: Reduce top-waiter blocking on a lock

On Fri, 20 Apr 2018, Mike Galbraith wrote:

>On Fri, 2018-04-20 at 17:50 +0200, Peter Zijlstra wrote:
>> Is this similar to what we have in RT (which, IIRC, has an optimistic
>> spinning implementation as well)?
>
>For the RT spinlock replacement, the top waiter can spin.

Yeah, and the differences between the rt-spinlock and this patch are points
(1) and (3). Probably worth mentioning and, at least at first thought, the
rt version might benefit from these extra reasons for breaking out of the
spin loop; lateral or normal, I doubt the rt-spinlock wants to keep
adaptive_wait()ing once the top_waiter has changed.
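
Roughly the kind of thing I mean, as a hypothetical tweak to -rt's
adaptive_wait() spin loop (sketch only; it assumes the waiter is passed
in, which today it is not):

        for (;;) {
                if (owner != rt_mutex_owner(lock))
                        break;
                /*
                 * Dereference owner->on_cpu only after re-checking that
                 * lock->owner still matches owner.
                 */
                barrier();
                if (!owner->on_cpu || waiter != rt_mutex_top_waiter(lock)) {
                        res = 1;        /* stop spinning, go block */
                        break;
                }
                cpu_relax();
        }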

>
>> ISTR there being some contention over the exact semantics of (3) many
>> years ago. IIRC the question was if an equal priority task was allowed
>> to steal; because lock stealing can lead to fairness issues. One would
>> expect two FIFO-50 tasks to be 'fair' wrt lock acquisition and not
>> starve one another.

Indeed.

>>
>> Therefore I think we only allowed higher prio tasks to steal and kept
>> FIFO order for equal priority tasks.

Right. In fact this patch is a very limited version of optimistic spinning,
because we have little room to break fairness and rt constraints (ie no
osq, etc).
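
For reference, the spin loop in this patch boils down to something like
the following (simplified sketch, not the literal hunk):

static bool rt_mutex_spin_on_owner(struct rt_mutex *lock,
                                   struct rt_mutex_waiter *waiter,
                                   struct task_struct *owner)
{
        bool ret = true;

        rcu_read_lock();
        while (rt_mutex_owner(lock) == owner) {
                /* re-check the owner before looking at owner->on_cpu */
                barrier();
                /*
                 * Stop spinning if the owner is off the cpu, we need to
                 * reschedule, or we are no longer the top waiter.
                 */
                if (!owner->on_cpu || need_resched() ||
                    waiter != rt_mutex_top_waiter(lock)) {
                        ret = false;
                        break;
                }
                cpu_relax();
        }
        rcu_read_unlock();

        return ret;
}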

So say waiter task A is spinning for the rtmutex when task B (with equal
prio) comes in and tries to take it as well. When B is being enqueued it
does not satisfy rt_mutex_waiter_less() against A, so we go rb_right. As
such the top-waiter pointer is not updated, and therefore B blocks while A
keeps spinning and hopefully takes the lock without blocking. And if A does
end up blocking, it is still the top waiter, so we stay FIFO wrt waiter B
all the way.
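
That is just a consequence of the waiter comparison used at enqueue time
(kernel/locking/rtmutex.c, roughly):

static inline int
rt_mutex_waiter_less(struct rt_mutex_waiter *left,
                     struct rt_mutex_waiter *right)
{
        if (left->prio < right->prio)
                return 1;

        /*
         * Deadline tasks fall back to comparing deadlines; for everyone
         * else an equal prio returns 0, so rt_mutex_enqueue() sends the
         * new waiter down rb_right and the leftmost (top) waiter stays
         * the same.
         */
        if (dl_prio(left->prio))
                return dl_time_before(left->deadline, right->deadline);

        return 0;
}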

>
>Yup, lateral steal is expressly forbidden for RT classes.

Only rt-spinlocks allow lateral stealing; this patch uses the same normal
stealing semantics that rtmutex already provides. And either way I don't
see how rt_mutex_spin_on_owner() would influence current rtmutex semantics,
as all that changes is not calling schedule(): we are already accounted for
and queued in the waiter tree.
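
Ie the change is confined to the blocking step of the slowlock wait loop,
roughly (sketch only, surrounding loop unchanged; owner is a local here):

        raw_spin_unlock_irq(&lock->wait_lock);

        /*
         * The waiter is already enqueued; the only difference is that
         * the top waiter may spin on a running owner instead of going
         * straight to sleep. Any failed or skipped spin still blocks.
         */
        owner = rt_mutex_owner(lock);
        if (!owner || !rt_mutex_spin_on_owner(lock, waiter, owner))
                schedule();

        raw_spin_lock_irq(&lock->wait_lock);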

Thanks,
Davidlohr
