Message-ID: <e4b21023-d916-ef8c-eec0-00726d412e10@gmail.com>
Date:   Fri, 16 Dec 2016 23:35:12 +0100
From:   Nicolai Hähnle <nhaehnle@...il.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Nicolai Hähnle <nhaehnle@...il.com>
Cc:     linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Maarten Lankhorst <dev@...ankhorst.nl>,
        Daniel Vetter <daniel@...ll.ch>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order

On 16.12.2016 21:00, Peter Zijlstra wrote:
> On Fri, Dec 16, 2016 at 07:11:41PM +0100, Nicolai Hähnle wrote:
>> mutex_optimistic_spin() already calls __mutex_trylock, and for the no-spin
>> case, __mutex_unlock_slowpath() only calls wake_up_q() after releasing the
>> wait_lock.
>
> mutex_optimistic_spin() is a no-op when !CONFIG_MUTEX_SPIN_ON_OWNER

Does this change the conclusion in a meaningful way? I did mention the 
no-spin case in the very part you quoted...

Again, AFAIU we're talking about the part of my proposal that turns what 
is effectively

	__mutex_trylock(lock, ...);
	spin_lock_mutex(&lock->wait_lock, flags);

(independent of whether the trylock succeeds or not!) into

	spin_lock_mutex(&lock->wait_lock, flags);
	__mutex_trylock(lock, ...);

in an effort to streamline the code overall.
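
To make the shape of that change concrete, here's a tiny userspace 
stand-in -- pthreads instead of the kernel primitives, made-up names, 
the wait-list bookkeeping reduced to comments -- that only illustrates 
the two orderings, not the real locking code:

	#include <pthread.h>
	#include <stdbool.h>

	struct toy_mutex {
		pthread_mutex_t owner;        /* stands in for the mutex itself */
		pthread_spinlock_t wait_lock; /* stands in for lock->wait_lock */
	};

	/*
	 * Current shape of one wait-loop pass: trylock outside the
	 * wait_lock, then take the wait_lock either way (on success we
	 * still need it to remove ourselves from the wait list).
	 */
	static bool pass_trylock_then_wait_lock(struct toy_mutex *m)
	{
		bool acquired = pthread_mutex_trylock(&m->owner) == 0;

		pthread_spin_lock(&m->wait_lock);
		/* ... dequeue ourselves on success, or keep waiting ... */
		pthread_spin_unlock(&m->wait_lock);
		return acquired;
	}

	/* Proposed shape: take the wait_lock first, trylock under it. */
	static bool pass_wait_lock_then_trylock(struct toy_mutex *m)
	{
		bool acquired;

		pthread_spin_lock(&m->wait_lock);
		acquired = pthread_mutex_trylock(&m->owner) == 0;
		/* ... dequeue ourselves on success, or keep waiting ... */
		pthread_spin_unlock(&m->wait_lock);
		return acquired;
	}

	int main(void)
	{
		struct toy_mutex m;

		pthread_mutex_init(&m.owner, NULL);
		pthread_spin_init(&m.wait_lock, PTHREAD_PROCESS_PRIVATE);

		if (pass_trylock_then_wait_lock(&m))   /* owner free, succeeds */
			pthread_mutex_unlock(&m.owner);
		if (pass_wait_lock_then_trylock(&m))
			pthread_mutex_unlock(&m.owner);

		pthread_spin_destroy(&m.wait_lock);
		pthread_mutex_destroy(&m.owner);
		return 0;
	}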

Also AFAIU, you're concerned that spin_lock_mutex(...) has to wait for 
mutex_unlock() to release the wait_lock, but when does that actually 
happen with any meaningful probability?

When we spin optimistically, that could happen -- except that 
__mutex_trylock is already called in mutex_optimistic_spin, so it 
doesn't matter. When we don't spin -- whether due to .config or !first 
-- then the chance of overlap with mutex_unlock is exceedingly small.

Even if we do overlap, we'll have to wait for mutex_unlock to release 
the wait_lock anyway! So what good does doing the trylock before taking 
the wait_lock really do?

Anyway, this is really more of an argument about whether there's a good 
reason to call __mutex_trylock twice in that loop. I don't think there 
is, and your arguments so far haven't been convincing, but the issue can 
be side-stepped for this patch by keeping the trylock calls as they are 
and just setting first = true unconditionally for ww_ctx != NULL (while 
keeping the logic for when to set the HANDOFF flag as-is). We should 
probably rename the variable (s/first/handoff/) then.
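
Very roughly, and completely untested -- this assumes the wait-loop 
structure of the current handoff code and is not meant as a real diff 
against the series -- something like:

	bool handoff = use_ww_ctx && ww_ctx;	/* s/first/handoff/ */
	...
	for (;;) {
		if (__mutex_trylock(lock, handoff))	/* trylock calls unchanged */
			break;
		...
		/* when to set the HANDOFF flag stays as it is today */
		if (__mutex_waiter_is_first(lock, &waiter)) {
			handoff = true;		/* already true for ww waiters */
			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
		}
		...
	}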

Nicolai
