Message-ID: <98cfeb6e-f312-ba13-00b4-f5b125b24f8d@gmail.com>
Date: Fri, 16 Dec 2016 19:11:41 +0100
From: Nicolai Hähnle <nhaehnle@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org,
Nicolai Hähnle <Nicolai.Haehnle@....com>,
Ingo Molnar <mingo@...hat.com>,
Maarten Lankhorst <dev@...ankhorst.nl>,
Daniel Vetter <daniel@...ll.ch>,
Chris Wilson <chris@...is-wilson.co.uk>,
dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order
On 16.12.2016 18:15, Peter Zijlstra wrote:
> On Fri, Dec 16, 2016 at 03:19:43PM +0100, Nicolai Hähnle wrote:
>> The concern about picking up a handoff that we didn't request is real,
>> though it cannot happen in the first iteration. Perhaps this __mutex_trylock
>> can be moved to the end of the loop? See below...
>
>
>>>> @@ -728,7 +800,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>>>>  		 * or we must see its unlock and acquire.
>>>>  		 */
>>>>  		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
>>>> -		    __mutex_trylock(lock, first))
>>>> +		    __mutex_trylock(lock, use_ww_ctx || first))
>>>>  			break;
>>>>
>>>>  		spin_lock_mutex(&lock->wait_lock, flags);
>>
>> Change this code to:
>>
>>     acquired = first &&
>>                mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
>>                                      &waiter);
>>     spin_lock_mutex(&lock->wait_lock, flags);
>>
>>     if (acquired ||
>>         __mutex_trylock(lock, use_ww_ctx || first))
>>             break;
>
> goto acquired;
>
> will work lots better.
Sorry, I wasn't explicit enough. The idea was to get rid of the acquired
label and change things so that all paths exit the loop with the
wait_lock held, which seems cleaner to me. Roughly like the sketch below.
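An untested sketch, reusing the variable names from the v2 patch (the
signal and ww_mutex back-off checks at the top of the loop elided):

	for (;;) {
		/* ... signal and ww back-off checks, under wait_lock ... */

		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule_preempt_disabled();

		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
			first = true;
			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
		}

		set_task_state(task, state);

		acquired = first &&
			   mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
						 &waiter);
		spin_lock_mutex(&lock->wait_lock, flags);

		if (acquired ||
		    __mutex_trylock(lock, use_ww_ctx || first))
			break;
	}
	/* every exit from the loop lands here, with wait_lock held */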
>> }
>>
>> This changes the trylock to always be under the wait_lock, but we previously
>> had that at the beginning of the loop anyway.
>
>> It also removes back-to-back
>> calls to __mutex_trylock when going through the loop;
>
> Yeah, I had that explicitly. It allows taking the mutex when
> mutex_unlock() is still holding the wait_lock.
mutex_optimistic_spin() already calls __mutex_trylock, and for the
no-spin case, __mutex_unlock_slowpath() only calls wake_up_q() after
releasing the wait_lock.
So I don't see the purpose of the back-to-back __mutex_trylocks,
especially considering that if the first one succeeds, we immediately
take the wait_lock anyway.
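For reference, this is the unlock ordering I mean, sketched from the
4.9-era __mutex_unlock_slowpath() (handoff and other details elided):

	spin_lock_mutex(&lock->wait_lock, flags);
	if (!list_empty(&lock->wait_list)) {
		/* queue the first waiter for wakeup */
		struct mutex_waiter *waiter =
			list_first_entry(&lock->wait_list,
					 struct mutex_waiter, list);
		wake_q_add(&wake_q, waiter->task);
	}
	spin_unlock_mutex(&lock->wait_lock, flags);

	/* in the no-spin case, the waiter can only trylock after this */
	wake_up_q(&wake_q);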
Nicolai
>> and for the first
>> iteration, there is a __mutex_trylock under wait_lock already before adding
>> ourselves to the wait list.
>
> Correct.
>