Message-ID: <56FD8B1F.7070102@hpe.com>
Date: Thu, 31 Mar 2016 16:39:59 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>, <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ding Tianhong <dingtianhong@...wei.com>,
Jason Low <jason.low2@....com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Waiman Long <Waiman.Long@...com>
Subject: Re: [PATCH v3 3/3] locking/mutex: Avoid missed wakeup of mutex waiter
On 03/29/2016 12:36 PM, Peter Zijlstra wrote:
> On Tue, Mar 22, 2016 at 01:46:44PM -0400, Waiman Long wrote:
>> The current mutex code sets the count to -1 and then sets the task
>> state. This is the same order in which the mutex unlock path checks
>> the count and the task state, which could lead to a missed wakeup,
>> even though the problem would be cleared once a new waiter entered
>> the waiting queue.
>>
>> This patch reverses the order in the locking slowpath so that the
>> task state is set before the count. This should eliminate the
>> potential missed wakeup and improve latency.
> Is it really a problem though?
>
> So the 'race' is __mutex_lock_common() against
> __mutex_fastpath_unlock(), and that is fully serialized by the
> atomic instructions. Either the fast unlock path does 1->0 and the
> lock acquires, or the lock sets -1, at which point the unlock fails
> and enters __mutex_unlock_common_slowpath(), which is fully
> serialized against __mutex_lock_common() by the lock->wait_lock.
>
> I agree that the code is nicer after your patch, but I don't actually
> see a problem.
You are right again. I missed the spinlock serialization in my
analysis, so this patch isn't really necessary. I can withdraw it or
mark it as a cleanup.
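
For the archive, here is a rough sketch of the three paths under
discussion. This is illustrative pseudocode, not the real mutex.c
code: the function names (lock_slowpath, unlock_slowpath,
fastpath_unlock) are simplified stand-ins, and it assumes the old
struct mutex layout (atomic_t count, spinlock_t wait_lock, struct
list_head wait_list) with the old count convention: 1 = unlocked,
0 = locked, <0 = locked with possible waiters.

static void lock_slowpath(struct mutex *lock)
{
	struct mutex_waiter waiter;

	spin_lock(&lock->wait_lock);	/* serialized vs unlock_slowpath() */
	waiter.task = current;
	list_add_tail(&waiter.list, &lock->wait_list);

	for (;;) {
		/* Patch ordering: publish the task state first... */
		set_current_state(TASK_UNINTERRUPTIBLE);
		/* ...then mark the lock contended and try to grab it. */
		if (atomic_xchg(&lock->count, -1) == 1)
			break;			/* got the lock */
		spin_unlock(&lock->wait_lock);
		schedule();			/* woken by unlock_slowpath() */
		spin_lock(&lock->wait_lock);
	}

	__set_current_state(TASK_RUNNING);
	list_del(&waiter.list);
	if (list_empty(&lock->wait_list))
		atomic_set(&lock->count, 0);	/* locked, no other waiters */
	spin_unlock(&lock->wait_lock);
}

static void unlock_slowpath(struct mutex *lock)
{
	spin_lock(&lock->wait_lock);	/* waits for any queuing waiter */
	atomic_set(&lock->count, 1);	/* release the lock */
	if (!list_empty(&lock->wait_list)) {
		struct mutex_waiter *w;

		w = list_first_entry(&lock->wait_list,
				     struct mutex_waiter, list);
		wake_up_process(w->task);
	}
	spin_unlock(&lock->wait_lock);
}

static void fastpath_unlock(struct mutex *lock)
{
	/* 0 -> 1: uncontended, done.  Result <= 0: a waiter set -1. */
	if (atomic_inc_return(&lock->count) <= 0)
		unlock_slowpath(lock);
}

Since the waiter's xchg() and the unlocker's atomic_inc_return() are
atomic RMWs on the same word, and unlock_slowpath() has to take
lock->wait_lock before waking anyone, either the waiter sees
count == 1 and acquires the lock, or the unlocker sees the -1 and then
serializes on wait_lock behind the waiter. Either way there is no
window for a missed wakeup, with or without this patch.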
Cheers,
Longman