Message-ID: <56FD8A94.9050807@hpe.com>
Date: Thu, 31 Mar 2016 16:37:40 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>, <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ding Tianhong <dingtianhong@...wei.com>,
Jason Low <jason.low2@....com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Tim Chen <tim.c.chen@...ux.intel.com>
Subject: Re: [PATCH v3 2/3] locking/mutex: Enable optimistic spinning of woken
task in wait queue
On 03/29/2016 11:39 AM, Peter Zijlstra wrote:
> On Tue, Mar 22, 2016 at 01:46:43PM -0400, Waiman Long wrote:
>> Ding Tianhong reported a live-lock situation where a constant stream
>> of incoming optimistic spinners blocked a task in the wait list from
>> getting the mutex.
>>
>> This patch attempts to fix this live-lock condition by enabling the
>> woken task in the wait queue to enter into an optimistic spinning
>> loop itself in parallel with the regular spinners in the OSQ. This
>> should prevent the live-lock condition from happening.
> I would very much like a few words on how fairness is preserved.
>
> Because while the waiter remains on the wait_list while it spins, and
> therefore unlock()s will only wake it, and we'll only contend with the
> one waiter, the fact that we have two spinners is not fair or starvation
> proof at all.
>
> By adding the waiter to the OSQ we get only a single spinner and force
> 'fairness' by queuing.
>
> I say 'fairness' because the OSQ (need_resched) cancellation can still
> take the waiter out again and let even more new spinners in.
>
In my v1 patch, I added a flag in the mutex structure to signal that the
waiter is spinning and that the OSQ spinner should yield, to address this
fairness issue. I took it out in my later patches as you said you wanted
to make the patch simpler.
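For reference, a rough sketch of that v1-style behavior (the
waiter_spinning flag and this simplified loop are illustrative only,
not the actual v1 code; the helpers are the ones already in mutex.c):

	/*
	 * Hypothetical sketch, not the actual v1 patch: the OSQ spinner
	 * polls a per-mutex flag and bails out of its spin loop, so the
	 * woken waiter gets an uncontended shot at the lock.
	 */
	static bool mutex_optimistic_spin(struct mutex *lock)
	{
		if (!osq_lock(&lock->osq))
			return false;

		for (;;) {
			struct task_struct *owner = READ_ONCE(lock->owner);

			if (owner && !mutex_spin_on_owner(lock, owner))
				break;

			if (READ_ONCE(lock->waiter_spinning))
				break;	/* yield to the spinning waiter */

			if (mutex_try_to_acquire(lock)) {
				osq_unlock(&lock->osq);
				return true;
			}

			if (need_resched())
				break;

			cpu_relax_lowlatency();
		}
		osq_unlock(&lock->osq);
		return false;
	}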
Yes, I do agree that the waiter spinner is not guaranteed a decent
chance to get the lock, but I think it is still better than queuing at
the end of the OSQ, as the time slice may expire before the waiter
bubbles up to the head of the queue. This can be especially problematic
if the waiter has a lower priority, which means a shorter time slice.
What do you think about the idea of adding a flag as in my v1 patch? On
64-bit systems there is a 4-byte hole below osq, so the flag won't
increase the structure size. It will add 4 bytes on 32-bit systems,
though.
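To illustrate the size argument, assuming the current field order in
struct mutex (exact offsets depend on the debug config options):

	#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
		struct optimistic_spin_queue osq;	/* 4 bytes */
		int waiter_spinning;	/* hypothetical flag: on 64-bit it
					 * fills the 4-byte padding hole
					 * after osq, so sizeof(struct
					 * mutex) is unchanged */
	#endif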
Alternatively, I can certainly add a few more comments to explain the
situation and the choice we made.
>> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>> index 5dd6171..5c0acee 100644
>> --- a/kernel/locking/mutex.c
>> +++ b/kernel/locking/mutex.c
>> @@ -538,6 +538,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>> struct task_struct *task = current;
>> struct mutex_waiter waiter;
>> unsigned long flags;
>> + bool acquired = false; /* True if the lock is acquired */
> Superfluous space there.
OK, will remove that.
Cheers,
Longman