Message-ID: <4BB42C04.7090308@us.ibm.com>
Date: Wed, 31 Mar 2010 22:15:48 -0700
From: Darren Hart <dvhltc@...ibm.com>
To: rostedt@...dmis.org
CC: "lkml," <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Gregory Haskins <ghaskins@...ell.com>,
Sven-Thorsten Dietrich <sdietrich@...ell.com>,
Peter Morreale <pmorreale@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>,
Eric Dumazet <eric.dumazet@...il.com>,
Chris Mason <chris.mason@...cle.com>
Subject: Re: RFC: Ideal Adaptive Spinning Conditions

Steven Rostedt wrote:
> On Wed, 2010-03-31 at 19:13 -0700, Darren Hart wrote:
>> Steven Rostedt wrote:
>>> On Wed, 2010-03-31 at 16:21 -0700, Darren Hart wrote:
>>>
>>>> o What type of lock hold times do we expect to benefit?
>>> 0 (that's a zero) :-p
>>>
>>> I haven't seen your patches but you are not doing a heuristic approach,
>>> are you? That is, do not "spin" hoping the lock will suddenly become
>>> free. I was against that for -rt and I would be against that for futex
>>> too.
>> I'm not sure what you're getting at here. Adaptive spinning is indeed
>> hoping the lock will become free while you are spinning and checking
>> its owner...
>
> I'm talking about the original idea people had of "lets spin for 50us
> and hope it is unlocked before then", which I thought was not a good
> idea.
>
>
>>>> o How much contention is a good match for adaptive spinning?
>>>> - this is related to the number of threads to run in the test
>>>> o How many spinners should be allowed?
>>>>
>>>> I can share the kernel patches if people are interested, but they are
>>>> really early, and I'm not sure they are of much value until I better
>>>> understand the conditions where this is expected to be useful.
>>> Again, I don't know how you implemented your adaptive spinners, but the
>>> trick to it in -rt was that it would only spin while the owner of the
>>> lock was actually running. If it was not running, it would sleep. No
>>> point waiting for a sleeping task to release its lock.
>> It does exactly this.
>
> OK, that's good.
>
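
Roughly, the policy amounts to something like the sketch below. This is
illustrative only, not the actual patch: futex_spin_on_owner() is just a
name I'm using here, and trylock_futex() and owner_running() are made-up
helpers standing in for whatever performs the atomic acquisition attempt
and the owner-on-CPU check.

static int futex_spin_on_owner(u32 __user *uaddr, struct task_struct *owner)
{
	for (;;) {
		/* The owner can drop the lock at any point while we spin. */
		if (trylock_futex(uaddr))
			return 1;	/* acquired, no need to sleep */

		/*
		 * Only keep spinning while the owner is actually running
		 * on another CPU. Once it has been preempted or blocks,
		 * waiting for a sleeping task to release its lock just
		 * burns cycles, so give up and queue to sleep instead.
		 */
		if (!owner_running(owner))
			return 0;

		if (need_resched())
			return 0;

		cpu_relax();
	}
}

The caller falls back to the normal queue-and-sleep path whenever this
returns 0.
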
>>> Is this what you did? Because, IIRC, this only benefited spinlocks
>>> converted to mutexes. It did not help with semaphores, because
>>> semaphores could be held for a long time. Thus, it was good for short
>>> held locks, but hurt performance on long held locks.
>> Trouble is, I'm still seeing performance penalties even on the shortest
>> critical section possible (lock();unlock();).
>
> performance penalties compared to what? not having adaptive at all?
Right. See the data in the original mail:

futex_lock:          Result: 635 Kiter/s
futex_lock_adaptive: Result: 542 Kiter/s

So roughly 15% fewer lock/unlock iterations per second with in-kernel
adaptive spinning enabled, for a critical section approaching 0 in
length. But if we agree I'm taking the right approach, then it's time
for me to polish things up a bit and send them out for review.
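
For reference, the numbers above come from a throughput loop of roughly
the following shape (a simplified sketch, not the actual futextest
source; futex_lock_t, futex_lock() and futex_unlock() stand in for the
test harness's lock type and operations):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 8		/* number of contending threads */
#define RUNTIME  10		/* seconds */

static futex_lock_t test_lock;	/* stand-in for the harness lock type */
static volatile int stop;
static unsigned long long total_iters;
static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
	unsigned long long iters = 0;

	while (!stop) {
		futex_lock(&test_lock);		/* critical section ~0 */
		futex_unlock(&test_lock);
		iters++;
	}

	/* Fold this thread's count into the global total. */
	pthread_mutex_lock(&count_mutex);
	total_iters += iters;
	pthread_mutex_unlock(&count_mutex);
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);

	sleep(RUNTIME);
	stop = 1;

	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);

	printf("Result: %llu Kiter/s\n", total_iters / RUNTIME / 1000);
	return 0;
}

The point is just that the critical section itself is empty, so any
per-iteration cost of the adaptive spin path shows up directly in the
Kiter/s figure.
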
--
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team