Message-ID: <491D6B4EAD0A714894D8AD22F4BDE043A33A9E@SCYBEXDAG02.amd.com>
Date: Wed, 11 Apr 2012 05:04:03 +0000
From: "Chen, Dennis (SRDC SW)" <Dennis1.Chen@....com>
To: "paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>
CC: Clemens Ladisch <clemens@...isch.de>,
Ingo Molnar <mingo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: semaphore and mutex in current Linux kernel (3.2.2)
On Tue, Apr 10, 2012 at 2:45 AM, Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> On Fri, Apr 06, 2012 at 05:47:28PM +0000, Chen, Dennis (SRDC SW) wrote:
>> On Fri, Apr 6, 2012 at 6:10 PM, Clemens Ladisch <clemens@...isch.de> wrote:
>> > Chen, Dennis (SRDC SW) wrote:
>> >
>> > "On the internet, nobody can hear you being subtle."
>> >
>> > If some other process wants to run on the same CPU, the need_resched flag is set.
>> > (This might happen to make the cursor blink, for keyboard input, or when
>> > somebody starts a rogue process like ps.)
>> >
>>
>> Hmm, I forgot that __rcu_pending() is called in each timer interrupt, and that
>> under some conditions it calls set_need_resched() to set TIF_NEED_RESCHED...
>> The mutex optimization works hand in hand with RCU, fantastic!
>
> I must confess that you all lost me on this one.
>
> There is a call to set_need_resched() in __rcu_pending(), which is
> invoked when the current CPU has not yet responded to a non-preemptible
> RCU grace period for some time. However, in the common case where the
> CPUs all respond in reasonable time, __rcu_pending() will never call
> set_need_resched().
>
> However, we really do not want to call set_need_resched() on every call
> to __rcu_pending(). There is almost certainly a better solution to any
> problem that might be solved by a per-jiffy call to set_need_resched().
>
> So, what are you really trying to do?
>
> Thanx, Paul
Paul, I must confess that you may be right; I've realized my misunderstanding in the
previous email. But I don't want to pretend that I fully understand your "There is
almost certainly a better solution to any problem that might be solved by a per-jiffy
call to set_need_resched()", because that ties directly into your last question.
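To check my own understanding, here is the path I was looking at, trimmed down from
3.2's kernel/rcutree.c as far as I can read it (the comments are mine, so take the
details with a grain of salt):

static int __rcu_pending(struct rcu_state *rsp, struct rcu_data *rdp)
{
	/* Is the RCU core waiting for a quiescent state from this CPU? */
	if (rcu_scheduler_fully_active &&
	    rdp->qs_pending && !rdp->passed_quiesce) {
		rdp->n_rp_qs_pending++;
		/*
		 * Poke this CPU only when it is non-preemptible and a
		 * forced quiescent state is imminent, not once per
		 * jiffy. This matches what you said above.
		 */
		if (!rdp->preemptible &&
		    ULONG_CMP_LT(ACCESS_ONCE(rsp->jiffies_force_qs) - 1,
				 jiffies))
			set_need_resched();
	}

	/* ... the other pending checks are elided here ... */
	return 0;
}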
What I really want to do is measure the performance difference between semaphores and
mutexes. Before doing that I looked at the mutex optimization code, and my focus is on
the mutex_spin_on_owner() function: I don't know how long it takes before some
component in the kernel calls set_need_resched() to break the while loop.
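For reference, here is the loop I mean, as I read it in 3.2's kernel/sched.c (again
trimmed, so the details may be off):

int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	rcu_read_lock();
	/* Spin only while the lock owner is actively running on its CPU. */
	while (owner_running(lock, owner)) {
		/* This is the exit I am asking about: who sets this flag, and when? */
		if (need_resched())
			break;
		arch_mutex_cpu_relax();
	}
	rcu_read_unlock();

	/* The owner changed or we need to reschedule; the caller keeps
	 * spinning only if the lock was actually released. */
	return lock->owner == NULL;
}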
If that flag is only set at the jiffies level, then given that a process switch takes
on the order of microseconds, the current process may spin for several jiffies before
it finally gets the mutex or goes to sleep, and I can't see the benefit in that...
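To make the measurement concrete, I plan to start from a small module like the one
below. To be clear, this is only my own sketch (every name in it is mine, not from any
kernel source), and it only times the uncontended fast path; measuring the spinning
behavior we are discussing would need several kthreads contending on the same lock:

/* sem_vs_mutex.c: hypothetical micro-benchmark, uncontended fast path only. */
#include <linux/module.h>
#include <linux/semaphore.h>
#include <linux/mutex.h>
#include <linux/ktime.h>

static DEFINE_SEMAPHORE(bench_sem);	/* binary semaphore, count = 1 */
static DEFINE_MUTEX(bench_mutex);

#define BENCH_LOOPS	1000000

static int __init bench_init(void)
{
	ktime_t t0;
	s64 sem_ns, mutex_ns;
	int i;

	/* Time BENCH_LOOPS uncontended down()/up() pairs. */
	t0 = ktime_get();
	for (i = 0; i < BENCH_LOOPS; i++) {
		down(&bench_sem);
		up(&bench_sem);
	}
	sem_ns = ktime_to_ns(ktime_sub(ktime_get(), t0));

	/* Time BENCH_LOOPS uncontended mutex_lock()/mutex_unlock() pairs. */
	t0 = ktime_get();
	for (i = 0; i < BENCH_LOOPS; i++) {
		mutex_lock(&bench_mutex);
		mutex_unlock(&bench_mutex);
	}
	mutex_ns = ktime_to_ns(ktime_sub(ktime_get(), t0));

	pr_info("semaphore: %lld ns, mutex: %lld ns for %d lock/unlock pairs\n",
		sem_ns, mutex_ns, BENCH_LOOPS);
	return 0;
}

static void __exit bench_exit(void)
{
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");

Even this simple loop should at least show the fast-path cost difference between
down()/up() and mutex_lock()/mutex_unlock().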