Message-ID: <20120411173006.GB2473@linux.vnet.ibm.com>
Date:	Wed, 11 Apr 2012 10:30:06 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	"Chen, Dennis (SRDC SW)" <Dennis1.Chen@....com>
Cc:	Clemens Ladisch <clemens@...isch.de>,
	Ingo Molnar <mingo@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: semaphore and mutex in current Linux kernel (3.2.2)

On Wed, Apr 11, 2012 at 05:04:03AM +0000, Chen, Dennis (SRDC SW) wrote:
> On Tue, Apr 10, 2012 at 2:45 AM, Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> > On Fri, Apr 06, 2012 at 05:47:28PM +0000, Chen, Dennis (SRDC SW) wrote:
> >> On Fri, Apr 6, 2012 at 6:10 PM, Clemens Ladisch <clemens@...isch.de> wrote:
> >> > Chen, Dennis (SRDC SW) wrote:
> >> >
> >> > "On the internet, nobody can hear you being subtle."
> >> >
> >> > If some other process wants to run on the same CPU, TIF_NEED_RESCHED is set.
> >> > (This might happen to make the cursor blink, for keyboard input, or when
> >> > somebody starts a rogue process like ps.)
> >> >
> >>
> >> Hmm, I forgot that __rcu_pending() is called from each timer interrupt, and that under
> >> some conditions it calls set_need_resched() to set TIF_NEED_RESCHED...
> >> The mutex optimization works closely with RCU, fantastic!
> >
> > I must confess that you all lost me on this one.
> >
> > There is a call to set_need_resched() in __rcu_pending(), which is
> > invoked when the current CPU has not yet responded to a non-preemptible
> > RCU grace period for some time.  However, in the common case where the
> > CPUs all respond in reasonable time, __rcu_pending() will never call
> > set_need_resched().
> >
> > However, we really do not want to call set_need_resched() on every call
> > to __rcu_pending().  There is almost certainly a better solution to any
> > problem that might be solved by a per-jiffy call to set_need_resched().
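
For reference, the check in question looks roughly like this in the
3.2-era kernel/rcutree.c (a paraphrased sketch, not verbatim source):

	/* Fragment of __rcu_pending(), paraphrased. */
	if (rdp->qs_pending && !rdp->passed_quiesce) {
		/*
		 * This CPU still owes the current grace period a
		 * quiescent state.  If this is a non-preemptible
		 * flavor of RCU and a forced quiescent state is
		 * imminent, nudge this CPU to reschedule.
		 */
		if (!rdp->preemptible &&
		    ULONG_CMP_LT(ACCESS_ONCE(rsp->jiffies_force_qs) - 1,
				 jiffies))
			set_need_resched();
	}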
> >
> > So, what are you really trying to do?
> >
> >                                                        Thanx, Paul
> 
> Paul, I must confess that you may be right; I've realized the misunderstanding in my previous email.
> But I don't want to pretend that I fully understand your "There is almost certainly a
> better solution to any problem that might be solved by a per-jiffy call to set_need_resched()",
> because it is related to your last question.
> 
> I just want to compare the performance of semaphores and mutexes. Before doing that, I looked at the
> mutex optimization code, focusing on the mutex_spin_on_owner() function, and I don't know how long it
> takes before some component in the kernel calls set_need_resched() to break the while loop. If that
> happens at jiffy granularity, and a process switch takes on the order of microseconds, then the
> current process may spin for several jiffies before it either gets the mutex or finally goes to
> sleep, and I can't see the benefit here...

The loop spins only while a given task owns the lock.  This means that
for the loop to spin for several jiffies, one of three things must happen:

1.	The task holding the lock has been running continuously for
	several jiffies.  This would very likely be a bug.  (Why are
	you running CPU bound in the kernel for several jiffies,
	whether or not you are holding a lock?)

2.	The task spinning in mutex_spin_on_owner() happens to be
	preempted at exactly the same times as the owner is, and so
	by poor luck happens to always see the owner running.

	The probability of this is quite low, so it should (famous
	last words!) be safe to ignore.

3.	The task spinning in mutex_spin_on_owner() happens to be
	preempted at exactly the same times that the owner releases
	the lock, and so again by poor luck happens to always see the
	owner running.

	The probability of this is quite low, so it should (famous
	last words!) be safe to ignore.

Normally, the lock holder either blocks or releases the lock quickly,
so that mutex_spin_on_owner() exits its loop.
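
Concretely, the spin loop looks roughly like this in the 3.2-era
kernel/sched.c (a simplified sketch, not verbatim source):

	/* Sketch of mutex_spin_on_owner(): spin only while the owner runs. */
	rcu_read_lock();
	while (owner_running(lock, owner)) {
		/* Give up the optimistic spin if this CPU is needed. */
		if (need_resched())
			break;
		arch_mutex_cpu_relax();
	}
	rcu_read_unlock();

	/*
	 * We got here because the owner blocked, the owner changed, or
	 * this CPU was asked to reschedule.  Tell the caller to grab
	 * the lock only if it is actually free now.
	 */
	return lock->owner == NULL;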

So, are you seeing a situation where mutex_spin_on_owner() really is
spinning for multiple jiffies?

							Thanx, Paul

