Message-ID: <alpine.DEB.2.11.1507020002170.3916@nanos>
Date:	Thu, 2 Jul 2015 00:27:39 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Davidlohr Bueso <dave@...olabs.net>
cc:	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Darren Hart <dvhart@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
	linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH -tip v2 1/2] locking/rtmutex: Support spin on owner

On Wed, 1 Jul 2015, Davidlohr Bueso wrote:

> Similar to what we have in other locks, particularly regular mutexes, the
> idea is that as long as the owner is running, there is a fair chance it'll
> release the lock soon, and thus a task trying to acquire the rtmutex will
> be better off spinning instead of blocking immediately after the fastpath.
> Conditions to stop spinning and enter the slowpath are simple:
> 
> (1) Upon need_resched()
> (2) Current lock owner blocks
>  
> Because rtmutexes track the lock owner atomically, we can extend the fastpath
> to continue polling on the lock owner via cmpxchg(lock->owner, NULL, current).
> 
> However, this is a conservative approach, such that if there are any waiters
> in-line, we stop spinning and immediately take the traditional slowpath. This
> allows priority boosting to take precedence over spinning, as otherwise we
> could starve a higher priority queued-up task (ie: top waiter) if spinners
> constantly steal the lock.
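
[For reference, the spin described in the changelog could be sketched
roughly as below. This is a toy model in C11 atomics, not the actual
patch or kernel code; all names (toy_rtmutex, toy_rtmutex_try_spin,
on_cpu, has_waiters, the need_resched stub) are illustrative.]

```c
/* Toy sketch of the proposed fastpath extension: poll the atomically
 * tracked owner and try cmpxchg(lock->owner, NULL, current), bailing
 * to the slowpath on need_resched(), owner blocking, or any waiter. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct task { int prio; _Atomic bool on_cpu; };

struct toy_rtmutex {
	_Atomic(struct task *) owner;	/* NULL when the lock is free */
	_Atomic bool has_waiters;	/* set once anyone hits the slowpath */
};

static bool need_resched_stub(void) { return false; }	/* stand-in */

/* Returns true if we got the lock by spinning; false -> take slowpath. */
static bool toy_rtmutex_try_spin(struct toy_rtmutex *lock, struct task *me)
{
	for (;;) {
		struct task *owner = atomic_load(&lock->owner);

		if (owner == NULL) {
			struct task *expected = NULL;
			if (atomic_compare_exchange_strong(&lock->owner,
							   &expected, me))
				return true;	/* took the fastpath */
			continue;
		}
		/* Stop conditions from the changelog: need_resched(),
		 * owner not running, or any queued waiter (so boosting
		 * takes precedence over spinning). */
		if (need_resched_stub() ||
		    !atomic_load(&owner->on_cpu) ||
		    atomic_load(&lock->has_waiters))
			return false;
		/* cpu_relax() would go here in real code */
	}
}
```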

I'm a bit wary about the whole approach. In the RT tree we spin AFTER
we've enqueued the waiter and run priority boosting. While I can see
the charm of your approach, i.e. avoiding the prio boost dance for the
simple case, this can introduce larger latencies.

T1 (prio = 0)         T2 (prio = 50)
 lock(RTM);
                      lock(RTM);
                       spin()
-->preemption
T3 (prio = 10)         leave spin, because owner is not on cpu
                       enqueue();
                       boost();
                       schedule();
-->preemption
T1 (prio = 50)

So we trade two extra context switches in the worst case for a
performance enhancement in the normal case. I cannot quantify the
impact of this, but we really need to evaluate that properly before
going there.
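
[For contrast, the RT-tree ordering described above — spin only AFTER
enqueueing the waiter and boosting the owner — might look roughly like
this. Again a toy in C11 atomics, not the real -rt implementation;
enqueue_waiter/boost_owner/schedule_out are counting stubs and all
names are illustrative.]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct task { int prio; _Atomic bool on_cpu; };

struct toy_rtmutex {
	_Atomic(struct task *) owner;	/* NULL when the lock is free */
};

/* Stubs standing in for the real enqueue/boost/schedule machinery. */
static int boosts, schedules;
static void enqueue_waiter(struct toy_rtmutex *l, struct task *w)
{ (void)l; (void)w; }
static void boost_owner(struct toy_rtmutex *l, struct task *w)
{ (void)l; (void)w; boosts++; }
static void schedule_out(void) { schedules++; }

/* RT-tree-style slowpath ordering: enqueue and boost BEFORE any
 * spinning, so priority inheritance is never delayed; the spin then
 * only replaces the final schedule() while the owner stays on a cpu. */
static void rt_style_lock_slowpath(struct toy_rtmutex *lock, struct task *me)
{
	enqueue_waiter(lock, me);
	boost_owner(lock, me);

	for (;;) {
		struct task *owner = atomic_load(&lock->owner);
		struct task *expected = NULL;

		if (owner == NULL &&
		    atomic_compare_exchange_strong(&lock->owner,
						   &expected, me))
			return;			/* lock acquired */
		if (owner && atomic_load(&owner->on_cpu))
			continue;		/* adaptive spin: owner runs */
		schedule_out();			/* owner off-cpu: block */
		return;				/* (toy: stop after one) */
	}
}
```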

Aside from that, if the lock is really contended, then you force all
spinners off the cpu as soon as one of them starts blocking, simply
because you have no idea which one is the top-priority spinner.

T1 (prio = 0)         T2 (prio = 50)      T3 (prio = 10)
 lock(RTM);
                      lock(RTM);          lock(RTM);
                       spin()              spin();
                                          --> preemption
                                           enqueue()
                                           boost();
                                           schedule();
                       sees waiter bit
                       enqueue();
                       boost();
                       schedule();

T2 could happily keep spinning despite T3 going to sleep. I'm not sure
if that's what we want to achieve.

Need to think about it some more, but I wanted to give you something
to think about as well :)

Thanks,

	tglx


