Message-ID: <4BB9EE72.3010902@us.ibm.com>
Date:	Mon, 05 Apr 2010 07:06:42 -0700
From:	Darren Hart <dvhltc@...ibm.com>
To:	john cooper <john.cooper@...rd-harmonic.com>
CC:	"Peter W. Morreale" <pmorreale@...ell.com>, rostedt@...dmis.org,
	"lkml," <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	Sven-Thorsten Dietrich <sdietrich@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Chris Mason <chris.mason@...cle.com>,
	john cooper <john.cooper@...hat.com>
Subject: Re: RFC: Ideal Adaptive Spinning Conditions

john cooper wrote:
> Darren Hart wrote:
>> Right, and I'm looking to provide some kernel assistance for userspace
>> spinlocks here, and am targeting short lived critical sections as well.
> 
> What did you have in mind beyond existing mechanisms
> which address sibling contention?
> 
> One scenario which AFAICT isn't yet addressed is that
> of a userspace spin lock holder taking a scheduling
> preemption which may result in other threads piling up
> on the lock orders of magnitude beyond normal wait times,
> until the lock holder is rescheduled.  

That is an excellent example.

Another is the highly fragile performance characteristics that 
sched_yield()-based spinlock implementations lead to. As sched_yield() 
implementations change, the scheduling behavior of the spinning tasks 
also changes. As the number of cores grows, more performance tuning is 
required. sched_yield() essentially allows the spinner to spin for a 
time and then get off the CPU for a time - but it doesn't have any idea 
about the state of the lock owner, so it is pure chance whether the 
spinning task will schedule back in at an opportune time, or whether it 
will just add scheduling overhead and consume CPU resources the lock 
owner could be using to finish its critical section.
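To make the pattern concrete, here is a minimal sketch of the kind of 
spin-then-yield userspace lock being described (illustrative only, not 
code from this thread; the SPIN_LIMIT value and the yspin_* names are 
made up for the example):

```c
#include <sched.h>
#include <stdatomic.h>

/* Illustrative spin limit - in practice this is exactly the knob that
 * needs retuning as core counts and sched_yield() behavior change. */
#define SPIN_LIMIT 100

typedef struct { atomic_int locked; } yspin_t;

static void yspin_lock(yspin_t *l)
{
	for (;;) {
		int spins;
		/* Test-and-test-and-set: spin on a relaxed load, only
		 * attempt the atomic exchange when the lock looks free. */
		for (spins = 0; spins < SPIN_LIMIT; spins++) {
			if (!atomic_load_explicit(&l->locked,
						  memory_order_relaxed) &&
			    !atomic_exchange_explicit(&l->locked, 1,
						      memory_order_acquire))
				return;
		}
		/* Blind back-off: the spinner has no idea whether the
		 * owner is running, preempted, or about to release - the
		 * problem described above. */
		sched_yield();
	}
}

static void yspin_unlock(yspin_t *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

The sched_yield() call is where the fragility lives: whether the yield 
helps or hurts depends entirely on what the scheduler does with it, which 
the lock has no way to observe from userspace.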

The idea here is to leverage the additional information we have in the 
kernel to make more intelligent decisions about how long to spin (as 
well as how many tasks should spin).

-- 
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
