Message-ID: <4BBA5C00.4090703@us.ibm.com>
Date:	Mon, 05 Apr 2010 14:54:08 -0700
From:	Darren Hart <dvhltc@...ibm.com>
To:	Avi Kivity <avi@...hat.com>
CC:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	Eric Dumazet <eric.dumazet@...il.com>,
	"Peter W. Morreale" <pmorreale@...ell.com>,
	Rik van Riel <riel@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	Sven-Thorsten Dietrich <sdietrich@...ell.com>,
	Chris Mason <chris.mason@...cle.com>,
	John Cooper <john.cooper@...rd-harmonic.com>,
	Chris Wright <chrisw@...s-sol.org>
Subject: Re: [PATCH V2 0/6][RFC] futex: FUTEX_LOCK with optional adaptive
 spinning

Avi Kivity wrote:
> On 04/05/2010 11:23 PM, Darren Hart wrote:
>> NOT FOR INCLUSION
>>
>> The following patch series implements a new experimental kernel-side
>> futex mutex via new FUTEX_LOCK and FUTEX_LOCK_ADAPTIVE futex op codes.
>> The adaptive spin follows the kernel mutex model of allowing one
>> spinner until the lock is released or the owner is descheduled. The
>> patch currently allows the user to specify if they want no spinning, a
>> single adaptive spinner, or multiple spinners (aggressive adaptive
>> spinning, or aas... which I have mistyped as "ass" enough times to
>> realize a better term is indeed required :-).
> 
> An interesting (but perhaps difficult to achieve) optimization would be 
> to spin in userspace.

I couldn't think of a lightweight way to determine in userspace when the
owner has been scheduled out; kernel assistance is required. You could
do this on the schedule() side of things, but I figured I'd get some
strong pushback if I tried to add a hook into descheduling that flipped
a bit in the futex value indicating the owner was about to be
descheduled. Still, that might be something to explore.
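
To make the idea concrete, here is a rough sketch of the userspace
side. It is purely hypothetical: OWNER_RUNNING is a made-up bit that
the scheduler would have to set and clear across context switches
(exactly the hook I'd expect pushback on), and a real version would
also need waiter accounting so the unlock path knows when to call
FUTEX_WAKE.

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

#define OWNER_RUNNING	0x80000000u	/* hypothetical, scheduler-maintained */

static void futex_lock_userspin(unsigned int *uaddr, unsigned int my_tid)
{
	for (;;) {
		/* Uncontended fast path: 0 -> (my_tid | OWNER_RUNNING). */
		unsigned int val = __sync_val_compare_and_swap(uaddr, 0,
						my_tid | OWNER_RUNNING);
		if (val == 0)
			return;

		/* Owner is still on a CPU: spin in userspace and retry. */
		if (val & OWNER_RUNNING)
			continue;

		/* Owner was descheduled: block in the kernel instead. */
		syscall(SYS_futex, uaddr, FUTEX_WAIT, val, NULL, NULL, 0);
	}
}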

> 
> How many cores (or hardware threads) does this machine have?  

Sorry, I meant to include that. I tested on an 8-CPU (no hardware
threads) system with two quad-core 2.6 GHz Opteron 2218s.

> At 10% duty cycle you have 25 waiters behind the lock on average. I
> don't think this is realistic, and it means that spinning is invoked
> only rarely.

Perhaps some instrumentation is in order; it seems to get invoked often
enough to achieve some 20% increase in lock/unlock iterations. Perhaps
another metric would be of more value, such as average wait time?

> I'd be interested in seeing runs where the average number of waiters is 
> 0.2, 0.5, 1, and 2, corresponding to moderate-to-bad contention.
> 25 average waiters on compute-bound code means the application needs
> to be rewritten; no amount of mutex tweaking will help it.

Perhaps something like NR_CPUS threads would be of more interest? At a
10% duty cycle that's about 0.8 average waiters, and at 25% it's about
2, your upper limit. I could add a few more duty-cycle points and make
25% the max. I'll kick that off and post the results... probably
tomorrow; 10M iterations takes a while, but it makes the results
relatively stable.
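
For reference, the back-of-the-envelope arithmetic (assuming the
average number of waiters is roughly nr_threads * duty_cycle, and the
~256 threads implied by your 25-waiter figure):

	256 threads * 0.10 ~= 25.6 average waiters (the current runs)
	  8 threads * 0.10 ~=  0.8
	  8 threads * 0.25 ~=  2.0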

> Does the wakeup code select the spinning waiter, or just a random waiter?

The wakeup code selects the highest-priority task, in FIFO order, to
wake up. However, under contention it is most likely going to go back
to sleep, as another waiter will steal the lock out from under it. This
locking strategy is unashamedly about as "unfair" as it gets.
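
Roughly, the steal looks like this (hypothetical sketch, not the actual
patch code; same headers as the sketch above):

/*
 * unlock() releases the futex and wakes the first waiter in FIFO
 * order...
 */
static void futex_unlock(unsigned int *uaddr)
{
	*uaddr = 0;	/* release; real code needs a memory barrier */
	syscall(SYS_futex, uaddr, FUTEX_WAKE, 1, NULL, NULL, 0);
}

/*
 * ...but any thread that is already running can grab the lock with a
 * bare cmpxchg before the woken waiter is scheduled back in:
 */
static int futex_trylock(unsigned int *uaddr, unsigned int my_tid)
{
	return __sync_bool_compare_and_swap(uaddr, 0, my_tid);
}

/*
 * The woken waiter then finds the word nonzero and goes back to
 * FUTEX_WAIT, so handoff is first-come-first-served rather than FIFO.
 */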


-- 
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team
