Message-ID: <4BBA6B6F.7040201@us.ibm.com>
Date:	Mon, 05 Apr 2010 15:59:59 -0700
From:	Darren Hart <dvhltc@...ibm.com>
To:	Avi Kivity <avi@...hat.com>
CC:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...e.hu>,
	Eric Dumazet <eric.dumazet@...il.com>,
	"Peter W. Morreale" <pmorreale@...ell.com>,
	Rik van Riel <riel@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Gregory Haskins <ghaskins@...ell.com>,
	Sven-Thorsten Dietrich <sdietrich@...ell.com>,
	Chris Mason <chris.mason@...cle.com>,
	John Cooper <john.cooper@...rd-harmonic.com>,
	Chris Wright <chrisw@...s-sol.org>
Subject: Re: [PATCH V2 0/6][RFC] futex: FUTEX_LOCK with optional adaptive
 spinning

Avi Kivity wrote:

>>> At 10%
>>> duty cycle you have 25 waiters behind the lock on average.  I don't 
>>> think this is realistic, and it means that spinning is invoked only 
>>> rarely.
>>
>> Perhaps some instrumentation is in order, it seems to get invoked 
>> enough to achieve some 20% increase in lock/unlock iterations. Perhaps 
>> another metric would be of more value - such as average wait time?
> 
> Why measure an unrealistic workload?

No argument there, thus my proposal for an alternate configuration below.
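
As for the metric itself, average wait time would be cheap to collect in
the testcase. A rough sketch (illustrative names only, not the existing
futextest code):

	#include <time.h>

	/* Sketch: time a single acquisition of the lock under test so
	 * each thread can accumulate its total wait and the harness can
	 * report the mean at the end of the run. */
	static long long timed_lock(void)
	{
		struct timespec t0, t1;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		futex_lock(&the_lock);	/* stand-in for the lock call */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
		       (t1.tv_nsec - t0.tv_nsec);
	}

	/* per thread:  wait_ns += timed_lock(); acquisitions++;
	 * report wait_ns / acquisitions as the average wait time. */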

>>> I'd be interested in seeing runs where the average number of waiters 
>>> is 0.2, 0.5, 1, and 2, corresponding to moderate-to-bad contention.
>>> 25 average waiters on compute bound code means the application needs 
>>> to be rewritten, no amount of mutex tweaking will help it.
>>
>> Perhaps something like NR_CPUS threads would be of more interest? 
> 
> That seems artificial.

How so? Several real-world applications use one thread per CPU, 
dispatching work to them, waiting for events, and so on.

> 
>> At 10% that's about .8, and at 25% it's the 2 at your upper limit. I could 
>> add a few more duty-cycle points and make 25% the max. I'll kick that 
>> off and post the results... probably tomorrow, 10M iterations takes a 
>> while, but makes the results relatively stable.
> 
> Thanks.  But why not vary the number of threads as well?

Absolutely, I don't disagree that all the variables should vary in order 
to get a complete picture. I'm starting with 8 threads - even so, it 
takes several hours to collect the data.
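
(Back-of-the-envelope for the 8-thread case, assuming the number of
threads contending at any instant scales linearly with duty cycle:

	avg contenders ~= nthreads * duty_cycle
	8 * 0.10 = 0.8        8 * 0.25 = 2.0

which is where the .8 and 2 figures above come from.)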

>>> Does the wakeup code select the spinning waiter, or just a random 
>>> waiter?
>>
>> The wakeup code selects the highest-priority task, in FIFO order, to 
>> wake up - however, under contention it is most likely to go back to 
>> sleep, as another waiter will steal the lock out from under it. This 
>> locking strategy is unashamedly about as "unfair" as it gets.
> 
> Best to avoid the wakeup if we notice the lock was stolen.

You really can't do this precisely. You can read the futex value at 
various points along the wakeup path, but at some point you have to 
commit to waking a task, and there is still a race between the time you 
call wake_up_task() and the time the woken task is scheduled and 
attempts the cmpxchg itself.
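
To illustrate, a simplified pseudo-code sketch - not the actual kernel
paths, and pick_highest_prio_waiter() is a made-up name:

	/* unlock/wakeup side */
	waiter = pick_highest_prio_waiter(q);
	wake_up_task(waiter);	/* commit: waiter will run eventually */

	/* meanwhile, a running (or spinning) thread can still do: */
	if (cmpxchg(&futex->val, UNLOCKED, my_tid) == UNLOCKED)
		return;		/* lock stolen before the waiter runs */

	/* When the waiter is finally scheduled, its own cmpxchg fails
	 * and it goes back to sleep.  The value can be re-checked right
	 * up until the wakeup, but the window between wake_up_task()
	 * and the waiter's cmpxchg cannot be closed. */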

-- 
Darren Hart
IBM Linux Technology Center
Real-Time Linux Team