Date:	Mon, 12 Jan 2009 19:23:38 +0200
From:	Avi Kivity <avi@...hat.com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Gregory Haskins <ghaskins@...ell.com>,
	Matthew Wilcox <matthew@....cx>,
	Andi Kleen <andi@...stfloor.org>,
	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-btrfs <linux-btrfs@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Nick Piggin <npiggin@...e.de>,
	Peter Morreale <pmorreale@...ell.com>,
	Sven Dietrich <SDietrich@...ell.com>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>
Subject: Re: [PATCH -v8][RFC] mutex: implement adaptive spinning

Peter Zijlstra wrote:
> On Mon, 2009-01-12 at 18:13 +0200, Avi Kivity wrote:
>
>   
>> One thing that worries me here is that the spinners will spin on a 
>> memory location in struct mutex, which means that the cacheline holding 
>> the mutex (which is likely to be under write activity from the owner) 
>> will be continuously shared by the spinners, slowing the owner down when 
>> it needs to unshare it.  One way out of this is to spin on a location in 
>> struct mutex_waiter, and have the mutex owner touch it when it schedules 
>> out.
>>     
>
> Yeah, that is what pure MCS locks do -- however I don't think it's a
> feasible strategy for this spin/sleep hybrid.
>   

Bummer.
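
For the record, the shape I had in mind was roughly this (a sketch
only; the 'wait' field and the owner-side bump are hypothetical
additions, barriers elided):

	struct mutex_waiter {
		struct list_head	list;
		struct task_struct	*task;
		int			wait;	/* hypothetical: private line to spin on */
	};

	/*
	 * Waiter side: spin on our own stack-resident waiter, so the
	 * mutex cacheline stays exclusive to the owner.
	 */
	waiter->wait = 0;
	while (!waiter->wait)
		cpu_relax();

	/* Owner side, on unlock or when scheduling out: waiter->wait++; */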

>> So:
>> - each task_struct has an array of currently owned mutexes, appended to 
>> by mutex_lock()
>>     
>
> That's not going to fly, I think. Lockdep does this but it's very
> expensive and has some issues. We're currently at 48 max owners, and
> still some code paths manage to exceed that.
>   

We might make it per-cpu instead, and set a bit in the mutex when 
scheduling out so that unlock knows not to remove it from the list.
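
Roughly (every name here is hypothetical, a sketch rather than a
patch):

	#define MAX_SPUN_MUTEXES	16
	DEFINE_PER_CPU(struct mutex *, spun_mutexes[MAX_SPUN_MUTEXES]);
	DEFINE_PER_CPU(int, nr_spun);

	/* mutex_lock(), after acquiring the lock */
	preempt_disable();
	__get_cpu_var(spun_mutexes)[__get_cpu_var(nr_spun)++] = lock;
	preempt_enable();

	/* schedule(), when switching out a lock holder: the per-cpu
	 * entries will be reused by the next task, so flag the mutex
	 * and have unlock skip the list removal */
	lock->flags |= MUTEX_FL_SLEPT;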

>> - mutex waiters spin on mutex_waiter.wait, which they initialize to zero
>> - when switching out of a task, walk the mutex list, and for each mutex, 
>> bump each waiter's wait variable, and clear the owner array
>>     
>
> Which is O(n).
>   

It may be better than O(n) cpus banging on the mutex for the lock 
duration.  Of course we should avoid walking the part of the list where 
non-spinning waiters sleep (or maybe have a separate list for spinners).
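
Concretely, the switch-out walk might look like this (every field
beyond the task_struct itself is hypothetical):

	static void mutex_kick_spinners(struct task_struct *owner)
	{
		int i;

		for (i = 0; i < owner->nr_held; i++) {
			struct mutex *lock = owner->held_mutexes[i];
			struct mutex_waiter *w;

			/* walk only the spinners; sleepers don't care */
			list_for_each_entry(w, &lock->spinner_list, list)
				w->wait++;	/* each w watches its own line */
		}
		owner->nr_held = 0;	/* entries gone; unlock skips removal */
	}

The unlock path would do the same bump, but only for the waiter at the 
head of the list.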

>> - when unlocking a mutex, bump the nearest waiter's wait variable, and 
>> remove from the owner array
>>
>> Something similar might be done to spinlocks to reduce cacheline 
>> contention from spinners and the owner.
>>     
>
> Spinlocks can use 'pure' MCS locks.
>   

I'll read up on those, thanks.
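
(For reference, the classic MCS shape, with barriers and ACCESS_ONCE 
elided; a minimal sketch, not kernel code:

	struct mcs_node {
		struct mcs_node	*next;
		int		locked;
	};

	void mcs_lock(struct mcs_node **tail, struct mcs_node *node)
	{
		struct mcs_node *prev;

		node->next = NULL;
		node->locked = 0;
		prev = xchg(tail, node);	/* atomically become the tail */
		if (prev) {
			prev->next = node;
			while (!node->locked)	/* spin on our own node only */
				cpu_relax();
		}
	}

	void mcs_unlock(struct mcs_node **tail, struct mcs_node *node)
	{
		if (!node->next) {
			/* no visible successor: try to reset the tail */
			if (cmpxchg(tail, node, NULL) == node)
				return;
			while (!node->next)	/* successor is mid-enqueue */
				cpu_relax();
		}
		node->next->locked = 1;		/* hand over the lock */
	}

Each cpu spins on its own node, so the only cross-cpu cacheline 
traffic is the handoff itself.)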

-- 
error compiling committee.c: too many arguments to function
