Message-ID: <alpine.LFD.2.00.1001051052150.3630@localhost.localdomain>
Date:	Tue, 5 Jan 2010 10:56:37 -0800 (PST)
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Christoph Lameter <cl@...ux-foundation.org>
cc:	Andi Kleen <andi@...stfloor.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Minchan Kim <minchan.kim@...il.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"hugh.dickins" <hugh.dickins@...cali.co.uk>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()



On Tue, 5 Jan 2010, Christoph Lameter wrote:
> 
> There will not be much spinning. The cacheline will be held exclusively by
> one processor. A request by other processors for shared access to the
> cacheline will effectively stop the execution on those processors until
> the cacheline is available.

AT WHICH POINT THEY ALL RACE TO GET IT TO SHARED MODE, only to then have 
one of them actually see that it got a ticket!

Don't you see the problem? The spinlock (with ticket locks) essentially 
does the "xadd" atomic increment anyway, but then it _waits_ for it. All 
totally pointless, and all just making for problems, and wasting CPU time 
and causing more cross-node traffic.
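The "xadd then wait" shape Linus describes can be sketched as a minimal ticket spinlock in portable C11 atomics (this is illustrative only, not the kernel's actual arch_spinlock_t; field names are made up):

```c
/* Minimal sketch of a ticket spinlock. The fetch-and-add ("xadd")
 * grabs a ticket, and then the CPU must *spin*, re-reading the
 * contended cacheline until its ticket comes up -- the waiting
 * that the message above objects to. */
#include <stdatomic.h>

struct ticket_lock {
    atomic_uint next;   /* next ticket to hand out */
    atomic_uint owner;  /* ticket currently being served */
};

static void ticket_lock(struct ticket_lock *l)
{
    /* the atomic increment: one xadd on the lock cacheline */
    unsigned int me = atomic_fetch_add_explicit(&l->next, 1,
                                                memory_order_relaxed);
    /* ...followed by the pointless part: waiting for O(n) other
     * CPUs to pass through the lock first */
    while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
        ;
}

static void ticket_unlock(struct ticket_lock *l)
{
    atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}
```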

In contrast, a native xadd-based rwsem will basically do the atomic 
increment (that ticket spinlocks also do) and then just go on with its 
life, never waiting for all the other CPUs to also do their ticket 
spinlocks.
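For contrast, here is the same idea as a toy reader fast path of an xadd-based rwsem (a simplified sketch in the spirit of the x86 down_read fast path, not the real kernel code; the writer-bit encoding and names are invented for illustration):

```c
/* Sketch of the reader fast path of an xadd-based rwsem: one
 * atomic add, and -- if no writer holds the lock -- the reader
 * continues immediately.  No spinning on other readers. */
#include <stdatomic.h>
#include <stdbool.h>

#define RWSEM_WRITER_BIT (1LL << 62)   /* illustrative encoding */

struct simple_rwsem {
    atomic_llong count;   /* readers in low bits, writer flag on top */
};

static bool down_read_fast(struct simple_rwsem *sem)
{
    /* the same xadd a ticket lock does... */
    long long old = atomic_fetch_add_explicit(&sem->count, 1,
                                              memory_order_acquire);
    /* ...but with no wait loop: unless a writer is active (in
     * which case a slow path, not shown, would take over), the
     * reader just goes on with its life. */
    return (old & RWSEM_WRITER_BIT) == 0;
}

static void up_read(struct simple_rwsem *sem)
{
    atomic_fetch_sub_explicit(&sem->count, 1, memory_order_release);
}
```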

The "short critical region" you talk about is totally meaningless, since 
the contention will mean that everybody is waiting and hitting that 
cacheline for a long time regardless - they'll be waiting for O(n) 
_other_ CPUs to get the lock first!

		Linus
