Message-Id: <200608292018.01602.ak@suse.de>
Date:	Tue, 29 Aug 2006 20:18:01 +0200
From:	Andi Kleen <ak@...e.de>
To:	Christoph Lameter <clameter@....com>
Cc:	David Howells <dhowells@...hat.com>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Arjan van de Ven <arjan@...radead.org>,
	Dong Feng <middle.fengdong@...il.com>,
	Paul Mackerras <paulus@...ba.org>,
	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: Why Semaphore Hardware-Dependent?

On Tuesday 29 August 2006 19:36, Christoph Lameter wrote:
> On Tue, 29 Aug 2006, Andi Kleen wrote:
> 
> > On Tuesday 29 August 2006 17:56, Christoph Lameter wrote:
> > > On Tue, 29 Aug 2006, David Howells wrote:
> > > 
> > > > Because i386 (and x86_64) can do better by using XADDL/XADDQ.
> > > 
> > > And Ia64 would like to use fetchadd....
> > 
> > This might be a dumb question, but I would expect that even on Altix,
> > with lots of parallel faulting threads, rwsem performance would basically
> > be limited by acquiring the cache line and releasing it later to another CPU.
> 
> Correct. However, a cmpxchg may have to acquire that cacheline multiple 
> times in a highly contended situation.

Hmm, thinking about it, that sounds unlikely, because only the first and the
last user should touch the rwsem head. But OK, it might be possible.
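
To make the cacheline argument concrete, here is a rough userspace
sketch (untested; GCC __sync builtins standing in for the real per-arch
primitives, function names made up): the cmpxchg loop has to re-acquire
the counter's cache line exclusively on every retry, while xadd
acquires it exactly once per caller.

/* cmpxchg-style fast path: under contention the loop can retry, and
 * each retry re-acquires the counter's cache line exclusively. */
static inline long down_read_count_cmpxchg(long *count)
{
	long old;

	do {
		old = *count;	/* line comes in shared here */
	} while (!__sync_bool_compare_and_swap(count, old, old + 1));
	return old + 1;
}

/* xadd-style fast path: one locked instruction, so the cache line is
 * acquired exclusively exactly once no matter how contended it is. */
static inline long down_read_count_xadd(long *count)
{
	return __sync_add_and_fetch(count, 1);
}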

BTW, maybe it would be a good idea to switch the wait list to an hlist;
then the last user in the queue wouldn't need to touch the cache line
of the head. Or maybe even a singly linked list; then some more cache
bounces might be avoidable.
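
For reference, the structural difference (these are roughly the
definitions from include/linux/list.h): with the circular list_head the
tail waiter links back to the head, while with an hlist the tail's next
is simply NULL, so deleting a waiter at the tail never writes the
head's cache line.

struct list_head {			/* circular, doubly linked */
	struct list_head *next, *prev;	/* tail->next == head */
};

struct hlist_head {
	struct hlist_node *first;	/* head is a single pointer */
};

struct hlist_node {
	struct hlist_node *next;	/* NULL at the tail, not head */
	struct hlist_node **pprev;	/* previous node's next field */
};

/* Essentially the kernel's __hlist_del: only neighbouring nodes'
 * lines are written; the head is touched only when the first node
 * is the one being removed. */
static inline void hlist_del_node(struct hlist_node *n)
{
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
}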

That is really why we need a single C implementation. If we had that,
such optimizations would be pretty easy; without it, it's a big mess.
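
I mean something along these lines for the shared fast path (just a
sketch; the count encoding is invented for illustration, the GCC
builtin again stands in for the per-arch atomic, and the two extern
functions would be the common out-of-line C slow paths):

struct rw_semaphore {
	long count;		/* spinlock + wait list omitted */
};

extern void rwsem_down_read_failed(struct rw_semaphore *sem);
extern void rwsem_wake(struct rw_semaphore *sem);

static inline void down_read(struct rw_semaphore *sem)
{
	/* Each reader adds +1; a queued writer adds a large negative
	 * bias, so a non-positive result means "take the slow path". */
	if (__sync_add_and_fetch(&sem->count, 1) <= 0)
		rwsem_down_read_failed(sem);
}

static inline void up_read(struct rw_semaphore *sem)
{
	/* A negative result after dropping our +1 means someone is
	 * queued and may need waking. */
	if (__sync_add_and_fetch(&sem->count, -1) < 0)
		rwsem_wake(sem);
}

The point being that all the list tweaking would then happen once, in
the slow path, instead of once per architecture.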

> We have long tuned that portion of the code and therefore we are 
> skeptical of changes. But if we cannot measure a difference from a 
> generic implementation then it would be okay.

Would you be willing to run numbers comparing them? (or provide a benchmark?) 
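
If it helps, something along these lines (hypothetical, untested)
should be a reasonable starting point: many threads in one process
faulting anonymous pages in parallel hammers down_read()/up_read() on
mmap_sem, so wall-clock time at varying thread counts should show any
difference between the implementations.

/* gcc -O2 -pthread fault-bench.c; run under time(1) with varying
 * THREADS.  All threads share one mm, so every minor fault takes
 * mmap_sem for read in the kernel. */
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdlib.h>

#define THREADS	8
#define PAGES	(1 << 16)

static void *worker(void *arg)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	size_t len = (size_t)PAGES * pagesz, i;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		exit(1);
	for (i = 0; i < PAGES; i++)
		p[i * pagesz] = 1;	/* one minor fault per page */
	munmap(p, len);
	return NULL;
}

int main(void)
{
	pthread_t t[THREADS];
	int i;

	for (i = 0; i < THREADS; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (i = 0; i < THREADS; i++)
		pthread_join(t[i], NULL);
	return 0;
}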

-Andi
