Message-ID: <22461.1158173455@warthog.cambridge.redhat.com>
Date: Wed, 13 Sep 2006 19:50:55 +0100
From: David Howells <dhowells@...hat.com>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: David Howells <dhowells@...hat.com>,
Arjan van de Ven <arjan@...radead.org>,
Dong Feng <middle.fengdong@...il.com>, ak@...e.de,
Paul Mackerras <paulus@...ba.org>,
Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org
Subject: Re: Why Semaphore Hardware-Dependent?
Nick Piggin <nickpiggin@...oo.com.au> wrote:
> + while ((tmp = atomic_read(&sem->count)) >= 0) {
> + if (tmp == atomic_cmpxchg(&sem->count, tmp,
> + tmp + RWSEM_ACTIVE_READ_BIAS)) {
NAK for FRV. Do not use atomic_cmpxchg() there, as it isn't strictly
atomic on FRV (the only strictly atomic operation FRV has is SWAP).
Please leave FRV using the spinlock-based version, which is more
efficient on UP.
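To illustrate the point (a minimal sketch, not FRV's actual code; the
lock name is hypothetical): without a native compare-and-swap, an
emulated atomic_cmpxchg() degenerates to a read-compare-write under a
lock, so you pay for the lock anyway:

	static DEFINE_SPINLOCK(cmpxchg_emu_lock);	/* hypothetical */

	static inline int atomic_cmpxchg_emulated(atomic_t *v, int old, int new)
	{
		unsigned long flags;
		int ret;

		spin_lock_irqsave(&cmpxchg_emu_lock, flags);
		ret = atomic_read(v);		/* sample current value */
		if (ret == old)
			atomic_set(v, new);	/* store only on match */
		spin_unlock_irqrestore(&cmpxchg_emu_lock, flags);
		return ret;			/* caller checks ret == old */
	}

At that point the rwsem might as well take its own spinlock once
rather than looping over an emulated cmpxchg.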
Please also provide benchmarks showing the performance difference
between your version and the old version as it was before Ingo messed
it up and made everything unconditionally out of line without cleaning
up the inline asm.
If you are going to generalise, you should get rid of everything
barring the spinlock-based version and stick with that. It will cost
you performance under some circumstances, but under others it's better
than attempting to use atomic_cmpxchg(), which may not really exist on
all archs.
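For reference, the spinlock-based scheme is roughly the following (a
from-memory sketch of lib/rwsem-spinlock.c with the slow path elided,
so treat the details as illustrative):

	struct rw_semaphore {
		__s32			activity;  /* 0: idle, +N: N readers, -1: writer */
		spinlock_t		wait_lock;
		struct list_head	wait_list;
	};

	static inline void __down_read_sketch(struct rw_semaphore *sem)
	{
		spin_lock(&sem->wait_lock);
		if (sem->activity >= 0 && list_empty(&sem->wait_list)) {
			/* granted: activity is a full 32-bit count */
			sem->activity++;
		} else {
			/* ... queue this task on wait_list and sleep ... */
		}
		spin_unlock(&sem->wait_lock);
	}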
You've also caused another problem: the spinlock-based version permits
up to 2^31 - 1 readers at one time, whereas the inline optimised
version, on a 32-bit arch, will only permit up to 2^16 - 1 at most. By
doing this to x86_64, you've reduced the number of processes it can
support.
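For comparison, the inline fast path packs both counts into a single
word; from memory of include/asm-i386/rwsem.h (so verify against the
tree), the layout is something like:

	#define RWSEM_UNLOCKED_VALUE	0x00000000
	#define RWSEM_ACTIVE_BIAS	0x00000001
	#define RWSEM_ACTIVE_MASK	0x0000ffff	/* active count: low 16 bits */
	#define RWSEM_WAITING_BIAS	(-0x00010000)	/* waiters: high 16 bits */
	#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS
	#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

Each reader adds RWSEM_ACTIVE_READ_BIAS, so the active count overflows
into the waiter field after 2^16 - 1 readers, whereas the spinlock
version's activity counter is a full signed 32-bit word.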
David