Message-ID: <11861.1156845927@warthog.cambridge.redhat.com>
Date: Tue, 29 Aug 2006 11:05:27 +0100
From: David Howells <dhowells@...hat.com>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Arjan van de Ven <arjan@...radead.org>,
Dong Feng <middle.fengdong@...il.com>, ak@...e.de,
Paul Mackerras <paulus@...ba.org>,
Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
David Howells <dhowells@...hat.com>
Subject: Re: Why Semaphore Hardware-Dependent?
Nick Piggin <nickpiggin@...oo.com.au> wrote:
> I wonder if we can just start with the nice powerpc code that uses
> atomic_add_return and cmpxchg (should use atomic_cmpxchg)
Because i386 (and x86_64) can do better by using XADDL/XADDQ.
The problem with CMPXCHG is that it can fail, in which case you have to
attempt it again. How likely that is depends on the circumstances. The same
applies to LL/ST equivalents.
On i386, CMPXCHG also ties you, to some extent, to which registers you may
use for what. XADD does so less, though its slowpath function imposes
constraints instead; with the XADD version, however, we can make sure that
the semaphore address is in EAX, something we can't do with CMPXCHG.
For those archs where CMPXCHG is the best instruction available, there is a
better algorithm than the XADD-based one, though I haven't submitted it. I
may still have the patch somewhere.
However! If what you have is LL/ST equivalents, then emulating CMPXCHG in
order to emulate the XADD algorithm probably isn't the most optimal way
either. Don't get stuck on using LL/ST to emulate what other CPUs have
available.
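The cost of stacking the emulations can be sketched like this (pseudocode,
not any real arch's code): emulating CMPXCHG with LL/ST gives you one retry
loop, and the CMPXCHG-based algorithm wraps another retry loop around that,
whereas LL/ST can do the add directly with a single loop:

```
/* XADD emulated via CMPXCHG emulated via LL/ST: nested retry loops */
retry_outer:
	old = load(count)
	/* cmpxchg(count, old, old - 1) emulated with LL/ST: */
retry_inner:
	cur = load_linked(count)
	if (cur != old)
		goto retry_outer		/* "cmpxchg" failed */
	if (!store_conditional(count, old - 1))
		goto retry_inner

/* versus using LL/ST directly for the add: one loop */
retry:
	cur = load_linked(count)
	if (!store_conditional(count, cur - 1))
		goto retry
```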
> and chuck out the "crappy" rwsem fallback implementation,
CMPXCHG is not available on all archs, and on some archs it cannot be
implemented efficiently in terms of the other atomic instructions they do
have.
> as well as all the arch specific code?
Using CMPXCHG is only optimal where that's the best available.
David
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/