Message-ID: <CA+55aFyZ8xLWiPcP3jcnVApQOqAvpo_fshSvGpdpW42iRutPcQ@mail.gmail.com>
Date: Thu, 5 Sep 2013 08:31:19 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Tony Luck <tony.luck@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] lockref: remove cpu_relax() again
On Thu, Sep 5, 2013 at 6:18 AM, Heiko Carstens
<heiko.carstens@...ibm.com> wrote:
>
> *If* however the cpu_relax() makes sense on other platforms maybe we could
> add something like we have already with "arch_mutex_cpu_relax()":

I actually think it won't.

The lockref cmpxchg isn't waiting for something to change - it only
loops _if_ something has changed, and rather than cpu_relax(), we most
likely want to try to take advantage of the fact that we have the
changed data in our exclusive cacheline, and try to get our ref update
out as soon as possible.
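
To make that concrete, here is a minimal illustrative sketch of that retry
pattern (user-space C11 atomics, not the actual CMPXCHG_LOOP code in
lib/lockref.c): read the current word, compute the update, and try to
publish it with a compare-and-swap; the loop only goes around again if
another CPU modified the word in the meantime, so there is nothing to
wait for between attempts.

#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t lockref_word;	/* stand-in for the lock+count word */

static void ref_inc(void)
{
	uint64_t old = atomic_load_explicit(&lockref_word,
					    memory_order_relaxed);
	for (;;) {
		uint64_t new = old + 1;

		/* On failure 'old' is refreshed with the value that just
		 * changed - it is already in our cache line, so retry
		 * immediately rather than pausing. */
		if (atomic_compare_exchange_weak_explicit(&lockref_word,
							  &old, new,
							  memory_order_acq_rel,
							  memory_order_relaxed))
			break;
	}
}
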
IOW, the lockref loop is not an idle loop like a spinlock "wait for
lock to be released", it's very much an active loop of "oops,
something changed".
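
For contrast, a "wait for the lock to be released" loop looks roughly
like this (again an illustrative user-space sketch with C11 atomics, not
the kernel's real spinlock code); it sits idle until some other CPU
stores a new value, which is exactly the situation where a
cpu_relax()-style pause between polls helps:

#include <stdatomic.h>

static _Atomic int lock_taken;	/* 0 = free, 1 = held */

static void spin_lock_sketch(void)
{
	for (;;) {
		/* Try to grab the lock; done if it was free. */
		if (!atomic_exchange_explicit(&lock_taken, 1,
					      memory_order_acquire))
			return;

		/* Nothing useful to do until the owner releases the
		 * lock - this is where the kernel would cpu_relax(). */
		while (atomic_load_explicit(&lock_taken,
					    memory_order_relaxed))
			;	/* cpu_relax() would go here */
	}
}
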
And there can't be any livelock, since by definition somebody else
_did_ make progress. In fact, adding the cpu_relax() probably just
makes things much less fair - once somebody else raced on you, the
cpu_relax() now makes it more likely that _another_ cpu does so too.

That said, let's see what Tony's numbers are. On x86, it doesn't seem to
matter, but as Tony noticed, the variability can be quite high (for
me, the numbers tend to be quite stable when running the test program
multiple times in a loop, but then variation between boots or after
having done something else can be quite big - I suspect the cache
access patterns end up varying wildly with different dentry layout and
hash chain depth).

Linus