Date: Mon, 10 Oct 2011 12:44:01 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Stephan Diestelhorst <stephan.diestelhorst@....com>
CC: xen-devel@...ts.xensource.com,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
Nick Piggin <npiggin@...nel.dk>, KVM <kvm@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
the x86 maintainers <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
Andi Kleen <andi@...stfloor.org>, Avi Kivity <avi@...hat.com>,
Jan Beulich <JBeulich@...e.com>,
"H. Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
On 10/10/2011 07:01 AM, Stephan Diestelhorst wrote:
> On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
>> On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
>>> On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
>>>> On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
>>>>> Which certainly should *work*, but from a conceptual standpoint, isn't
>>>>> it just *much* nicer to say "we actually know *exactly* what the upper
>>>>> bits were".
>>>> Well, we really do NOT want atomicity here. What we really rather want
>>>> is sequentiality: free the lock, make the update visible, and THEN
>>>> check if someone has gone sleeping on it.
>>>>
>>>> Atomicity only conveniently enforces that the three do not happen in a
>>>> different order (with the store becoming visible after the checking
>>>> load).
>>>>
>>>> This does not have to be atomic, since spurious wakeups are not a
>>>> problem, in particular not with the FIFO-ness of ticket locks.
>>>>
>>>> For that reason, the fence, additional atomic etc. would IMHO be much
>>>> cleaner than the crazy overflow logic.
>>> All things being equal I'd prefer lock-xadd just because it's easier to
>>> analyze the concurrency for, crazy overflow tests or no. But if
>>> add+mfence turned out to be a performance win, then that would obviously
>>> tip the scales.
>>>
>>> However, it looks like locked xadd also has better performance: on
>>> my Sandybridge laptop (2 cores, 4 threads), add+mfence is 20% slower
>>> than locked xadd, so that pretty much settles it unless you think
>>> there'd be a dramatic difference on an AMD system.
>> Indeed, the fences are usually slower than locked RMWs, in particular
>> if you do not need to add an extra instruction. I originally missed the
>> amazing stunt GCC pulled off, replacing the branch with carry-flag
>> magic. It seems that two twisted minds have found each other
>> here :)
>>
>> One of my concerns was adding a branch in here... so that is settled,
>> and if everybody else feels like this is easier to reason about...
>> go ahead :) (I'll keep my itch to myself then.)
> Just that I can't... if performance is a concern, adding the LOCK
> prefix to the addb outperforms the xadd significantly:
Hm, yes. So using the lock prefix on add instead of the mfence? Hm.
J