Message-ID: <87tvp3xonl.fsf@concordia.ellerman.id.au>
Date: Fri, 13 Jul 2018 23:15:26 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Alan Stern <stern@...land.harvard.edu>,
andrea.parri@...rulasolutions.com,
Will Deacon <will.deacon@....com>,
Akira Yokosawa <akiyks@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nick Piggin <npiggin@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
Peter Zijlstra <peterz@...radead.org> writes:
> On Thu, Jul 12, 2018 at 11:10:58AM -0700, Linus Torvalds wrote:
>> On Thu, Jul 12, 2018 at 11:05 AM Peter Zijlstra <peterz@...radead.org> wrote:
>> >
>> > The locking pattern is fairly simple and shows nicely where RCpc
>> > comes apart from expectations.
>>
>> So who does RCpc right now for the unlock-lock sequence? Somebody
>> mentioned powerpc. Anybody else?
>
> RISC-V followed, because the LKMM currently states it is allowed; in
> fact the LKMM is currently weaker than even PowerPC, which is what
> this discussion is about.
>
> A number of people, myself included, are arguing for stronger lock
> ordering (RCsc), or at least for making the LKMM as strong as Power
> (RCtsc, as coined by Daniel), which disallows full RCpc.
>
>> How nasty would it be to make powerpc conform? I will always advocate
>> tighter locking and ordering rules over looser ones.
>
> mpe did a micro-bench a little while ago:
>
> http://lkml.iu.edu/hypermail/linux/kernel/1804.0/01990.html
>
> which says 30% more expensive for uncontended lock+unlock. Which I admit
> is fairly yuck. No macro bench results though.
I reran some numbers today with some slightly updated tests.
It varies quite a bit across machines and CPU revisions.
On one I get:
Lock/Unlock        Time             Time %   Total Cycles     Cycles  Cycles Delta
lwsync/lwsync       79,290,859,955  100.0 %  290,160,065,087     145             -
lwsync/sync        104,903,703,237  132.3 %  383,966,199,430     192            47
Another:
Lock/Unlock        Time             Time %   Total Cycles     Cycles  Cycles Delta
lwsync/lwsync       71,662,395,722  100.0 %  252,403,777,715     126             -
lwsync/sync         84,932,987,977  118.5 %  299,141,951,285     150            23
So 18-32% slower, or 23-47 cycles.
Next week I can do some macro benchmarks, to see if it's actually
detectable at all.
The other question is how they behave on a heavily loaded system.
My personal preference would be to switch to sync; we don't want to be
the only arch finding (or not finding!) exotic ordering bugs.
But we'd also rather not make our slow locks any slower than they have
to be.
cheers