Message-Id: <20170706171832.GH2393@linux.vnet.ibm.com>
Date: Thu, 6 Jul 2017 10:18:32 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: David Laight <David.Laight@...LAB.COM>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netfilter-devel@...r.kernel.org" <netfilter-devel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"oleg@...hat.com" <oleg@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"dave@...olabs.net" <dave@...olabs.net>,
"manfred@...orfullife.com" <manfred@...orfullife.com>,
"tj@...nel.org" <tj@...nel.org>, "arnd@...db.de" <arnd@...db.de>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"will.deacon@....com" <will.deacon@....com>,
"stern@...land.harvard.edu" <stern@...land.harvard.edu>,
"parri.andrea@...il.com" <parri.andrea@...il.com>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2 0/9] Remove spin_unlock_wait()
On Thu, Jul 06, 2017 at 06:50:36PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 06, 2017 at 09:20:24AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 06, 2017 at 06:05:55PM +0200, Peter Zijlstra wrote:
> > > On Thu, Jul 06, 2017 at 02:12:24PM +0000, David Laight wrote:
> > > > From: Paul E. McKenney
> >
> > [ . . . ]
> >
> > > Now on the one hand I feel like Oleg that it would be a shame to lose
> > > the optimization, OTOH this thing is really really tricky to use,
> > > and has led to a number of bugs already.
> >
> > I do agree, it is a bit sad to see these optimizations go. So, should
> > this make mainline, I will be tagging the commits that remove
> > spin_unlock_wait() so that they can be easily reverted should someone
> > come up with good semantics and a compelling use case with compelling
> > performance benefits.
>
> Ha! But what would constitute 'good semantics'?
At this point, it beats the heck out of me! ;-)
> The current thing is something along the lines of:
>
> "Waits for the currently observed critical section
> to complete with ACQUIRE ordering such that it will observe
> whatever state was left by said critical section."
>
> With the 'obvious' benefit of limited interference on those actually
> wanting to acquire the lock, and a shorter wait time on our side too,
> since we only need to wait for completion of the current section, and
> not for however many contenders are before us.
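
For concreteness, the sort of usage those semantics permit looks
something like the sketch below. This is my reading of the text above
rather than code from the tree, and the names (exiting, mylock,
cleanup(), add_more_work()) are made up:

	/* Writer: publish the flag, then wait out any current holder. */
	WRITE_ONCE(exiting, true);
	smp_mb();			/* order the store before reading the lock */
	spin_unlock_wait(&mylock);	/* wait for the observed critical section */
	/* ACQUIRE: whatever that section wrote is visible here. */
	cleanup();

	/* Everyone else: ordinary critical sections. */
	spin_lock(&mylock);
	if (!READ_ONCE(exiting))
		add_more_work();
	spin_unlock(&mylock);

(The smp_mb() reflects how some of the in-tree users paired it;
whether and where it is actually needed is exactly the sort of thing
that has made this primitive so tricky.)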
>
> Not sure I have an actual (micro) benchmark that shows a difference
> though.
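
If someone did want one, I would guess at something along these lines
(hypothetical kernel-module fragment; bench_lock, the contender thread,
and the iteration count are all made up). The comparison run would
replace the spin_unlock_wait() with a spin_lock()/spin_unlock() pair:

	static DEFINE_SPINLOCK(bench_lock);

	/* kthread hammering the lock with short critical sections. */
	static int contender(void *unused)
	{
		while (!kthread_should_stop()) {
			spin_lock(&bench_lock);
			cpu_relax();
			spin_unlock(&bench_lock);
		}
		return 0;
	}

	/* Time a million waits against the contended lock. */
	static void bench(void)
	{
		ktime_t t0 = ktime_get();
		int i;

		for (i = 0; i < 1000000; i++)
			spin_unlock_wait(&bench_lock);
		pr_info("spin_unlock_wait: %lld ns\n",
			ktime_to_ns(ktime_sub(ktime_get(), t0)));
	}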
>
>
>
> Is this all good enough to retain the thing? I dunno. Like I said, I'm
> conflicted on the whole thing. On the one hand it's a nice optimization,
> on the other hand I don't want to have to keep fixing these bugs.
Yeah, if I had seen a compelling use case... Oleg's task_work case was
closest, but given that it involved a task-local lock that shouldn't
be all -that- heavily contended, it is hard to see there being all that
much difference.
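For reference, the task_work pattern in question (from
kernel/task_work.c, quoting from memory) is roughly:

	/*
	 * task_work_run(): synchronize with a racing task_work_cancel(),
	 * which runs under ->pi_lock, before the works can go away.
	 */
	raw_spin_unlock_wait(&task->pi_lock);
	smp_mb();

And since ->pi_lock is task-local, replacing that with an acquire and
immediate release of ->pi_lock should be lost in the noise.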
But maybe I am missing something here? Wouldn't be the first time...
Thanx, Paul