Message-ID: <20160601140713.GE355@arm.com>
Date: Wed, 1 Jun 2016 15:07:14 +0100
From: Will Deacon <will.deacon@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <waiman.long@....com>, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, manfred@...orfullife.com,
dave@...olabs.net, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, tj@...nel.org, pablo@...filter.org,
kaber@...sh.net, davem@...emloft.net, oleg@...hat.com,
netfilter-devel@...r.kernel.org, sasha.levin@...cle.com,
hofrat@...dl.org
Subject: Re: [PATCH -v3 7/8] locking: Move smp_cond_load_acquire() and
friends into asm-generic/barrier.h
On Wed, Jun 01, 2016 at 02:45:41PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 01, 2016 at 01:13:33PM +0100, Will Deacon wrote:
> > On Wed, Jun 01, 2016 at 02:06:54PM +0200, Peter Zijlstra wrote:
>
> > > Works for me; but that would lose using cmpwait() for
> > > !smp_cond_load_acquire() spins, you fine with that?
> > >
> > > The two conversions in the patch were both !acquire spins.
> >
> > Maybe we could go the whole hog and add smp_cond_load_relaxed?
>
> What about, say, the cmpxchg loops in queued_write_lock_slowpath()?
> Would that be something you'd like to use wfe for?
Without actually running the code on real hardware, it's hard to say
for sure. I notice that those loops are using cpu_relax_lowlatency
at present and we *know* that we're next in the queue (i.e. we're just
waiting for existing readers to drain), so the benefit of wfe is somewhat
questionable here and I don't think we'd want to add that initially.
Will