Message-ID: <87k1hbv7ba.fsf@concordia.ellerman.id.au>
Date: Thu, 07 Mar 2019 11:47:53 +1100
From: Michael Ellerman <mpe@...erman.id.au>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Nicholas Piggin <npiggin@...il.com>,
linux-arch <linux-arch@...r.kernel.org>,
Will Deacon <will.deacon@....com>,
Andrea Parri <andrea.parri@...rulasolutions.com>,
Arnd Bergmann <arnd@...db.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Rich Felker <dalias@...c.org>,
David Howells <dhowells@...hat.com>,
Daniel Lustig <dlustig@...dia.com>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
"Maciej W. Rozycki" <macro@...ux-mips.org>,
Ingo Molnar <mingo@...nel.org>,
Palmer Dabbelt <palmer@...ive.com>,
Paul Burton <paul.burton@...s.com>,
"Paul E. McKenney" <paulmck@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Alan Stern <stern@...land.harvard.edu>,
Tony Luck <tony.luck@...el.com>,
Yoshinori Sato <ysato@...rs.sourceforge.jp>
Subject: Re: [PATCH 01/20] asm-generic/mmiowb: Add generic implementation of mmiowb() tracking
Linus Torvalds <torvalds@...ux-foundation.org> writes:
> On Mon, Mar 4, 2019 at 2:24 AM Michael Ellerman <mpe@...erman.id.au> wrote:
>>
>> Without wading into the rest of the discussion, this does raise an
>> interesting point, ie. what about eg. rwlock's?
>>
>> They're basically equivalent to spinlocks, and so could reasonably be
>> expected to have the same behaviour.
>>
>> But we don't check the io_sync flag in arch_read/write_unlock() etc. and
>> both of those use lwsync.
>
> I think technically rwlocks should do the same thing, at least when
> they are used for exclusion.
OK.
> Because of the exclusion argument, we can presumably limit it to just
> write_unlock(), although at least in theory I guess you could have
> some "one reader does IO, then a writer comes in" situation..
It's a bit hard to grep for, but I did find one case:
static void netxen_nic_io_write_128M(struct netxen_adapter *adapter,
		void __iomem *addr, u32 data)
{
	read_lock(&adapter->ahw.crb_lock);
	writel(data, addr);
	read_unlock(&adapter->ahw.crb_lock);
}
It looks like that driver is using the rwlock to exclude the common case,
which can just do a readl()/writel() directly (the readers), from another
case that has to reconfigure a window or something before doing its
readl()/writel() and then put the window back. So that seems like a valid
use for a rwlock.
Whether we want to penalise all read_unlock() usages with a mmiowb()
check just to support that one driver is another question.
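For reference, powerpc's arch_spin_unlock() already does that dance via the
SYNC_IO macro, which emits a full sync only when the io_sync flag was set by
an earlier MMIO accessor. A read_unlock() version would presumably look much
the same. Hypothetical sketch only, not a tested patch:

	/* Hypothetical: mirror what arch_spin_unlock() does today.
	 * SYNC_IO expands to roughly:
	 *	if (unlikely(get_paca()->io_sync)) {
	 *		mb();			// full "sync"
	 *		get_paca()->io_sync = 0;
	 *	}
	 */
	static inline void arch_read_unlock(arch_rwlock_t *rw)
	{
		SYNC_IO;	/* order prior MMIO before dropping the lock */
		/* ... existing lwsync + decrement of the reader count ... */
	}

The cost in the common (no MMIO) case is just the flag test, but it's still
an extra load in every unlock.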
> Perhaps more importantly, what about sleeping locks? When they
> actually *block*, they get the barrier thanks to the scheduler, but
> you can have a nice non-contended sequence that never does that.
Yeah.
The mutex unlock fast path is just:
	if (atomic_long_cmpxchg_release(&lock->owner, curr, 0UL) == curr)
		return true;
And because it's the "release" variant we just use lwsync, which doesn't
order MMIO. If it was just atomic_long_cmpxchg() that would work because
we use sync for those.
__up_write() uses atomic_long_sub_return_release(), so same story.
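To make the failure mode concrete, here's a hypothetical driver sequence
(dev, CMD_REG etc. are invented names) where the release-only fast path is
not enough on power:

	/* CPU A */
	mutex_lock(&dev->lock);
	writel(cmd, dev->regs + CMD_REG);	/* MMIO store */
	mutex_unlock(&dev->lock);		/* fast path: lwsync via the
						 * release cmpxchg; lwsync does
						 * not order the writel() against
						 * the cacheable store that
						 * releases the lock */

	/* CPU B can now acquire the mutex and do its own writel(), and the
	 * device may observe B's store before A's. A full sync (or an
	 * mmiowb() before the unlock) would prevent that. */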
> I guess the fact that these cases have never even shown up as an issue
> means that we could just continue to ignore it.
>
> We could even give that approach some fancy name, and claim it as a
> revolutionary new programming paradigm ("ostrich programming" to go
> with "agile" and "pair programming").
Maybe. On power we have the double whammy of weaker ordering than other
arches and an infinitesimal market share, which makes me worry that there
are bugs lurking that we just haven't found; it's happened before.
cheers