Message-Id: <1211926410.3286.95.camel@pasglop>
Date: Wed, 28 May 2008 08:13:30 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Roland Dreier <rdreier@...co.com>
Cc: Arjan van de Ven <arjan@...radead.org>, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, tpiepho@...escale.com,
linuxppc-dev@...abs.org, scottwood@...escale.com,
torvalds@...ux-foundation.org, David Miller <davem@...emloft.net>,
alan@...rguk.ukuu.org.uk
Subject: Re: MMIO and gcc re-ordering issue
On Tue, 2008-05-27 at 14:33 -0700, Roland Dreier wrote:
> > This is a different issue. We deal with it on powerpc by having writel
> > set a per-cpu flag and spin_unlock() test it, and do the barrier if
> > needed there.
>
> Cool... I assume you do this for mutex_unlock() etc?
That's a good point... I don't think we do. Maybe we should.
> Is there any reason why ia64 can't do this too so we can kill mmiowb and
> save everyone a lot of hassle? (mips, sh and frv have non-empty
> mmiowb() definitions too but I'd guess that these are all bugs based on
> misunderstandings of the mmiowb() semantics...)
Well, basically our approach was that mmiowb() is a pain in the neck:
nobody (i.e. driver writers) really understands what it's for, and so
it's either missing or misused. So we didn't want to introduce it for
powerpc, but instead did the trick above in order to -slightly- improve
our writel (i.e. avoid a sync -after- the write).
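For reference, the trick looks roughly like this (a sketch only, not the
actual arch/powerpc code; the flag name, the helper names and the exact
per-cpu accessors are made up for illustration):

#include <linux/types.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/io.h>

/* per-cpu "an MMIO store is in flight" flag (invented name) */
static DEFINE_PER_CPU(int, io_sync_pending);

static inline void sketch_writel(u32 val, volatile void __iomem *addr)
{
	/* full barrier before the store so prior cacheable stores
	 * (ring descriptors etc.) are ordered ahead of the MMIO write */
	__asm__ __volatile__("sync" : : : "memory");
	__raw_writel(val, addr);
	/* defer the trailing barrier: just remember we owe one */
	__this_cpu_write(io_sync_pending, 1);
}

static inline void sketch_spin_unlock(spinlock_t *lock)
{
	/* if an MMIO write happened inside the critical section, make
	 * sure it is performed before the lock release becomes visible */
	if (__this_cpu_read(io_sync_pending)) {
		__this_cpu_write(io_sync_pending, 0);
		__asm__ __volatile__("sync" : : : "memory");
	}
	spin_unlock(lock);
}

The point is that the expensive sync after the MMIO store is only paid
on the unlock path, and only when an MMIO write actually happened inside
the critical section.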
> > However, drivers such as e1000 -also- have a wmb() between filling the
> > ring buffer and kicking the DMA with MMIO, with a comment about this
> > being needed for ia64 relaxed ordering.
>
> I put these barriers into mthca, mlx4 etc, although it came from my
> possible misunderstanding of the memory ordering rules in the kernel
> more than any experience of problems (as opposed to the mmiowb()s,
> which all came from real world bugs).
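For the record, the driver pattern in question is something like the
following (an illustrative sketch, not actual e1000/mthca/mlx4 code; all
the structure, field and register names below are invented):

#include <linux/types.h>
#include <linux/io.h>
#include <asm/barrier.h>

/* invented descriptor/ring layout, for illustration only */
struct tx_desc {
	u64 addr;
	u32 len;
	u32 flags;
};

struct tx_ring {
	struct tx_desc *desc;		/* coherent DMA memory */
	void __iomem *tail_reg;		/* device doorbell register */
	u32 tail;
};

static void sketch_xmit(struct tx_ring *ring, u64 dma_addr, u32 len)
{
	struct tx_desc *d = &ring->desc[ring->tail];

	/* fill the descriptor in memory the device will DMA from */
	d->addr  = dma_addr;
	d->len   = len;
	d->flags = 1;			/* "owned by hardware" bit, made up */

	/* make sure the descriptor writes are visible in memory before
	 * the MMIO write that tells the device to go fetch them */
	wmb();

	/* kick the DMA engine */
	ring->tail = (ring->tail + 1) % 256;
	writel(ring->tail, ring->tail_reg);
}

That wmb() between filling the ring and the doorbell write is the
barrier being discussed, separate from the mmiowb()-style ordering
against the lock release.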
Ben.