Message-ID: <Pine.LNX.4.64.0805201535290.7453@t2.domain.actdsltmp>
Date: Tue, 20 May 2008 15:55:46 -0700 (PDT)
From: Trent Piepho <tpiepho@...escale.com>
To: Scott Wood <scottwood@...escale.com>
cc: Alan Cox <alan@...rguk.ukuu.org.uk>, benh@...nel.crashing.org,
linuxppc-dev@...abs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [POWERPC] Improve (in|out)_beXX() asm code
On Tue, 20 May 2008, Scott Wood wrote:
> Alan Cox wrote:
>> > It looks like we rely on -fno-strict-aliasing to prevent reordering
>> > ordinary memory accesses (such as to DMA descriptors) past the I/O
>>
>> DMA descriptors in main memory are dependent on cache behaviour anyway
>> and the dma_* operators should be the ones enforcing the needed behaviour.
>
> What about memory obtained from dma_alloc_coherent()? We still need a sync
> and a compiler barrier. The current I/O accessors have the former, but not
> the latter.
There don't appear to be any barriers to use for coherent DMA other than
mb() and wmb().
Correct me if I'm wrong, but I think the sync isn't actually _required_ (by
memory-barriers.txt's definitions), and eieio would be enough, except that
there is code that doesn't use mmiowb() between I/O accesses and unlocking.
So, as I understand it, the minimum needed is eieio. To provide strict
ordering w.r.t. spin locks without using mmiowb(), you need sync. To provide
strict ordering w.r.t. normal memory, you need sync and a compiler barrier.
Right now no arch provides the last option. powerpc currently provides the
middle option. I don't know if anything uses the first option, maybe alpha?
I'm almost certain x86 is the middle option (the first isn't possible, since
the arch already has more ordering than that), which is probably why powerpc
chose that option and not the first.