Message-Id: <1205802540.8195.30.camel@concordia.ozlabs.ibm.com>
Date: Tue, 18 Mar 2008 12:08:59 +1100
From: Michael Ellerman <michael@...erman.id.au>
To: Grant Grundler <grundler@...isc-linux.org>
Cc: akepner@....com,
James Bottomley <James.Bottomley@...senPartnership.com>,
Tony Luck <tony.luck@...el.com>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
Jes Sorensen <jes@....com>,
Randy Dunlap <randy.dunlap@...cle.com>,
Roland Dreier <rdreier@...co.com>,
David Miller <davem@...emloft.net>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
linux-kernel@...r.kernel.org, Mark Nelson <mdnelson@....ibm.com>
Subject: Re: [PATCH 1/3 v3] dma: document dma_{un}map_{single|sg}_attrs() interface
On Thu, 2008-03-13 at 23:21 -0600, Grant Grundler wrote:
> On Fri, Mar 14, 2008 at 03:30:29PM +1100, Michael Ellerman wrote:
> ...
> > > +DMA_ATTR_BARRIER is a barrier attribute for DMA. DMA to a memory
> > > +region with the DMA_ATTR_BARRIER attribute forces all pending DMA
> > > +writes to complete, and thus provides a mechanism to strictly order
> > > +DMA from a device across all intervening busses and bridges. This
> > > +barrier is not specific to a particular type of interconnect, it
> > > +applies to the system as a whole, and so its implementation must
> > > +account for the idiosyncrasies of the system all the way from the
> > > +DMA device to memory.
> >
> > You say a "DMA to a memory region with the DMA_ATTR_BARRIER attribute
> > forces all pending DMA writes to complete". Does it force _all_ DMA
> > writes to complete, or just writes to the region, or just writes coming
> > from devices?
>
> Yes to "_all_ DMA writes". Re-read the second sentence.
Yeah, just checking.
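(For concreteness, this is roughly how I picture a driver asking for that
ordering with the interface in this series. Just a sketch, not part of the
patch: I'm assuming attrs-setup helpers along the lines of
init_dma_attrs()/dma_set_attr(), and the status-word setup is made up.)

	#include <linux/dma-mapping.h>
	#include <linux/types.h>

	/*
	 * Map the word the device writes last ("done") with the barrier
	 * attribute, so that the device's write to it forces all of its
	 * earlier DMA writes out to memory first.
	 */
	static dma_addr_t map_status_word(struct device *dev, u32 *status)
	{
		struct dma_attrs attrs;

		init_dma_attrs(&attrs);
		dma_set_attr(DMA_ATTR_BARRIER, &attrs);

		return dma_map_single_attrs(dev, status, sizeof(*status),
					    DMA_FROM_DEVICE, &attrs);
	}
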
> > What if something is writing to a device?
>
> aka MMIO writes. Not defined and in general unrelated. AFAIK, no
> version of PCI has ordering rules for traffic going in opposite
> directions. I'm 99% sure about this but haven't reviewed any
> PCI docs in about 2 years.
I'll take your word for it :) But there are also non-PCI things between
the device and the cpu, so anything's possible.
> On a related note, I always think of MSI/MSI-X as DMA Writes.
> Michael's got me thinking we need to explicitly state that.
> In "normal" use, the device will not issue an MSI until after
> the "completion DMA write" has been issued and thus the MSI/MSI-X
> transactions are NOT subject to the ordering requirement...but
> that's not exactly true. We don't want the MSI to ever
> pass the "completion DMA write".
>
> Do we need to state the platform interconnect can NOT allow
> successive DMA writes to pass the ordered DMA writes?
> Or state MSI DMA writes are implied to be to an "ordered DMA region"?
> Both statements?
I think the 2nd at least. If MSIs can pass the DMAs whose completion
they're signalling, then they're no better than LSI; the driver will
still need to go and check somehow that the DMA has finished.
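To put that concretely, the pattern I have in mind is the usual
completion-word + MSI handler, sketched below with entirely made-up
names. It only works if the MSI cannot pass the completion write (and,
via the barrier attribute, the data DMA before it):

	#include <linux/interrupt.h>
	#include <linux/types.h>

	#define MY_DONE_BIT	0x1

	/* Hypothetical device state, for illustration only. */
	struct my_dev {
		u32 *status;	/* completion word, DMA'd by the device last */
		void *rx_buf;	/* data buffer, DMA'd by the device first */
	};

	void my_consume_rx(void *buf);	/* hypothetical rx processing */

	static irqreturn_t my_msi_handler(int irq, void *data)
	{
		struct my_dev *md = data;

		/*
		 * If the MSI write could overtake the completion write,
		 * this test could fail even though the interrupt is ours,
		 * and we'd be back to polling the device, i.e. no better
		 * than LSI.
		 */
		if (!(*md->status & MY_DONE_BIT))
			return IRQ_NONE;

		/*
		 * Safe to touch the data: the data DMA preceded the
		 * completion write, which preceded the MSI.
		 */
		my_consume_rx(md->rx_buf);

		return IRQ_HANDLED;
	}
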
> > Does DMA_ATTR_BARRIER have any effect on reads?
>
> It could but it's not defined to. :)
> I thought about this before but wanted to keep the document short.
>
> Perhaps two things should be done to address this:
> 1) rename this to DMA_ATTR_WRITE_BARRIER
> 2) state it has no explicit effect on DMA Reads.
>
> If someone needs that changed in the future, we can then
> define how reading from a region will trigger ordering rules.
I'd vote for 1); that way, if someone comes up with a semantic for
read/write/both, we will have sensible flag names.
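i.e. something along these lines (a sketch of the flag namespace, not
your patch):

	enum dma_attr {
		DMA_ATTR_WRITE_BARRIER,	/* orders the device's DMA writes */
		/* leaves room for e.g. DMA_ATTR_READ_BARRIER if a read
		 * semantic is ever defined */
		DMA_ATTR_MAX,
	};
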
cheers
--
Michael Ellerman
OzLabs, IBM Australia Development Lab
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person