Date:	Tue, 3 Feb 2015 15:54:04 +0000
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	linaro-mm-sig@...ts.linaro.org,
	Linaro Kernel Mailman List <linaro-kernel@...ts.linaro.org>,
	Robin Murphy <robin.murphy@....com>,
	LKML <linux-kernel@...r.kernel.org>,
	DRI mailing list <dri-devel@...ts.freedesktop.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Rob Clark <robdclark@...il.com>,
	Daniel Vetter <daniel@...ll.ch>,
	Tomasz Stanislawski <stanislawski.tomasz@...glemail.com>,
	linux-arm-kernel@...ts.infradead.org,
	"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>
Subject: Re: [Linaro-mm-sig] [RFCv3 2/2] dma-buf: add helpers for sharing
 attacher constraints with dma-parms

On Tue, Feb 03, 2015 at 04:31:13PM +0100, Arnd Bergmann wrote:
> On Tuesday 03 February 2015 15:22:05 Russell King - ARM Linux wrote:
> > Don't we already have those in the DMA API?  dma_sync_*() ?
> >
> > dma_map_sg() - sets up the system MMU and deals with initial cache
> > coherency handling.  Device IOMMU being the responsibility of the
> > GPU driver.
> 
> dma_sync_*() works with whatever comes out of dma_map_*(), true,
> but this is not what they want to do here.
>  
> > The GPU can then do dma_sync_*() on the scatterlist as is necessary
> > to synchronise the cache coherency (while respecting the ownership
> > rules - which are very important on ARM to follow as some sync()s are
> > destructive to any dirty data in the CPU cache.)
> > 
> > dma_unmap_sg() tears down the system MMU and deals with the final cache
> > handling.
> > 
> > Why do we need more DMA API interfaces?
> 
> The dma_map_* interfaces assign the virtual addresses internally,
> using typically either a global address space for all devices, or one
> address space per device.

We shouldn't be doing one address space per device for precisely this
reason.  We should be doing one address space per *bus*.  I did have
a nice diagram to illustrate the point in my previous email, but I
deleted it; I wish I hadn't.  Briefly:

Fig. 1.
                                                 +------------------+
                                                 |+-----+  device   |
CPU--L1cache--L2cache--Memory--SysMMU---<iobus>----IOMMU-->         |
                                                 |+-----+           |
                                                 +------------------+

Fig.1 represents what I'd call the "GPU" issue that we're talking about
in this thread.

Fig. 2.
CPU--L1cache--L2cache--Memory--SysMMU---<iobus>--IOMMU--device

The DMA API should be responsible (at the very least) for everything to
the left of "<iobus>", and should provide a dma_addr_t which represents
what the device (in Fig.1) as a whole sees.  That's the "system" part.

I believe this is the approach taken by x86 and similar platforms,
simply because they tend not to have an IOMMU on individual devices
(and if they did, e.g. on a PCI card, it would clearly be the
responsibility of the device driver).

Whether the DMA API also handles the IOMMU in Fig.1 or 2 is questionable.
For Fig.2, it is entirely possible that the same device could appear
without an IOMMU, and in that scenario, you would want the IOMMU to be
handled transparently.

However, by handling the IOMMU transparently for everything, you run
into exactly the problem being discussed here - the need to separate
out the cache coherency handling from the IOMMU aspects.  You probably
also have a setup very similar to Fig.1 (which is certainly true of
Vivante GPUs).

If you have the need to separately control both, then using the DMA API
to encapsulate both does not make sense - at which point, the DMA API
should be responsible for the minimum only - in other words, everything
to the left of <iobus> (so including the system MMU).  The control of
the device IOMMU should be the responsibility of the device driver in
this case.

So, dma_map_sg() would be responsible for dealing with the CPU cache
coherency issues, and setting up the system MMU.  dma_sync_*() would
be responsible for the CPU cache coherency issues, and dma_unmap_sg()
would (again) deal with the CPU cache and tear down the system MMU
mappings.
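
In code terms, the DMA API side of that split would look roughly like
this (again only a sketch with made-up names, not code from the series;
dev/sgt are the same stand-ins as above):

/* Sketch: the DMA API only ever touches the CPU caches and the
 * system MMU.  DMA_BIDIRECTIONAL is used throughout for simplicity.
 */
static void example_cpu_access_and_teardown(struct device *dev,
					    struct sg_table *sgt)
{
	/* CPU wants to look at the buffer: take ownership back first.
	 * (On ARM some syncs are destructive to dirty cache lines, so
	 * the ownership rules matter.)
	 */
	dma_sync_sg_for_cpu(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);

	/* ... CPU reads/writes the buffer here ... */

	/* Hand ownership back before the device touches it again. */
	dma_sync_sg_for_device(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);

	/* Final CPU cache handling plus system MMU teardown. */
	dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
}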

Meanwhile, the device driver has ultimate control over its IOMMU: the
creation and destruction of mappings, and context switches, at the
appropriate times.
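
Something along these lines, using the existing IOMMU API (a sketch
only; GPU_IOVA_BASE, the linear iova layout and the function name are
invented for illustration, and error handling is omitted):

#include <linux/iommu.h>

#define GPU_IOVA_BASE	0x10000000UL	/* made-up base address */

/* Sketch: the device IOMMU (to the right of <iobus>) is the driver's
 * business, mapping the <iobus> addresses from dma_map_sg() into the
 * GPU's own address space at whatever layout the driver wants.
 */
static int gpu_iommu_map_buffer(struct device *dev, struct sg_table *sgt,
				int nents)
{
	struct iommu_domain *domain;
	struct scatterlist *sg;
	unsigned long iova = GPU_IOVA_BASE;
	int i;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return -ENOMEM;

	iommu_attach_device(domain, dev);

	for_each_sg(sgt->sgl, sg, nents, i) {
		iommu_map(domain, iova, sg_dma_address(sg),
			  sg_dma_len(sg), IOMMU_READ | IOMMU_WRITE);
		iova += sg_dma_len(sg);
	}

	/* Teardown, at a time of the driver's choosing, is the mirror
	 * image: iommu_unmap(), iommu_detach_device(),
	 * iommu_domain_free().
	 */
	return 0;
}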

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.
