Message-ID: <20150203122813.GN8656@n2100.arm.linux.org.uk>
Date: Tue, 3 Feb 2015 12:28:14 +0000
From: Russell King - ARM Linux <linux@....linux.org.uk>
To: Daniel Vetter <daniel@...ll.ch>
Cc: Rob Clark <robdclark@...il.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
LKML <linux-kernel@...r.kernel.org>,
"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>,
DRI mailing list <dri-devel@...ts.freedesktop.org>,
Linaro MM SIG Mailman List <linaro-mm-sig@...ts.linaro.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Linaro Kernel Mailman List <linaro-kernel@...ts.linaro.org>,
Tomasz Stanislawski <stanislawski.tomasz@...glemail.com>,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>
Subject: Re: [RFCv3 2/2] dma-buf: add helpers for sharing attacher
constraints with dma-parms
On Tue, Feb 03, 2015 at 08:48:56AM +0100, Daniel Vetter wrote:
> On Mon, Feb 02, 2015 at 03:30:21PM -0500, Rob Clark wrote:
> > On Mon, Feb 2, 2015 at 11:54 AM, Daniel Vetter <daniel@...ll.ch> wrote:
> > >> My initial thought is for dma-buf to not try to prevent something that
> > >> an exporter can actually do... I think the scenario you describe could
> > >> be handled by two sg-lists, if the exporter was clever enough.
> > >
> > > That's already needed, each attachment has its own sg-list. After all
> > > there's no array of dma_addr_t in the sg tables, so you can't use one sg
> > > for more than one mapping. And due to different IOMMUs, different devices
> > > can easily end up with different addresses.
> >
> >
> > Well, to be fair it may not be explicitly stated, but currently one
> > should assume the dma_addr_t's in the dmabuf sglist are bogus. With
> > GPUs that implement per-process/context page tables, I'm not really
> > sure that there is a sane way to actually do anything else.
>
> Hm, what do per-process/context page tables have to do with this? At least
> on i915 we have two levels of page tables:
> - first level for vm/device isolation, used through dma api
> - 2nd level for per-gpu-context isolation and context switching, handled
> internally.
>
> Since atm the dma api doesn't have any concept of contexts or different
> pagetables, I don't see how you could use that at all.
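
Just to illustrate the per-attachment point above: from the importer side it
looks roughly like this (a minimal sketch; the two struct device pointers
stand in for whichever devices are attaching, and error handling of the map
calls is trimmed):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/*
 * Sketch only: each device gets its own attachment, and each attachment
 * gets its own sg_table back from the exporter, so the dma_addr_t values
 * seen by dev_a and dev_b may legitimately differ (different IOMMUs,
 * different mappings).
 */
static int sketch_map_for_two_devices(struct dma_buf *buf,
				      struct device *dev_a,
				      struct device *dev_b)
{
	struct dma_buf_attachment *att_a, *att_b;
	struct sg_table *sgt_a, *sgt_b;

	att_a = dma_buf_attach(buf, dev_a);
	if (IS_ERR(att_a))
		return PTR_ERR(att_a);

	att_b = dma_buf_attach(buf, dev_b);
	if (IS_ERR(att_b)) {
		dma_buf_detach(buf, att_a);
		return PTR_ERR(att_b);
	}

	sgt_a = dma_buf_map_attachment(att_a, DMA_BIDIRECTIONAL);
	sgt_b = dma_buf_map_attachment(att_b, DMA_BIDIRECTIONAL);

	/*
	 * sg_dma_address(sgt_a->sgl) and sg_dma_address(sgt_b->sgl) are
	 * per-device results: nothing ties them together, which is why a
	 * single sg_table can't be shared across attachments.
	 */
	return 0;
}
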
What I've found with *my* etnaviv drm implementation (not Christian's - I
found it impossible to work with Christian, especially with the endless
"msm doesn't do it that way, so we shouldn't" responses and his attitude
towards cherry-picking my development work [*]) is that it's much easier to
keep the GPU MMU local to the GPU and under the control of the DRM MM code,
rather than attaching the IOMMU to the DMA API and handling it that way.
There are a couple of reasons for that:
1. DRM has a better idea about when the memory needs to be mapped to the
GPU, and it can more effectively manage the GPU MMU.
2. The GPU MMU may have TLBs which can only be flushed via a command in
the GPU command stream, so it's fundamentally necessary for the MMU to
be managed by the GPU driver so that it knows when (and how) to insert
the flushes (see the sketch below).
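
To make (2) concrete, the ordering in the submit path ends up looking
roughly like this (a rough sketch, not etnaviv code; all of the gpu_* types
and helpers are made-up placeholders):

#include <linux/types.h>

/* Hypothetical driver-internal types/helpers, declared only for the sketch. */
struct gpu;
struct gpu_bo;
struct gpu_cmdbuf;
struct gpu_mapping {
	u64 iova;	/* address in the GPU's own page tables */
};

int gpu_mmu_map(struct gpu *gpu, struct gpu_bo *bo, struct gpu_mapping *map);
void gpu_ring_emit_tlb_flush(struct gpu *gpu);
void gpu_ring_emit_cmdbuf(struct gpu *gpu, struct gpu_cmdbuf *cmds);
int gpu_ring_kick(struct gpu *gpu);

static int sketch_submit(struct gpu *gpu, struct gpu_bo *bo,
			 struct gpu_cmdbuf *cmds)
{
	struct gpu_mapping map;
	int ret;

	/* 1. The DRM MM code picks an iova and writes the GPU page tables. */
	ret = gpu_mmu_map(gpu, bo, &map);
	if (ret)
		return ret;

	/*
	 * 2. The TLB flush is itself a GPU command, so it has to be queued
	 *    in the command stream ahead of the work that relies on the new
	 *    mapping - which is ordering the DMA API has no way to express.
	 */
	gpu_ring_emit_tlb_flush(gpu);

	/* 3. Only then the user's commands, which use the new mapping. */
	gpu_ring_emit_cmdbuf(gpu, cmds);

	return gpu_ring_kick(gpu);
}
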
* - as a direct result of that, I've stopped all further development of
etnaviv drm, and I'm intending to strip it out from my Xorg DDX driver
as the etnaviv drm API which Christian wants is completely incompatible
with the non-etnaviv drm, and that just creates far too much pain in the
DDX driver.
--
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.