Message-ID: <20121129121430.GA3846@avionic-0098.adnet.avionic-design.de>
Date: Thu, 29 Nov 2012 13:14:30 +0100
From: Thierry Reding <thierry.reding@...onic-design.de>
To: Lucas Stach <dev@...xeye.de>
Cc: Terje Bergström <tbergstrom@...dia.com>,
Dave Airlie <airlied@...il.com>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Arto Merilainen <amerilainen@...dia.com>
Subject: Re: [RFC v2 8/8] drm: tegra: Add gr2d device
On Thu, Nov 29, 2012 at 10:09:13AM +0100, Lucas Stach wrote:
> On Thursday, 29.11.2012 at 10:17 +0200, Terje Bergström wrote:
> > On 28.11.2012 20:46, Lucas Stach wrote:
> > > On Wednesday, 28.11.2012 at 18:23 +0200, Terje Bergström wrote:
> > >> Sorry. I promised in another thread a write-up explaining the design. I
> > >> still owe you guys that.
> > > That would be really nice to have. I'm also particularly interested in
> > > how you plan to do synchronization of command streams to different
> > > engines working together, if that's not too much to ask for now. Like
> > > userspace uploading a texture in a buffer, 2D engine doing mipmap
> > > generation, 3D engine using mipmapped texture.
> >
> > I can briefly explain (and then copy-paste to a coherent text once I get
> > to it) how inter-engine synchronization is done. It's not specifically
> > for 2D or 3D, but generic to any host1x client.
> [...]
> Thanks for that.
> [...]
>
> > > 2. Move the exposed DRM interface more in line with other DRM drivers.
> > > Please take a look at how for example the GEM_EXECBUF ioctl works on
> > > other drivers to get a feeling of what I'm talking about. Everything
> > > using the display, 2D and maybe later on the 3D engine should only deal
> > > with GEM handles. I really don't like the idea of a single userspace
> > > application that uses engines with similar and known requirements
> > > (the DDX) having to deal with dma-buf handles or other high-overhead
> > > mechanisms for the most basic tasks.
> > > If we move down the allocator into nvhost we can use buffers allocated
> > > from this to back GEM or V4L2 buffers transparently. The ioctl to
> > > allocate a GEM buffer shouldn't do much more than wrapping the nvhost
> > > buffer.
> >
> > Ok, this is actually what we do downstream. We use dma-buf handles only
> > for purposes where they're really needed (in fact, none yet), and use
> > our downstream allocator handles for the rest. I did this because
> > benchmarks showed that memory-management overhead shot through the
> > roof when I tried doing everything via dma-buf.
> >
> > We can move support for allocating GEM handles to nvhost, and GEM
> > handles can be treated just as another memory handle type in nvhost.
> > tegradrm would then call nvhost for allocation.
> >
> We should aim for a clean split here. GEM handles are something which is
> really specific to how DRM works and as such should be constructed by
> tegradrm. nvhost should really just manage allocations/virtual address
> space and provide something that is able to back all the GEM handle
> operations.
>
> nvhost has really no reason at all to even know about GEM handles. If
> you back a GEM object by a nvhost object you can just peel out the
> nvhost handles from the GEM wrapper in the tegradrm submit ioctl handler
> and queue the job to nvhost using its native handles.
That certainly sounds sensible to me. We would obviously no longer be
able to reuse the CMA GEM helpers, but if it makes things easier to
handle in general that's definitely something we can live with.
If I understand this correctly, it would also allow us to do the buffer
management within host1x and therefore allow the differences between
Tegra20 (CMA) and Tegra30 (IOMMU) allocations to be handled in one
central place. That would indeed make things a lot easier in the host1x
client drivers.
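As a rough illustration of the split Lucas describes (a pseudocode-level
sketch; all type and function names here are hypothetical, not taken from
any existing code), the GEM wrapper and submit path could look something
like this:

```c
/* Hypothetical sketch: tegradrm owns the GEM wrapper, nvhost owns
 * the backing allocation. */
struct tegra_gem_object {
	struct drm_gem_object base; /* DRM-specific part, constructed by tegradrm */
	struct nvhost_bo *bo;       /* backing object, managed entirely by nvhost */
};

/* In the tegradrm submit ioctl handler: resolve GEM handles to the
 * underlying nvhost objects before queueing, so that nvhost itself
 * never has to know anything about GEM. */
static int tegra_drm_submit(struct drm_device *drm, struct drm_file *file,
			    u32 handle)
{
	struct drm_gem_object *gem;
	struct tegra_gem_object *obj;

	gem = tegra_gem_lookup(drm, file, handle);       /* hypothetical */
	obj = container_of(gem, struct tegra_gem_object, base);

	return nvhost_queue_job(obj->bo);                /* hypothetical */
}
```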
> This way you would also be able to construct different handles (like GEM
> objects or V4L2 buffers) from the same backing nvhost object. Note that
> I'm not sure how useful this would be, but being able to do so seems
> like a reasonable design to me.
Wouldn't that be useful for sharing buffers between DRM and V4L2 using
dma-buf? I'm not very familiar with how exactly importing and exporting
work with dma-buf, so maybe I need to read up some more.
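For reference, the userspace side of such DRM-to-V4L2 sharing would
presumably look roughly like the sketch below. DRM PRIME export via
DRM_IOCTL_PRIME_HANDLE_TO_FD exists today; the V4L2 dma-buf import side
is assumed here and may well differ from what finally lands upstream:

```c
/* Sketch only: export a GEM buffer as a dma-buf fd, then hand it to V4L2. */
struct drm_prime_handle prime = {
	.handle = gem_handle,          /* GEM handle obtained earlier */
	.flags  = DRM_CLOEXEC,
};
ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);
/* prime.fd now refers to the buffer's dma-buf */

struct v4l2_buffer buf = {
	.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
	.memory = V4L2_MEMORY_DMABUF,  /* assumed dma-buf import mode */
	.m.fd   = prime.fd,
};
ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
```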
Thierry