Message-ID: <20160815160214.GK6232@phenom.ffwll.local>
Date: Mon, 15 Aug 2016 18:02:14 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Chris Wilson <chris@...is-wilson.co.uk>
Cc: dri-devel@...ts.freedesktop.org, intel-gfx@...ts.freedesktop.org,
Sumit Semwal <sumit.semwal@...aro.org>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Eric Anholt <eric@...olt.net>, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] dma-buf: Wait on the reservation object when sync'ing
before CPU access
On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> Rendering operations to the dma-buf are tracked implicitly via the
> reservation_object (dmabuf->resv). This is used to allow poll() to
> wait upon outstanding rendering (or just query the current status of
> rendering). The dma-buf sync ioctl allows userspace to prepare the
> dma-buf for CPU access, which should include waiting upon rendering.
> (Some drivers may need to do more work to ensure that the dma-buf mmap
> is coherent as well as complete.)
>
> v2: Always wait upon the reservation object implicitly. We choose to do
> it after the native handler in case it can do so more efficiently.
>
> Testcase: igt/prime_vgem
> Testcase: igt/gem_concurrent_blit # *vgem*
> Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
> Cc: Sumit Semwal <sumit.semwal@...aro.org>
> Cc: Daniel Vetter <daniel.vetter@...ll.ch>
> Cc: Eric Anholt <eric@...olt.net>
> Cc: linux-media@...r.kernel.org
> Cc: dri-devel@...ts.freedesktop.org
> Cc: linaro-mm-sig@...ts.linaro.org
> Cc: linux-kernel@...r.kernel.org
> ---
> drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ddaee60ae52a..cf04d249a6a4 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> }
> EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>
> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +                                      enum dma_data_direction direction)
> +{
> +        bool write = (direction == DMA_BIDIRECTIONAL ||
> +                      direction == DMA_TO_DEVICE);
> +        struct reservation_object *resv = dmabuf->resv;
> +        long ret;
> +
> +        /* Wait on any implicit rendering fences */
> +        ret = reservation_object_wait_timeout_rcu(resv, write, true,
> +                                                  MAX_SCHEDULE_TIMEOUT);
> +        if (ret < 0)
> +                return ret;
> +
> +        return 0;
> +}
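Side note for readers: write == true asks the RCU wait to cover all
fences (the exclusive writer plus every shared reader), while
write == false waits only on the exclusive fence, so a pure CPU read
doesn't serialise against other readers. The non-blocking flavour of
the same check (the "just query the current status" case from the
commit message) would be something like this sketch:

  #include <linux/dma-buf.h>
  #include <linux/reservation.h>

  /* sketch: true if CPU access in the given direction would not block */
  static bool dmabuf_is_idle(struct dma_buf *dmabuf, bool write)
  {
          return reservation_object_test_signaled_rcu(dmabuf->resv, write);
  }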
>
> /**
>  * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>         if (dmabuf->ops->begin_cpu_access)
>                 ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
> 
> +        /* Ensure that all fences are waited upon - but we first allow
> +         * the native handler the chance to do so more efficiently if it
> +         * chooses. A double invocation here will be a reasonably cheap no-op.
> +         */
> +        if (ret == 0)
> +                ret = __dma_buf_begin_cpu_access(dmabuf, direction);
Not sure whether we should wait first and then flush, or the other way
round. But I don't think it'll matter for any current dma-buf exporter, so meh.
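To make the ordering question concrete: a typical exporter hook only
does cache maintenance and no waiting of its own, something along these
lines (hypothetical exporter, all foo_* names made up):

  #include <linux/dma-buf.h>
  #include <linux/dma-mapping.h>

  struct foo_buffer {              /* made-up exporter private data */
          struct device *dev;
          struct sg_table *sgt;
  };

  static int foo_begin_cpu_access(struct dma_buf *dmabuf,
                                  enum dma_data_direction direction)
  {
          struct foo_buffer *buf = dmabuf->priv;

          /* Invalidate CPU caches so the CPU sees the device's writes.
           * With a hook like this the order vs. the fence wait doesn't
           * matter much, as long as the device is idle before the CPU
           * actually touches the pages. */
          dma_sync_sg_for_cpu(buf->dev, buf->sgt->sgl,
                              buf->sgt->nents, direction);
          return 0;
  }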
Reviewed-by: Daniel Vetter <daniel.vetter@...ll.ch>
Sumits, can you pls pick this one up and put into drm-misc?
-Daniel
> +
>         return ret;
> }
> EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
> --
> 2.8.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch