Message-id: <58573A99.2050809@samsung.com>
Date:   Mon, 19 Dec 2016 10:40:41 +0900
From:   Inki Dae <inki.dae@...sung.com>
To:     Chris Wilson <chris@...is-wilson.co.uk>,
        dri-devel@...ts.freedesktop.org, intel-gfx@...ts.freedesktop.org,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Eric Anholt <eric@...olt.net>, linux-media@...r.kernel.org,
        linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] dma-buf: Wait on the reservation object when sync'ing
 before CPU access



On 2016-08-16 01:02, Daniel Vetter wrote:
> On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
>> Rendering operations to the dma-buf are tracked implicitly via the
>> reservation_object (dmabuf->resv). This is used to allow poll() to
>> wait upon outstanding rendering (or just query the current status of
>> rendering). The dma-buf sync ioctl allows userspace to prepare the
>> dma-buf for CPU access, which should include waiting upon rendering.
>> (Some drivers may need to do more work to ensure that the dma-buf mmap
>> is coherent as well as complete.)
>>
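For reference, the userspace side of that sync ioctl looks roughly like the sketch below - a minimal example against the uapi in <linux/dma-buf.h>; the fd, size and fill value are placeholders and error handling is trimmed:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

/* Bracket a CPU write to a mmap'ed dma-buf with the sync ioctl, so the
 * kernel runs begin/end_cpu_access (and, with this patch, also waits on
 * the reservation object fences) around the access. */
static int cpu_fill(int dmabuf_fd, size_t size, uint8_t value)
{
	struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };
	void *ptr;
	int ret;

	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, dmabuf_fd, 0);
	if (ptr == MAP_FAILED)
		return -1;

	/* SYNC_START: kernel waits on outstanding rendering fences here. */
	ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	if (ret == 0) {
		memset(ptr, value, size);	/* CPU access */

		/* SYNC_END: let the exporter flush back for device access. */
		sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
		ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	}

	munmap(ptr, size);
	return ret;
}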
>> v2: Always wait upon the reservation object implicitly. We choose to do
>> it after the native handler in case it can do so more efficiently.
>>
>> Testcase: igt/prime_vgem
>> Testcase: igt/gem_concurrent_blit # *vgem*
>> Signed-off-by: Chris Wilson <chris@...is-wilson.co.uk>
>> Cc: Sumit Semwal <sumit.semwal@...aro.org>
>> Cc: Daniel Vetter <daniel.vetter@...ll.ch>
>> Cc: Eric Anholt <eric@...olt.net>
>> Cc: linux-media@...r.kernel.org
>> Cc: dri-devel@...ts.freedesktop.org
>> Cc: linaro-mm-sig@...ts.linaro.org
>> Cc: linux-kernel@...r.kernel.org
>> ---
>>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
>>  1 file changed, 23 insertions(+)
>>
>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>> index ddaee60ae52a..cf04d249a6a4 100644
>> --- a/drivers/dma-buf/dma-buf.c
>> +++ b/drivers/dma-buf/dma-buf.c
>> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>>  }
>>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>>  
>> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>> +				      enum dma_data_direction direction)
>> +{
>> +	bool write = (direction == DMA_BIDIRECTIONAL ||
>> +		      direction == DMA_TO_DEVICE);
>> +	struct reservation_object *resv = dmabuf->resv;
>> +	long ret;
>> +
>> +	/* Wait on any implicit rendering fences */
>> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
>> +						  MAX_SCHEDULE_TIMEOUT);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	return 0;
>> +}
>>  
>>  /**
>>   * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
>> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>  	if (dmabuf->ops->begin_cpu_access)
>>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>>  
>> +	/* Ensure that all fences are waited upon - but we first allow
>> +	 * the native handler the chance to do so more efficiently if it
>> +	 * chooses. A double invocation here will be a reasonably cheap no-op.
>> +	 */
>> +	if (ret == 0)
>> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);
> 
> Not sure whether we should wait first and then flush, or the other way round. But I
> don't think it'll matter for any current dma-buf exporter, so meh.
> 

Sorry for the late comment. I wonder whether there is a problem if the GPU or another DMA device tries to access this dma-buf after the dma_buf_begin_cpu_access call. In that case, I think they - the GPU or other DMA devices - could make a mess of the buffer while the CPU is accessing it.
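To make the scenario concrete, the ordering I have in mind is roughly the sketch below; submit_gpu_blit() is only a stand-in name for whatever submission path the exporter's driver has, not a real API:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* submit_gpu_blit() is a stand-in for a driver's real submission path. */
extern void submit_gpu_blit(int dmabuf_fd);

static void racy_cpu_write(int dmabuf_fd, void *ptr, size_t size)
{
	struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };

	/* Waits only on the fences that exist at this point in time. */
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	/* Nothing prevents new rendering from being queued now ... */
	submit_gpu_blit(dmabuf_fd);

	/* ... so this CPU write can race with that rendering. */
	memset(ptr, 0, size);
}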

This patch is already in mainline, so if this is a real problem I think we should choose one of:
1. revert this patch from mainline
2. make sure other DMA devices are prevented from accessing the buffer while the CPU is accessing it

Thanks.

> Reviewed-by: Daniel Vetter <daniel.vetter@...ll.ch>
> 
> Sumits, can you pls pick this one up and put into drm-misc?
> -Daniel
> 
>> +
>>  	return ret;
>>  }
>>  EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
>> -- 
>> 2.8.1
>>
> 
