Date:   Tue, 18 Dec 2018 18:24:39 +0200
From:   Alexey Skidanov <alexey.skidanov@...el.com>
To:     Liam Mark <lmark@...eaurora.org>
Cc:     Laura Abbott <labbott@...hat.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        devel@...verdev.osuosl.org, tkjos@...roid.com, rve@...roid.com,
        linux-kernel@...r.kernel.org, maco@...roid.com,
        sumit.semwal@...aro.org
Subject: Re: [PATCH v3] staging: android: ion: Add implementation of
 dma_buf_vmap and dma_buf_vunmap



On 12/17/18 20:42, Liam Mark wrote:
> On Sun, 16 Dec 2018, Alexey Skidanov wrote:
> 
>>
>>
>> On 12/16/18 7:20 AM, Liam Mark wrote:
>>> On Tue, 6 Feb 2018, Alexey Skidanov wrote:
>>>
>>>>
>>>>
>>>> On 02/07/2018 01:56 AM, Laura Abbott wrote:
>>>>> On 01/31/2018 10:10 PM, Alexey Skidanov wrote:
>>>>>>
>>>>>> On 01/31/2018 03:00 PM, Greg KH wrote:
>>>>>>> On Wed, Jan 31, 2018 at 02:03:42PM +0200, Alexey Skidanov wrote:
>>>>>>>> Any driver may access shared buffers, created by ion, using the
>>>>>>>> dma_buf_vmap and dma_buf_vunmap dma-buf APIs, which map/unmap
>>>>>>>> previously allocated buffers into the kernel virtual address
>>>>>>>> space. The implementation of these APIs is missing from the
>>>>>>>> current ion implementation.
>>>>>>>>
>>>>>>>> Signed-off-by: Alexey Skidanov <alexey.skidanov@...el.com>
>>>>>>>> ---
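
For readers following along, a minimal sketch of what such ops could
look like: build a contiguous kernel mapping with vmap() over the
buffer's sg_table. Field and helper names follow the staging ion driver
of this era; this is an illustration of the technique, not the
submitted patch itself.

#include <linux/dma-buf.h>
#include <linux/overflow.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/* Illustrative sketch only, not the submitted patch. */
static void *ion_dma_buf_vmap(struct dma_buf *dmabuf)
{
	struct ion_buffer *buffer = dmabuf->priv;
	struct sg_table *table = buffer->sg_table;
	int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
	struct page **pages, **tmp;
	struct scatterlist *sg;
	void *vaddr;
	int i, j;

	pages = vmalloc(array_size(npages, sizeof(*pages)));
	if (!pages)
		return NULL;
	tmp = pages;

	/* Flatten the scatterlist into a page array for vmap() */
	for_each_sg(table->sgl, sg, table->nents, i) {
		struct page *page = sg_page(sg);

		for (j = 0; j < sg->length / PAGE_SIZE; j++)
			*tmp++ = page + j;
	}

	/* vmap() can fail, e.g. when the vmalloc range is exhausted */
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	vfree(pages);
	return vaddr;
}

static void ion_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
	vunmap(vaddr);
}
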
>>>>>>>
>>>>>>> No review from any other Intel developers? :(
>>>>>> Will add.
>>>>>>>
>>>>>>> Anyway, what in-tree driver needs access to these functions?
>>>>>> I'm not sure that there are in-tree drivers using these functions
>>>>>> with ion as the buffer exporter, because they are not implemented in
>>>>>> ion :) But there are some in-tree drivers using these APIs (GPU
>>>>>> drivers) with other buffer exporters.
>>>>>
>>>>> It's still not clear why you need to implement these APIs.
>>>> How may the importing kernel module access the content of the buffer? :)
>>>> With the current ion implementation it's only possible via dma_buf_kmap,
>>>> mapping one page at a time. For pretty large buffers, it might have some
>>>> performance impact.
>>>> (Probably, page-by-page mapping is the only way to access large buffers
>>>> on 32-bit systems, where the vmalloc range is very small. By the way,
>>>> the current ion dma_buf_kmap doesn't really map only 1 page at a time -
>>>> it uses the result of vmap(), which might fail on 32-bit systems.)
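
As a concrete illustration of the difference, an importer-side sketch
against the 4.20-era dma-buf API (the importer_touch name is made up,
and dmabuf/size are assumed to come from the importer):

#include <linux/dma-buf.h>

static int importer_touch(struct dma_buf *dmabuf, size_t size)
{
	unsigned long pg;
	void *vaddr;
	int ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/* Page by page: one kmap/kunmap round trip per PAGE_SIZE chunk */
	for (pg = 0; pg < size / PAGE_SIZE; pg++) {
		void *p = dma_buf_kmap(dmabuf, pg);

		if (p) {
			/* ... read one page of the buffer ... */
			dma_buf_kunmap(dmabuf, pg, p);
		}
	}

	/* Whole buffer at once: a single contiguous kernel mapping */
	vaddr = dma_buf_vmap(dmabuf);
	if (vaddr) {
		/* ... read the whole buffer ... */
		dma_buf_vunmap(dmabuf, vaddr);
	}

	return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}
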
>>>>
>>>>> Are you planning to use Ion with GPU drivers? I'm especially
>>>>> interested in this if you have a non-Android use case.
>>>> Yes, my use case is the non-Android one. But not with GPU drivers.
>>>>>
>>>>> Thanks,
>>>>> Laura
>>>>
>>>> Thanks,
>>>> Alexey
>>>
>>> I was wondering if we could re-open the discussion on adding support to 
>>> ION for dma_buf_vmap.
>>> It seems like the patch was not taken because the reviewers wanted more 
>>> evidence of an upstream use case.
>>>
>>> Here would be my upstream usage argument for including dma_buf_vmap 
>>> support in ION.
>>>
>>> Currently all calls to ion_dma_buf_begin_cpu_access result in the
>>> creation of a kernel mapping for the buffer; unfortunately, the
>>> resulting call to alloc_vmap_area can be quite expensive, and this has
>>> caused a performance regression for certain clients when they have
>>> moved to the new version of ION.
>>>
>>> The kernel mapping is not actually needed in ion_dma_buf_begin_cpu_access, 
>>> and generally isn't needed by clients. So if we remove the creation of the 
>>> kernel mapping in ion_dma_buf_begin_cpu_access and only create it when 
>>> needed we can speed up the calls to ion_dma_buf_begin_cpu_access.
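
A sketch of that shape, with the attachment bookkeeping names taken
from the staging driver of this era (the exact change is an assumption,
not a merged patch):

/* Sketch: sync caches only; defer any kernel mapping to kmap/vmap. */
static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
					enum dma_data_direction direction)
{
	struct ion_buffer *buffer = dmabuf->priv;
	struct ion_dma_buf_attachment *a;

	mutex_lock(&buffer->lock);
	/* No ion_buffer_kmap_get() here any more - no alloc_vmap_area cost */
	list_for_each_entry(a, &buffer->attachments, list)
		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
				    direction);
	mutex_unlock(&buffer->lock);

	return 0;
}
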
>>>
>>> An additional benefit of removing the creation of kernel mappings from 
>>> ion_dma_buf_begin_cpu_access is that it makes the ION code more secure.
>>> Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL
>>> with flags DMA_BUF_SYNC_END multiple times to cause the ION buffer
>>> kmap_cnt to go negative, which could lead to undesired behavior.
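
For reference, the counter in question is decremented on each "end"
sync. A defensive guard could look like this (field names from the
staging driver; the WARN_ON check is an illustration, not merged code):

static void ion_buffer_kmap_put(struct ion_buffer *buffer)
{
	/* Hypothetical guard: refuse to let an unbalanced "end" underflow */
	if (WARN_ON(!buffer->kmap_cnt))
		return;

	buffer->kmap_cnt--;
	if (!buffer->kmap_cnt) {
		buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
		buffer->vaddr = NULL;
	}
}
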
>>>
>>> One disadvantage of the above change is that a kernel mapping is not 
>>> already created when a client calls dma_buf_kmap. So the following 
>>> dma_buf_kmap contract can't be satisfied.
>>>
>>> /**
>>>  * dma_buf_kmap - Map a page of the buffer object into kernel address
>>>  * space. The same restrictions as for kmap and friends apply.
>>>  * @dmabuf:	[in]	buffer to map page from.
>>>  * @page_num:	[in]	page in PAGE_SIZE units to map.
>>>  *
>>>  * This call must always succeed, any necessary preparations that
>>>  * might fail need to be done in begin_cpu_access.
>>>  */
>>>
>>> But hopefully we can work around this by moving clients to dma_buf_vmap.
>> I think the problem is with the contract. We can't ensure that the call
>> always succeeds regardless of the implementation - any mapping might
>> fail. Probably this is why *all* clients of dma_buf_kmap() check the
>> return value (so it's safe to return NULL in case of failure).
>>
> 
> I think currently the call to dma_buf_kmap will always succeed since the 
> DMA-Buf contract requires that the client first successfully call 
> dma_buf_begin_cpu_access(), and if dma_buf_begin_cpu_access() succeeds 
> then dma_buf_kmap will succeed.
> 
>> I would suggest fixing the contract and keeping the dma_buf_kmap()
>> support in ION.
> 
> I will leave it to the DMA-Buf maintainers as to whether they want to 
> change their contract.
> 
> Liam
> 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 

Ok. We need the list of clients using ION in the mainline tree.

Alexey
