Message-ID: <b9e65f47-b5c8-db09-117a-a8e22a5b6c71@amd.com>
Date: Thu, 17 Sep 2020 18:06:14 +0200
From: Christian König <christian.koenig@....com>
To: Daniel Vetter <daniel@...ll.ch>, Jason Gunthorpe <jgg@...pe.ca>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@...ts.linaro.org>, Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@...r.kernel.org>
Subject: Re: [Linaro-mm-sig] Changing vma->vm_file in dma_buf_mmap()
On 17.09.20 at 17:37, Daniel Vetter wrote:
> On Thu, Sep 17, 2020 at 5:24 PM Jason Gunthorpe <jgg@...pe.ca> wrote:
>> On Thu, Sep 17, 2020 at 04:54:44PM +0200, Christian König wrote:
>>> On 17.09.20 at 16:35, Jason Gunthorpe wrote:
>>>> On Thu, Sep 17, 2020 at 02:24:29PM +0200, Christian König wrote:
>>>>> On 17.09.20 at 14:18, Jason Gunthorpe wrote:
>>>>>> On Thu, Sep 17, 2020 at 02:03:48PM +0200, Christian König wrote:
>>>>>>> On 17.09.20 at 13:31, Jason Gunthorpe wrote:
>>>>>>>> On Thu, Sep 17, 2020 at 10:09:12AM +0200, Daniel Vetter wrote:
>>>>>>>>
>>>>>>>>> Yeah, but it doesn't work when forwarding from the drm chardev to the
>>>>>>>>> dma-buf on the importer side, since you'd need a ton of different
>>>>>>>>> address spaces. And you still rely on the core code picking up your
>>>>>>>>> pgoff mangling, which feels about as risky to me as the vma file
>>>>>>>>> pointer wrangling - if it's not consistently applied the reverse map
>>>>>>>>> is toast and unmap_mapping_range doesn't work correctly for our needs.
>>>>>>>> I would think the pgoff has to be translated at the same time the
>>>>>>>> vm->vm_file is changed?
>>>>>>>>
>>>>>>>> The owner of the dma_buf should have one virtual address space and FD,
>>>>>>>> all its dma bufs should be linked to it, and all pgoffs translated to
>>>>>>>> that space.
>>>>>>> Yeah, that is exactly like amdgpu is doing it.
>>>>>>>
>>>>>>> Going to document that somehow when I'm done with TTM cleanups.
>>>>>> BTW, while people are looking at this, is there a way to go from a VMA
>>>>>> to a dma_buf that owns it?
>>>>> Only a driver specific one.
>>>> Sounds OK
>>>>
>>>>> For TTM drivers vma->vm_private_data points to the buffer object. Not sure
>>>>> about the drivers using GEM only.
>>>> Why are drivers in control of the vma? I would think dma_buf should be
>>>> the vma owner. IIRC module lifetime correctness essentially hinges on
>>>> the module owner of the struct file
>>> Because the page fault handling is completely driver specific.
>>>
>>> We could install some DMA-buf vmops, but that would just be another layer of
>>> redirection.
> Uh geez I didn't know amdgpu was doing that :-/
>
> Since this is on, I guess the inverse of trying to convert a userptr
> into a dma-buf is properly rejected?
My fault, I wasn't specific enough in my description :)
Amdgpu is NOT doing this with mmapped DMA-bufs, but rather with its own
mmapped BOs.
In other words, when userspace calls the userptr IOCTL and we get an error
because we can't make a userptr from some random device memory, we instead
check all CPU mappings to see whether the application was brain dead enough
to hand us one of our own pointers back.
IIRC this is even done in userspace and not in the kernel. But we talked
about doing it in the kernel with the private_data as well.
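For reference, a kernel-side variant of that check could look roughly like
the sketch below. This is only an illustration of the idea, not the actual
amdgpu code; my_bo, my_drv_fops and my_bo_from_userptr are made-up names
and BO reference counting is left out:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical driver object and fops, just for illustration. */
struct my_bo;
extern const struct file_operations my_drv_fops;

static struct my_bo *my_bo_from_userptr(unsigned long addr)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	struct my_bo *bo = NULL;

	mmap_read_lock(mm);
	vma = find_vma(mm, addr);
	/* Only trust vm_private_data when the mapping really belongs to us. */
	if (vma && vma->vm_file && vma->vm_file->f_op == &my_drv_fops)
		bo = vma->vm_private_data;
	/* A real implementation would take a reference on the BO here,
	 * before dropping the lock, so it can't go away under us. */
	mmap_read_unlock(mm);

	return bo;
}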
>
>> If it is already taking a page fault I'm not sure the extra function
>> call indirection is going to be a big deal. Having a uniform VMA
>> sounds saner than every driver custom rolling something.
>>
>> When I unwound a similar mess in RDMA all the custom VMA stuff in the
>> drivers turned out to be generally buggy, at least.
>>
>> Is vma->vm_file->private_data universally a dma_buf pointer at least?
> Nope. I think if you want this without some large scale rewrite of a
> lot of code we'd need a vmops->get_dmabuf or similar. Not pretty, but
> would get the job done.
Yeah, agree that sounds like the simplest approach.
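Just to illustrate the idea (entirely hypothetical, no such hook exists
today; my_bo and its dmabuf member are made-up driver details):

/* Driver side: hand back the dma_buf behind one of our VMAs, assuming
 * vm_operations_struct grew a get_dmabuf() hook as discussed above. */
static struct dma_buf *my_vm_get_dmabuf(struct vm_area_struct *vma)
{
	struct my_bo *bo = vma->vm_private_data;

	return bo ? bo->dmabuf : NULL;
}

/* Generic side: user VA -> find_vma -> dma_buf.  Called with the mmap
 * lock held; the caller would still need get_dma_buf() on the result
 * before the lock is dropped. */
struct dma_buf *dma_buf_from_user_va(unsigned long addr)
{
	struct vm_area_struct *vma = find_vma(current->mm, addr);

	if (vma && vma->vm_ops && vma->vm_ops->get_dmabuf)
		return vma->vm_ops->get_dmabuf(vma);
	return NULL;
}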
Regards,
Christian.
>
>>>> So, user VA -> find_vma -> dma_buf object -> dma_buf operations on the
>>>> memory it represents
>>> Ah, yes we are already doing this in amdgpu as well. But only for DMA-bufs
>>> or more generally buffers which are mmapped by this driver instance.
>> So there is no general dma_buf service? That is a real bummer
> Mostly historical reasons and "it's complicated". One problem is that
> dma-buf isn't a powerful enough interface that drivers could use it
> for all their native objects, e.g. userptr doesn't pass through it,
> and clever cache flushing tricks aren't allowed and a bunch of other
> things. So there's some serious roadblocks before we could have a
> common allocator (or set of allocators) behind dma-buf.
> -Daniel