Message-ID: <CAF6AEGtesj5hDQtBQgTAJFnMi0euY+Xm+wbUupMc660VPVtmvg@mail.gmail.com>
Date: Fri, 25 Mar 2016 07:58:40 -0400
From: Rob Clark <robdclark@...il.com>
To: Inki Dae <inki.dae@...sung.com>
Cc: Gustavo Padovan <gustavo@...ovan.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
Daniel Stone <daniels@...labora.com>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Arve Hjønnevåg <arve@...roid.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Riley Andrews <riandrews@...roid.com>,
Gustavo Padovan <gustavo.padovan@...labora.co.uk>,
John Harrison <John.C.Harrison@...el.com>
Subject: Re: [RFC 0/6] drm/fences: add in-fences to DRM
On Thu, Mar 24, 2016 at 7:49 PM, Inki Dae <inki.dae@...sung.com> wrote:
>
>
> On 2016-03-25 00:40, Rob Clark wrote:
>> On Thu, Mar 24, 2016 at 4:18 AM, Inki Dae <inki.dae@...sung.com> wrote:
>>> Hi,
>>>
>>> On 2016-03-24 03:47, Gustavo Padovan wrote:
>>>> From: Gustavo Padovan <gustavo.padovan@...labora.co.uk>
>>>>
>>>> Hi,
>>>>
>>>> This is a first proposal to discuss the addition of in-fences support
>>>> to DRM. It adds a new struct to fence.c to abstract the use of sync_file
>>>> in DRM drivers. The new struct fence_collection contains an array of all
>>>> the fences that an atomic commit needs to wait on.
>>>
>>> As I already mentioned here,
>>> http://www.spinics.net/lists/dri-devel/msg103225.html
>>>
>>> I don't see why an Android-specific thing is being propagated into Linux DRM. Linux mainline already has implicit sync interfaces for DMA devices, called dma fences, which register a fence object with a DMABUF through a reservation object when the dmabuf object is created. However, the Android sync driver creates a new file for each sync object, which is a different point of view.
>>>
>>> Can anyone explain why this Android-specific thing is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as the Linux standard?
>>>
>>
>> btw, there is already plane_state->fence .. which I don't think has
>> any users yet, but I started to use it in my patchset that converts
>> drm/msm to 'struct fence'
>
> Yes, Exynos has also started using it.
>
>>
>> That said, we do need syncpt as the way to expose fences to userspace
>> for explicit synchronization, but I'm not entirely sure that the
>
> It's definitely a different case. This tries to add new user-space interfaces to expose fences to user-space; implicit interfaces, at least, are embedded into the drivers.
> So let me ask you a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a merge-fence feature?
>
Well, implicit sync and explicit sync are two different cases.
Implicit sync ofc remains the default, but userspace could opt in to
explicit sync instead. For example, on the gpu side of things,
depending on the flags userspace passes in to the submit ioctl we would
either attach the fence to all the written buffers (implicit) or
return it as a fence fd to userspace (explicit), which userspace could
then pass in to the atomic ioctl to synchronize the pageflip.
And vice versa, we can pass the pageflip (atomic) completion fence
back in to the gpu so it doesn't start rendering the next frame until
the buffer is off screen.
fwiw, currently android is the first user of explicit sync (although I
expect wayland/weston to follow suit). A couple linaro folks have
android running with an upstream kernel + mesa + atomic/kms hwc on a
couple devices (nexus7 and db410c with freedreno, and qemu with
virgl). But there are some limitations due to missing the
EGL_ANDROID_native_fence_sync extension in mesa. I plan to implement
that, but I ofc need the fence fd stuff in order to do so ;-)
> And if we really need to expose fences to user-space and there is a real user, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or maybe the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF.
> For DMA-BUF-IOCTL-SYNC, I think you remember that is what I tried a long time ago, because you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I really misunderstood the fence mechanism. That attempt was also aimed at the potential users.
Note that this is not (just) about sw sync, but also sync between
multiple hw devices.
BR,
-R
> Anyway, my opinion is that we could expose the fences hidden behind DMABUF to user-space using interfaces that already exist around us. And for this, the Chromium solution below could also give us some help:
> https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.18/drivers/gpu/drm/drm_sync_helper.c
>
> And /drivers/dma-buf/ contains DMABUF-centric modules, so it looks strange that Android's sync_file module is placed in that directory - the Android sync driver doesn't really use DMABUF but instead creates a new file for its sync fence.
> For implicit sync interfaces for DMA devices we use DMABUF, but for explicit sync interfaces for user-space we use sync_file, not DMABUF? That doesn't make sense.
>
> I really love Android, but I feel as if we are trying to make room for Android somehow.
>
> Thanks,
> Inki Dae
>
>> various drivers ever need to see that (vs just struct fence), at least
>> on the kms side of things.
>>
>> BR,
>> -R
>>
>>
>>> Thanks,
>>> Inki Dae
>>>
>>>>
>>>> /**
>>>>  * struct fence_collection - aggregate fences together
>>>>  * @num_fences: number of fences in the collection.
>>>>  * @user_data: user data.
>>>>  * @func: user callback to put the user data.
>>>>  * @fences: array of @num_fences fences.
>>>>  */
>>>> struct fence_collection {
>>>>         int num_fences;
>>>>         void *user_data;
>>>>         collection_put_func_t func;
>>>>         struct fence *fences[];
>>>> };
>>>>
>>>>
>>>> The fence_collection is allocated and filled by sync_file_fences_get(), and
>>>> atomic_commit helpers can use fence_collection_wait() to wait for the fences
>>>> to signal.
>>>>
>>>> These patches depend on the sync ABI rework:
>>>>
>>>> https://www.spinics.net/lists/dri-devel/msg102795.html
>>>>
>>>> and the patch to de-stage the sync framework:
>>>>
>>>> https://www.spinics.net/lists/dri-devel/msg102799.html
>>>>
>>>>
>>>> I also hacked together some sync support into modetest for testing:
>>>>
>>>> https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
>>>>
>>>>
>>>> Gustavo
>>>>
>>>>
>>>> Gustavo Padovan (6):
>>>> drm/fence: add FENCE_FD property to planes
>>>> dma-buf/fence: add struct fence_collection
>>>> dma-buf/sync_file: add sync_file_fences_get()
>>>> dma-buf/fence: add fence_collection_put()
>>>> dma-buf/fence: add fence_collection_wait()
>>>> drm/fence: support fence_collection on atomic commit
>>>>
>>>> drivers/dma-buf/fence.c | 33 +++++++++++++++++++++++++++++++++
>>>> drivers/dma-buf/sync_file.c | 36 ++++++++++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/drm_atomic.c | 13 +++++++++++++
>>>> drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
>>>> drivers/gpu/drm/drm_crtc.c | 7 +++++++
>>>> include/drm/drm_crtc.h | 5 ++++-
>>>> include/linux/fence.h | 19 +++++++++++++++++++
>>>> include/linux/sync_file.h | 8 ++++++++
>>>> 8 files changed, 126 insertions(+), 5 deletions(-)
>>>>
>>> _______________________________________________
>>> dri-devel mailing list
>>> dri-devel@...ts.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>
>>