Date:   Fri, 13 Mar 2020 11:33:55 +0100
From:   Christian König <christian.koenig@....com>
To:     Jason Ekstrand <jason@...kstrand.net>
Cc:     Dave Airlie <airlied@...hat.com>,
        Jesse Hall <jessehall@...gle.com>,
        James Jones <jajones@...dia.com>,
        Daniel Stone <daniels@...labora.com>,
        Kristian Høgsberg <hoegsberg@...gle.com>,
        Daniel Vetter <daniel.vetter@...ll.ch>,
        Bas Nieuwenhuizen <bas@...nieuwenhuizen.nl>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Greg Hackmann <ghackmann@...gle.com>,
        Chenbo Feng <fengc@...gle.com>, linux-media@...r.kernel.org,
        Mailing list - DRI developers 
        <dri-devel@...ts.freedesktop.org>, linaro-mm-sig@...ts.linaro.org,
        LKML <linux-kernel@...r.kernel.org>,
        Michel Dänzer <michel@...nzer.net>
Subject: Re: [PATCH 3/3] RFC: dma-buf: Add an API for importing and exporting
 sync files (v4)

On 12.03.20 at 16:57, Jason Ekstrand wrote:
> On Wed, Mar 11, 2020 at 8:18 AM Christian König
> <christian.koenig@....com> wrote:
>> On 11.03.20 at 04:43, Jason Ekstrand wrote:
>>> Explicit synchronization is the future.  At least, that seems to be what
>>> most userspace APIs are agreeing on at this point.  However, most of our
>>> Linux APIs (both userspace and kernel UAPI) are currently built around
>>> implicit synchronization with dma-buf.  While work is ongoing to change
>>> many of the userspace APIs and protocols to an explicit synchronization
>>> model, switching over piecemeal is difficult due to the number of
>>> potential components involved.  On the kernel side, many drivers use
>>> dma-buf including GPU (3D/compute), display, v4l, and others.  In
>>> userspace, we have X11, several Wayland compositors, 3D drivers, compute
>>> drivers (OpenCL etc.), media encode/decode, and the list goes on.
>>>
>>> This patch provides a path forward by allowing userspace to manually
>>> manage the fences attached to a dma-buf.  Alternatively, one can think
>>> of this as making dma-buf's implicit synchronization simply a carrier
>>> for an explicit fence.  This is accomplished by adding two IOCTLs to
>>> dma-buf for importing and exporting a sync file to/from the dma-buf.
>>> This way a userspace component which uses explicit synchronization,
>>> such as a Vulkan driver, can manually set the write fence on a buffer
>>> before handing it off to an implicitly synchronized component such as a
>>> Wayland compositor or video encoder.  In this way, each of the different
>>> components can be upgraded to an explicit synchronization model one at a
>>> time as long as the userspace pieces connecting them are aware of it and
>>> import/export fences at the right times.
>>>
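Just to illustrate the intended flow (not part of the patch; dmabuf_fd and
render_done_sync_fd are made-up names and error handling is omitted), the
handoff could look roughly like this from userspace:

#include <sys/ioctl.h>
#include <linux/dma-buf.h>      /* with this patch applied */

/* Producer side (e.g. a Vulkan driver): before handing the buffer off,
 * attach the sync_file for the rendering work as the dma-buf's write
 * fence so implicitly synchronized consumers will wait for it.
 */
static int attach_write_fence(int dmabuf_fd, int render_done_sync_fd)
{
        struct dma_buf_sync_file arg = {
                .flags = DMA_BUF_SYNC_FILE_SYNC_WRITE,
                .fd = render_done_sync_fd,
        };

        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_WAIT_SYNC_FILE, &arg);
}

/* Consumer side: pull the current write fence back out as a sync_file so
 * it can be waited on (or passed along) explicitly before reading.  With
 * flags = 0 only the exclusive fence is exported; SYNC_WRITE would bundle
 * the shared fences in as well.
 */
static int export_write_fence(int dmabuf_fd)
{
        struct dma_buf_sync_file arg = { .flags = 0, .fd = -1 };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SIGNAL_SYNC_FILE, &arg) < 0)
                return -1;

        return arg.fd;
}
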
>>> There is a potential race condition with this API if userspace is not
>>> careful.  A typical use case for implicit synchronization is to wait for
>>> the dma-buf to be ready, use it, and then signal it for some other
>>> component.  Because a sync_file cannot be created until it is guaranteed
>>> to complete in finite time, userspace can only signal the dma-buf after
>>> it has already submitted the work which uses it to the kernel and has
>>> received a sync_file back.  There is no way to atomically submit a
>>> wait-use-signal operation.  This is not, however, really a problem with
>>> this API so much as it is a problem with explicit synchronization
>>> itself.  The way this is typically handled is to have very explicit
>>> ownership transfer points in the API or protocol which ensure that only
>>> one component is using it at any given time.  Both X11 (via the PRESENT
>>> extension) and Wayland provide such ownership transfer points via
>>> explicit present and idle messages.
>>>
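Spelled out with the helpers above, the pattern (and the gap that makes it
non-atomic) is roughly the following; submit_work_waiting_on() is just a
placeholder for whatever actually queues the work and hands back a
sync_file for it:

/* 1. wait:   grab the fences currently attached to the buffer,
 * 2. use:    queue our own work behind them,
 * 3. signal: attach the fence for that work back onto the buffer.
 * Step 3 is only possible once the kernel has returned a sync_file for
 * the submitted work, so the three steps cannot be made atomic.
 * submit_work_waiting_on() is a made-up stand-in for the real submit path.
 */
static void use_buffer(int dmabuf_fd)
{
        int in_fd = export_write_fence(dmabuf_fd);
        int out_fd = submit_work_waiting_on(in_fd);

        attach_write_fence(dmabuf_fd, out_fd);
}
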
>>> The decision was intentionally made in this patch to make the import and
>>> export operations IOCTLs on the dma-buf itself rather than as a DRM
>>> IOCTL.  This makes the import/export operation universal across all
>>> components which use dma-buf including GPU, display, v4l, and others.
>>> It also means that a userspace component can do the import/export
>>> without access to the DRM fd which may be tricky to get in cases where
>>> the client communicates with DRM via a userspace API such as OpenGL or
>>> Vulkan.  At a future date we may choose to add direct import/export APIs
>>> to components such as drm_syncobj to avoid allocating a file descriptor
>>> and going through two ioctls.  However, that seems to be something of a
>>> micro-optimization as import/export operations are likely to happen at a
>>> rate of a few per frame of rendered or decoded video.
>>>
>>> v2 (Jason Ekstrand):
>>>    - Use a wrapper dma_fence_array of all fences including the new one
>>>      when importing an exclusive fence.
>>>
>>> v3 (Jason Ekstrand):
>>>    - Lock around setting shared fences as well as exclusive
>>>    - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
>>>    - Initialize ret to 0 in dma_buf_wait_sync_file
>>>
>>> v4 (Jason Ekstrand):
>>>    - Use the new dma_resv_get_singleton helper
>>>
>>> Signed-off-by: Jason Ekstrand <jason@...kstrand.net>
>>> ---
>>>    drivers/dma-buf/dma-buf.c    | 96 ++++++++++++++++++++++++++++++++++++
>>>    include/uapi/linux/dma-buf.h | 13 ++++-
>>>    2 files changed, 107 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index d4097856c86b..09973c689866 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -20,6 +20,7 @@
>>>    #include <linux/debugfs.h>
>>>    #include <linux/module.h>
>>>    #include <linux/seq_file.h>
>>> +#include <linux/sync_file.h>
>>>    #include <linux/poll.h>
>>>    #include <linux/dma-resv.h>
>>>    #include <linux/mm.h>
>>> @@ -348,6 +349,95 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
>>>        return ret;
>>>    }
>>>
>>> +static long dma_buf_wait_sync_file(struct dma_buf *dmabuf,
>>> +                                const void __user *user_data)
>>> +{
>>> +     struct dma_buf_sync_file arg;
>>> +     struct dma_fence *fence;
>>> +     int ret = 0;
>>> +
>>> +     if (copy_from_user(&arg, user_data, sizeof(arg)))
>>> +             return -EFAULT;
>>> +
>>> +     if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
>>> +             return -EINVAL;
>>> +
>>> +     fence = sync_file_get_fence(arg.fd);
>>> +     if (!fence)
>>> +             return -EINVAL;
>>> +
>>> +     dma_resv_lock(dmabuf->resv, NULL);
>>> +
>>> +     if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
>>> +             struct dma_fence *singleton = NULL;
>>> +             ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton);
>>> +             if (!ret && singleton)
>>> +                     dma_resv_add_excl_fence(dmabuf->resv, singleton);
>>> +     } else {
>>> +             dma_resv_add_shared_fence(dmabuf->resv, fence);
>>> +     }
>> You also need to create a singleton when adding a shared fence.
>>
>> The problem is that shared fences must always signal after exclusive
>> ones and you can't guarantee that for the fence you add here.
> I'm beginning to think that I should just drop the flags and always
> wait on all fences and always take what's currently the "write" path.
> Otherwise, something's going to get it wrong somewhere.  Thoughts?

If that is sufficient for your use case then that is certainly the more 
defensive (i.e. less dangerous) approach.
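
For what it's worth, dropping the flag would collapse the import side to
roughly the following (a sketch only, reusing the helpers the patch already
relies on; the function name is made up):

static long dma_buf_import_sync_file(struct dma_buf *dmabuf,
                                     const void __user *user_data)
{
        struct dma_buf_sync_file arg;
        struct dma_fence *fence, *singleton = NULL;
        int ret;

        if (copy_from_user(&arg, user_data, sizeof(arg)))
                return -EFAULT;

        if (arg.flags)
                return -EINVAL;

        fence = sync_file_get_fence(arg.fd);
        if (!fence)
                return -EINVAL;

        dma_resv_lock(dmabuf->resv, NULL);

        /* Always fold the new fence together with everything already in
         * the reservation object and install the result as the exclusive
         * fence, so no shared fence can signal before it.
         */
        ret = dma_resv_get_singleton(dmabuf->resv, fence, &singleton);
        if (!ret && singleton)
                dma_resv_add_excl_fence(dmabuf->resv, singleton);

        dma_resv_unlock(dmabuf->resv);
        dma_fence_put(fence);

        return ret;
}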

> Also, Michel (added to CC) commented on IRC today that amdgpu does
> something with implicit sync fences where it sorts out the fences
> which affect one queue vs. others.  He thought that stuffing fences in
> the dma-buf in this way might cause that to not work.  Thoughts?

Yes, that is correct. What amdgpu does is ignore all fences from the 
same process.

E.g. when A submits IBs 1, 2 and 3 and then B submits IB 4, then 4 waits 
for 1, 2 and 3, but 1, 2 and 3 can run in parallel to each other.

And yes, adding anything as explicit sync would break that, but I don't 
think that this is much of a problem.

Regards,
Christian.


>
> --Jason
>
>
>> Regards,
>> Christian.
>>
>>> +
>>> +     dma_resv_unlock(dmabuf->resv);
>>> +
>>> +     dma_fence_put(fence);
>>> +
>>> +     return ret;
>>> +}
>>> +
>>> +static long dma_buf_signal_sync_file(struct dma_buf *dmabuf,
>>> +                                  void __user *user_data)
>>> +{
>>> +     struct dma_buf_sync_file arg;
>>> +     struct dma_fence *fence = NULL;
>>> +     struct sync_file *sync_file;
>>> +     int fd, ret;
>>> +
>>> +     if (copy_from_user(&arg, user_data, sizeof(arg)))
>>> +             return -EFAULT;
>>> +
>>> +     if (arg.flags != 0 && arg.flags != DMA_BUF_SYNC_FILE_SYNC_WRITE)
>>> +             return -EINVAL;
>>> +
>>> +     fd = get_unused_fd_flags(O_CLOEXEC);
>>> +     if (fd < 0)
>>> +             return fd;
>>> +
>>> +     if (arg.flags & DMA_BUF_SYNC_FILE_SYNC_WRITE) {
>>> +             /* We need to include both the exclusive fence and all of
>>> +              * the shared fences in our fence.
>>> +              */
>>> +             ret = dma_resv_get_singleton(dmabuf->resv, NULL, &fence);
>>> +             if (ret)
>>> +                     goto err_put_fd;
>>> +     } else {
>>> +             fence = dma_resv_get_excl_rcu(dmabuf->resv);
>>> +     }
>>> +
>>> +     if (!fence)
>>> +             fence = dma_fence_get_stub();
>>> +
>>> +     sync_file = sync_file_create(fence);
>>> +
>>> +     dma_fence_put(fence);
>>> +
>>> +     if (!sync_file) {
>>> +             ret = -EINVAL;
>>> +             goto err_put_fd;
>>> +     }
>>> +
>>> +     fd_install(fd, sync_file->file);
>>> +
>>> +     arg.fd = fd;
>>> +     if (copy_to_user(user_data, &arg, sizeof(arg)))
>>> +             return -EFAULT;
>>> +
>>> +     return 0;
>>> +
>>> +err_put_fd:
>>> +     put_unused_fd(fd);
>>> +     return ret;
>>> +}
>>> +
>>>    static long dma_buf_ioctl(struct file *file,
>>>                          unsigned int cmd, unsigned long arg)
>>>    {
>>> @@ -390,6 +480,12 @@ static long dma_buf_ioctl(struct file *file,
>>>        case DMA_BUF_SET_NAME:
>>>                return dma_buf_set_name(dmabuf, (const char __user *)arg);
>>>
>>> +     case DMA_BUF_IOCTL_WAIT_SYNC_FILE:
>>> +             return dma_buf_wait_sync_file(dmabuf, (const void __user *)arg);
>>> +
>>> +     case DMA_BUF_IOCTL_SIGNAL_SYNC_FILE:
>>> +             return dma_buf_signal_sync_file(dmabuf, (void __user *)arg);
>>> +
>>>        default:
>>>                return -ENOTTY;
>>>        }
>>> diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
>>> index dbc7092e04b5..86e07acca90c 100644
>>> --- a/include/uapi/linux/dma-buf.h
>>> +++ b/include/uapi/linux/dma-buf.h
>>> @@ -37,8 +37,17 @@ struct dma_buf_sync {
>>>
>>>    #define DMA_BUF_NAME_LEN    32
>>>
>>> +struct dma_buf_sync_file {
>>> +     __u32 flags;
>>> +     __s32 fd;
>>> +};
>>> +
>>> +#define DMA_BUF_SYNC_FILE_SYNC_WRITE (1 << 0)
>>> +
>>>    #define DMA_BUF_BASE                'b'
>>> -#define DMA_BUF_IOCTL_SYNC   _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>>> -#define DMA_BUF_SET_NAME     _IOW(DMA_BUF_BASE, 1, const char *)
>>> +#define DMA_BUF_IOCTL_SYNC       _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
>>> +#define DMA_BUF_SET_NAME         _IOW(DMA_BUF_BASE, 1, const char *)
>>> +#define DMA_BUF_IOCTL_WAIT_SYNC_FILE _IOW(DMA_BUF_BASE, 2, struct dma_buf_sync_file)
>>> +#define DMA_BUF_IOCTL_SIGNAL_SYNC_FILE       _IOWR(DMA_BUF_BASE, 3, struct dma_buf_sync_file)
>>>
>>>    #endif
