Message-ID: <625fdb56-b1c7-a76b-0a99-bd9c98adf984@daenzer.net>
Date:   Wed, 24 Apr 2019 19:10:19 +0200
From:   Michel Dänzer <michel@...nzer.net>
To:     Paul Kocialkowski <paul.kocialkowski@...tlin.com>,
        Nicolas Dufresne <nicolas@...fresne.ca>,
        Daniel Vetter <daniel@...ll.ch>
Cc:     Alexandre Courbot <acourbot@...omium.org>,
        Maxime Ripard <maxime.ripard@...tlin.com>,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        Tomasz Figa <tfiga@...omium.org>,
        Hans Verkuil <hverkuil@...all.nl>,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Dave Airlie <airlied@...hat.com>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        linux-media@...r.kernel.org
Subject: Re: Support for 2D engines/blitters in V4L2 and DRM

On 2019-04-24 2:19 p.m., Paul Kocialkowski wrote:
> On Wed, 2019-04-24 at 10:31 +0200, Michel Dänzer wrote:
>> On 2019-04-19 10:38 a.m., Paul Kocialkowski wrote:
>>> On Thu, 2019-04-18 at 20:30 -0400, Nicolas Dufresne wrote:
>>>>> On Thursday, 18 April 2019 at 10:18 +0200, Daniel Vetter wrote:
>>>>>> It would be cool if both could be used concurrently and not just return
>>>>>> -EBUSY when the device is used with the other subsystem.
>>>>>
>>>>> We live in this world already :-) I think there are even patches
>>>>> (possibly merged already) to add fences to v4l, for Android.
>>>>
>>>> This work is currently suspended. It will require some feature on DRM
>>>> display to really make this useful, but there are also a lot of
>>>> challenges in V4L2. In the GFX space, most of the use cases are about
>>>> rendering as soon as possible. In multimedia, though, we have two
>>>> problems: we need to synchronize the frame rendering with the audio,
>>>> and output buffers may come out of order due to how video CODECs are
>>>> made.
>>>
>>> Definitely, it feels like the DRM display side is currently a good fit
>>> for render use cases, but not so much for precise display cases where
>>> we want to try and display a buffer at a given vblank target instead of
>>> "as soon as possible".
>>>
>>> I have a userspace project where I've implemented a page flip queue,
>>> which only schedules the next flip when relevant and keeps ready
>>> buffers in the queue until then. This requires explicit vblank
>>> synchronisation (which DRM offers, but pretty much all other,
>>> higher-level display APIs don't, so I'm just using a refresh-rate
>>> timer for them) and flip-done notification.
>>>
>>> I haven't looked too much at how to flip with a target vblank with DRM
>>> directly but maybe the atomic API already has the bits in for that (but
>>> I haven't heard of such a thing as a buffer queue, so that makes me
>>> doubt it).
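The queue Paul describes can be sketched roughly as follows. This is a hypothetical, minimal sketch, not his implementation: the names, the fixed depth, and the "later entries supersede earlier ones" policy are all illustrative assumptions.

```c
/* Hypothetical userspace page flip queue: ready buffers are held with
 * a target vblank and only submitted when that vblank arrives. */
#include <stdint.h>
#include <stddef.h>

#define QUEUE_DEPTH 8

struct queued_flip {
    uint32_t fb_id;          /* framebuffer to display */
    uint64_t target_vblank;  /* earliest vblank it may be shown at */
};

struct flip_queue {
    struct queued_flip slots[QUEUE_DEPTH];
    size_t head, count;
};

/* Enqueue a ready buffer; returns 0 on success, -1 if the queue is full. */
int flip_queue_push(struct flip_queue *q, uint32_t fb_id, uint64_t target)
{
    if (q->count == QUEUE_DEPTH)
        return -1;
    size_t tail = (q->head + q->count) % QUEUE_DEPTH;
    q->slots[tail].fb_id = fb_id;
    q->slots[tail].target_vblank = target;
    q->count++;
    return 0;
}

/* On vblank `now`, return the fb due for display, or 0 if nothing is
 * due yet. Entries whose target has passed are drained so that the
 * most recent due buffer wins (later entries supersede earlier ones). */
uint32_t flip_queue_pop_due(struct flip_queue *q, uint64_t now)
{
    uint32_t fb = 0;
    while (q->count > 0 && q->slots[q->head].target_vblank <= now) {
        fb = q->slots[q->head].fb_id;
        q->head = (q->head + 1) % QUEUE_DEPTH;
        q->count--;
    }
    return fb;
}
```

Draining stale entries on pop is what keeps the display at most one frame behind when a flip slot is missed.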
>>
>> Not directly. What's available is that if userspace waits for vblank n
>> and then submits a flip, the flip will complete in vblank n+1 (or a
>> later vblank, depending on when the flip is submitted and when the
>> fences the flip depends on signal).
>>
>> There is reluctance to allow more than one flip to be queued in the
>> kernel, as it would considerably increase complexity in the kernel. It
>> would probably only be considered if there was a compelling use-case
>> which was outright impossible otherwise.
> 
> Well, I think it's just less boilerplate for userspace. This is indeed
> quite complex, and I prefer to see that complexity done once and well
> in Linux rather than duplicated in userspace with more or less reliable
> implementations.

That's not the only trade-off to consider, e.g. I suspect handling this
in the kernel is more complex than in userspace.


>>> Well, I need to handle stuff like SDL in my userspace project, so I have
>>> to have all that queuing stuff in software anyway, but it would be good
>>> if each project didn't have to implement that. Worst case, it could be
>>> in libdrm too.
>>
>> Usually, this kind of queuing will be handled in a display server such
>> as Xorg or a Wayland compositor, not by the application such as a video
>> player itself, or any library in the latter's address space. I'm not
>> sure there's much potential for sharing code between display servers for
>> this.
> 
> This assumes that you are using a display server, which is definitely
> not always the case (there is e.g. Kodi GBM). Well, I'm not saying it
> is essential to have it in the kernel, but it would avoid code
> duplication and lower the complexity in userspace.

For code duplication, my suggestion would be to use a display server
instead of duplicating its functionality.


>>>> For the first problem, we'd need a mechanism where we can schedule a
>>>> render at a specific time or vblank. We can of course already
>>>> implement this in software, but with fences, the scheduling would
>>>> need to be done in the driver. Then if the fence is signalled
>>>> earlier, the driver should hold on until the delay is met. If the
>>>> fence got signalled late, we also need to think of a workflow. As we
>>>> can't schedule more than one render in DRM at a time, I don't really
>>>> see yet how to make that work.
>>>
>>> Indeed, that's also one of the main issues I've spotted. Before using
>>> an implicit fence, we basically have to make sure the frame is due for
>>> display at the next vblank. Otherwise, we need to refrain from using
>>> the fence and schedule the flip later, which is kind of counter-
>>> productive.
>>
>> [...]
> 
>>> I feel like specifying a target vblank would be a good unit for that,
>>
>> The mechanism described above works for that.
> 
> I still don't see any fence-based mechanism that can work to achieve
> that, but maybe I'm missing your point.

It's not fence based, just good old waiting for the previous vblank
before submitting the flip to the kernel.
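As a sketch, that pattern looks like this. wait_vblank() and submit_flip() are illustrative stubs standing in for drmWaitVBlank() and an atomic commit / drmModePageFlip() call, so the scheduling logic itself is self-contained:

```c
/* Sketch of "wait for the previous vblank, then flip": to complete a
 * flip at vblank `target`, wait for vblank `target - 1` and submit
 * during it. The stubs below stand in for the real libdrm calls. */
#include <stdint.h>

static int flips_submitted;  /* demo instrumentation */

/* Stand-in for drmWaitVBlank(): block until vblank `count` and return
 * the vblank actually reached. This stub reaches it exactly. */
static uint64_t wait_vblank(uint64_t count)
{
    return count;
}

/* Stand-in for the atomic commit / drmModePageFlip() call. */
static void submit_flip(void)
{
    flips_submitted++;
}

/* Returns the vblank at which the flip completes: one after the vblank
 * it was submitted in, or later if its fences have not signalled by
 * then (not modelled here). */
uint64_t flip_at_vblank(uint64_t target)
{
    uint64_t reached = wait_vblank(target - 1);
    submit_flip();
    return reached + 1;
}
```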


>>> since it's our native granularity after all (while a timestamp is not).
>>
>> Note that variable refresh rate (Adaptive Sync / FreeSync / G-Sync)
>> changes things in this regard. It makes the vblank length variable, and
>> if you wait for multiple vblanks between flips, you get the maximum
>> vblank length corresponding to the minimum refresh rate / timing
>> granularity. Thus, it would be useful to allow userspace to specify a
>> timestamp corresponding to the earliest time when the flip is to
>> complete. The kernel could then try to hit that as closely as possible.
> 
> I'm not very familiar with how this works, but I don't really see what
> it changes. Does it mean we can flip multiple times per vblank?

It's not about that.


> And I really like a vblank count over a timestamp, as one is the native
> unit at hand and the other one only correlates to it.

From a video playback application's POV it's really the other way around,
isn't it? The target time is known (e.g. in order to sync up with
audio), the vblank count has to be calculated from that. And with
variable refresh rate, this calculation can't be done reliably, because
it's not known ahead of time when the next vblank starts (at least not
more accurately than an interval corresponding to the maximum/minimum
refresh rates).

If the target timestamp could be specified explicitly, the kernel could
do the conversion to the vblank count for fixed refresh, and could
adjust the refresh rate to hit the target more accurately with variable
refresh.
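For the fixed-refresh case, that conversion is a rounded-up division against a reference vblank. A minimal sketch, with illustrative names and timestamps in nanoseconds:

```c
/* Sketch of the fixed-refresh conversion suggested above: map a target
 * timestamp to the first vblank count at or after it. Names are
 * illustrative, not a kernel API. */
#include <stdint.h>

/* base_ns/base_count: timestamp and count of a reference vblank.
 * frame_ns: frame duration at the fixed refresh rate.
 * target_ns: earliest time the flip is allowed to complete. */
uint64_t vblank_for_timestamp(uint64_t base_ns, uint64_t base_count,
                              uint64_t frame_ns, uint64_t target_ns)
{
    if (target_ns <= base_ns)
        return base_count;
    /* Round up: the flip must not complete before the target time. */
    return base_count + (target_ns - base_ns + frame_ns - 1) / frame_ns;
}
```

At 60 Hz (frame_ns = 16666667), a target even 1 ns past the reference vblank already maps to the next vblank count; under variable refresh no such fixed frame_ns exists, which is why the timestamp has to be the primitive.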


-- 
Earthling Michel Dänzer               |              https://www.amd.com
Libre software enthusiast             |             Mesa and X developer
