Date:   Tue, 1 Jun 2021 08:46:14 -0700
From:   Rob Clark <robdclark@...il.com>
To:     Rob Clark <robdclark@...il.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        freedreno <freedreno@...ts.freedesktop.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        Rob Clark <robdclark@...omium.org>,
        Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <mripard@...nel.org>,
        Thomas Zimmermann <tzimmermann@...e.de>,
        David Airlie <airlied@...ux.ie>,
        open list <linux-kernel@...r.kernel.org>,
        Matthew Brost <matthew.brost@...el.com>
Cc:     Daniel Vetter <daniel@...ll.ch>
Subject: Re: [RFC 2/3] drm/atomic: Call dma_fence_boost() when we've missed a vblank

On Tue, Jun 1, 2021 at 7:18 AM Daniel Vetter <daniel@...ll.ch> wrote:
>
> On Sun, May 30, 2021 at 07:33:57AM -0700, Rob Clark wrote:
> > On Thu, May 20, 2021 at 9:29 AM Daniel Vetter <daniel@...ll.ch> wrote:
> > >
> > > On Wed, May 19, 2021 at 11:38:53AM -0700, Rob Clark wrote:
> > > > From: Rob Clark <robdclark@...omium.org>
> > > >
> > > > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > > > ---
> > > >  drivers/gpu/drm/drm_atomic_helper.c | 11 +++++++++++
> > > >  1 file changed, 11 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
> > > > index 560aaecba31b..fe10fc2e7f86 100644
> > > > --- a/drivers/gpu/drm/drm_atomic_helper.c
> > > > +++ b/drivers/gpu/drm/drm_atomic_helper.c
> > > > @@ -1435,11 +1435,15 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
> > > >       int i, ret;
> > > >
> > > >       for_each_new_plane_in_state(state, plane, new_plane_state, i) {
> > > > +             u64 vblank_count;
> > > > +
> > > >               if (!new_plane_state->fence)
> > > >                       continue;
> > > >
> > > >               WARN_ON(!new_plane_state->fb);
> > > >
> > > > +             vblank_count = drm_crtc_vblank_count(new_plane_state->crtc);
> > > > +
> > > >               /*
> > > >                * If waiting for fences pre-swap (ie: nonblock), userspace can
> > > >                * still interrupt the operation. Instead of blocking until the
> > > > @@ -1449,6 +1453,13 @@ int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
> > > >               if (ret)
> > > >                       return ret;
> > > >
> > > > +             /*
> > > > +              * Check if we've missed a vblank while waiting, and if we have,
> > > > +              * signal the fence that its signaler should be boosted
> > > > +              */
> > > > +             if (vblank_count != drm_crtc_vblank_count(new_plane_state->crtc))
> > > > +                     dma_fence_boost(new_plane_state->fence);
> > >
> > > I think we should do a lot better here:
> > > - maybe only bother doing this for single-crtc updates, and only if
> > >   modeset isn't set. No one else cares about latency.
> > >
> > > - We should boost _right_ when we've missed the frame, so I think we
> > >   should have a _timeout wait here that guesstimates when the vblank is
> > >   over (might need to throw in a vblank wait if we missed) and then boost
> > >   immediately. Not wait a bunch of frames (worst case) until we finally
> > >   decide to boost.
> >
> > I was thinking about this a bit more... How about, rather than calling
> > some fence->ops->boost() type thing when we are about to miss a vblank
> > (IMO that is already too late anyway), we do something more like
> > fence->ops->set_deadline() before we even wait?
>
> Hm yeah that sounds like a clean idea.
>
> Even more, why not add the deadline/waiter information to the callback
> we're adding? That way drivers can inspect it whenever they feel like and
> don't have to duplicate the tracking. And it's probably easier to
> tune/adjust for the myriad use-cases (flip target miss, userspace wait,
> wakeup boost maybe too ...).

You mean, enumerate the types of deadline?

For userspace waits we might have a timeout, but (currently) not really
any more information than that.  The vblank deadline is the only type of
deadline that seems pretty clear to me.

I suppose we could do something like:

   dma_fence_set_deadline(fence, &(struct dma_fence_deadline){
           .type = DMA_FENCE_DEADLINE_VBLANK,
           .time = next_vblank_ktime,
   });

to make it a bit more extensible, so that more deadline types or
additional optional information could be added later.
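
The fence side could then grow a matching op, something roughly like
this (just a sketch to show the shape; the enum, the struct, and the
set_deadline hook are all made-up names, none of this exists today):

   enum dma_fence_deadline_type {
           DMA_FENCE_DEADLINE_VBLANK,
           DMA_FENCE_DEADLINE_USERSPACE_WAIT,
   };

   struct dma_fence_deadline {
           enum dma_fence_deadline_type type;
           ktime_t time;   /* absolute deadline, e.g. next vblank */
   };

   /* hypothetical optional hook in dma_fence_ops */
   void (*set_deadline)(struct dma_fence *fence,
                        const struct dma_fence_deadline *deadline);

   static inline void
   dma_fence_set_deadline(struct dma_fence *fence,
                          const struct dma_fence_deadline *deadline)
   {
           /* silently a no-op for fences whose provider has no hook */
           if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
                   fence->ops->set_deadline(fence, deadline);
   }

That way the atomic helpers could set a DMA_FENCE_DEADLINE_VBLANK
deadline before waiting, and the fence provider decides how (or
whether) to react, e.g. by bumping clocks if the remaining time looks
too short.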

BR,
-R

>
> I like this direction a lot more than what we discussed with post-miss
> hints thus far.
>
> > It's probably next to impossible for a gpu driver to really predict how
> > long some rendering will take, but other cases like a video decoder are
> > somewhat more predictable... given the remaining time until the deadline,
> > the fence provider could predict what clk rates are required to get
> > you there.
>
> Well, if we do have a deadline, the driver can note that in its scheduler
> and arm a timer to kick the clocks. Or maybe use past history to do this
> upfront.
> -Daniel
>
> >
> > BR,
> > -R
> >
> >
> > >
> > > Otherwise I really like this, I think it's about the only real reason i915
> > > isn't using atomic helpers.
> > >
> > > Also adding Matt B for this topic.
> > > -Daniel
> > >
> > > > +
> > > >               dma_fence_put(new_plane_state->fence);
> > > >               new_plane_state->fence = NULL;
> > > >       }
> > > > --
> > > > 2.30.2
> > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
