Date:   Tue, 21 Apr 2020 14:34:59 +0200
From:   Daniel Vetter <daniel@...ll.ch>
To:     Tejun Heo <tj@...nel.org>
Cc:     Lyude Paul <lyude@...hat.com>, dri-devel@...ts.freedesktop.org,
        Daniel Vetter <daniel@...ll.ch>,
        Ville Syrjälä <ville.syrjala@...ux.intel.com>,
        Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <mripard@...nel.org>,
        Thomas Zimmermann <tzimmermann@...e.de>,
        David Airlie <airlied@...ux.ie>, linux-kernel@...r.kernel.org
Subject: Re: [Poke: Tejun] Re: [RFC v3 03/11] drm/vblank: Add vblank works

On Fri, Apr 17, 2020 at 05:03:56PM -0400, Tejun Heo wrote:
> Hello,
> 
> On Fri, Apr 17, 2020 at 04:16:28PM -0400, Lyude Paul wrote:
> > Hey Tejun! So I ended up rewriting the drm_vblank_work stuff so that it used
> > kthread_worker. Things seem to work alright now. But while we're doing just
> > fine with vblank workers on nouveau, we're still having trouble meeting the
> > time constraints needed for using vblank works for i915's needs. There still
> > seems to be a considerable latency between when the irq handler for the vblank
> > interrupts fires, and when the actual drm_vblank_work we scheduled starts:
> ...
> > Tejun, do you have any idea if we might be able to further reduce the latency
> > from the scheduler here? I believe we're already using pm_qos to at least
> > reduce the latency between when the vblank interrupt fires and the interrupt
> > handler starts, but apparently that still isn't enough to fix the other
> > latency issues. We're also already setting the priority of
> > kthread_worker->task to RT_FIFO.
> 
> I don't think the kernel can do much better than what you're seeing. I don't
> know the time scale you need - is it in the tens-of-microseconds range? I'm
> definitely not an expert on the subject, but on generic kernels I don't think
> you can achieve anything sub-millisecond with any kind of reliability.
> 
> If the timing is that tight and it's not a hot path, the right solution may be
> polling for it rather than yielding the cpu and hoping to get scheduled in
> time.
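
(For reference, the polling approach would be roughly the sketch below;
vblank_has_passed() and the 100us budget are made-up placeholders, not
existing API.)

/*
 * Busy-poll for the vblank condition with a hard time budget instead of
 * sleeping and hoping the scheduler runs us in time.
 */
static bool poll_for_vblank(struct drm_crtc *crtc)
{
	ktime_t timeout = ktime_add_us(ktime_get(), 100);

	while (!vblank_has_passed(crtc)) {
		if (ktime_after(ktime_get(), timeout))
			return false;	/* blew the budget, give up */
		cpu_relax();
	}
	return true;
}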

What we've discussed on irc is that the hand-rolled version is apparently
a bit better (but not perfect, since indeed this isn't possible without
-rt). I think we need to look more into whether that difference is real,
and why exactly it happens. From a quick look our hand-rolled worker and
kthread_worker have identical code ...
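
(For context, the setup Lyude describes above boils down to something like
the following sketch; the priority value and worker name are my assumptions,
not the actual patch.)

#include <drm/drm_device.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

/* Dedicated worker thread for vblank works, bumped to SCHED_FIFO. */
static struct kthread_worker *vblank_worker;

static int vblank_worker_init(struct drm_device *dev)
{
	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

	vblank_worker = kthread_create_worker(0, "card%d-vblank",
					      dev->primary->index);
	if (IS_ERR(vblank_worker))
		return PTR_ERR(vblank_worker);

	/* RT priority so queued vblank works preempt normal tasks. */
	sched_setscheduler(vblank_worker->task, SCHED_FIFO, &param);
	return 0;
}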

> > Also, of course, let me know if you're not happy with the
> > __kthread_queue_work() changes/kthread_worker usage in drm_vblank_work as well
> 
> Just glanced over it and I still wonder whether it needs to be that tightly
> integrated, but we can look into that once we settle on whether this is the
> right direction.

I don't think we absolutely have to do this; it simply means some nested
irq-safe spinlocks, one in vblank_work and the other in kthread_worker.
Since delayed work doesn't do that either, I think it'd be nice if the
drm_vblank delayed work (keyed on vblank counts instead of a timer) could
use the same pattern.
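
Roughly what I have in mind, with made-up struct and field names (just a
sketch, not the actual drm_vblank_work layout):

#include <linux/kthread.h>
#include <linux/spinlock.h>

struct vblank_work_sketch {
	struct kthread_work base;
	struct kthread_worker *worker;
	spinlock_t lock;		/* vblank-side scheduling state */
};

static void vblank_work_sketch_queue(struct vblank_work_sketch *vw)
{
	unsigned long flags;

	/* Outer lock is irq-safe since this can run from the vblank irq. */
	spin_lock_irqsave(&vw->lock, flags);
	/* ... check vblank count / pending state here ... */
	kthread_queue_work(vw->worker, &vw->base);	/* nests worker->lock */
	spin_unlock_irqrestore(&vw->lock, flags);
}
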
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
