Message-ID: <dbdf3853cc1e6fa5a05ced6bc72667e24fa7d8ef.camel@redhat.com>
Date:   Fri, 17 Apr 2020 17:24:54 -0400
From:   Lyude Paul <lyude@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     dri-devel@...ts.freedesktop.org, Daniel Vetter <daniel@...ll.ch>,
        Ville Syrjälä <ville.syrjala@...ux.intel.com>,
        Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <mripard@...nel.org>,
        Thomas Zimmermann <tzimmermann@...e.de>,
        David Airlie <airlied@...ux.ie>, linux-kernel@...r.kernel.org
Subject: Re: [Poke: Tejun] Re: [RFC v3 03/11] drm/vblank: Add vblank works

On Fri, 2020-04-17 at 17:03 -0400, Tejun Heo wrote:
> Hello,
> 
> On Fri, Apr 17, 2020 at 04:16:28PM -0400, Lyude Paul wrote:
> > Hey Tejun! So I ended up rewriting the drm_vblank_work stuff so that it
> > used kthread_worker. Things seem to work alright now. But while we're
> > doing just fine with vblank workers on nouveau, we're still having
> > trouble meeting the time constraints needed for using vblank works for
> > i915's needs. There still seems to be a considerable latency between
> > when the irq handler for the vblank interrupts fires and when the
> > actual drm_vblank_work we scheduled starts:
> ...
> > Tejun, do you have any idea if we might be able to further reduce the
> > latency from the scheduler here? I believe we're already using pm_qos
> > to at least reduce the latency between when the vblank interrupt fires
> > and the interrupt handler starts, but apparently that still isn't
> > enough to fix the other latency issues. We're also already setting the
> > priority of kthread_worker->task to RT_FIFO.
> 
> I don't think the kernel can do much better than what you're seeing. I
> don't know the time scale that you need - is it in the tens of
> microseconds range? I'm definitely not an expert on the subject, but on
> generic kernels I don't think you can achieve anything sub-millisecond
> with any kind of reliability.
yeah, it's microsecond range :(
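
(for reference, the pm_qos bit I mentioned is basically just a CPU latency
request held while we care about vblank latency - rough sketch from memory
rather than the exact code, the helper names are made up, and on kernels
with the reworked pm_qos API this would be cpu_latency_qos_add_request()/
cpu_latency_qos_remove_request() instead:)

#include <linux/pm_qos.h>

static struct pm_qos_request vblank_qos;

/* hypothetical helpers, not the actual driver code */
static void vblank_latency_qos_get(void)
{
	/* ask for 0us CPU wakeup latency so the CPU isn't sitting in a
	 * deep C-state when the vblank interrupt fires */
	pm_qos_add_request(&vblank_qos, PM_QOS_CPU_DMA_LATENCY, 0);
}

static void vblank_latency_qos_put(void)
{
	pm_qos_remove_request(&vblank_qos);
}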

> 
> If the timing is that tight and it's not a hot path, the right solution
> may be polling for it rather than yielding the cpu and hoping to get
> scheduled in time.
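
(just to make sure I follow - you mean something roughly like this, where we
spin on the vblank counter ourselves instead of sleeping and hoping the
scheduler wakes us up in time? conceptual sketch only, the helper at the end
is made up:)

	/* busy-wait for the next vblank instead of queueing a work */
	u64 target = drm_crtc_vblank_count(crtc) + 1;

	while (drm_crtc_vblank_count(crtc) < target)
		cpu_relax();

	do_time_critical_update(crtc);	/* hypothetical */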
> 
> > Also, of course, let me know if you're not happy with the
> > __kthread_queue_work() changes/kthread_worker usage in drm_vblank_work
> > as well
> 
> Just glanced over it and I still wonder whether it needs to be that tightly
> integrated, but we can look into that once we settle on whether this is the
> right direction.

FWIW - I think everyone on the DRM side is happy with the concept of vblank
workers. All that's really up for discussion at this point (ignoring the
nouveau patches that still need Ben's OK) is getting the code itself
reviewed, and deciding whether we want to keep using kthread_workers for
this or just go back to using kthreadd - especially since it sounds like
there isn't much more we can do to improve things latency-wise.
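
(for anyone skimming the thread who hasn't read the patches: the
kthread_worker setup being discussed boils down to something like the sketch
below. this is simplified from memory, the names are illustrative rather than
the real code, and newer kernels would use sched_set_fifo() instead of
open-coding the sched_param bit:)

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>	/* struct sched_param */

static struct kthread_worker *vblank_worker;

static int vblank_worker_init(int pipe)
{
	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

	/* one dedicated worker thread per crtc for vblank works */
	vblank_worker = kthread_create_worker(0, "card0-crtc%d", pipe);
	if (IS_ERR(vblank_worker))
		return PTR_ERR(vblank_worker);

	/* the "RT_FIFO" part: make the worker thread SCHED_FIFO so queued
	 * vblank works preempt normal tasks as soon as they're runnable */
	sched_setscheduler(vblank_worker->task, SCHED_FIFO, &param);
	return 0;
}

/* the vblank irq path then just queues works with
 * kthread_queue_work(vblank_worker, &work->base); */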

> 
> Thanks.
> 
-- 
Cheers,
	Lyude Paul (she/her)
	Associate Software Engineer at Red Hat
