Date:	Sun, 04 Nov 2012 15:04:13 +0100
From:	Uwaysi Bin Kareem <uwaysi.bin.kareem@...adoxuncreated.com>
To:	linux-kernel@...r.kernel.org, el es <el.es.cr@...il.com>
Subject: Re: The uncatchable jitter, or may the scheduler wars be over?

On Fri, 05 Oct 2012 14:04:29 +0200, el es <el.es.cr@...il.com> wrote:

> Hello,
>
> first of all, the posts that inspired me to write this up,
> were from Uwaysi Bin Kareem (paradoxuncreated dot com).
>
> Here is what I think:
> could the source of graphic/video jitter, as most people
> perceive it, be something that could technically be defined
> as a 'graphics buffer underrun', caused by the scheduler
> being unable to meet the deadline for some userspace
> programs that are crucial to video/OpenGL output v-refresh,
> which is really HARD RT? As in, say the scheduler could
> sometimes decide to preempt userspace in the middle of an
> OpenGL/fb call [pretty easy to imagine: userspace that
> often blocks on calls to the video hardware, or has a
> userspace thread that does so, and is unable to finish
> some OpenGL pipeline calls before the end of its slice;
> or, in case of misalignment, it can execute enough commands
> to create one (or several) frame(s), and then is cut off in
> the middle of creating another and has to wait for its turn
> again, and in the meantime the vsync/video buffer swap
> occurs, and that last frame is lost/discarded/created with
> timing from the previous slice, which is wrong]
>
> Bearing in mind that the deepest the video/fb/OpenGL
> buffer queue can get is probably 3 frames (triple buffering,
> as in some game settings), as opposed to (at least some)
> sound hardware, which can have buffers several ms long,
> it's not hard to imagine what happens if userspace cannot
> update the buffer in time, causing 'underruns'.
>
> This would also explain why it doesn't matter to 'server'
> people - they don't have RT video hw/buffers they care for...
> (but they tune the below for max throughput instead)
>
> But whether it is measurable or not - I don't know.
>
> The OP (Uwaysi) has been fiddling with HZ value and the
> averaging period of the scheduler (which he called 'filter')
> (and granularity too). He's had some interesting results IMO.
>
> Hope the above makes sense and isn't too much gibberish :)
>
> Lukasz
>

I have now tried both CFS and BFS. Doom 3 is now running with very low
jitter on both. Both need a 90 Hz timer, no high-resolution timers, and a
granularity/interval suited to a "natural" (psychovisual) profile.
I also compiled them with some optimizations and options for low jitter:
(KBUILD_CFLAGS += -O3 -fno-defer-pop --param prefetch-latency=200)
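
For reference, the granularity/interval part can be adjusted at runtime on
the CFS side through the sched_* sysctls (they are only exposed when the
kernel is built with CONFIG_SCHED_DEBUG). A minimal sketch in C; the values
are placeholders only, not a tested low-jitter profile:

/* sched-knobs.c -- sketch: write the CFS granularity tunables.
 * The paths require CONFIG_SCHED_DEBUG; the values below are
 * placeholders, not a recommended profile.  Run as root. */
#include <stdio.h>

static int write_tunable(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fputs(val, f);
        fclose(f);
        return 0;
}

int main(void)
{
        write_tunable("/proc/sys/kernel/sched_latency_ns", "6000000");
        write_tunable("/proc/sys/kernel/sched_min_granularity_ns", "750000");
        write_tunable("/proc/sys/kernel/sched_wakeup_granularity_ns", "1000000");
        return 0;
}

The same files can of course also be written with a plain echo from a shell.
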
With vsync on in Doom 3, it runs very smoothly. With vsync off, BFS has
less jitter than CFS.
Doom 3 does 3 passes to OpenGL, and therefore seems more jitter-sensitive,
so getting it to run well means minimizing jitter.
Compatibility layers like Wine add complexity, though. I have HL2 running
perfectly (without jitter) in an intensely tweaked XP install. With Wine
and BFS it runs just as well, but with some major one-second jitters. With
CFS there are more small jitters / a higher average jitter, but the major
jitters are shorter. Video jitter on YouTube also seems lower with CFS.

So for "scheduler wars" indeed, identifying those jitters, and getting the  
best of both, is optimal.
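
One simple way to identify them is to log frame-to-frame deltas around the
render/swap call. A minimal sketch (the sleep stands in for one ~90 Hz
frame, and the 20 ms threshold is arbitrary):

/* jitter-probe.c -- time successive "frames" with CLOCK_MONOTONIC and
 * report deltas that run well past one frame.  In a real test the
 * usleep() would be replaced by the actual render/buffer-swap call.
 * Build: gcc -O2 -o jitter-probe jitter-probe.c -lrt */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double delta_ms(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1e3 +
               (b->tv_nsec - a->tv_nsec) / 1e6;
}

int main(void)
{
        struct timespec prev, now;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (i = 0; i < 10000; i++) {
                usleep(11111);          /* stand-in for one ~90 Hz frame */
                clock_gettime(CLOCK_MONOTONIC, &now);
                if (delta_ms(&prev, &now) > 20.0)
                        printf("frame %d: %.2f ms\n", i, delta_ms(&prev, &now));
                prev = now;
        }
        return 0;
}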

This is with "low-latency desktop" preemption.

I have yet to get the realtime patch/threadirqs working; however, within
the month I will have a new E5 computer, which will probably be a whole
lot more fun to test that on.

Also, as I stated elsewhere, since daemons seem to make a difference, the
optimum would be to put daemons, and other processes that can tolerate it,
on a low-jitter queue, transparently to the user. Unfortunately realtime is
not quite working as one would expect: input gets choked at times if you
want one main app on a realtime class and the rest on SCHED_OTHER as the
low-jitter queue. So I am still iterating on this.
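
The mechanics of that experiment are simple enough. A minimal sketch of
moving one main app to a realtime class while everything else stays on
SCHED_OTHER; the policy (SCHED_FIFO) and priority (10) are illustrative
only, and this needs root or CAP_SYS_NICE:

/* rt-promote.c -- sketch: move one already-running process to a
 * realtime scheduling class; everything not touched stays on
 * SCHED_OTHER.  Policy and priority are illustrative.
 * Usage: ./rt-promote <pid> */
#include <sys/types.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        struct sched_param sp;
        pid_t pid;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                return 1;
        }
        pid = (pid_t)atoi(argv[1]);

        sp.sched_priority = 10;
        if (sched_setscheduler(pid, SCHED_FIFO, &sp) == -1) {
                perror("sched_setscheduler");
                return 1;
        }
        return 0;
}

(The same can be done from a shell with chrt from util-linux, e.g.
"chrt -f -p 10 <pid>".)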

Reducing jitter seems to generally improve the computing experience, and
it also raises one's expectations of quality. A machine with jitter, of
course, behaves like a lower-end computer. So reducing jitter seems
central to an enjoyable computing experience, all without unreasonable
effort, of course.

Peace Be With You.
