Date:	Thu, 17 Dec 2009 13:08:26 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Kasper Sandberg <lkml@...anurb.dk>
Cc:	Jason Garrett-Glaser <darkshikari@...il.com>,
	Mike Galbraith <efault@....de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	LKML Mailinglist <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: x264 benchmarks BFS vs CFS


* Kasper Sandberg <lkml@...anurb.dk> wrote:

> On Thu, 2009-12-17 at 11:53 +0100, Ingo Molnar wrote:
> > * Jason Garrett-Glaser <darkshikari@...il.com> wrote:
> > 
> > > On Thu, Dec 17, 2009 at 1:33 AM, Kasper Sandberg <lkml@...anurb.dk> wrote:
> > > > Well well :) nothing speaks quite like graphs..
> > > >
> > > > http://doom10.org/index.php?topic=78.0
> > > >
> > > >
> > > >
> > > > regards,
> > > > Kasper Sandberg
> > > 
> > > Yeah, I sent this to Mike a bit ago.  Seems that .32 has basically tied 
> > > it--and given the strict thread-ordering expectations of x264, you basically 
> > > can't expect it to do any better, though I'm curious what's responsible for 
> > > the gap in "veryslow", even with SCHED_BATCH enabled.
> > > 
> > > The oddest case is that of "ultrafast", in which CFS immediately ties BFS 
> > > when we enable SCHED_BATCH.  We're doing some further testing to see exactly 
> 
> That's kind of beside the point.
> 
> All these tunables and weirdness are _NEVER_ going to work for people.

v2.6.32 improved quite a bit on the x264 front, so I don't think that's 
necessarily the case.

But yes, I'll subscribe to the view that we cannot satisfy everything all the 
time. There are tradeoffs in every scheduler design.
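
(For reference, the SCHED_BATCH case Jason mentions above does not need any 
exotic /proc knobs: a wrapper can request it via sched_setscheduler() before 
exec'ing the encoder, and the policy is inherited by the worker threads. A 
minimal, untested sketch follows; the command it launches is just a 
placeholder, nothing x264-specific:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Run a command under SCHED_BATCH: same fair scheduling as SCHED_OTHER,
 * but the task is treated as CPU-bound and pays a small wakeup-preemption
 * penalty, which tends to help throughput-oriented jobs like encoding.
 */
int main(int argc, char **argv)
{
	struct sched_param sp = { .sched_priority = 0 };  /* must be 0 for SCHED_BATCH */

	if (argc < 2) {
		fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
		return 1;
	}

	if (sched_setscheduler(0, SCHED_BATCH, &sp) == -1) {
		perror("sched_setscheduler(SCHED_BATCH)");
		return 1;
	}

	execvp(argv[1], &argv[1]);  /* policy carries across exec and fork/clone */
	perror("execvp");
	return 1;
}

(chrt --batch 0 <command> from util-linux should achieve the same from the 
shell.)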

> Now forgive me for being so blunt, but for a user, having to do
> echo x264 > /proc/cfs/gief_me_performance_on_app
> or
> echo some_benchmark > x264 > /proc/cfs/gief_me_performance_on_app
> 
> just isn't usable. BFS matches, even exceeds, CFS on all counts with ZERO 
> user tuning, so while CFS may be able to nearly match up with a ton of 
> application-specific stuff, that just doesn't work for a normal user.
> 
> Not to mention that BFS does this while not losing interactivity, 
> something which CFS certainly cannot boast.

What kind of latencies are those? Aren't they just compiz-induced, due to 
different weighting of workloads in BFS and in the upstream scheduler?
Would you be willing to help us pin them down?

To move the discussion to the numeric front, please send the 'perf sched 
latency' output of an affected workload.
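
Something along these lines should produce it (assuming a recent tools/perf 
build; the workload command is a placeholder for whatever shows the problem):

  perf sched record <affected workload command>
  perf sched latency

'perf sched record' logs the scheduling events to perf.data, and 'perf sched 
latency' then prints the per-task average and maximum scheduling delays from 
that file.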

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
