Date:	Thu, 17 Dec 2009 11:53:16 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Jason Garrett-Glaser <darkshikari@...il.com>,
	Mike Galbraith <efault@....de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Kasper Sandberg <lkml@...anurb.dk>,
	LKML Mailinglist <linux-kernel@...r.kernel.org>
Subject: Re: x264 benchmarks BFS vs CFS


* Jason Garrett-Glaser <darkshikari@...il.com> wrote:

> On Thu, Dec 17, 2009 at 1:33 AM, Kasper Sandberg <lkml@...anurb.dk> wrote:
> > well well :) nothing quite speaks out like graphs..
> >
> > http://doom10.org/index.php?topic=78.0
> >
> >
> >
> > regards,
> > Kasper Sandberg
> 
> Yeah, I sent this to Mike a bit ago.  It seems that 2.6.32 has basically 
> tied it, and given the strict thread-ordering expectations of x264 you 
> can't expect it to do any better, though I'm curious what's responsible 
> for the gap in "veryslow", even with SCHED_BATCH enabled.
> 
> The oddest case is "ultrafast", where CFS immediately ties BFS once we 
> enable SCHED_BATCH.  We're doing some further testing to pin down exactly 
> what the conditions are: is it because ultrafast is so much faster than 
> all the other presets that it switches threads/loads faster?  Is it 
> because ultrafast distributes the workload relatively evenly among the 
> threads, unlike the other presets?  We'll probably know soon.
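
For reference, a run can be put under SCHED_BATCH straight from the shell 
via chrt(1); a minimal sketch (the x264 invocation below is only an 
illustrative placeholder, not your exact benchmark command):

    # start the encode under SCHED_BATCH (batch priority is always 0);
    # chrt -b selects the SCHED_BATCH policy for the launched command
    chrt -b 0 ./x264 --preset ultrafast --threads 8 -o /dev/null input.y4m

    # or switch an already running task over; note that this affects only
    # the given PID/thread, not the whole thread group
    chrt -b -p 0 <pid>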

Thanks for testing it!

Btw., you might want to make use of 'perf sched record', 'perf sched map', 
'perf sched trace' etc. to get insight into how a particular workload 
schedules and why the scheduler makes the decisions it does. (You'll need 
CONFIG_SCHED_DEBUG=y for best results.)
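
A minimal sketch of that workflow (the x264 command line is again just an 
illustrative placeholder, and it assumes a perf built from tools/perf of 
the .32 tree):

    # record scheduling events while the workload runs
    # (CONFIG_SCHED_DEBUG=y in the running kernel gives the best results)
    perf sched record ./x264 --preset veryslow --threads 8 -o /dev/null input.y4m

    # per-CPU map of which task was running where, over time
    perf sched map

    # raw event trace, plus per-task wakeup/scheduling latencies
    perf sched trace
    perf sched latency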

Thanks,

	Ingo
