Message-ID: <20090910074323.GA21751@elte.hu>
Date:	Thu, 10 Sep 2009 09:43:23 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	linux-kernel@...r.kernel.org
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <efault@....de>,
	Con Kolivas <kernel@...ivas.org>
Subject: [updated] BFS vs. mainline scheduler benchmarks and measurements


* Ingo Molnar <mingo@...e.hu> wrote:

>   OLTP performance (postgresql + sysbench)
>      http://redhat.com/~mingo/misc/bfs-vs-tip-oltp.jpg

To everyone who might care about this, I've updated the sysbench 
results to latest -tip:

    http://redhat.com/~mingo/misc/bfs-vs-tip-oltp-v2.jpg

This also double-checks, in the throughput space, the effects of the 
various interactivity fixlets in the scheduler tree (whose 
interactivity effects were mentioned/documented in various threads 
on lkml): they improved sysbench performance as well.

Con, I'd also like to thank you for raising general interest in 
scheduler latencies once more by posting the BFS patch. It gave us 
more bug reports upstream and gave us desktop users willing to test 
patches, which in turn helps us improve the code. When users choose 
to suffer in silence, that is never helpful.

BFS isn't particularly strong in this graph. From having looked at 
the workload under BFS, my impression is that this is primarily due 
to you having cut out much of the sched-domains SMP load-balancer 
code. BFS 'insta-balances' very aggressively, which hurts 
cache-affine workloads rather visibly.

You might want to have a look at that design detail if you care - 
load-balancing is in significant parts orthogonal to the basic 
design of a fair scheduler.

For example we kept much of the existing load-balancer when we went 
to CFS in v2.6.23 - the fairness engine and the load-balancer are in 
large parts independent units of code and can be improved/tweaked 
separately.

There's interactions, but the concepts are largely separate.

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
