Date:	Mon, 17 Mar 2008 10:28:04 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Willy Tarreau <w@....eu>, Ray Lee <ray-lk@...rabbit.org>,
	Ingo Molnar <mingo@...e.hu>,
	"LKML," <linux-kernel@...r.kernel.org>
Subject: Re: Poor PostgreSQL scaling on Linux 2.6.25-rc5 (vs 2.6.22)


Nick,

We do grow the period as the load increases, which keeps the slice
constant - although it might not be big enough for your taste (but it's
tunable).
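
For concreteness, a minimal user-space sketch of that period/slice
relationship (the knob names mirror the latency/min-granularity
tunables, but the values and code here are just my illustration, not
the actual kernel code):

#include <stdio.h>

static unsigned long sched_latency_ns      = 20000000UL; /* 20ms target period */
static unsigned long sched_min_granularity =  4000000UL; /*  4ms minimum slice  */

/* Period stretches once there are too many tasks to give each one
 * a min-granularity slice inside the target latency. */
static unsigned long sched_period(unsigned int nr_running)
{
        unsigned int nr_latency = sched_latency_ns / sched_min_granularity;

        if (nr_running <= nr_latency)
                return sched_latency_ns;
        return (unsigned long)nr_running * sched_min_granularity;
}

int main(void)
{
        for (unsigned int n = 1; n <= 16; n *= 2)
                printf("nr_running=%2u  period=%2lums  slice=%2lums\n",
                       n, sched_period(n) / 1000000,
                       sched_period(n) / n / 1000000);
        return 0;
}

Past the break-even point the period grows linearly with the number of
runnable tasks, so the per-task slice bottoms out at the minimum
granularity instead of shrinking towards zero.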

Short-running tasks will indeed be very likely to run quickly after
wakeup, because wakeups are placed to the left in the tree (and, when
using sleeper fairness, can get up to a whole slice of bonus).
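
Roughly along the lines of this toy placement sketch (names and the
one-period bonus are assumptions of mine, not the real place_entity()):

#include <stdbool.h>

#define SLEEPER_BONUS_NS 20000000LL /* assumed bonus: about one period */

struct entity { long long vruntime; };

/* Place a woken task left of the queue's min_vruntime so it gets
 * picked soon; sleepers receive a bounded bonus, but a task that ran
 * recently keeps its own (larger) vruntime and gets no extra credit. */
void place_on_wakeup(struct entity *se, long long min_vruntime,
                     bool sleeper_fairness)
{
        long long v = min_vruntime;

        if (sleeper_fairness)
                v -= SLEEPER_BONUS_NS;  /* land left of queued tasks */

        if (v < se->vruntime)           /* don't reward a recent runner */
                v = se->vruntime;

        se->vruntime = v;
}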

Interactivity is all about generating a scheduling pattern that is easy
on the human brain - that means predictable, and preferably with lags <
40ms. As long as the interval is predictable the human brain will patch
up a lot; once it becomes erratic, it all goes out the window. (Human
perception of lag is in the 10ms range, but up to 40ms seems to allow
acceptable patch-up as long as it's predictable.)

Due to current desktop bloat, it's important that CPU-bound tasks are
treated well too. Take for instance scrolling in firefox - that utterly
consumes even the fastest CPUs, yet people still expect a smooth
experience. We handle that by ensuring the scheduler behaviour degrades
in a predictable fashion, and by trying to keep the latency at a sane
level.


The thing that seems to trip up this PostgreSQL workload is the strict
requirement to always run the leftmost task. If all tasks have very
short runnable periods, we start interleaving between all contending
tasks. The way we're looking to solve this is by weakening the leftmost
requirement so that a server/client pair can ping-pong for a while, then
switch to another pair which gets to ping-pong for a while.

This alternating pattern, as opposed to the interleaving pattern, is
much more cache friendly. And we should do it in such a manner that we
still ensure fairness and predictability and such.
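
To make the idea concrete, a rough sketch of a wakeup-preemption check
along those lines (my own illustration, not the actual patches; the
granularity value and names are assumptions):

#include <stdbool.h>

#define WAKEUP_GRANULARITY_NS 5000000LL /* assumed 5ms threshold */

struct entity { long long vruntime; };

/* Return true if the waking task should preempt the current one.
 * The strict leftmost rule would be "lead > 0"; requiring a bigger
 * lead lets the current client/server pair keep ping-ponging with a
 * warm cache for a while, while fairness stays bounded by the
 * threshold. */
bool should_preempt(const struct entity *curr, const struct entity *wakee)
{
        long long lead = curr->vruntime - wakee->vruntime;

        return lead > WAKEUP_GRANULARITY_NS;
}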

The latest sched code contains a few patches in this direction
(.25-rc6), and they seem to have the desired effect on single- and
dual-core machines with 1 and 8 sockets. On quad core we seem to have
some load-balance problems that destroy the workload in other
interesting ways - looking into that now.

- Peter
