Date:	Tue, 24 Jul 2007 10:19:12 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
Cc:	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Balbir Singh <balbir@...ibm.com>, linux-kernel@...r.kernel.org
Subject: Re: System hangs on running kernbench


* Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com> wrote:

> Basically, the "make -s -j" workload hung the machine, leading to a
> lot of OOM killings. This was on an 8-CPU machine with 4GB RAM and no
> swap space configured. The same workload works "fine" (runs to
> completion) on 2.6.22.

While I agree that the 32 msec was too low, I think the real problem is
that "make -s -j" is a workload that has no guarantee of "success" on
that system. The box does not have enough RAM to service it and does
not have enough swap to survive it. With "make -j", jobs are started
without any throttling whatsoever. _Any_ control mechanism within the
kernel can act as an "accidental throttle": for example, IO could
artificially slow it down, reducing the job rate and keeping RAM usage
below the critical level. Or a kernel bug could delay tasks and thus
let the "make -j" "succeed". Or some bad kernel inefficiency in
sys_fork() could have the same effect. It is very important that we
don't treat every random number a system can produce as a "benchmark";
we really have to consider what happens behind it.
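
(A purely illustrative sketch of the throttling point, not the real
make(1) job engine: it contrasts spawning every job immediately with
capping concurrency and waiting for a job to retire before starting
the next one. The job count, per-job memory footprint and MAX_ACTIVE
value are invented for the example.)

/* throttle_sketch.c - illustrative only, not the actual make(1) code.
 * No argument: spawn every job at once (like "make -j" with no limit).
 * Any argument: cap concurrency at MAX_ACTIVE and wait for a job to
 * retire before starting the next one. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define NJOBS      1000			/* total "compile" jobs (made up) */
#define MAX_ACTIVE 8			/* throttle: one job per CPU here */

static void run_job(void)
{
	/* stand-in for a compiler: allocate and touch some memory */
	size_t sz = 64 * 1024 * 1024;
	char *buf = malloc(sz);
	if (buf)
		for (size_t i = 0; i < sz; i += 4096)
			buf[i] = 1;
	sleep(1);
	_exit(0);
}

int main(int argc, char **argv)
{
	int throttled = (argc > 1);
	int active = 0;

	for (int i = 0; i < NJOBS; i++) {
		if (throttled && active >= MAX_ACTIVE) {
			wait(NULL);		/* block until one job retires */
			active--;
		}
		pid_t pid = fork();
		if (pid == 0)
			run_job();		/* child: never returns */
		if (pid < 0) {
			perror("fork");
			exit(1);
		}
		active++;
	}
	while (wait(NULL) > 0)			/* reap the stragglers */
		;
	return 0;
}

In the unthrottled mode the only thing standing between this and the
OOM killer is whatever incidental throttling the kernel happens to
provide; the capped mode is what a bounded "make -j8" would do.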

> I played with the scheduler tunables a bit and found that the problem 
> goes away if I set sched_granularity_ns to 100ms (default value 32ms).

Yep - 32 msecs was too low; please try -rc1 too: I've increased the
granularity limit, so it should now be larger than 32 ms. You can also
reduce CONFIG_HZ if you are on a more server-type system.
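
(For anyone who wants to repeat the experiment, a minimal sketch for
inspecting and adjusting the tunable follows. The /proc path is
inferred from the sysctl name sched_granularity_ns quoted above; the
knob has been renamed in later kernels, so treat the path as an
assumption. Writing a new value needs root, and a plain
"sysctl -w kernel.sched_granularity_ns=100000000" does the same job.)

/* set_granularity.c - read the CFS granularity tunable and optionally
 * set a new value given in milliseconds on the command line.
 * The /proc path is assumed from the sysctl name in this thread. */
#include <stdio.h>
#include <stdlib.h>

#define GRAN_PATH "/proc/sys/kernel/sched_granularity_ns"

int main(int argc, char **argv)
{
	FILE *f = fopen(GRAN_PATH, "r");
	unsigned long long cur;

	if (!f || fscanf(f, "%llu", &cur) != 1) {
		perror(GRAN_PATH);
		return 1;
	}
	fclose(f);
	printf("current granularity: %llu ns (%.0f ms)\n", cur, cur / 1e6);

	if (argc > 1) {			/* new value in ms, needs root */
		unsigned long long ns =
			strtoull(argv[1], NULL, 10) * 1000000ULL;

		f = fopen(GRAN_PATH, "w");
		if (!f || fprintf(f, "%llu\n", ns) < 0) {
			perror(GRAN_PATH);
			return 1;
		}
		fclose(f);
		printf("set granularity to %llu ns\n", ns);
	}
	return 0;
}

"./set_granularity 100" would reproduce the 100 ms setting from the
report above.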

> So my theory is this: 32 ms preemption granularity is too low a value
> for any compile thread to make "useful" progress. As a result of the
> rapid context switching, the job retirement rate slows down compared
> to the job arrival rate. This builds up job pressure on the system
> very quickly (more quickly than would have happened with
> granularity_ns set to 100 ms, or with the 2.6.22 kernel), leading to
> OOM killings (and the hang).

By increasing the granularity the timings change - one can imagine
workloads where _reducing_ the granularity would result in an effective
throttling of the workload. I'm sure a workload could be constructed on
the old scheduler too where its 100 msecs isn't enough either, and only
200 msecs would help. That line of thinking never ends - you cannot
tune non-throttled workloads. We've got to be really careful about
this.
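
(To make the arrival-vs-retirement argument concrete, here is a
back-of-envelope model. Every number in it - the per-job CPU need, the
per-preemption cost including cache refill, the job arrival rate - is
an assumption chosen only to show how a fixed overhead per slice can
tip the retirement rate below the arrival rate; it is not a claim
about the actual regression.)

/* backlog_model.c - illustrative model: if each preemption costs a
 * fixed amount (context switch plus cache refill), a smaller
 * granularity means more preemptions per job, a lower retirement
 * rate, and - once that drops below the arrival rate - an unbounded
 * backlog and growing RAM usage. All constants are invented. */
#include <stdio.h>

int main(void)
{
	const double cpus         = 8.0;    /* from the reported machine */
	const double job_cpu_ms   = 400.0;  /* assumed CPU need per job */
	const double switch_ms    = 5.0;    /* assumed cost per preemption */
	const double arrival_rate = 18.0;   /* assumed jobs started per sec */
	const double grans_ms[]   = { 32.0, 100.0 };

	for (int i = 0; i < 2; i++) {
		double g      = grans_ms[i];
		double slices = job_cpu_ms / g;            /* preemptions */
		double wall   = job_cpu_ms + slices * switch_ms;
		double retire = cpus * 1000.0 / wall;      /* jobs/sec */

		printf("granularity %5.0f ms: retire %.1f/s, arrive %.1f/s -> %s\n",
		       g, retire, arrival_rate,
		       retire < arrival_rate ? "backlog grows (OOM risk)"
					     : "backlog drains");
	}
	return 0;
}

With these invented numbers the 32 ms case retires about 17 jobs/s
against 18 arriving, so the backlog (and RAM usage) grows without
bound, while the 100 ms case keeps up - which is exactly why an
unthrottled workload is so sensitive to incidental changes like this.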

	Ingo
