Message-Id: <20090902101157.89d23384.kamezawa.hiroyu@jp.fujitsu.com>
Date:	Wed, 2 Sep 2009 10:11:57 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Lasse Kärkkäinen <tronic+bpsk@....iki.fi>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Avoiding crash in out-of-memory situations

On Tue, 01 Sep 2009 16:24:09 +0300
Lasse Kärkkäinen <tronic+bpsk@....iki.fi> wrote:

> Currently, a number of simple while (1) malloc(n); processes can crash a 
> system even if resource limits are in place, as one can only limit the 
> memory usage of a single process (not that of a user, nor the total used 
> by userspace), and any otherwise reasonable nproc and memory limits can 
> be circumvented by using more processes.
> 
> The OOM killer is supposed to work as a fallback in these situations, 
> but unfortunately the system still goes absolutely unresponsive for 
> about 10 minutes whenever the OOM killer runs. It would seem that this 
> happens because the kernel first gets rid of all buffers and caches, 
> slowing things down to a halt, and the OOM killer activates only after 
> nothing else can be done.
> 
> In a more complex situation (e.g. the one that we just had on our server 
> by accidentally running too many valgrind processes) this hang state can 
> take very long, essentially requiring the server to be reset the hard way.
> 
> As there is, AFAIK, no existing remedy for this problem, I would suggest 
> implementing either (a) per-user limits, (b) a memory reserve for the 
> kernel (e.g. one could reserve 100 MB for the kernel/buffers/caches, 
> giving less for the userspace to allocate even if that means having to 
> kill processes), or (c) both of them.
> 
> Or perhaps there is something that I missed?
> 
If a per-user limit is acceptable, how about the memory cgroup?

Documentation/cgroups/memory.txt
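A minimal sketch of what that looks like (cgroup v1 semantics as described in memory.txt; the mount point, group name, and 512 MB limit below are examples I chose, not anything mandated by the kernel):

```shell
# Mount the memory controller (one-time setup; the path is an example).
mkdir -p /cgroups/memory
mount -t cgroup -o memory none /cgroups/memory

# Create a group and cap its total memory usage at 512 MB.
mkdir /cgroups/memory/untrusted
echo 512M > /cgroups/memory/untrusted/memory.limit_in_bytes

# Move the current shell into the group; children inherit membership,
# so every process this user spawns shares the single 512 MB limit,
# no matter how many processes are forked.
echo $$ > /cgroups/memory/untrusted/tasks
```

When the group's usage hits the limit, reclaim and (if needed) the OOM killer act on that group alone, so the rest of the system keeps its buffers and caches.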

thx,
-Kame


> P.S. using or not using swap doesn't really affect the fundamental 
> problem nor its symptoms, so please don't suggest that either way.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 

