Date:	Wed, 17 Feb 2010 08:42:39 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	David Rientjes <rientjes@...gle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Rik van Riel <riel@...hat.com>,
	Nick Piggin <npiggin@...e.de>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Lubos Lunak <l.lunak@...e.cz>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch -mm 4/9 v2] oom: remove compulsory panic_on_oom mode

On Tue, 16 Feb 2010 01:02:28 -0800 (PST)
David Rientjes <rientjes@...gle.com> wrote:

> On Tue, 16 Feb 2010, KAMEZAWA Hiroyuki wrote:
> 
> > > You don't understand that the behavior has changed: mempolicy-
> > > constrained oom conditions are now affected by a compulsory
> > > panic_on_oom mode, please see the patch description.  It's absolutely
> > > insane for a single sysctl mode to panic the machine anytime a cpuset
> > > or mempolicy runs out of memory, and the mode is more prone to user
> > > error from setting it without fully understanding the ramifications
> > > than any use it will ever serve.  The kernel already provides a
> > > mechanism for doing this, OOM_DISABLE.  If you want your cpuset or
> > > mempolicy to risk panicking the machine, set all tasks that share its
> > > mems or nodes, respectively, to OOM_DISABLE.  This is no different
> > > from the memory controller being immune to such panic_on_oom
> > > conditions; stop believing that it is the only mechanism used in the
> > > kernel to do memory isolation.
> > > 
> > You don't explain why "we _have to_ remove an API that is in use".
> > 
> 
> First, I'm not stating that we _have_ to remove anything, this is a patch 
> proposal that is open for review.
> 
> Second, I believe we _should_ remove panic_on_oom == 2 because it's no 
> longer being used as it was documented: as we've increased the exposure of 
> the oom killer (memory controller, pagefault ooms, now mempolicy tasklist 
> scanning), we constantly have to re-evaluate the semantics of this option 
> while a well-understood tunable with a long history, OOM_DISABLE, already 
> does the equivalent.  The downside of getting this wrong is that the
> machine panics when it shouldn't because of an unintended consequence of
> the mode being enabled (for example, a user-created mempolicy ooms).
> When reconsidering its semantics, I'd personally err on the safe side:
> make sure the machine doesn't panic unnecessarily, and instead require
> users to set OOM_DISABLE on tasks they do not want oom killed.
> 
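
For reference, what you propose amounts to something like the following
(a sketch; the cpuset name "mygroup" and the /dev/cpuset mount point are
only illustrative):

	# Mark every task in the cpuset as immune to the oom killer,
	# instead of relying on panic_on_oom to cover the group.
	for pid in $(cat /dev/cpuset/mygroup/tasks); do
		echo -17 > /proc/$pid/oom_adj	# -17 == OOM_DISABLE
	done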

Please don't. I had a chance to talk with our customer support team
about panic_on_oom. I learned that panic_on_oom=always + kdump is their
strongest tool for investigating a customer's OOM situation and giving
them the best advice. panic_on_oom=always + kdump yields a complete
snapshot of the system at the moment the oom-killer fires, so it's easy
to investigate and explain what went wrong. They sometimes discover a
memory leak (in some proprietary driver) or a misconfiguration of the
system (such as use of unnecessary bounce buffers).
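
The setup they rely on is roughly the following (a sketch; the
crashkernel size and the kernel/initrd paths are illustrative and vary
by distro):

	# Reserve memory for the capture kernel (kernel command line):
	#	crashkernel=128M

	# Panic on any OOM, including constrained ones:
	echo 2 > /proc/sys/vm/panic_on_oom

	# Load the capture kernel so that the panic produces a vmcore
	# which can be analyzed after reboot:
	kexec -p /boot/vmlinuz-kdump --initrd=/boot/initrd-kdump \
		--append="root=/dev/sda1 irqpoll maxcpus=1"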

So please leave panic_on_oom=always in place.
Even for a mempolicy or cpuset OOM, we need the panic_on_oom=always option.
And yes, I'll add something similar to memcg, freeze_at_oom or something
like that.

Thanks,
-Kame




