Message-Id: <20100217180405.fae00f46.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 17 Feb 2010 18:04:05 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
rientjes@...gle.com,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
npiggin@...e.de
Subject: [PATCH] memcg: handle panic_on_oom=always case v2
Documentation is updated.
==
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Currently, if panic_on_oom=2, the whole system panics even if the oom happened
in some special situation (such as under a cpuset or mempolicy constraint).
In other words, panic_on_oom=2 means panic_on_oom_always.
However, memcg doesn't check the panic_on_oom flag. This patch adds the check.
BTW, how is it useful?
kdump+panic_on_oom=2 is the last tool to investigate what happens in an oom-ed
system. When a task is killed, the system recovers and there are few hints
left about what happened. In a mission-critical system, oom should never happen.
So, panic_on_oom=2+kdump is useful for avoiding the next OOM, by capturing
precise information via a snapshot.
TODO:
- Because memcg is for isolating the system's memory usage, an oom-notifier and
freeze_at_oom (or rest_at_oom) should be implemented. Then, a management
daemon can do similar jobs (as kdump does) or take a snapshot per cgroup.
Changelog:
- rewrote documentation.
CC: Balbir Singh <balbir@...ux.vnet.ibm.com>
CC: David Rientjes <rientjes@...gle.com>
CC: Nick Piggin <npiggin@...e.de>
Reviewed-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
Documentation/cgroups/memory.txt | 5 ++++-
Documentation/sysctl/vm.txt | 5 ++++-
mm/oom_kill.c | 2 ++
3 files changed, 10 insertions(+), 2 deletions(-)
Index: mmotm-2.6.33-Feb11/Documentation/cgroups/memory.txt
===================================================================
--- mmotm-2.6.33-Feb11.orig/Documentation/cgroups/memory.txt
+++ mmotm-2.6.33-Feb11/Documentation/cgroups/memory.txt
@@ -182,6 +182,8 @@ list.
NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.
+Note2: When panic_on_oom is set to "2", the whole system will panic.
+
2. Locking
The memory controller uses the following hierarchy
@@ -379,7 +381,8 @@ The feature can be disabled by
NOTE1: Enabling/disabling will fail if the cgroup already has other
cgroups created below it.
-NOTE2: This feature can be enabled/disabled per subtree.
+NOTE2: When panic_on_oom is set to "2", the whole system will panic in
+case of an oom event in any cgroup.
7. Soft limits
Index: mmotm-2.6.33-Feb11/Documentation/sysctl/vm.txt
===================================================================
--- mmotm-2.6.33-Feb11.orig/Documentation/sysctl/vm.txt
+++ mmotm-2.6.33-Feb11/Documentation/sysctl/vm.txt
@@ -573,11 +573,14 @@ Because other nodes' memory may be free.
may be not fatal yet.
If this is set to 2, the kernel panics compulsorily even on the
-above-mentioned.
+above-mentioned. Even if oom happens under a memory cgroup, the whole
+system panics.
The default value is 0.
1 and 2 are for failover of clustering. Please select either
according to your policy of failover.
+panic_on_oom=2+kdump gives you a very strong tool to investigate
+why oom happens. You can get a snapshot.
=============================================================
Index: mmotm-2.6.33-Feb11/mm/oom_kill.c
===================================================================
--- mmotm-2.6.33-Feb11.orig/mm/oom_kill.c
+++ mmotm-2.6.33-Feb11/mm/oom_kill.c
@@ -471,6 +471,8 @@ void mem_cgroup_out_of_memory(struct mem
unsigned long points = 0;
struct task_struct *p;
+ if (sysctl_panic_on_oom == 2)
+ panic("out of memory(memcg). panic_on_oom is selected.\n");
read_lock(&tasklist_lock);
retry:
p = select_bad_process(&points, mem);
--