Date: Tue, 5 Jan 2010 12:26:33 +0900
From: Daisuke Nishimura <d-nishimura@....biglobe.ne.jp>
To: Greg KH <greg@...ah.com>
Cc: stable <stable@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	David Rientjes <rientjes@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: [stable][BUGFIX][PATCH v3] memcg: avoid oom-killing innocent task in case of use_hierarchy

On Mon, 4 Jan 2010 14:28:19 -0800, Greg KH <greg@...ah.com> wrote:
> On Thu, Dec 17, 2009 at 09:47:24AM +0900, Daisuke Nishimura wrote:
> > Stable team,
> >
> > Can you pick this up for 2.6.32.y (and 2.6.31.y, if it will be released)?
> >
> > This is a for-stable version of a bugfix patch that corresponds to the
> > upstream commit d31f56dbf8bafaacb0c617f9a6f137498d5c7aed.
>
> I've applied it to the .32-stable tree, but it does not apply to .31.
> Care to provide a version of the patch for that kernel if you want it
> applied there?
>
Hmm, strange. I can apply it onto 2.6.31.9. Might it conflict with other
patches in the 2.6.31.y queue?

Anyway, I've attached the patch rebased on 2.6.31.9. Please tell me if
you have any problem with it.

v3: rebased on 2.6.31.9
===
>From 14cd608eef94c851460d3d56e0c676d17ecc64f2 Mon Sep 17 00:00:00 2001
From: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Date: Tue, 5 Jan 2010 12:15:42 +0900
Subject: [PATCH] memcg: avoid oom-killing innocent task in case of use_hierarchy

task_in_mem_cgroup(), which is called by select_bad_process() to check
whether a task can be a candidate for being oom-killed from a memcg's
limit, checks "curr->use_hierarchy" ("curr" is the mem_cgroup the task
belongs to).

But this check returns true (a false positive) when:

	<some path>/00		use_hierarchy == 0	<- hitting limit
	<some path>/00/aa	use_hierarchy == 1	<- "curr"

This leads to killing an innocent task in 00/aa. This patch fixes this
bug.

This patch also fixes the argument of mem_cgroup_print_oom_info(). We
should print information about the mem_cgroup to which the task being
killed, not current, belongs.

Signed-off-by: Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Reviewed-by: Balbir Singh <balbir@...ux.vnet.ibm.com>
---
 mm/memcontrol.c |    8 +++++++-
 mm/oom_kill.c   |    2 +-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fd4529d..566925e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -496,7 +496,13 @@ int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *mem)
 	task_unlock(task);
 	if (!curr)
 		return 0;
-	if (curr->use_hierarchy)
+	/*
+	 * We should check use_hierarchy of "mem", not "curr". Checking
+	 * use_hierarchy of "curr" here makes this function return true if
+	 * hierarchy is enabled in "curr" and "curr" is a child of "mem" in
+	 * the *cgroup* hierarchy (even if use_hierarchy is disabled in "mem").
+	 */
+	if (mem->use_hierarchy)
 		ret = css_is_ancestor(&curr->css, &mem->css);
 	else
 		ret = (curr == mem);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index a7b2460..ed452e9 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -400,7 +400,7 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	cpuset_print_task_mems_allowed(current);
 	task_unlock(current);
 	dump_stack();
-	mem_cgroup_print_oom_info(mem, current);
+	mem_cgroup_print_oom_info(mem, p);
 	show_mem();
 	if (sysctl_oom_dump_tasks)
 		dump_tasks(mem);
-- 
1.6.3.3
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/