Message-Id: <20090203170427.c6070cda.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 3 Feb 2009 17:04:27 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
"lizf@...fujitsu.com" <lizf@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [-mm patch] Show memcg information during OOM (v2)
On Tue, 3 Feb 2009 12:57:01 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> Checkpatch caught an additional space, so here is the patch again
>
>
> Description: Add RSS and swap to OOM output from memcg
>
> From: Balbir Singh <balbir@...ux.vnet.ibm.com>
>
> Changelog v2..v1:
>
> 1. Add more information about the task's memcg and the memcg
> over its limit
> 2. Print data in KB
> 3. Move the print routine outside task_lock()
> 4. Use rcu_read_lock() around cgroup_path(); strictly speaking it
> is not required, but relying on the current memcg implementation
> is not a good idea.
>
>
> This patch displays memcg values like failcnt, usage and limit
> when an OOM occurs due to memcg.
>
> Thanks go out to Johannes Weiner, Li Zefan, David Rientjes,
> Kamezawa Hiroyuki, Daisuke Nishimura and KOSAKI Motohiro for
> review.
>
IIUC, this oom_kill path is serialized by the memcg_tasklist mutex,
so you don't have to allocate the buffers on the stack.
> +void mem_cgroup_print_mem_info(struct mem_cgroup *memcg, struct task_struct *p)
> +{
> + struct cgroup *task_cgrp;
> + struct cgroup *mem_cgrp;
> + /*
> + * Need a buffer on stack, can't rely on allocations.
> + */
> + char task_memcg_name[MEM_CGROUP_OOM_BUF_SIZE];
> + char memcg_name[MEM_CGROUP_OOM_BUF_SIZE];
> + int ret;
> +
Making these
	static char task_memcg_name[PATH_MAX];
	static char memcg_name[PATH_MAX];
is OK, I think, and the patch will be simpler.
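
Just to illustrate (a rough, untested sketch, not a replacement patch):
assuming memcg_tasklist really serializes every caller of this function,
it could shrink to something like:

void mem_cgroup_print_mem_info(struct mem_cgroup *memcg, struct task_struct *p)
{
	/*
	 * Static buffers are fine here only because this path is
	 * serialized by the memcg_tasklist mutex (assumption above).
	 */
	static char task_memcg_name[PATH_MAX];
	static char memcg_name[PATH_MAX];
	int ret;

	if (!memcg)
		return;

	rcu_read_lock();
	ret = cgroup_path(mem_cgroup_from_task(p)->css.cgroup,
			  task_memcg_name, PATH_MAX);
	if (ret >= 0)
		ret = cgroup_path(memcg->css.cgroup, memcg_name, PATH_MAX);
	rcu_read_unlock();

	if (ret >= 0)
		printk(KERN_INFO
		       "Task in %s killed as a result of limit of %s\n",
		       task_memcg_name, memcg_name);

	/* usage/limit/failcnt printks stay as in your patch */
	printk(KERN_INFO "memory: usage %llukB, limit %llukB, failcnt %llu\n",
	       res_counter_read_u64(&memcg->res, RES_USAGE) >> 10,
	       res_counter_read_u64(&memcg->res, RES_LIMIT) >> 10,
	       res_counter_read_u64(&memcg->res, RES_FAILCNT));
}
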
Thanks,
-kame
> + if (!memcg)
> + return;
> +
> + mem_cgrp = memcg->css.cgroup;
> + task_cgrp = mem_cgroup_from_task(p)->css.cgroup;
> +
> + rcu_read_lock();
> + ret = cgroup_path(task_cgrp, task_memcg_name, MEM_CGROUP_OOM_BUF_SIZE);
> + if (ret < 0) {
> + /*
> + * Unfortunately, we are unable to convert to a useful name
> + * But we'll still print out the usage information
> + */
> + rcu_read_unlock();
> + goto done;
> + }
> + ret = cgroup_path(mem_cgrp, memcg_name, MEM_CGROUP_OOM_BUF_SIZE);
> + if (ret < 0) {
> + rcu_read_unlock();
> + goto done;
> + }
> +
> + rcu_read_unlock();
> +
> + printk(KERN_INFO "Task in %s killed as a result of limit of %s\n",
> + task_memcg_name, memcg_name);
> +done:
> +
> + printk(KERN_INFO "memory: usage %llukB, limit %llukB, failcnt %llu\n",
> + res_counter_read_u64(&memcg->res, RES_USAGE) >> 10,
> + res_counter_read_u64(&memcg->res, RES_LIMIT) >> 10,
> + res_counter_read_u64(&memcg->res, RES_FAILCNT));
> + printk(KERN_INFO "memory+swap: usage %llukB, limit %llukB, "
> + "failcnt %llu\n",
> + res_counter_read_u64(&memcg->memsw, RES_USAGE) >> 10,
> + res_counter_read_u64(&memcg->memsw, RES_LIMIT) >> 10,
> + res_counter_read_u64(&memcg->memsw, RES_FAILCNT));
> +}
> +
> /*
> * Unlike exported interface, "oom" parameter is added. if oom==true,
> * oom-killer can be invoked.
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index d3b9bac..951356f 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -394,6 +394,7 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> cpuset_print_task_mems_allowed(current);
> task_unlock(current);
> dump_stack();
> + mem_cgroup_print_mem_info(mem, current);
> show_mem();
> if (sysctl_oom_dump_tasks)
> dump_tasks(mem);
>
> --
> Balbir
>