Message-Id: <20090205135503.f89049e9.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 5 Feb 2009 13:55:03 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: balbir@...ux.vnet.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
"lizf@...fujitsu.com" <lizf@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [-mm patch] Show memcg information during OOM (v3)
On Thu, 05 Feb 2009 12:00:05 +0800
Lai Jiangshan <laijs@...fujitsu.com> wrote:
>
> > +void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
> > +{
> > + struct cgroup *task_cgrp;
> > + struct cgroup *mem_cgrp;
> > + /*
> > + * Need a buffer in BSS, can't rely on allocations. The code relies
> > + * on the assumption that OOM is serialized for memory controller.
> > + * If this assumption is broken, revisit this code.
> > + */
> > + static char task_memcg_name[PATH_MAX];
> > + static char memcg_name[PATH_MAX];
>
> Is there any lock which protects this static data?
>
As commented, serialized by the memory cgroup; see the memcg_tasklist mutex.
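
Roughly, the caller side looks like this (a sketch from memory of the
charge path in memcontrol.c; names and details may differ in your tree):

	static DEFINE_MUTEX(memcg_tasklist);

	/*
	 * In __mem_cgroup_try_charge(): the OOM killer is only entered
	 * under the mutex, so the static buffers used by
	 * mem_cgroup_print_oom_info() are never touched concurrently.
	 */
	if (oom) {
		mutex_lock(&memcg_tasklist);
		mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
		mutex_unlock(&memcg_tasklist);
	}
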
> > + int ret;
> > +
> > + if (!memcg)
> > + return;
> > +
> > + mem_cgrp = memcg->css.cgroup;
> > + task_cgrp = mem_cgroup_from_task(p)->css.cgroup;
> > +
> > + rcu_read_lock();
> > + ret = cgroup_path(task_cgrp, task_memcg_name, PATH_MAX);
> > + if (ret < 0) {
> > + /*
> > + * Unfortunately, we are unable to convert to a useful name
> > + * But we'll still print out the usage information
> > + */
> > + rcu_read_unlock();
> > + goto done;
> > + }
> > + ret = cgroup_path(mem_cgrp, memcg_name, PATH_MAX);
> > + if (ret < 0) {
> > + rcu_read_unlock();
> > + goto done;
> > + }
> > +
> > + rcu_read_unlock();
>
> IIRC, a preempt_enable() will add about 50 bytes to the kernel size.
>
I think the compiler does a good job here....
> I think these lines would also be better for readability:
>
> rcu_read_lock();
> ret = cgroup_path(task_cgrp, task_memcg_name, PATH_MAX);
> if (ret >= 0)
> ret = cgroup_path(mem_cgrp, memcg_name, PATH_MAX);
> rcu_read_unlock();
>
In the mmotm set there are not two buffers, just one.
Sorry; if you have comments, a patch against mmotm is welcome.
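
For reference, a rough sketch of the single-buffer form (my reading of
the mmotm code, not a verbatim copy):

	static char memcg_name[PATH_MAX];	/* one buffer, reused */
	int ret;

	rcu_read_lock();
	ret = cgroup_path(task_cgrp, memcg_name, PATH_MAX);
	rcu_read_unlock();
	if (ret < 0)
		goto done;
	/* print the first name before the buffer is overwritten */
	printk(KERN_INFO "Task in %s killed", memcg_name);

	rcu_read_lock();
	ret = cgroup_path(mem_cgrp, memcg_name, PATH_MAX);
	rcu_read_unlock();
	if (ret < 0)
		goto done;
	printk(KERN_CONT " as a result of limit of %s\n", memcg_name);
done:
	/* the usage/limit printks follow, as in the patch above */
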
Thanks,
-Kame
> if (ret < 0) {
> /*
> * Unfortunately, we are unable to convert to a useful name
> * But we'll still print out the usage information
> */
> goto done;
> }
>
> Lai
>
> > +
> > + printk(KERN_INFO "Task in %s killed as a result of limit of %s\n",
> > + task_memcg_name, memcg_name);
> > +done:
> > +
> > + printk(KERN_INFO "memory: usage %llukB, limit %llukB, failcnt %llu\n",
> > + res_counter_read_u64(&memcg->res, RES_USAGE) >> 10,
> > + res_counter_read_u64(&memcg->res, RES_LIMIT) >> 10,
> > + res_counter_read_u64(&memcg->res, RES_FAILCNT));
> > + printk(KERN_INFO "memory+swap: usage %llukB, limit %llukB, "
> > + "failcnt %llu\n",
> > + res_counter_read_u64(&memcg->memsw, RES_USAGE) >> 10,
> > + res_counter_read_u64(&memcg->memsw, RES_LIMIT) >> 10,
> > + res_counter_read_u64(&memcg->memsw, RES_FAILCNT));
> > +}
> > +
> > /*
> > * Unlike exported interface, "oom" parameter is added. if oom==true,
> > * oom-killer can be invoked.
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index d3b9bac..2f3166e 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -394,6 +394,7 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
> > cpuset_print_task_mems_allowed(current);
> > task_unlock(current);
> > dump_stack();
> > + mem_cgroup_print_oom_info(mem, current);
> > show_mem();
> > if (sysctl_oom_dump_tasks)
> > dump_tasks(mem);
> >
>
>