Message-ID: <1498079956-24467-4-git-send-email-guro@fb.com>
Date: Wed, 21 Jun 2017 22:19:13 +0100
From: Roman Gushchin <guro@...com>
To: <linux-mm@...ck.org>
CC: Roman Gushchin <guro@...com>, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Li Zefan <lizefan@...wei.com>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
<kernel-team@...com>, <cgroups@...r.kernel.org>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: [v3 3/6] mm, oom: cgroup-aware OOM killer debug info
Dump the cgroup oom badness scores, as well as the name
of the chosen victim cgroup.

Here is how it looks in dmesg:
[ 18.824495] Choosing a victim memcg because of the system-wide OOM
[ 18.826911] Cgroup /A1: 200805
[ 18.827996] Cgroup /A2: 273072
[ 18.828937] Cgroup /A2/B3: 51
[ 18.829795] Cgroup /A2/B4: 272969
[ 18.830800] Cgroup /A2/B5: 52
[ 18.831890] Chosen cgroup /A2/B4: 272969
Signed-off-by: Roman Gushchin <guro@...com>
Cc: Tejun Heo <tj@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: Li Zefan <lizefan@...wei.com>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: Vladimir Davydov <vdavydov.dev@...il.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: kernel-team@...com
Cc: cgroups@...r.kernel.org
Cc: linux-doc@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org
---
mm/memcontrol.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bdb5103..4face20 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2669,7 +2669,15 @@ bool mem_cgroup_select_oom_victim(struct oom_control *oc)
 
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
+
+	pr_info("Choosing a victim memcg because of the %s",
+		oc->memcg ?
+		"memory limit reached of cgroup " :
+		"system-wide OOM\n");
 	if (oc->memcg) {
+		pr_cont_cgroup_path(oc->memcg->css.cgroup);
+		pr_cont("\n");
+
 		chosen_memcg = oc->memcg;
 		parent = oc->memcg;
 	}
@@ -2683,6 +2691,10 @@ bool mem_cgroup_select_oom_victim(struct oom_control *oc)
 
 		points = mem_cgroup_oom_badness(iter, oc->nodemask);
+		pr_info("Cgroup ");
+		pr_cont_cgroup_path(iter->css.cgroup);
+		pr_cont(": %ld\n", points);
+
 		if (points > chosen_memcg_points) {
 			chosen_memcg = iter;
 			chosen_memcg_points = points;
 		}
@@ -2731,6 +2743,10 @@ bool mem_cgroup_select_oom_victim(struct oom_control *oc)
 			oc->chosen_memcg = chosen_memcg;
 		}
 
+		pr_info("Chosen cgroup ");
+		pr_cont_cgroup_path(chosen_memcg->css.cgroup);
+		pr_cont(": %ld\n", oc->chosen_points);
+
 		/*
 		 * Even if we have to kill all tasks in the cgroup,
 		 * we need to select the biggest task to start with.
@@ -2739,7 +2755,9 @@ bool mem_cgroup_select_oom_victim(struct oom_control *oc)
 		 */
 		oc->chosen_points = 0;
 		mem_cgroup_scan_tasks(chosen_memcg, oom_evaluate_task, oc);
-	}
+	} else if (oc->chosen)
+		pr_info("Chosen task %s (%d) in root cgroup: %ld\n",
+			oc->chosen->comm, oc->chosen->pid, oc->chosen_points);
 
 	rcu_read_unlock();
--
2.7.4