Message-ID: <4E01C84A.60400@jp.fujitsu.com>
Date: Wed, 22 Jun 2011 19:47:38 +0900
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: kosaki.motohiro@...fujitsu.com
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, caiqian@...hat.com, rientjes@...gle.com,
hughd@...gle.com, kamezawa.hiroyu@...fujitsu.com,
minchan.kim@...il.com, oleg@...hat.com
Subject: [PATCH 3/6] oom: kill younger process first
This patch introduces do_each_thread_reverse() and changes
select_bad_process() to use it. The benefits are twofold: 1) when two
processes have the same oom score, the oom-killer now kills the younger
one; a younger process is usually less important. 2) Younger tasks often
have PF_EXITING set, because shell scripts create many short-lived
processes, and a reverse-order search detects them faster.
Reported-by: CAI Qian <caiqian@...hat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@...il.com>
Acked-by: David Rientjes <rientjes@...gle.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
include/linux/sched.h | 11 +++++++++++
mm/oom_kill.c | 2 +-
2 files changed, 12 insertions(+), 1 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e4e6d7b..392ff30 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2257,6 +2257,9 @@ static inline unsigned long wait_task_inactive(struct task_struct *p,
#define next_task(p) \
list_entry_rcu((p)->tasks.next, struct task_struct, tasks)
+#define prev_task(p) \
+ list_entry((p)->tasks.prev, struct task_struct, tasks)
+
#define for_each_process(p) \
for (p = &init_task ; (p = next_task(p)) != &init_task ; )
@@ -2269,6 +2272,14 @@ extern bool current_is_single_threaded(void);
#define do_each_thread(g, t) \
for (g = t = &init_task ; (g = t = next_task(g)) != &init_task ; ) do
+/*
+ * Similar to do_each_thread(), but with two differences:
+ * - it traverses tasks in reverse order (i.e. younger to older)
+ * - the caller must hold tasklist_lock; rcu_read_lock() isn't enough
+ */
+#define do_each_thread_reverse(g, t) \
+ for (g = t = &init_task ; (g = t = prev_task(g)) != &init_task ; ) do
+
#define while_each_thread(g, t) \
while ((t = next_thread(t)) != g)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 9412657..797308b 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -300,7 +300,7 @@ static struct task_struct *select_bad_process(unsigned int *ppoints,
struct task_struct *chosen = NULL;
*ppoints = 0;
- do_each_thread(g, p) {
+ do_each_thread_reverse(g, p) {
unsigned int points;
if (!p->mm)
--
1.7.3.1