Message-Id: <20100826093923.d4ac29b6.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 26 Aug 2010 09:39:23 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: David Rientjes <rientjes@...gle.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 1/2][BUGFIX] oom: remove totalpage normalization from
oom_badness()
On Wed, 25 Aug 2010 03:25:25 -0700 (PDT)
David Rientjes <rientjes@...gle.com> wrote:
>
> > 3) No reason to implement ABI breakage.
> > old tuning parameter mean)
> > oom-score = oom-base-score x 2^oom_adj
>
> Everybody knows this is useless beyond polarizing a task for kill or
> making it immune.
>
> > new tuning parameter mean)
> > oom-score = oom-base-score + oom_score_adj / (totalram + totalswap)
>
> This, on the other hand, has an actual unit (proportion of available
> memory) that can be used to prioritize tasks amongst those competing for
> the same set of shared resources and remains constant even when a task
> changes cpuset, its memcg limit changes, etc.
>
> And your equation is wrong, it's
>
> ((rss + swap) / (available ram + swap)) + oom_score_adj
>
> which is completely different from what you think it is.
>
I'm now trying to write a userspace tool to calculate this for myself.
Could you update the documentation? The current text is:
==
3.2 /proc/<pid>/oom_score - Display current oom-killer score
-------------------------------------------------------------
This file can be used to check the current score used by the oom-killer for
any given <pid>. Use it together with /proc/<pid>/oom_adj to tune which
process should be killed in an out-of-memory situation.
==
and add some documentation like:
==
(For system monitoring tool developers, not for usual users.)
The oom_score calculation is implementation dependent and can be modified
without any notice. The current logic is

  oom_score = ((proc's rss + proc's swap) * 1000 / (available ram + swap))
              + oom_score_adj

proc's rss and swap can be obtained from /proc/<pid>/statm, while
"available ram + swap" depends on the situation.

If the whole system is under oom:
  available ram  == /proc/meminfo's MemTotal
  available swap == (in most cases) /proc/meminfo's SwapTotal

When you use memory cgroup:
  When swap is limited,   available ram + swap == memory cgroup's memsw limit.
  When swap is unlimited, available ram + swap == memory cgroup's memory limit
                          + SwapTotal

Please be careful that the order of oom_score among tasks depends on the
situation. Assume 2 processes A and B, which have oom_score_adj of 300 and 0,
where A uses 200M and B uses 1G of memory.

Under a 4G system:
  A's score = (200M * 1000)/4G + 300 =~ 350
  B's score = (1G * 1000)/4G         =  250

In a memory cgroup with 2G of resource:
  A's score = (200M * 1000)/2G + 300 =~ 400
  B's score = (1G * 1000)/2G         =  500

You shouldn't depend on /proc/<pid>/oom_score if you have to handle OOM under
cgroups and cpusets. But the logic is simple.
==
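The arithmetic above can be sketched as a tiny userspace calculator; a minimal
sketch in Python, assuming the *1000 scaling used in the examples, integer
division as in the kernel, and all sizes given in bytes (the helper name
oom_score and the hardcoded inputs are illustrative, not an actual kernel API):

```python
# Minimal sketch: estimated oom_score as memory share in permille
# plus oom_score_adj, mirroring the examples in the text above.

def oom_score(rss_bytes, swap_bytes, available_bytes, oom_score_adj):
    """Estimated badness for one task against a given amount of
    available ram + swap (system total or memcg limit)."""
    return (rss_bytes + swap_bytes) * 1000 // available_bytes + oom_score_adj

G = 1 << 30
M = 1 << 20

# The two examples from the text: A uses 200M (adj=300), B uses 1G (adj=0).
# Under the 4G system:
print(oom_score(200 * M, 0, 4 * G, 300))  # 348 (the text rounds to ~350)
print(oom_score(1 * G, 0, 4 * G, 0))      # 250

# In a memory cgroup limited to 2G:
print(oom_score(200 * M, 0, 2 * G, 300))  # 397 (the text rounds to ~400)
print(oom_score(1 * G, 0, 2 * G, 0))      # 500
```

Note how A and B swap places between the two settings, which is exactly why
the score order among tasks depends on the situation.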
If you don't want this, I'll add the text and a sample tool to
cgroup/memory.txt instead.
Thanks,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/