Date:	Wed, 25 Aug 2010 19:50:22 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Oleg Nesterov <oleg@...hat.com>,
	Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 1/2][BUGFIX] oom: remove totalpage normalization from
 oom_badness()

On Thu, 26 Aug 2010, KAMEZAWA Hiroyuki wrote:

> Hmm. I'll add text like the following to cgroup/memory.txt. O.K.?
> 
> ==
> Notes on oom_score and oom_score_adj.
> 
> oom_score is calculated as
> 	oom_score = (task's proportion of memory) + oom_score_adj.
> 

I'd replace "memory" with "memory limit (or memsw limit)" so it's clear 
we're talking about the amount of memory available to the task.
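
For illustration (my numbers, and assuming the same 0..1000 scale that 
oom_score_adj itself uses), a task using 1G against a 4G memsw limit 
with an oom_score_adj of +300 would score roughly

	oom_score = 1000 * 1G/4G + 300 = 250 + 300 = 550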

> Then, when you use oom_score_adj to control the order of priority of oom,
> you should know about the amount of memory you can use.

Hmm, you only need to know the amount of memory that you can use if you 
know the memcg limit and it's a static value.  Otherwise, you only need 
to know the "memory usage of your application relative to others in the 
same cgroup."  An oom_score_adj of +300 adds 30% of that memcg's limit 
to the task's score, so all other tasks can use 30% more memory than 
that task and it will still be killed first.  An oom_score_adj of -300 
allows that task to use 30% more memory than other tasks without getting 
killed.  Neither case requires knowing the actual limit.
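
To make that ordering concrete (made-up numbers, same scale as above): 
in a 4G memcg, a task using 2G scores 500 while a task using 1G with an 
oom_score_adj of +300 scores 250 + 300 = 550, so the smaller task is 
still selected first even though it uses half the memory.  The +300 
shifts the ordering by 30% of whatever the limit happens to be.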

> So, an approximate oom_score under memcg can be
> 
>  memcg_oom_score = (oom_score - oom_score_adj) * system_memory/memcg's limit
> 		+ oom_score_adj.
> 

Right, that's the exact score within the memcg.
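
As a quick sanity check on that conversion (hypothetical numbers, taking 
the formula above at face value): with 16G of system memory and a 4G 
memcg limit, a task using 1G with an oom_score_adj of 0 has a global 
oom_score of about 1000 * 1G/16G = ~62, and

	memcg_oom_score = 62 * 16G/4G + 0 = ~250

which is exactly 1G out of the 4G limit on the 0..1000 scale.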

But I still wouldn't encourage a formula like this because the memcg 
limit (or cpuset mems, mempolicy nodes, etc.) is dynamic and may change 
out from under us.  So it's more important to define oom_score_adj in 
the user's mind as a proportion of the available memory that gets added 
(either positively or negatively) to the task's memory usage when 
comparing it to other tasks.  The point is that the memcg limit isn't 
interesting in this formula; what matters is the priority of the task 
_compared_ to the memory usage of other tasks in that memcg.

It probably would be helpful, though, to know that if a vital system 
task uses 1G in a 4G memcg, for instance, an oom_score_adj of -250 will 
disable oom killing for it.  If that task leaks memory or becomes 
significantly larger, for whatever reason, it can still be killed, but 
we _can_ discount the 1G in comparison to other tasks as the "cost of 
doing business" when it comes to vital system tasks:

	(memory usage) * (memory+swap limit / system memory)
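
The arithmetic behind that -250, to spell it out: the vital task's 1G is 
25% of the 4G limit, so -250 on the 0..1000 scale subtracts the 
equivalent of that 1G from its score (1000 * 1G/4G - 250 = 0), and any 
usage up to 1G is effectively discounted.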