Date:	Sat, 31 May 2008 10:59:16 +0900 (JST)
From:	kamezawa.hiroyu@...fujitsu.com
To:	balbir@...ux.vnet.ibm.com
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
	xemul@...nvz.org, menage@...gle.com, yamamoto@...inux.co.jp,
	lizf@...fujitsu.com
Subject: Re: Re: [RFC][PATCH 1/2] memcg: res_counter hierarchy

>KAMEZAWA Hiroyuki wrote:
>> This patch tries to implements _simple_ 'hierarchy policy' in res_counter.
>> 
>> While several policy of hierarchy can be considered, this patch implements
>> simple one 
>>    - the parent includes, over-commits the child
>>    - there are no shared resource
>
>I am not sure if this is desirable. The concept of a hierarchy applies really
>well when there are shared resources.
>
>>    - dynamic hierarchy resource usage management in the kernel is not necessary
>> 
>
>Could you please elaborate as to why? I am not sure I understand your point
>

Ok, let's consider a _middleware_ which has the following parameters.

An exported parameter to the user:
   - user_memory_limit
A parameter for co-operation with the kernel:
   - kernel_memory_limit

And here,
   user_memory_limit >= kernel_memory_limit == cgroup's memory.limit_in_bytes

When a user asks the middleware to set the limit to 1Gbytes:
   user_memory_limit = 1G
   kernel_memory_limit = 0-1G.
The middleware moves kernel_memory_limit dynamically between 0 and 1Gbytes and
resets limit_in_bytes on the fly while checking the memory cgroup's statistics.
Of course, we can add some kind of interface, such as:
  - failure_notifier - triggered at failcnt increment.
  - threshold_notifier - triggered when usage > threshold.
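
For illustration, a minimal userspace sketch of such a middleware could look
like the following. The /cgroups/A mount point, the 1G user_memory_limit, the
doubling step and the 90% trigger are all assumptions made up for this example
and are not part of the patch; only memory.usage_in_bytes and
memory.limit_in_bytes are existing memcg files.

/*
 * Middleware sketch (illustration only): keep kernel_memory_limit between
 * 0 and user_memory_limit, growing it whenever the group gets close to it.
 */
#include <stdio.h>
#include <unistd.h>

#define CGROUP_DIR        "/cgroups/A"                 /* hypothetical path */
#define USER_MEMORY_LIMIT (1024UL * 1024UL * 1024UL)   /* 1G from the user  */

static unsigned long read_ul(const char *name)
{
	char path[256];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), CGROUP_DIR "/%s", name);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%lu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

static void write_ul(const char *name, unsigned long val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), CGROUP_DIR "/%s", name);
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "%lu\n", val);
		fclose(f);
	}
}

int main(void)
{
	/* start with a small kernel_memory_limit and grow it on demand */
	unsigned long kernel_memory_limit = USER_MEMORY_LIMIT / 8;

	write_ul("memory.limit_in_bytes", kernel_memory_limit);

	for (;;) {
		unsigned long usage = read_ul("memory.usage_in_bytes");

		/* usage close to the current limit: give the group more,
		 * but never more than what the user asked for */
		if (usage * 10 > kernel_memory_limit * 9 &&
		    kernel_memory_limit < USER_MEMORY_LIMIT) {
			kernel_memory_limit *= 2;
			if (kernel_memory_limit > USER_MEMORY_LIMIT)
				kernel_memory_limit = USER_MEMORY_LIMIT;
			write_ul("memory.limit_in_bytes", kernel_memory_limit);
		}
		sleep(1);
	}
	return 0;
}

A real middleware would of course also shrink the limit again and react to
failcnt, but the loop above is the whole idea.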

>> works as following.
>> 
>>  1. create a child. set default child limits to be 0.
>>  2. set limit to child.
>>     2-a. before setting limit to child, prepare enough room in parent.
>>     2-b. increase 'usage' of parent by child's limit.
>
>The problem with this is that you are forcing the parent to run into a reclaim
>loop even if the child is not using the limit assigned to it.
>
That's not a problem because it's avoidable by users.
But it's ok to limit the sum of the children's limits to be below XX % of the parent.
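
To make the charging rule in the steps quoted above (1. through 3.) concrete,
here is a rough sketch of the idea. This is not the actual res_counter code
from the patch; the struct and function names are made up for illustration.

/*
 * Sketch of the policy only: setting a child's limit charges the parent's
 * usage by the same amount, so a parent's usage = its own charges + all the
 * resource handed down to children, and nothing is shared between them.
 */
struct counter {
	struct counter *parent;
	unsigned long usage;	/* own charges + limits given to children */
	unsigned long limit;
};

/* returns 0 on success, -1 if the parent has no room (the caller must
 * first make room in the parent, as in step 2-a) */
static int set_child_limit(struct counter *child, unsigned long new_limit)
{
	struct counter *parent = child->parent;
	long delta = (long)new_limit - (long)child->limit;

	if (parent) {
		if (delta > 0) {
			/* step 2-a/2-b: parent must have room, then it
			 * remembers what it gave to the child */
			if (parent->usage + delta > parent->limit)
				return -1;
			parent->usage += delta;
		} else {
			/* shrinking the child returns room to the parent */
			parent->usage -= -delta;
		}
	}
	child->limit = new_limit;
	return 0;
}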

>>  3. the child sets its limit to the val moved from the parent.
>>     the parent remembers what amount of resource is to the children.
>> 
>
>All of this needs to be dynamic
>
As explained, this can be made dynamic by the middleware.

>>  Above means that
>> 	- a directory's usage implies the sum of all sub directories +
>>           own usage.
>> 	- there are no shared resource between parent <-> child.
>> 
>>  Pros.
>>   - simple and easy policy.
>>   - no hierarchy overhead.
>>   - no resource share among child <-> parent. very suitable for multilevel
>>     resource isolation.
>
>Sharing is an important aspect of hierarchies. I am not convinced of this
>approach. Did you look at the patches I sent out? Was there something
>fundamentally broken in them?
>

Yes, I read them. I tried to make them faster and found it gets complicated.
One problem is the overhead of the counter itself.
Another problem is the overhead of shrinking multi-level LRUs with feedback.
One more problem is that it's hard to implement various kinds of hierarchy
policy; I believe there are other hierarchy policies than the one OpenVZ
wants to use. Kicking functionality out to middleware as much as possible
(AMAP) is what I'm thinking about now.

Thanks,
-Kame



