Date:	Thu, 8 Jan 2009 13:57:28 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	balbir@...ux.vnet.ibm.com
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Sudhir Kumar <skumar@...ux.vnet.ibm.com>,
	YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
	Paul Menage <menage@...gle.com>, lizf@...fujitsu.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	David Rientjes <rientjes@...gle.com>,
	Pavel Emelianov <xemul@...nvz.org>
Subject: Re: [RFC][PATCH 3/4] Memory controller soft limit organize cgroups

On Thu, 8 Jan 2009 10:11:08 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:

> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-01-08 13:28:55]:
> 
> > On Thu, 8 Jan 2009 09:55:58 +0530
> > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> > 
> > > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-01-08 10:11:48]:
> > > > Hmm, could you clarify the following?
> > > >
> > > >   - The usage of memory at insertion and the usage of memory at reclaim differ,
> > > >     so this *sorted* order from the RB-tree isn't the best order in general.
> > > 
> > > True, but we update the tree frequently, at an interval of HZ/4.
> > > Updating at every page fault sounded like overkill, and building the
> > > entire tree at reclaim is overkill too.
> > > 
> > "sort" is not necessary.
> > If this feature is implemented as background daemon,
> > just select the worst one at each iteration is enough.
> 
> OK, definitely an alternative worth considering, but the trade-off is
> lazy building (your suggestion), which involves actively scanning the
> usage of all cgroups (and that O(c), where c is the number of cgroups,
> can be quite a bit when there are many of them) versus building the
> tree as and when faults occur, controlled by some interval.
> 
I don't think there will ever be "thousands" of memcgs. O(c) is not so bad
if it runs in the background.

But the usual cost of adding the res_counter_soft_limit_excess(&mem->res)
call is big... This maintenance cost of the tree is paid all the time, even
while there is no memory shortage.
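
For what it's worth, a rough sketch of that kind of background scan
(for_each_mem_cgroup() and the mem->res field are only assumed names here,
for illustration, not necessarily what your patch set provides):

	/*
	 * Illustrative sketch only: pick the memcg with the largest
	 * soft-limit excess by scanning all of them once -- O(c) in the
	 * number of cgroups.
	 */
	static struct mem_cgroup *pick_worst_soft_limit_offender(void)
	{
		struct mem_cgroup *iter, *worst = NULL;
		unsigned long long excess, max_excess = 0;

		for_each_mem_cgroup(iter) {
			excess = res_counter_soft_limit_excess(&iter->res);
			if (excess > max_excess) {
				max_excess = excess;
				worst = iter;
			}
		}
		return worst;
	}

The point is that all of this cost stays in the background, out of the
charge path.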

BTW,
- The mutex is a problem. Can you take a mutex while __GFP_WAIT is unset?

- What happens when a big uncharge() occurs and no new charge() happens?
  Please add, at least, something like this (a rough sketch follows below):

   +		mem = mem_cgroup_get_largest_soft_limit_exceeding_node();
   +		if (mem is still over its soft limit)
   +			do reclaim....
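
For example, roughly (mem_cgroup_get_largest_soft_limit_exceeding_node() is
your helper above; do_soft_limit_reclaim() is only a hypothetical stand-in
for whatever reclaim call the patch uses):

	/*
	 * Sketch only: re-check the excess on each iteration, because a big
	 * uncharge() may have dropped the top node below its soft limit
	 * since the tree was last updated.
	 */
	static void soft_limit_reclaim_loop(unsigned long nr_to_reclaim)
	{
		while (nr_to_reclaim) {
			struct mem_cgroup *mem;
			unsigned long reclaimed;

			mem = mem_cgroup_get_largest_soft_limit_exceeding_node();
			if (!mem || !res_counter_soft_limit_excess(&mem->res))
				break;	/* nothing is over its soft limit any more */

			reclaimed = do_soft_limit_reclaim(mem);	/* hypothetical */
			nr_to_reclaim -= min(reclaimed, nr_to_reclaim);
		}
	}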

-Kame


