Date:	Tue, 17 Feb 2009 15:36:58 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	balbir@...ux.vnet.ibm.com
Cc:	linux-mm@...ck.org, Sudhir Kumar <skumar@...ux.vnet.ibm.com>,
	YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
	Bharata B Rao <bharata@...ibm.com>,
	Paul Menage <menage@...gle.com>, lizf@...fujitsu.com,
	linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	David Rientjes <rientjes@...gle.com>,
	Pavel Emelianov <xemul@...nvz.org>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC][PATCH 0/4] Memory controller soft limit patches (v2)

On Tue, 17 Feb 2009 11:09:03 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:

> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-02-17 14:10:39]:
> 
> > On Tue, 17 Feb 2009 10:11:10 +0530
> > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> > 
> > > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-02-17 13:03:52]:
> > > 
> > > > On Tue, 17 Feb 2009 08:35:26 +0530
> > > > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> > > > I don't want to add any big new burden on the memory-management kernel hackers;
> > > > they work hard to improve memory reclaim, and this patch will change the behavior.
> > > > 
> > > 
> > > I don't think I agree. This approach suggests that, before doing global
> > > reclaim, there are several groups using more than their share of
> > > memory, so it makes sense to reclaim from them first.
> > > 
> > > > BTW, in a typical bad case, several threads on different CPUs go into memory reclaim
> > > > at once, all of them visit this memcg's soft-limit tree at once, and the soft limit
> > > > will not work as desired anyway.
> > > > You can't avoid this problem on the alloc_page() hot path.
> > > 
> > > Even if all threads go into soft-reclaim at once, the tree will become
> > > empty after a point and we will just return saying there are no more
> > > memcgs to reclaim from (we remove the memcg from the tree when
> > > reclaiming); then those threads will go into regular reclaim if there
> > > is still memory pressure.
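
(For illustration, here is a userspace toy of the selection loop described
above, as I read it; every name in it is made up for this sketch and is not
taken from the patch:)

/* Toy model: repeatedly pick the memcg with the largest excess over its
 * soft limit, drop it from the candidate set ("the tree"), and reclaim.
 * Once nothing is left over its soft limit, fall back to regular reclaim. */
#include <stdio.h>

struct toy_memcg {
	const char *name;
	long usage;		/* MB */
	long soft_limit;	/* MB */
	int on_tree;		/* still a reclaim candidate? */
};

static struct toy_memcg groups[] = {
	{ "01", 200, 100, 1 },
	{ "02", 300, 200, 1 },
	{ "03", 500, 300, 1 },
};

/* Stand-in for the largest-excess lookup done against the rb-tree. */
static struct toy_memcg *largest_excess(void)
{
	struct toy_memcg *best = NULL;
	long best_excess = 0;
	unsigned int i;

	for (i = 0; i < sizeof(groups) / sizeof(groups[0]); i++) {
		long excess = groups[i].usage - groups[i].soft_limit;
		if (groups[i].on_tree && excess > best_excess) {
			best = &groups[i];
			best_excess = excess;
		}
	}
	return best;
}

int main(void)
{
	struct toy_memcg *victim;

	while ((victim = largest_excess())) {
		victim->on_tree = 0;	/* removed from the tree while reclaiming */
		printf("soft-reclaim %ldMB from %s\n",
		       victim->usage - victim->soft_limit, victim->name);
		victim->usage = victim->soft_limit;	/* pretend reclaim succeeded */
	}
	printf("tree empty -> regular reclaim\n");
	return 0;
}
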
> > 
> > Yes, the largest-excess group will be removed, so it seems that it doesn't work
> > as designed. Is the rbtree considered just a hint? If so, an rbtree seems to be
> > overkill.
> > 
> > Just a question:
> > Assume memcgs under a hierarchy:
> >    ../group_A/                 usage=1G, soft_limit=900M  hierarchy=1
> >               01/              usage=200M, soft_limit=100M
> >               02/              usage=300M, soft_limit=200M
> >               03/              usage=500M, soft_limit=300M  <==== 200M over.
> >                  004/          usage=200M, soft_limit=100M
> >                  005/          usage=300M, soft_limit=200M
> > 
> > At a memory shortage, group 03's memory will be reclaimed:
> >   - reclaim memory from 03, 03/004, 03/005
> > 
> > When 100M of group 03's memory is reclaimed, group_A's memory is reclaimed at the
> > same time, implicitly. Doesn't this break your rb-tree?
> > 
> > I recommend that the soft limit be applied only to the node at the top of the
> > hierarchy.
> 
> Yes, that can be done, but the reason for putting both was to target
> the right memcg early.
> 
My point is that the sort by rb-tree is broken in the above case.
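
To put numbers on it with the example above (treating 1G as 1000M for
simplicity, and using only the names and figures from that example):

  before reclaim:   group_A  usage 1000M, soft_limit 900M -> excess 100M
                    03       usage  500M, soft_limit 300M -> excess 200M  (picked first)

  reclaim 100M from 03 (and its children 004/005); because charges are
  hierarchical, group_A's usage drops by the same 100M implicitly:

  after reclaim:    03       usage  400M -> excess 100M
                    group_A  usage  900M -> excess   0M

group_A (and 004/005) were keyed in the rb-tree by their excess before the
reclaim, as I understand the patch, so unless they are re-sorted as well the
tree's order no longer reflects the real excess. That is what I mean by the
sort being broken.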

Thanks,
-Kame
