Message-Id: <20090108132855.77d3d3d4.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 8 Jan 2009 13:28:55 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Sudhir Kumar <skumar@...ux.vnet.ibm.com>,
YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
Paul Menage <menage@...gle.com>, lizf@...fujitsu.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
David Rientjes <rientjes@...gle.com>,
Pavel Emelianov <xemul@...nvz.org>
Subject: Re: [RFC][PATCH 3/4] Memory controller soft limit organize cgroups
On Thu, 8 Jan 2009 09:55:58 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-01-08 10:11:48]:
> > Hmm, Could you clarify following ?
> >
> > - Memory usage at insertion and memory usage at reclaim are different.
> >   So, this *sorted* order from the RB-tree isn't the best order in general.
>
> True, but we frequently update the tree at an interval of HZ/4.
> Updating at every page fault sounded like overkill, and building the
> entire tree at reclaim is overkill too.
>
"sort" is not necessary.
If this feature is implemented as a background daemon,
just selecting the worst one at each iteration is enough.
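[Editor's note: the iteration Kame suggests can be sketched in plain C. This is a toy model, not kernel code; `struct cgroup`, `usage`, and `soft_limit` are illustrative stand-ins for the memcg structures, and the scan replaces the sorted RB-tree with a linear pass per reclaim round.]

```c
#include <stddef.h>

/* Toy stand-in for a memory cgroup: only the fields the scan needs. */
struct cgroup {
    long usage;      /* currently charged bytes */
    long soft_limit; /* soft limit in bytes */
};

/*
 * One iteration of the background scan: walk all groups and pick the
 * one furthest over its soft limit.  Groups at or under the limit are
 * skipped, so no sorted structure has to be maintained at charge time.
 */
struct cgroup *pick_worst(struct cgroup *groups, size_t n)
{
    struct cgroup *worst = NULL;
    long worst_excess = 0;

    for (size_t i = 0; i < n; i++) {
        long excess = groups[i].usage - groups[i].soft_limit;
        if (excess > worst_excess) {
            worst_excess = excess;
            worst = &groups[i];
        }
    }
    return worst; /* NULL when nobody exceeds its soft limit */
}
```

The trade-off versus the patch under review: the sorted tree pays at charge/update time, while this scan pays O(n) at each reclaim iteration, which is cheap when the number of cgroups is small.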
> > Why don't you sort this at memory-reclaim dynamically ?
> > - Considering above, the look of RB tree can be
> >
> > +30M (an amount over soft limit is 30M)
> > / \
> > -15M +60M
>
> We don't have elements below their soft limit in the tree
>
> > ?
> >
> > At least, please remove the node at uncharge() when the usage goes down.
> >
>
> We do remove the tree if it goes under its soft limit at commit_charge,
> I thought I had the same code in uncharge(), but clearly that is
> missing. Thanks, I'll add it there.
>
Ah, ok. I missed it. Thank you for the clarification.
Regards,
-Kame
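
[Editor's note: the uncharge-side removal Balbir agrees to add can be sketched as below. This is an illustrative toy, not the actual patch; `struct mem_cgroup`, the `on_tree` flag, and `soft_limit_tree_remove()` are hypothetical stand-ins for the real rb_node membership and rb_erase() call.]

```c
#include <stddef.h>
#include <stdbool.h>

/* Toy memcg; on_tree stands in for an rb_node linked into the excess tree. */
struct mem_cgroup {
    long usage;
    long soft_limit;
    bool on_tree;
};

/* Hypothetical removal helper; real code would take a lock and rb_erase(). */
static void soft_limit_tree_remove(struct mem_cgroup *memcg)
{
    memcg->on_tree = false;
}

/*
 * Uncharge path: after dropping the charge, a group whose usage fell
 * to or below its soft limit no longer belongs in the excess tree,
 * mirroring the removal already done at commit_charge.
 */
void mem_cgroup_uncharge(struct mem_cgroup *memcg, long nr_bytes)
{
    memcg->usage -= nr_bytes;
    if (memcg->on_tree && memcg->usage <= memcg->soft_limit)
        soft_limit_tree_remove(memcg);
}
```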