Message-Id: <20090408175409.eb0818db.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 8 Apr 2009 17:54:09 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"lizf@...fujitsu.com" <lizf@...fujitsu.com>,
Rik van Riel <riel@...riel.com>,
Bharata B Rao <bharata.rao@...ibm.com>,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFI] Shared accounting for memory resource controller
On Wed, 8 Apr 2009 14:19:52 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-04-08 17:03:41]:
>
> > On Wed, 8 Apr 2009 13:18:09 +0530
> > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> >
> > > > > 3. Using the above, we can then try (using an algorithm you
> > > > > proposed) to do some work toward figuring out the shared percentage.
> > > >
> > > > This is the point, at last. Why is the "# of shared pages" important?
> > > >
> > >
> > > I posted this in my motivation yesterday. The # of shared pages can help
> > > in planning the system and sizing the cgroup. A cgroup might have a
> > > small usage_in_bytes but a large number of shared pages. We need a
> > > metric that can help figure out the fair usage of the cgroup.
> > >
> > I don't fully understand, but NR_FILE_MAPPED is information already exported in /proc/meminfo.
> > Personally, I would like to support the /proc/meminfo information on a per-memcg basis.
> >
> > Hmm? Then, if you add a hook, it seems
> > == mm/rmap.c: page_add_file_rmap()
> > void page_add_file_rmap(struct page *page)
> > {
> > 	if (atomic_inc_and_test(&page->_mapcount))
> > 		__inc_zone_page_state(page, NR_FILE_MAPPED);
> > }
> > == mm/rmap.c: page_remove_rmap()
> > 	__dec_zone_page_state(page,
> > 		PageAnon(page) ? NR_ANON_PAGES : NR_FILE_MAPPED);
> > ==
> >
> > would be a good place to go, maybe.
> >
> > page -> page_cgroup -> mem_cgroup -> inc/dec counter?
> >
> > Maybe the patch itself will be simple, but the overhead is unknown..
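(Just to illustrate that lookup path, a rough, untested sketch. The helper name
mem_cgroup_update_file_mapped() and the MEM_CGROUP_STAT_FILE_MAPPED stat index
are made up for this example; locking follows the usual page_cgroup pattern:)
==
/*
 * Hypothetical helper in mm/memcontrol.c: adjust a per-memcg "mapped file
 * pages" counter for this page. val is +1 on map, -1 on unmap.
 */
void mem_cgroup_update_file_mapped(struct page *page, int val)
{
	struct page_cgroup *pc;
	struct mem_cgroup *mem;
	struct mem_cgroup_stat_cpu *cpustat;

	pc = lookup_page_cgroup(page);
	if (unlikely(!pc))
		return;

	lock_page_cgroup(pc);
	mem = pc->mem_cgroup;
	if (mem && PageCgroupUsed(pc)) {
		/* preemption is off while the page_cgroup bit-lock is held */
		cpustat = &mem->stat.cpustat[smp_processor_id()];
		__mem_cgroup_stat_add_safe(cpustat,
				MEM_CGROUP_STAT_FILE_MAPPED, val);
	}
	unlock_page_cgroup(pc);
}
==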
>
> I thought of the same thing, but then moved to the following
>
> ... mem_cgroup_charge_statistics(..) {
> 	if (page_mapcount(page) == 0 && page_is_file_cache(page))
> 		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_FILE_RSS, val);
>
> But I've not yet tested the end result.
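(For context, a rough, untested sketch of where that check would sit in
mem_cgroup_charge_statistics(). MEM_CGROUP_STAT_FILE_RSS would be a new stat
index, and the struct page is not actually passed down to this function today:)
==
static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
					 struct page_cgroup *pc,
					 struct page *page, bool charge)
{
	int val = (charge) ? 1 : -1;
	struct mem_cgroup_stat_cpu *cpustat;

	/* callers run with irqs disabled, so smp_processor_id() is stable */
	cpustat = &mem->stat.cpustat[smp_processor_id()];
	if (PageCgroupCache(pc))
		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_CACHE, val);
	else
		__mem_cgroup_stat_add_safe(cpustat, MEM_CGROUP_STAT_RSS, val);

	/* proposed: also count file-cache pages that are mapped */
	if (page_mapcount(page) == 0 && page_is_file_cache(page))
		__mem_cgroup_stat_add_safe(cpustat,
				MEM_CGROUP_STAT_FILE_RSS, val);
}
==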
>
I think
 - at uncharge:
   charge_statistics is only called when FILE CACHE is removed from the radix-tree.
   mem_cgroup_uncharge() is called only when PageAnon(page) is true.
 - at charge:
   charge_statistics is only called when FILE CACHE is added to the radix-tree.

This "checking only at radix-tree insert/delete" helps us remove most of the
overhead on FILE CACHE, but it also means the charge/uncharge path never sees
map/unmap events for a page that is already in the page cache.

So, adding new hooks to page_add_file_rmap() and page_remove_rmap() is a way
to go. (It is also easy to understand, because we account at the same time
NR_FILE_MAPPED is modified.)
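Roughly (untested, and assuming a per-memcg update helper such as the
mem_cgroup_update_file_mapped() sketched above):
==
void page_add_file_rmap(struct page *page)
{
	if (atomic_inc_and_test(&page->_mapcount)) {
		__inc_zone_page_state(page, NR_FILE_MAPPED);
		/* new: account in the page's memcg at the same point */
		mem_cgroup_update_file_mapped(page, 1);
	}
}
==
with the matching mem_cgroup_update_file_mapped(page, -1) placed next to the
__dec_zone_page_state() call in page_remove_rmap().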
Thanks,
-Kame