Message-Id: <20100803123934.3aea00cc.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 3 Aug 2010 12:39:34 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: linux-mm@...ck.org,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
vgoyal@...hat.com, m-ikeda@...jp.nec.com, gthelen@...gle.com,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -mm 3/5] memcg scalable file stat accounting method
On Tue, 3 Aug 2010 09:03:27 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2010-08-02 19:15:59]:
>
> > From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> >
> > When accounting file events per memory cgroup, we need to find the memory
> > cgroup via page_cgroup->mem_cgroup. Currently, we use lock_page_cgroup() for that.
> >
> > But considering the contexts in which page_cgroups for file pages are accessed,
> > we can use a lighter-weight form of mutual exclusion in most cases.
> > When handling file caches, the only race we have to take care of is "moving"
> > an account, IOW, overwriting page_cgroup->mem_cgroup. Because file status
> > updates are done while the page cache is in a stable state, we don't have to
> > take care of races with charge/uncharge.
> >
> > Unlike charge/uncharge, "move" happens infrequently. It happens only at
> > rmdir() and task moving (with special settings).
> > This patch adds a race checker for file-cache-status accounting vs. account
> > moving. A new per-cpu, per-memcg counter, MEM_CGROUP_ON_MOVE, is added.
> > The routine for account moving:
> > 1. Increments it before starting the move.
> > 2. Calls synchronize_rcu().
> > 3. Decrements it after the move finishes.
> > With this, the file-status-counting routine can check whether it needs to
> > call lock_page_cgroup(). In most cases, it doesn't need to.
> >
> >
> > Changelog: 20100730
> > - some cleanup.
> > Changelog: 20100729
> > - replaced __this_cpu_xxx() with this_cpu_xxx()
> > (because we are not under a spinlock)
> > - added VM_BUG_ON().
> >
> > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> > ---
> > mm/memcontrol.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++---------
> > 1 file changed, 66 insertions(+), 12 deletions(-)
> >
> > Index: mmotm-0727/mm/memcontrol.c
> > ===================================================================
> > --- mmotm-0727.orig/mm/memcontrol.c
> > +++ mmotm-0727/mm/memcontrol.c
> > @@ -88,6 +88,7 @@ enum mem_cgroup_stat_index {
> > MEM_CGROUP_STAT_PGPGOUT_COUNT, /* # of pages paged out */
> > MEM_CGROUP_STAT_SWAPOUT, /* # of pages, swapped out */
> > MEM_CGROUP_EVENTS, /* incremented at every pagein/pageout */
> > + MEM_CGROUP_ON_MOVE, /* A check for locking move account/status */
> >
> > MEM_CGROUP_STAT_NSTATS,
> > };
> > @@ -1074,7 +1075,49 @@ static unsigned int get_swappiness(struc
> > return swappiness;
> > }
> >
> > -/* A routine for testing mem is not under move_account */
> > +static void mem_cgroup_start_move(struct mem_cgroup *mem)
> > +{
> > + int cpu;
> > + /* for fast checking in mem_cgroup_update_file_stat() etc..*/
> > + spin_lock(&mc.lock);
> > + for_each_possible_cpu(cpu)
> > + per_cpu(mem->stat->count[MEM_CGROUP_ON_MOVE], cpu) += 1;
>
> Is for_each_possible_cpu() really required? Won't online cpus suffice? There
> can be a race if a hotplug event happens between the start and end of a move;
> shouldn't we handle that? My concern is that with something like 1024
> cpus possible today, we might need to optimize this further.
>
Yes, I have the same concern. But I don't have any justification for disabling
cpu hotplug while moving pages; it may take several msec.
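
For illustration only, a hypothetical sketch of the for_each_online_cpu()
variant being discussed: to keep a newly onlined cpu from missing the flag,
cpu hotplug would have to stay blocked from the start to the end of the move,
which is exactly the several-msec window mentioned above. The *_online names
are made up here; only the counter and mc.lock come from the quoted patch.

/* hypothetical: walk only online cpus, pinning hotplug across the move */
static void mem_cgroup_start_move_online(struct mem_cgroup *mem)
{
	int cpu;

	get_online_cpus();	/* held until mem_cgroup_end_move_online() */
	spin_lock(&mc.lock);
	for_each_online_cpu(cpu)
		per_cpu(mem->stat->count[MEM_CGROUP_ON_MOVE], cpu) += 1;
	spin_unlock(&mc.lock);
	synchronize_rcu();
}

static void mem_cgroup_end_move_online(struct mem_cgroup *mem)
{
	int cpu;

	spin_lock(&mc.lock);
	for_each_online_cpu(cpu)
		per_cpu(mem->stat->count[MEM_CGROUP_ON_MOVE], cpu) -= 1;
	spin_unlock(&mc.lock);
	put_online_cpus();	/* hotplug stayed blocked for the whole move */
}
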
> Maybe we can do this first and optimize later.
>
Maybe. For now, a cpu-hotplug event handler tends to be noise for this patch.
I would like to do it later.
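
For reference, a minimal kernel-style sketch of the whole protocol described
in the quoted changelog. This is reconstructed for illustration only;
mem_cgroup_update_file_stat() below is a simplified stand-in, not the actual
mmotm code.

/* mover side: raise the per-cpu flag everywhere, then wait a grace period */
static void mem_cgroup_start_move(struct mem_cgroup *mem)
{
	int cpu;

	spin_lock(&mc.lock);
	for_each_possible_cpu(cpu)
		per_cpu(mem->stat->count[MEM_CGROUP_ON_MOVE], cpu) += 1;
	spin_unlock(&mc.lock);
	/* after this returns, every file-stat updater sees the flag */
	synchronize_rcu();
}

static void mem_cgroup_end_move(struct mem_cgroup *mem)
{
	int cpu;

	spin_lock(&mc.lock);
	for_each_possible_cpu(cpu)
		per_cpu(mem->stat->count[MEM_CGROUP_ON_MOVE], cpu) -= 1;
	spin_unlock(&mc.lock);
}

/* true while some move may rewrite pc->mem_cgroup */
static bool mem_cgroup_stealed(struct mem_cgroup *mem)
{
	VM_BUG_ON(!rcu_read_lock_held());
	return this_cpu_read(mem->stat->count[MEM_CGROUP_ON_MOVE]) > 0;
}

/* updater side: take lock_page_cgroup() only while a move is in flight */
static void mem_cgroup_update_file_stat(struct page *page, int idx, int val)
{
	struct page_cgroup *pc = lookup_page_cgroup(page);
	struct mem_cgroup *mem;
	bool locked = false;

	rcu_read_lock();
	mem = pc->mem_cgroup;
	if (!mem || !PageCgroupUsed(pc))
		goto out;
	if (mem_cgroup_stealed(mem)) {
		/* slow path: a move may be overwriting pc->mem_cgroup */
		lock_page_cgroup(pc);
		locked = true;
		mem = pc->mem_cgroup;		/* re-read under the lock */
		if (!mem || !PageCgroupUsed(pc))
			goto unlock;
	}
	this_cpu_add(mem->stat->count[idx], val);
unlock:
	if (locked)
		unlock_page_cgroup(pc);
out:
	rcu_read_unlock();
}
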
Thanks,
-Kame
--