Message-Id: <20090312130556.68d03711.kamezawa.hiroyu@jp.fujitsu.com>
Date: Thu, 12 Mar 2009 13:05:56 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [BUGFIX][PATCH 1/5] memcg use correct scan number at reclaim
On Thu, 12 Mar 2009 09:30:54 +0530
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-03-12 12:51:24]:
>
> > On Thu, 12 Mar 2009 09:19:18 +0530
> > Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> >
> > > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> [2009-03-12 09:55:16]:
> > >
> > > > Andrew, this [1/5] is a bug fix, others are not.
> > > >
> > > > ==
> > > > From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> > > >
> > > > Even when page reclaim runs under a mem_cgroup, the number of pages to
> > > > scan is still derived from the state of the global LRU. Fix that.
> > > >
> > > > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
> > > > Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> > > > ---
> > > > mm/vmscan.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > Index: mmotm-2.6.29-Mar10/mm/vmscan.c
> > > > ===================================================================
> > > > --- mmotm-2.6.29-Mar10.orig/mm/vmscan.c
> > > > +++ mmotm-2.6.29-Mar10/mm/vmscan.c
> > > > @@ -1470,7 +1470,7 @@ static void shrink_zone(int priority, st
> > > > int file = is_file_lru(l);
> > > > int scan;
> > > >
> > > > - scan = zone_page_state(zone, NR_LRU_BASE + l);
> > > > + scan = zone_nr_pages(zone, sc, l);
> > >
> > > I have the exact same patch in my patch queue. BTW, mem_cgroup_zone_nr_pages is
> > > buggy. We don't hold any sort of lock while extracting
> > > MEM_CGROUP_ZSTAT (ideally we need zone->lru_lock). Without that, how do
> > > we guarantee that MEM_CGROUP_ZSTAT is not changing at the same time as
> > > we are reading it?
> > >
> > Is it a big problem? We don't need a very precise value, and ZSTAT only gets
> > incremented/decremented, so I tend to ignore this small race.
> > (And it's an unsigned long, not a long long.)
> >
>
> The assumption is that an unsigned long read is atomic even on 32-bit
> systems? What if we get preempted in the middle of reading the data
> and don't get back to it for a long time? The data could be highly
> inaccurate, no?
>
Hmm, would preempt_disable() be appropriate there?
But shrink_zone() itself works on whatever value it reads at that moment and
does not account for changes that happen under preemption afterwards... so this
is not a memcg-specific problem.
Thanks,
-Kame