Message-Id: <20110722084413.9dd4b880.kamezawa.hiroyu@jp.fujitsu.com>
Date: Fri, 22 Jul 2011 08:44:13 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Michal Hocko <mhocko@...e.cz>
Cc: linux-mm@...ck.org, Balbir Singh <bsingharora@...il.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/4] memcg: do not try to drain per-cpu caches without pages
On Thu, 21 Jul 2011 13:36:06 +0200
Michal Hocko <mhocko@...e.cz> wrote:
> On Thu 21-07-11 19:12:50, KAMEZAWA Hiroyuki wrote:
> > On Thu, 21 Jul 2011 09:38:00 +0200
> > Michal Hocko <mhocko@...e.cz> wrote:
> >
> > > drain_all_stock_async tries to optimize the work to be done on the
> > > work queue by excluding any work for the current CPU, because it
> > > assumes that the context we are called from has already tried to
> > > charge from that cache and failed, so the cache must be empty already.
> > > While the assumption is correct, we can achieve the same by checking
> > > the current number of pages in the cache. This will also reduce the
> > > work on other CPUs with an empty stock.
> > >
> > > Signed-off-by: Michal Hocko <mhocko@...e.cz>
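(If I read the patch right, the loop change boils down to testing the
per-cpu stock instead of the current CPU. Roughly, and completely
untested, with the names as they are in mm/memcontrol.c today:)

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		/* was: if (cpu == curcpu) continue; */
		if (!stock->nr_pages)		/* nothing cached, skip */
			continue;
		if (test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
			continue;		/* already being drained */
		schedule_work_on(cpu, &stock->work);
	}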
> >
> >
> > At first look, when a charge for a transparent hugepage goes into the
> > reclaim routine, stock->nr_pages != 0, so this will now schedule an
> > additional kworker.
>
> True. We will drain a charge which could be used by other allocations
> in the meantime, so we have a good chance to reclaim less. But how big
> a problem is that?
> I mean, I can add a new parameter that would force checking the current
> cpu, but it doesn't look nice. I cannot add that condition
> unconditionally because the code will be shared with the sync path in
> the next patch, and that one needs to drain _all_ cpus.
>
> What would you suggest?
By two methods:
- just check nr_pages.
- drain the "local stock" directly, without calling schedule_work(). It's
  fast. (Rough sketch below.)
Thanks,
-Kame