Message-Id: <20110906194545.0533ff1f.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 6 Sep 2011 19:45:45 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Johannes Weiner <jweiner@...hat.com>
Cc: Greg Thelen <gthelen@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Balbir Singh <bsingharora@...il.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>
Subject: Re: [PATCH] memcg: remove unneeded preempt_disable
On Tue, 6 Sep 2011 12:49:15 +0200
Johannes Weiner <jweiner@...hat.com> wrote:
> On Tue, Sep 06, 2011 at 07:04:24PM +0900, KAMEZAWA Hiroyuki wrote:
> > On Tue, 6 Sep 2011 11:58:52 +0200
> > Johannes Weiner <jweiner@...hat.com> wrote:
> >
> > > On Wed, Aug 17, 2011 at 11:50:53PM -0700, Greg Thelen wrote:
> > > > Both mem_cgroup_charge_statistics() and mem_cgroup_move_account() were
> > > > unnecessarily disabling preemption when adjusting per-cpu counters:
> > > >     preempt_disable()
> > > >     __this_cpu_xxx()
> > > >     __this_cpu_yyy()
> > > >     preempt_enable()
> > > >
> > > > With this change preemption is no longer disabled, so a CPU switch is
> > > > possible within these routines. That is not a problem, because the
> > > > per-cpu counters are summed over all CPUs when stats are reported.
> > > > Now both mem_cgroup_charge_statistics() and mem_cgroup_move_account()
> > > > look like:
> > > >     this_cpu_xxx()
> > > >     this_cpu_yyy()
> > > >
> > > > Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> > > > Signed-off-by: Greg Thelen <gthelen@...gle.com>
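For readers skimming the thread: this_cpu_add() and friends are safe to
call with preemption enabled (the read-modify-write is done in a
preemption-safe way, e.g. as a single instruction on x86), while the
__this_cpu_* variants assume the caller already keeps the task from
migrating. A minimal sketch of the before/after patterns, using made-up
per-cpu counters purely for illustration:

	#include <linux/percpu.h>
	#include <linux/preempt.h>

	/* Hypothetical per-cpu counters, for illustration only */
	static DEFINE_PER_CPU(long, demo_pages);
	static DEFINE_PER_CPU(unsigned long, demo_events);

	static void demo_account(int nr_pages)
	{
		/* Before: __this_cpu_* needs preemption off around it */
		preempt_disable();
		__this_cpu_add(demo_pages, nr_pages);
		__this_cpu_inc(demo_events);
		preempt_enable();

		/* After: this_cpu_* is preemption-safe on its own */
		this_cpu_add(demo_pages, nr_pages);
		this_cpu_inc(demo_events);
	}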
> > >
> > > I just noticed that both cases have preemption disabled anyway because
> > > of the page_cgroup bit spinlock.
> > >
> > > So removing the preempt_disable() is fine, but we can even keep the
> > > non-atomic __this_cpu operations.
> > >
> > > Something like this instead?
> > >
> > > ---
> > > From: Johannes Weiner <jweiner@...hat.com>
> > > Subject: mm: memcg: remove needless recursive preemption disabling
> > >
> > > Callsites of mem_cgroup_charge_statistics() hold the page_cgroup bit
> > > spinlock, which implies disabled preemption.
> > >
> > > The same goes for the explicit preemption disabling to account mapped
> > > file pages in mem_cgroup_move_account().
> > >
> > > The explicit disabling of preemption in both cases is redundant.
> > >
> > > Signed-off-by: Johannes Weiner <jweiner@...hat.com>
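The key fact here is that bit_spin_lock() disables preemption before it
even tries to take the lock, so the holder of the page_cgroup lock always
runs non-preemptible. A simplified sketch of the relevant helpers
(paraphrased from include/linux/bit_spinlock.h and
include/linux/page_cgroup.h of this era; the contended path's preempt
toggling is elided):

	static inline void bit_spin_lock(int bitnum, unsigned long *addr)
	{
		preempt_disable();	/* holder is non-preemptible from here */
	#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
		while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
			while (test_bit(bitnum, addr))
				cpu_relax();	/* wait for the bit to clear */
		}
	#endif
		__acquire(bitlock);
	}

	static inline void lock_page_cgroup(struct page_cgroup *pc)
	{
		/* PCG_LOCK is a bit in pc->flags */
		bit_spin_lock(PCG_LOCK, &pc->flags);
	}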
> >
> > Could you add a comment like
> > "This operation is called under the bit spin lock!"?
>
> I left it as is in the file-mapped case, because if you read the
> __this_cpu ops and go looking for who disables preemption, you see the
> lock_page_cgroup() just a few lines above.
>
> But I agree that the charge_statistics() is much less obvious, so I
> added a line there.
>
> Is that okay?
>
Seems nice.
Thanks,
-Kame
> > Nice catch.
> >
> > Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> Thanks!
>
> ---
>
> Signed-off-by: Johannes Weiner <jweiner@...hat.com>
> ---
>  mm/memcontrol.c |    1 +
>  1 file changed, 1 insertion(+)
> 
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -615,6 +615,7 @@ static unsigned long mem_cgroup_read_eve
>  	return val;
>  }
>  
> +/* Must be called with preemption disabled */
>  static void mem_cgroup_charge_statistics(struct mem_cgroup *mem,
>  					 bool file, int nr_pages)
>  {
> 
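Putting it together, the callsites look roughly like this (a paraphrased,
trimmed-down sketch in the spirit of __mem_cgroup_commit_charge(), not a
verbatim quote of mm/memcontrol.c; the real argument list is longer):

	static void __mem_cgroup_commit_charge(struct mem_cgroup *mem,
					       struct page_cgroup *pc,
					       int nr_pages)
	{
		lock_page_cgroup(pc);	/* bit spinlock: preemption now off */
		pc->mem_cgroup = mem;
		SetPageCgroupUsed(pc);
		/* __this_cpu_* is safe inside: we hold the bit spinlock */
		mem_cgroup_charge_statistics(mem, PageCgroupCache(pc), nr_pages);
		unlock_page_cgroup(pc);	/* preemption enabled again */
	}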