Message-Id: <20101008133712.2a836331.kamezawa.hiroyu@jp.fujitsu.com>
Date: Fri, 8 Oct 2010 13:37:12 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Minchan Kim <minchan.kim@...il.com>,
Greg Thelen <gthelen@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, containers@...ts.osdl.org,
Andrea Righi <arighi@...eler.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [PATCH v2] memcg: reduce lock time at move charge (Was Re:
[PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup()
On Thu, 7 Oct 2010 16:14:54 -0700
Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Thu, 7 Oct 2010 17:04:05 +0900
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
>
> > Now, at task migration between cgroups, memory cgroup scans the page
> > table and moves the charges if the move-charge flags are properly set.
> >
> > The core routine, mem_cgroup_move_charge_pte_range(), does:
> >
> > 	pte_offset_map_lock();
> > 	for all ptes in a page table:
> > 		1. look into page table, find_and_get a page
> > 		2. remove it from LRU.
> > 		3. move charge.
> > 		4. putback to LRU. put_page()
> > 	pte_offset_map_unlock();
> >
> > for all pte entries in one lowest-level (pte) page table.
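> >
> > In code, the existing flow looks roughly like this (a simplified
> > sketch, not the actual kernel source; target_page_of() and
> > move_account_one_page() are illustrative stand-ins for the real
> > helpers):
> >
> > 	static int move_charge_pte_range_sketch(pmd_t *pmd, unsigned long addr,
> > 						unsigned long end,
> > 						struct vm_area_struct *vma)
> > 	{
> > 		pte_t *pte;
> > 		spinlock_t *ptl;
> >
> > 		pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > 		for (; addr != end; addr += PAGE_SIZE, pte++) {
> > 			/* 1. find_and_get: returns page with a reference held */
> > 			struct page *page = target_page_of(vma, addr, *pte);
> >
> > 			if (!page)
> > 				continue;
> > 			if (!isolate_lru_page(page)) {		/* 2. remove from LRU */
> > 				move_account_one_page(page);	/* 3. move charge */
> > 				putback_lru_page(page);		/* 4. back to LRU */
> > 			}
> > 			put_page(page);		/* drop the find_and_get reference */
> > 		}
> > 		pte_unmap_unlock(pte - 1, ptl);
> > 		return 0;
> > 	}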
> >
> > This pte_offset_map_lock seems a bit long. This patch modifies the routine as:
> >
> > 	for 32 pages:
> > 		pte_offset_map_lock()
> > 		find_and_get a page
> > 		record it
> > 		pte_offset_map_unlock()
> > 	for all recorded pages:
> > 		isolate it from LRU
> > 		move charge
> > 		putback to LRU
> > 	for all recorded pages:
> > 		put_page()
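> >
> > A hypothetical sketch of the batched flow, using the same
> > illustrative helper names as above (MC_BATCH, target_page_of() and
> > move_account_one_page() are stand-ins, not the identifiers used by
> > the actual patch):
> >
> > 	#define MC_BATCH	32
> >
> > 	static int move_charge_pte_range_batched(pmd_t *pmd, unsigned long addr,
> > 						 unsigned long end,
> > 						 struct vm_area_struct *vma)
> > 	{
> > 		struct page *pages[MC_BATCH];
> > 		pte_t *pte;
> > 		spinlock_t *ptl;
> > 		int nr, i;
> >
> > 		while (addr != end) {
> > 			nr = 0;
> > 			/* Phase 1: hold the pte lock only to find and pin pages. */
> > 			pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > 			for (; addr != end && nr < MC_BATCH; addr += PAGE_SIZE, pte++) {
> > 				struct page *page = target_page_of(vma, addr, *pte);
> >
> > 				if (page)	/* reference already held */
> > 					pages[nr++] = page;
> > 			}
> > 			pte_unmap_unlock(pte - 1, ptl);
> >
> > 			/* Phase 2: LRU isolation and charge moving, lock dropped. */
> > 			for (i = 0; i < nr; i++) {
> > 				if (!isolate_lru_page(pages[i])) {
> > 					move_account_one_page(pages[i]);
> > 					putback_lru_page(pages[i]);
> > 				}
> > 			}
> > 			/* Phase 3: drop the references taken in phase 1. */
> > 			for (i = 0; i < nr; i++)
> > 				put_page(pages[i]);
> > 		}
> > 		return 0;
> > 	}
> >
> > With this shape, each pte_offset_map_lock() section is bounded by the
> > batch size instead of by the whole page table.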
>
> The patch makes the code larger, more complex and slower!
>
Slower?
> I do think we're owed a more complete description of its benefits than
> "seems a bit long". Have problems been observed? Any measurements
> taken?
>
I'll rewrite the whole patch against today's mmotm.
Thanks,
-Kame