Message-Id: <20080916211355.277b625d.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 16 Sep 2008 21:13:55 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: balbir@...ux.vnet.ibm.com
Cc: "xemul@...nvz.org" <xemul@...nvz.org>,
"hugh@...itas.com" <hugh@...itas.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, menage@...gle.com,
Dave Hansen <haveblue@...ibm.com>,
"nickpiggin@...oo.com.au" <nickpiggin@...oo.com.au>
Subject: memcg: lazy_lru (was Re: [RFC] [PATCH 8/9] memcg: remove
page_cgroup pointer from memmap)
On Fri, 12 Sep 2008 09:12:48 -0700
Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
> Kamezawa,
>
> I feel we can try the following approaches
>
> 1. Try per-node per-zone radix tree with dynamic allocation
> 2. Try the approach you have
> 3. Integrate with sparsemem (last resort for performance), Dave Hansen suggested
> adding a mem_section member and using that.
>
> I am going to try #1 today and see what the performance looks like
>
I'm now writing *lazy* LRU handling via a per-cpu struct, like pagevec.
It seems to work well (though not as fast as I expected on a 2-CPU box...).
I need more testing, but it's worth sharing the logic at this stage.
I added 3 patches on top of this set. (My old set needs a bug fix.)
==
[1] patches/page_count.patch ....get_page()/put_page() via page_cgroup.
[2] patches/lazy_lru_free.patch ....free page_cgroup from LRU in lazy way.
[3] patches/lazy_lru_add.patch ....add page_cgroup to LRU in lazy way.
3 patches will follow this mail.
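To give a rough picture before the patches arrive, the pagevec-like batching
behind [2] and [3] looks something like the sketch below. This is hypothetical
illustration only, not the actual code; every name here (memcg_lruvec,
memcg_lazy_lru_add, __memcg_add_to_lru, ...) is made up.

#include <linux/percpu.h>

struct page_cgroup;

/* Hypothetical helper: add one page_cgroup to its memcg LRU, taking
 * whatever per-zone lock the real patches use. */
extern void __memcg_add_to_lru(struct page_cgroup *pc);

#define MEMCG_LRUVEC_SIZE	14

struct memcg_lruvec {
	unsigned int nr;
	struct page_cgroup *pcs[MEMCG_LRUVEC_SIZE];
};

static DEFINE_PER_CPU(struct memcg_lruvec, memcg_add_vec);

/* Flush the per-cpu batch to the LRU.  The real patches would want to
 * group entries by zone/memcg so the LRU lock is taken once per batch
 * rather than once per page; a single helper keeps the sketch short. */
static void memcg_lruvec_drain(struct memcg_lruvec *vec)
{
	unsigned int i;

	for (i = 0; i < vec->nr; i++)
		__memcg_add_to_lru(vec->pcs[i]);
	vec->nr = 0;
}

/* Called where the page_cgroup would otherwise go on the LRU
 * immediately; the add is deferred until the per-cpu vector fills. */
static void memcg_lazy_lru_add(struct page_cgroup *pc)
{
	struct memcg_lruvec *vec = &get_cpu_var(memcg_add_vec);

	vec->pcs[vec->nr++] = pc;
	if (vec->nr == MEMCG_LRUVEC_SIZE)
		memcg_lruvec_drain(vec);
	put_cpu_var(memcg_add_vec);
}

The free side ([2]) presumably mirrors this in reverse: removals are queued
per cpu and flushed in batches, so the lock round-trips are amortized there too.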
Because of the speculative radix-tree lookup, the page_count patch seems a bit
difficult.
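Roughly, the conflict is that the lockless pagecache path takes its reference
on struct page directly, e.g. (a hypothetical sketch of my reading of the
problem, not code from the patch):

#include <linux/mm.h>

/* Speculative radix-tree lookup bumps page->_count directly via
 * get_page_unless_zero(), so a page_count routed through page_cgroup
 * has to keep this path safe against the page_cgroup being freed or
 * reused underneath it.  memcg_get_page_unless_zero() is a made-up name. */
static inline int speculative_get(struct page *page)
{
	return get_page_unless_zero(page);
	/*
	 * With the page_count patch this would have to become something like
	 *	return memcg_get_page_unless_zero(page);
	 * and that is the awkward part.
	 */
}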
Anyway, I'll make these patches more readable and post them again.
Thanks,
-Kame