Message-Id: <20111214164734.4d7d6d97.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 14 Dec 2011 16:47:34 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: "linux-mm@...ck.org" <linux-mm@...ck.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>, Ying Han <yinghan@...gle.com>
Subject: [PATCH 0/4] memcg: simplify LRU handling.
This series applies on top of linux-next plus
memcg-add-mem_cgroup_replace_page_cache-to-fix-lru-issue.patch.
The first purpose of this series is to reduce the overhead of
mem_cgroup_add/del_lru(), which currently use some atomic ops. After
this series, the LRU handling routine will be:
==
struct lruvec *mem_cgroup_lru_add_list(struct zone *zone, struct page *page,
				       enum lru_list lru)
{
	struct mem_cgroup_per_zone *mz;
	struct mem_cgroup *memcg;
	struct page_cgroup *pc;

	if (mem_cgroup_disabled())
		return &zone->lruvec;

	pc = lookup_page_cgroup(page);
	memcg = pc->mem_cgroup;
	VM_BUG_ON(!memcg);
	mz = page_cgroup_zoneinfo(memcg, page);
	/* compound_order() is stabilized through lru_lock */
	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
	return &mz->lruvec;
}
==
simple, with no atomic ops. Thanks to Johannes' work in linux-next,
this can be achieved in a very straightforward way.
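
For illustration, a minimal sketch of how a caller might use this
helper; the caller name is hypothetical, but it follows the usual
zone->lru_lock pattern of the call sites in mm/swap.c and mm/vmscan.c:

==
/*
 * Hypothetical caller sketch (not part of this series): link a page
 * into the LRU list chosen by mem_cgroup_lru_add_list().  The helper
 * returns the per-memcg lruvec, or the zone's own lruvec when memcg
 * is disabled, so the caller manipulates exactly one list either way.
 */
static void example_lru_add_page(struct zone *zone, struct page *page,
				 enum lru_list lru)
{
	struct lruvec *lruvec;

	spin_lock_irq(&zone->lru_lock);
	SetPageLRU(page);
	lruvec = mem_cgroup_lru_add_list(zone, page, lru);
	list_add(&page->lru, &lruvec->lists[lru]);
	spin_unlock_irq(&zone->lru_lock);
}
==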
Thanks,
-Kame