Message-ID: <20110330074231.GB15394@tiehlicka.suse.cz>
Date: Wed, 30 Mar 2011 09:42:31 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Zhu Yanhai <zhu.yanhai@...il.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC 0/3] Implementation of cgroup isolation
On Tue 29-03-11 22:02:23, Zhu Yanhai wrote:
> Hi,
>
> 2011/3/29 Michal Hocko <mhocko@...e.cz>:
> > Isn't this an overhead that would slow the whole thing down. Consider
> > that you would need to lookup page_cgroup for every page and touch
> > mem_cgroup to get the limit.
>
> The current code already does much the same thing; consider the direct
> reclaim path:
> shrink_inactive_list()
> ->isolate_pages_global()
> ->isolate_lru_pages()
> ->mem_cgroup_del_lru(for each page it wants to isolate)
> and in mem_cgroup_del_lru() we have:
> [code]
> 	pc = lookup_page_cgroup(page);
> 	/*
> 	 * Used bit is set without atomic ops but after smp_wmb().
> 	 * For making pc->mem_cgroup visible, insert smp_rmb() here.
> 	 */
> 	smp_rmb();
> 	/* unused or root page is not rotated. */
> 	if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
> 		return;
> [/code]
> The call to mem_cgroup_is_root(pc->mem_cgroup) already brings the
> struct mem_cgroup into the cache,
> so things probably won't get any worse, at least.
But we would still potentially have to isolate and put back a lot of
pages. If those pages are not on the list, we skip them
automatically.
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic