Message-Id: <20111220135817.5ba7ab05.akpm@linux-foundation.org>
Date: Tue, 20 Dec 2011 13:58:17 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, Hugh Dickins <hughd@...gle.com>,
Ying Han <yinghan@...gle.com>
Subject: Re: [PATCH 1/4] memcg: simplify page cache charging.
On Mon, 19 Dec 2011 09:01:22 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> On Fri, 16 Dec 2011 14:28:14 -0800
> Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> > On Wed, 14 Dec 2011 16:49:22 +0900
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> >
> > > Because of commit ef6a3c6311, FUSE uses replace_page_cache() instead
> > > of add_to_page_cache(), so mem_cgroup_cache_charge() is no longer
> > > called for FUSE's pages coming in from splice.
> >
> > Speaking of ef6a3c6311 ("mm: add replace_page_cache_page() function"),
> > may I pathetically remind people that it's rather inefficient?
> >
> > http://lkml.indiana.edu/hypermail/linux/kernel/1109.1/00375.html
> >
>
> IIRC, people called it inefficient because it reused the memcg
> page-migration code to fix up the accounting. We have since added a
> dedicated replace-page-cache hook for memcg in
> memcg-add-mem_cgroup_replace_page_cache-to-fix-lru-issue.patch
>
> So I think the problem originally mentioned has been fixed.
>
No, the inefficiency in replace_page_cache_page() is still there: two
identical walks down the radix tree, a pointless decrement then increment
of mapping->nrpages, two writes to page->mapping, an often-pointless
decrement then increment of NR_FILE_PAGES, and probably other things.
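The duplicated work comes from the function being structured as a full
page-cache delete followed by a fresh insert. As a rough sketch only, not a
proposed patch, a single-walk swap built on radix_tree_lookup_slot() /
radix_tree_replace_slot() could look something like the following; the
function name, the error handling, and the omission of the ->freepage
callback and NR_SHMEM accounting are simplifying assumptions:

#include <linux/pagemap.h>
#include <linux/radix-tree.h>
#include <linux/memcontrol.h>

/*
 * Illustrative sketch, not the mm/filemap.c implementation: swap @new for
 * @old in the page cache with a single radix-tree walk, so mapping->nrpages
 * and NR_FILE_PAGES are never bounced down and then back up.
 * Both pages must be locked; @old must currently be in the cache.
 */
static int replace_page_cache_slot(struct page *old, struct page *new)
{
	struct address_space *mapping = old->mapping;
	pgoff_t offset = old->index;
	void **slot;

	page_cache_get(new);

	spin_lock_irq(&mapping->tree_lock);
	slot = radix_tree_lookup_slot(&mapping->page_tree, offset);
	if (!slot) {
		spin_unlock_irq(&mapping->tree_lock);
		page_cache_release(new);
		return -ENOENT;
	}
	new->mapping = mapping;
	new->index = offset;
	radix_tree_replace_slot(slot, new);	/* one walk: no delete + re-insert */
	old->mapping = NULL;
	/* mapping->nrpages and NR_FILE_PAGES are untouched: one page out, one in */
	spin_unlock_irq(&mapping->tree_lock);

	/* memcg fixup added by the mem_cgroup_replace_page_cache patch above */
	mem_cgroup_replace_page_cache(old, new);
	page_cache_release(old);		/* drop the page-cache ref on @old */
	return 0;
}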