Message-ID: <alpine.DEB.2.00.1206251726040.1895@chino.kir.corp.google.com>
Date: Mon, 25 Jun 2012 17:32:00 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Rik van Riel <riel@...hat.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Minchan Kim <minchan@...nel.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [patch] mm, thp: abort compaction if migration page cannot be
charged to memcg
On Mon, 25 Jun 2012, Rik van Riel wrote:
> The patch makes sense, however I wonder if it would make
> more sense in the long run to allow migrate/compaction to
> temporarily exceed the memcg memory limit for a cgroup,
> because the original page will get freed again soon anyway.
>
> That has the potential to improve compaction success, and
> reduce compaction related CPU use.
>
Yeah, Kame brought up the same point with a sample patch that allows the
temporary charge for the new page. It would certainly solve this problem
without our having to touch compaction at all; it's disappointing that we
have to charge memory just to do a page migration. I'm not so sure about
the approach of temporarily allowing the excess charge, however, since
the excess would scale with the number of cpus doing compaction or
migration and could end up as large as PAGE_SIZE * nr_cpu_ids.
I haven't looked at it (yet), but I'm hoping there's a way to avoid
charging the temporary page at all until after move_to_new_page()
succeeds, i.e. to find a way to uncharge the old page before charging
newpage. We currently don't charge things like vmalloc() memory or
allocations made directly through alloc_pages(), so it seems plausible
to do this without ever causing usage > limit.