Message-ID: <20100907104635.2a02a1ca@basil.nowhere.org>
Date: Tue, 7 Sep 2010 10:46:35 +0200
From: Andi Kleen <andi@...stfloor.org>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@...ck.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@...r.kernel.org>,
	"minchan.kim@gmail.com" <minchan.kim@...il.com>,
	Mel Gorman <mel@....ul.ie>,
	"kosaki.motohiro@jp.fujitsu.com" <kosaki.motohiro@...fujitsu.com>
Subject: Re: [RFC][PATCH] big continuous memory allocator v2
On Tue, 7 Sep 2010 17:25:59 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> On Tue, 07 Sep 2010 09:29:21 +0200
> Andi Kleen <andi@...stfloor.org> wrote:
>
> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> writes:
> >
> > > This is a page allocator based on memory migration/hotplug code.
> > > It passed some small tests, and is maybe easier to read than the
> > > previous one.
> >
> > Maybe I'm missing context here, but what is the use case for this?
> >
>
> I hear some drivers want to allocate xxMB of contiguous area (camera?).
> Maybe the embedded guys can answer the question.
OK, what I wanted to say: assuming you can make this work
nicely, and the delays (swap storms?) it likely causes are not
too severe, it would be interesting for improving 1GB pages on x86.
That would be a major use case and probably enough
to keep the code around.
But it depends on how well it works.
E.g. when the zone is already fully filled, how long
does a 1GB allocation take?
How about when parallel programs are allocating/freeing
in it too?
What's the worst case delay under stress?
Does it cause swap storms?
One issue is also that it would be good to be able to decide
in advance if the OOM killer is likely triggered (and if yes
reject the allocation in the first place).
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.