Message-Id: <20101013161206.c29df8ea.kamezawa.hiroyu@jp.fujitsu.com>
Date: Wed, 13 Oct 2010 16:12:06 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: "linux-mm\@kvack.org" <linux-mm@...ck.org>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"minchan.kim\@gmail.com" <minchan.kim@...il.com>,
fujita.tomonori@....ntt.co.jp
Subject: Re: [RFC][PATCH 1/3] contigous big page allocator
On Wed, 13 Oct 2010 09:01:43 +0200
Andi Kleen <andi@...stfloor.org> wrote:
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> writes:
> >
> > What this wants to do:
> > allocates a contiguous chunk of pages larger than MAX_ORDER.
> > for device drivers (camera? etc..)
>
> I think to really move forward you need a concrete use case
> actually implemented in tree.
>
Yes. I heard at LinuxCon Japan that there are users, so I restarted this work.
I heard that video4linux on ARM wants this.
I just found this thread:
http://kerneltrap.org/mailarchive/linux-kernel/2010/10/10/4630166
Hmm.
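
Roughly, the driver-side usage I have in mind looks like the sketch below.
The allocator/free names, their arguments and the 64MB size are only
placeholders, not a fixed interface; it just shows how a camera-like driver
could ask for a physically contiguous buffer bigger than MAX_ORDER:

/*
 * Hypothetical driver-side usage (interface is not fixed yet):
 * grab a physically contiguous 64MB buffer, i.e. much larger than
 * a MAX_ORDER-1 buddy block.
 */
#include <linux/mm.h>
#include <linux/gfp.h>

#define CAM_BUF_PAGES	(64UL << (20 - PAGE_SHIFT))	/* 64MB worth of pages */

static struct page *cam_buf;

static int cam_alloc_buffer(void)
{
	/* placeholder name/signature: nr_pages, gfp mask, alignment order */
	cam_buf = alloc_contig_pages(CAM_BUF_PAGES, GFP_KERNEL, pageblock_order);
	if (!cam_buf)
		return -ENOMEM;
	return 0;
}

static void cam_free_buffer(void)
{
	if (cam_buf) {
		free_contig_pages(cam_buf, CAM_BUF_PAGES);	/* placeholder */
		cam_buf = NULL;
	}
}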
> > My intention is not for allocating HUGEPAGE(> MAX_ORDER).
>
> I still believe using this for 1GB pages would be one of the more
> interesting use cases.
>
In my tests I can successfully allocate 1GB of contiguous pages. But I'm not sure
about the actual requirements and users. How quick does this allocation need to be?
For example, if prep_new_page() for a 1GB chunk is slow, what kind of chunk-of-pages
construction is best?
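
Just to put rough numbers on that question (assuming 4KB pages and
MAX_ORDER = 11 as on x86; the 1us per-page cost below is purely
illustrative, not a measurement):

/*
 * Back-of-envelope arithmetic for one 1GB chunk: prep_new_page() runs
 * once per 4KB page, i.e. 262144 times, while the chunk spans only
 * 256 order-10 (4MB) buddy blocks.
 */
#include <stdio.h>

int main(void)
{
	unsigned long page_shift = 12;			/* 4KB pages */
	unsigned long chunk	 = 1UL << 30;		/* 1GB */
	unsigned long block	 = 1UL << (10 + page_shift); /* order-10 = 4MB */

	unsigned long pages  = chunk >> page_shift;	/* 262144 */
	unsigned long blocks = chunk / block;		/* 256 */

	printf("pages to prep per 1GB chunk : %lu\n", pages);
	printf("order-10 blocks per 1GB     : %lu\n", blocks);
	/* assumed 1us per page -> whole-chunk prep cost in ms */
	printf("at 1us per page             : ~%lu ms\n", pages / 1000);
	return 0;
}

So even a modest per-page cost multiplies into hundreds of milliseconds per
1GB chunk, which is why I ask how quick this needs to be.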
Thanks,
-Kame