Message-Id: <20101014105521.8B80.A69D9226@jp.fujitsu.com>
Date: Thu, 14 Oct 2010 10:59:06 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Andi Kleen <andi@...stfloor.org>
Cc: kosaki.motohiro@...fujitsu.com,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm\@kvack.org" <linux-mm@...ck.org>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"minchan.kim\@gmail.com" <minchan.kim@...il.com>,
fujita.tomonori@....ntt.co.jp
Subject: Re: [RFC][PATCH 1/3] contigous big page allocator
> KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> writes:
>
> >> > My intention is not for allocating HUGEPAGE(> MAX_ORDER).
> >>
> >> I still believe using this for 1GB pages would be one of the more
> >> interesting use cases.
> >>
> >
> > I'm successfully allocating 1GB of contiguous pages in my tests. But I'm not sure
> > about the requirements and users. How quick should this allocation be?
>
> This will always be slow. Huge pages are always preallocated
> even today through a sysctl. The use case would be to have
>
> echo XXX > /proc/sys/vm/nr_hugepages
>
> at runtime working for 1GB too, instead of requiring a reboot
> for this.
>
> I think it's ok if that is somewhat slow, as long as it is not
> incredibly slow. Ideally it shouldn't cause a swap storm either.

Off topic: when I tried to increase nr_hugepages on ia64, which uses
256MB huge pages, I sometimes had to wait more than 10 minutes if the
system was under memory pressure. So slow allocation is NOT an issue
only for this contiguous allocator; we already accept it, and we
should. (I doubt it can be avoided.)
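
For reference, a minimal sketch of the interfaces being discussed,
assuming the per-page-size sysfs knobs under /sys/kernel/mm/hugepages/
are present (whether the 1GB entry can be grown at runtime, rather than
only via the boot command line, is exactly the open question here):

  # 2MB (or 256MB on ia64) huge pages can already be grown at runtime:
  echo 64 > /proc/sys/vm/nr_hugepages
  grep HugePages_Total /proc/meminfo

  # per-size knob; the 1GB variant is what would ideally work at runtime
  # instead of requiring "hugepagesz=1G hugepages=N" on the boot command line:
  echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages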
>
> (maybe we need some way to indicate how hard the freeing code should
> try?)
>
> I guess it would only really work well if you predefine
> movable zones at boot time.
>
> -Andi
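
As an aside on the last point: "predefine movable zones at boot time"
can be done today with the kernelcore=/movablecore= boot parameters
(see Documentation/kernel-parameters.txt); a rough sketch, with the
sizes purely as examples:

  # reserve 4GB as ZONE_MOVABLE, which takes only movable allocations and
  # is therefore a natural place to hunt for large contiguous ranges:
  #     movablecore=4G
  # (or size it the other way around with kernelcore=12G)

  # after boot the zone is visible in /proc/zoneinfo:
  grep Movable /proc/zoneinfo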