Message-ID: <51C9E83A.8070700@sgi.com>
Date: Tue, 25 Jun 2013 11:58:02 -0700
From: Mike Travis <travis@....com>
To: "H. Peter Anvin" <hpa@...or.com>
CC: Yinghai Lu <yinghai@...nel.org>,
Greg KH <gregkh@...uxfoundation.org>,
Nathan Zimmer <nzimmer@....com>, Robin Holt <holt@....com>,
Rob Landley <rob@...dley.net>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
the arch/x86 maintainers <x86@...nel.org>,
linux-doc@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 0/2] Delay initializing of large sections of memory
On 6/25/2013 11:44 AM, H. Peter Anvin wrote:
> On 06/25/2013 11:40 AM, Yinghai Lu wrote:
>> On Tue, Jun 25, 2013 at 11:17 AM, H. Peter Anvin <hpa@...or.com> wrote:
>>> On 06/25/2013 10:35 AM, Mike Travis wrote:
>>
>>> However, please consider Ingo's counterproposal of doing this via the
>>> buddy allocator, i.e. hugepages being broken on demand. That is a
>>> *very* powerful model, although would require more infrastructure.
>>
>> Can you or Ingo elaborate more about the buddy allocator proposal?
>>
>
> Start by initializing 1G hyperpages only, but mark them so that the
> allocator knows that if it needs to break them apart it has to
> initialize the page structures for the 2M subpages.
>
> Same thing with 2M -> 4K.
>
> -hpa
>
>
It is worth experimenting with, but the big question is whether
it still avoids the very expensive "memmap_init_zone" and its
sub-functions running over huge expanses of memory. I'll do some
experimenting as soon as I can. Our 32TB system is being
brought back to 16TB (we found a number of problems as we
got closer and closer to the 64TB limit), but that's still
a significant size.