Message-ID: <51C4B20B.20201@sgi.com>
Date: Fri, 21 Jun 2013 15:05:31 -0500
From: Nathan Zimmer <nzimmer@....com>
To: "H. Peter Anvin" <hpa@...or.com>
CC: Greg KH <gregkh@...uxfoundation.org>, <holt@....com>,
<travis@....com>, <rob@...dley.net>, <tglx@...utronix.de>,
<mingo@...hat.com>, <yinghai@...nel.org>,
<akpm@...ux-foundation.org>, <x86@...nel.org>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 0/2] Delay initializing of large sections of memory
On 06/21/2013 12:28 PM, H. Peter Anvin wrote:
> On 06/21/2013 10:18 AM, Nathan Zimmer wrote:
>>> Since you made it a compile time option, it would be good to know how
>>> much code it adds, but otherwise I agree with Greg here... this really
>>> shouldn't need to be an option. It *especially* shouldn't need to be a
>>> hand-set runtime option (which looks quite complex, to boot.)
>> The patchset as a whole is just over 400 lines, so it doesn't add a lot.
>> If I were to pull the .config option it would probably remove 30 lines.
> I'm more concerned about bytes of code.
Oh, the difference is just under 32k.
371843425 Jun 21 14:08 vmlinux.o /* DELAY_MEM_INIT is not set */
371875600 Jun 21 14:36 vmlinux.o /* DELAY_MEM_INIT=y */
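
(To illustrate the mechanism rather than quote the patch: with the option off
the new code compiles away behind the usual stub pattern, something like

#ifdef CONFIG_DELAY_MEM_INIT
void __init delay_mem_init(void);		/* name invented for illustration */
#else
static inline void delay_mem_init(void) { }	/* compiles to nothing */
#endif

so the extra ~32k only shows up in the =y build.)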
>
>> The command line option is too complex but some of the data I haven't
>> found a way to get at runtime yet.
> I think that is probably key.
>
>>> I suspect the cutoff for this should be a lot lower than 8 TB even, more
>>> like 128 GB or so. The only concern is to not set the cutoff so low
>>> that we can end up running out of memory or with suboptimal NUMA
>>> placement just because of this.
>> Even at lower amounts of RAM there is a positive impact; it knocks time
>> off the boot even with as little as 1 TB of RAM.
> I am not surprised.
>
> -hpa
>
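For the runtime side, an automatic cutoff could look roughly like the sketch
below -- not from the current patches; the helper name is invented and the
128 GB value is just the figure mentioned above:

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/sizes.h>
#include <linux/types.h>

/* Illustrative floor below which deferring memory init isn't worth it. */
#define DELAY_MEM_INIT_CUTOFF	(128ULL * SZ_1G)

/* Sketch: decide at boot based on the memory memblock has registered. */
static bool __init want_delayed_mem_init(void)
{
	return memblock_phys_mem_size() >= DELAY_MEM_INIT_CUTOFF;
}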