Message-ID: <51C4A726.1090300@sgi.com>
Date:	Fri, 21 Jun 2013 14:19:02 -0500
From:	Nathan Zimmer <nzimmer@....com>
To:	Yinghai Lu <yinghai@...nel.org>
CC:	Greg KH <gregkh@...uxfoundation.org>,
	"H. Peter Anvin" <hpa@...or.com>, Robin Holt <holt@....com>,
	Mike Travis <travis@....com>, Rob Landley <rob@...dley.net>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	the arch/x86 maintainers <x86@...nel.org>,
	<linux-doc@...r.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 0/2] Delay initializing of large sections of memory

On 06/21/2013 02:10 PM, Yinghai Lu wrote:
> On Fri, Jun 21, 2013 at 11:50 AM, Greg KH <gregkh@...uxfoundation.org> wrote:
>> On Fri, Jun 21, 2013 at 11:44:22AM -0700, Yinghai Lu wrote:
>>> On Fri, Jun 21, 2013 at 10:03 AM, H. Peter Anvin <hpa@...or.com> wrote:
>>>> On 06/21/2013 09:51 AM, Greg KH wrote:
>>>>
>>>> I suspect the cutoff for this should be a lot lower than 8 TB even, more
>>>> like 128 GB or so.  The only concern is to not set the cutoff so low
>>>> that we can end up running out of memory or with suboptimal NUMA
>>>> placement just because of this.
>>> I would suggest another way:
>>> boot the system with only the boot node (including its CPUs, RAM, and PCI root buses),
>>> then add the other nodes after boot.
>> What exactly do you mean by "after boot"?  Often, the boot process of
>> userspace needs those additional cpus and ram in order to initialize
>> everything (like the pci devices) properly.
> I mean that an Intel CPU package has the CPU cores, the memory controller,
> and the IIO, and every IIO is a peer PCI root bus.
> So we could scan the root buses that are not on the boot node later.
>
> That way we keep all the NUMA placement intact when onlining RAM, CPUs, PCI...
>
> For example, on a 32-socket system most of the boot time is spent in the *BIOS*
> rather than the OS. On that kind of system, boot would work like this:
> only the first two sockets are booted from BIOS into the OS,
> and every other pair of sockets is hot-added later.
>
> That would also make the BIOS simpler, and it needs to support hot-add
> for servicing purposes anyway.
>
> Yinghai

Yes, the hot-add path was one option we looked at, and it did shorten boot
times, but the goal here is to get from power-on to having the full
machine available as quickly as possible. Several clients need significant
portions of RAM for their key workloads, so that guided my thinking on
this patch.
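For context on the hot-add path discussed above: a hot-added node's RAM shows up as offline memory blocks under sysfs, and userspace (or a udev rule) has to online them before the kernel will use the memory. A minimal sketch of that step is below; `SYSFS_MEM` normally points at `/sys/devices/system/memory`, but it is parameterized here purely so the loop can be exercised against a fake directory tree.

```shell
# Hedged sketch: online every offline memory block via the sysfs
# memory-hotplug interface. On a real system this requires root and
# SYSFS_MEM=/sys/devices/system/memory; the variable is an assumption
# made here only so the function is testable against a mock tree.
SYSFS_MEM="${SYSFS_MEM:-/sys/devices/system/memory}"

online_all() {
    for state in "$SYSFS_MEM"/memory*/state; do
        [ -f "$state" ] || continue
        # Writing "online" to a block's state file asks the kernel to
        # bring that memory section online; already-online blocks are
        # skipped.
        if [ "$(cat "$state")" = "offline" ]; then
            echo online > "$state"
        fi
    done
}
```

In practice a udev rule matching the `memory` subsystem usually does this automatically, so hot-added sockets become usable without manual intervention.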
