Date:	Tue, 13 Aug 2013 13:24:31 -0700
From:	Yinghai Lu <yinghai@...nel.org>
To:	Mike Travis <travis@....com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nathan Zimmer <nzimmer@....com>, Peter Anvin <hpa@...or.com>,
	Ingo Molnar <mingo@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>, Robin Holt <holt@....com>,
	Rob Landley <rob@...dley.net>,
	Daniel J Blueman <daniel@...ascale-asia.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC v3 0/5] Transparent on-demand struct page initialization
 embedded in the buddy allocator

On Tue, Aug 13, 2013 at 12:06 PM, Mike Travis <travis@....com> wrote:
>
>
> On 8/13/2013 11:04 AM, Mike Travis wrote:
>>
>>
>> On 8/13/2013 10:51 AM, Linus Torvalds wrote:
>>> by the time you can log in. And if it then takes another ten minutes
>>> until you have the full 16TB initialized, and some things might be a
>>> tad slower early on, does anybody really care?  The machine will be up
>>> and running with plenty of memory, even if it may not be *all* the
>>> memory yet.
>>
>> Before the patches, adding memory took ~45 minutes for 16TB and almost
>> 2 hours for 32TB.  Adding it late sped up early boot, but late insertion
>> was still very slow: the full 32TB was still not fully inserted after an
>> hour.  Doing it in parallel, along with a per-node memory hotplug lock,
>> got it down to the 10-15 minute range.
>>
>
> FYI, the system at this time had 128 nodes each with 256GB of memory.
> About 252GB was inserted into the absent list from nodes 1 .. 126.
> Memory on nodes 0 and 127 was left fully present.

Can we have a topic about these boot-time issues at this year's kernel summit?

There will be more 32-socket x86 systems, and they will have lots of
memory, PCI chains, and CPU cores.

In the current kernel/smp.c::smp_init() we still have:
|        /* FIXME: This should be done in userspace --RR */
|        for_each_present_cpu(cpu) {
|                if (num_online_cpus() >= setup_max_cpus)
|                        break;
|                if (!cpu_online(cpu))
|                        cpu_up(cpu);
|        }

Possible solutions would be:
1. Delay bringing up some of the memory, PCI chains, or CPU cores.
2. Or initialize them in parallel during boot.
3. Or add them in parallel after boot.

Thanks

Yinghai