Message-Id: <1429785196-7668-1-git-send-email-mgorman@suse.de>
Date: Thu, 23 Apr 2015 11:33:03 +0100
From: Mel Gorman <mgorman@...e.de>
To: Linux-MM <linux-mm@...ck.org>
Cc: Nathan Zimmer <nzimmer@....com>,
Dave Hansen <dave.hansen@...el.com>,
Waiman Long <waiman.long@...com>,
Scott Norton <scott.norton@...com>,
Daniel J Blueman <daniel@...ascale.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: [PATCH 0/13] Parallel struct page initialisation v3
The big change here is an adjustment to the topology_init path, which had
caused soft lockups for Waiman; Daniel Blueman had reported that it was an
expensive function.
Changelog since v2
o Reduce overhead of topology_init
o Remove boot-time kernel parameter to enable/disable
o Enable on UMA
Changelog since v1
o Always initialise low zones
o Typo corrections
o Rename parallel mem init to parallel struct page init
o Rebase to 4.0
Struct page initialisation had been identified as one of the reasons why
large machines take a long time to boot. Patches were posted a long time
ago to defer initialisation of struct pages until they were first used.
That was rejected on the grounds that it should not be necessary to hurt
the fast paths. This series reuses much of the work from that time but
defers the initialisation of memory to kswapd, so that one thread per node
initialises the memory local to that node.
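
For readers unfamiliar with the approach, below is a minimal userspace
sketch of the idea: one worker per node initialising only that node's
struct pages, with pthreads standing in for the per-node kswapd threads.
It is purely illustrative; the names (deferred_init_node, NR_NODES,
PAGES_PER_NODE) are hypothetical and nothing in it is taken from the
series itself.

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define NR_NODES       4
  #define PAGES_PER_NODE (1UL << 20)

  /* Stand-in for the kernel's struct page. */
  struct page { unsigned long flags; };

  static struct page *node_mem[NR_NODES];

  /* One worker per node, mimicking kswapd initialising only local memory. */
  static void *deferred_init_node(void *arg)
  {
          long nid = (long)arg;
          unsigned long i;
          struct timespec t0, t1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          for (i = 0; i < PAGES_PER_NODE; i++)
                  node_mem[nid][i].flags = 0;  /* the deferred init work */
          clock_gettime(CLOCK_MONOTONIC, &t1);

          printf("worker %ld initialised deferred memory in %ldms\n", nid,
                 (long)((t1.tv_sec - t0.tv_sec) * 1000 +
                        (t1.tv_nsec - t0.tv_nsec) / 1000000));
          return NULL;
  }

  int main(void)
  {
          pthread_t workers[NR_NODES];
          long nid;

          for (nid = 0; nid < NR_NODES; nid++) {
                  node_mem[nid] = calloc(PAGES_PER_NODE, sizeof(struct page));
                  if (!node_mem[nid])
                          return 1;
                  pthread_create(&workers[nid], NULL,
                                 deferred_init_node, (void *)nid);
          }
          for (nid = 0; nid < NR_NODES; nid++)
                  pthread_join(workers[nid], NULL);
          return 0;
  }

The point of the design is that each thread touches only node-local
memory, so the initialisation cost scales with the size of the largest
node rather than with total memory.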
After applying the series and setting the appropriate Kconfig variable,
I see this in the boot log on a 64G machine:
[ 7.383764] kswapd 0 initialised deferred memory in 188ms
[ 7.404253] kswapd 1 initialised deferred memory in 208ms
[ 7.411044] kswapd 3 initialised deferred memory in 216ms
[ 7.411551] kswapd 2 initialised deferred memory in 216ms
On a 1TB machine, I see:
[ 8.406511] kswapd 3 initialised deferred memory in 1116ms
[ 8.428518] kswapd 1 initialised deferred memory in 1140ms
[ 8.435977] kswapd 0 initialised deferred memory in 1148ms
[ 8.437416] kswapd 2 initialised deferred memory in 1148ms
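
For anyone who wants to reproduce these figures: the behaviour is gated
behind a Kconfig symbol added by this series (see the mm/Kconfig and
arch/x86/Kconfig hunks in the diffstat below). Assuming the symbol is
named DEFERRED_STRUCT_PAGE_INIT, the relevant .config fragment is simply

  CONFIG_DEFERRED_STRUCT_PAGE_INIT=y

after which the per-node kswapd lines above should appear in dmesg.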
Once booted, the machine appears to work as normal. Boot times were
measured from the time shutdown was called until ssh was available again.
In the 64G case, the boot-time saving is negligible. On the 1TB machine,
the saving was 16 seconds.
It would be nice if people who have access to really large machines would
test this series and report how much boot time is reduced.
arch/ia64/mm/numa.c | 19 +--
arch/x86/Kconfig | 1 +
drivers/base/node.c | 11 +-
include/linux/memblock.h | 18 +++
include/linux/mm.h | 8 +-
include/linux/mmzone.h | 23 ++-
mm/Kconfig | 18 +++
mm/bootmem.c | 8 +-
mm/internal.h | 23 ++-
mm/memblock.c | 34 ++++-
mm/mm_init.c | 9 +-
mm/nobootmem.c | 7 +-
mm/page_alloc.c | 379 ++++++++++++++++++++++++++++++++++++++++-------
mm/vmscan.c | 6 +-
14 files changed, 462 insertions(+), 102 deletions(-)
--
2.3.5