Message-Id: <1428920226-18147-1-git-send-email-mgorman@suse.de>
Date: Mon, 13 Apr 2015 11:16:52 +0100
From: Mel Gorman <mgorman@...e.de>
To: Linux-MM <linux-mm@...ck.org>
Cc: Robin Holt <holt@....com>, Nathan Zimmer <nzimmer@....com>,
	Daniel Rahn <drahn@...e.com>,
	Davidlohr Bueso <dbueso@...e.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Tom Vaden <tom.vaden@...com>,
	Scott Norton <scott.norton@...com>,
	LKML <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: [RFC PATCH 0/14] Parallel memory initialisation

Memory initialisation has been identified as one of the reasons why large
machines take a long time to boot. Patches were posted a long time ago
that attempted to move deferred initialisation into the page allocator
paths. This was rejected on the grounds it should not be necessary to hurt
the fast paths to parallelise initialisation. This series reuses much of
the work from that time but defers the initialisation of memory to kswapd
so that one thread per node initialises memory local to that node. The
issue is that on the machines I tested with, memory initialisation was not
a major contributor to boot times. I'm posting the RFC to both review the
series and see if it actually helps users of very large machines.
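For anyone who wants the shape of the change without reading all 14 patches,
the core idea looks roughly like the sketch below. It is illustrative only:
the helpers it calls are placeholders invented for the example, not the
functions the series actually adds.

/*
 * Illustrative sketch, not the code in this series.  At boot, only enough
 * struct pages are initialised for early allocations and the untouched PFN
 * range is recorded per node.  Before kswapd enters its normal reclaim
 * loop, each node's kswapd initialises the remainder, so the work runs in
 * parallel and with node-local accesses.
 */
static void deferred_init_node_memory(int nid)
{
	unsigned long start = jiffies;
	unsigned long pfn;

	/*
	 * node_deferred_{start,end}_pfn() and init_deferred_page() are
	 * hypothetical helpers that exist only for this sketch.
	 */
	for (pfn = node_deferred_start_pfn(nid);
	     pfn < node_deferred_end_pfn(nid); pfn++) {
		if (!pfn_valid(pfn))
			continue;
		/* Initialise the struct page and release it to the buddy. */
		init_deferred_page(pfn, nid);
	}

	pr_info("kswapd %d initialised deferred memory in %ums\n",
		nid, jiffies_to_msecs(jiffies - start));
}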
After applying the series and setting the appropriate Kconfig variable (a
sketch of such an option follows the log excerpts below), I see this in the
boot log on a 64G machine:

[ 7.383764] kswapd 0 initialised deferred memory in 188ms
[ 7.404253] kswapd 1 initialised deferred memory in 208ms
[ 7.411044] kswapd 3 initialised deferred memory in 216ms
[ 7.411551] kswapd 2 initialised deferred memory in 216ms

On a 1TB machine, I see:

[ 11.913324] kswapd 0 initialised deferred memory in 1168ms
[ 12.220011] kswapd 2 initialised deferred memory in 1476ms
[ 12.245369] kswapd 3 initialised deferred memory in 1500ms
[ 12.271680] kswapd 1 initialised deferred memory in 1528ms
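The deferred behaviour is opt-in rather than enabled unconditionally. As a
rough sketch of the sort of Kconfig stanza involved (the symbol name, the
dependency and the help text here are placeholders, not what the series
introduces):

config DEFERRED_MEMORY_INIT
	bool "Defer initialisation of struct pages to kswapd"
	depends on NUMA
	help
	  Initialise only a minimal number of struct pages during early
	  boot and defer the rest to kswapd so that each node initialises
	  its own memory in parallel.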
Once booted, the machine appears to work as normal. Boot times were measured
from the time shutdown was called until ssh was available again. In the
64G case, the boot-time saving was negligible. On the 1TB machine, the
saving was about 10 seconds (roughly an 8% improvement in kernel time but 1-2%
overall as POST takes so long).
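As a back-of-the-envelope check on those figures: a 10 second saving at
roughly 8% puts the kernel portion of the reboot cycle at around two minutes
on that machine, and at 1-2% of the overall shutdown-to-ssh time it puts the
full cycle somewhere in the 500-1000 second range, most of it firmware POST.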
It would be nice if people who have access to really large machines would
test this series and report back on whether the complexity is justified.

Patches are against 4.0-rc7.

 Documentation/kernel-parameters.txt |   8 +
 arch/ia64/mm/numa.c                 |  19 +-
 arch/x86/Kconfig                    |   2 +
 include/linux/memblock.h            |  18 ++
 include/linux/mm.h                  |   8 +-
 include/linux/mmzone.h              |  37 +++-
 init/main.c                         |   1 +
 mm/Kconfig                          |  29 +++
 mm/bootmem.c                        |   6 +-
 mm/internal.h                       |  23 ++-
 mm/memblock.c                       |  34 ++-
 mm/mm_init.c                        |   9 +-
 mm/nobootmem.c                      |   7 +-
 mm/page_alloc.c                     | 398 +++++++++++++++++++++++++++++++-----
 mm/vmscan.c                         |   6 +-
 15 files changed, 507 insertions(+), 98 deletions(-)

--
2.1.2