Message-ID: <20130626133715.GA6424@gmail.com>
Date: Wed, 26 Jun 2013 15:37:15 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Mike Travis <travis@....com>, "H. Peter Anvin" <hpa@...or.com>,
Nathan Zimmer <nzimmer@....com>, holt@....com, rob@...dley.net,
tglx@...utronix.de, mingo@...hat.com, yinghai@...nel.org,
gregkh@...uxfoundation.org, x86@...nel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [RFC] Transparent on-demand memory setup initialization embedded
in the (GFP) buddy allocator

* Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Wed, 26 Jun 2013 11:22:48 +0200 Ingo Molnar <mingo@...nel.org> wrote:
>
> > except that on 32 TB
> > systems we don't spend ~2 hours initializing 8,589,934,592 page heads.
>
> That's about a million a second which is crazy slow - even my
> prehistoric desktop is 100x faster than that.
>
> Where's all this time actually being spent?

See the earlier part of the thread - apparently it's spent initializing
the page heads: remote NUMA node misses from a single boot CPU, going
across a zillion cross-connects? I guess there's some other low-hanging
fruit as well - so making this easier to profile would be nice. The
profile posted was not really usable.
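
For the record: 32 TB at 4 KB per page is 2^33 = 8,589,934,592 struct
pages, and two hours works out to roughly 1.2 million initializations
per second - i.e. close to a microsecond per page, which is far too
slow for a handful of stores and does smell like remote memory latency.
The hot loop is essentially of this shape (a simplified sketch, not the
literal mm/page_alloc.c code - variable setup omitted):

	/*
	 * Boot-time memmap init, simplified: a single CPU walks every
	 * pfn and writes a struct page that usually lives on a remote
	 * NUMA node.
	 */
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_page(pfn);	/* often remote   */

		init_page_count(page);			/* refcount = 1   */
		set_page_links(page, zone, nid, pfn);	/* zone/node/pfn  */
		INIT_LIST_HEAD(&page->lru);
	}
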
Btw., NUMA locality would be another advantage of on-demand
initialization: actual users of RAM tend to allocate node-locally
(especially on large clusters), so whatever initialization overhead
remains would naturally be lower as well.
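
A purely hypothetical sketch of the shape this could take - the page
flag and the helper below are made-up names, they don't exist in the
kernel - would be to leave the struct pages untouched at boot and
initialize a page the first time the buddy allocator hands it out,
e.g. by calling something like this from prep_new_page():

	/*
	 * Hypothetical lazy-init hook - PageUninitialized() and this
	 * helper are made-up names, shown only to illustrate where
	 * on-demand initialization would plug into the allocation path.
	 */
	static inline void lazy_init_page(struct page *page, unsigned long pfn,
					  int nid, enum zone_type zone)
	{
		if (!PageUninitialized(page))		/* made-up flag  */
			return;

		init_page_count(page);			/* refcount = 1  */
		set_page_links(page, zone, nid, pfn);	/* zone/node/pfn */
		INIT_LIST_HEAD(&page->lru);
		ClearPageUninitialized(page);		/* made-up flag  */
	}

Since the allocation path runs on the CPU that actually wants the
memory, those struct page writes become node-local for node-local
allocations - which is where the locality win above comes from.
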
Thanks,
Ingo