Message-ID: <20130629180318.GA83854@asylum.americas.sgi.com>
Date: Sat, 29 Jun 2013 13:03:18 -0500
From: Nathan Zimmer <nzimmer@....com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Nathan Zimmer <nzimmer@....com>,
Daniel J Blueman <daniel@...ascale-asia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Travis <travis@....com>, "H. Peter Anvin" <hpa@...or.com>,
holt@....com, rob@...dley.net,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, yinghai@...nel.org,
Greg KH <gregkh@...uxfoundation.org>, x86@...nel.org,
linux-doc@...r.kernel.org,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Steffen Persvold <sp@...ascale.com>
Subject: Re: [RFC] Transparent on-demand memory setup initialization
	embedded in the (GFP) buddy allocator

On Sat, Jun 29, 2013 at 09:24:41AM +0200, Ingo Molnar wrote:
>
> * Nathan Zimmer <nzimmer@....com> wrote:
>
> > On 06/26/2013 10:35 PM, Daniel J Blueman wrote:
> > >On Wednesday, June 26, 2013 9:30:02 PM UTC+8, Andrew Morton wrote:
> > >>
> > >> On Wed, 26 Jun 2013 11:22:48 +0200 Ingo Molnar
> > >> <mi...@...nel.org> wrote:
> > >>
> > >> > except that on 32 TB systems we don't spend ~2 hours
> > >> > initializing 8,589,934,592 page heads.
> > >>
> > >> That's about a million a second which is crazy slow - even my
> > >> prehistoric desktop is 100x faster than that.
> > >>
> > >> Where's all this time actually being spent?
> > >
> > > The complexity of a directory-lookup architecture to make the
> > > (intrinsically unscalable) cache-coherency protocol scalable gives you
> > > a ~1us roundtrip to remote NUMA nodes.
> > >
> > > Probably a lot of time is spent in some memsets, and RMW cycles which
> > > are setting page bits, which are intrinsically synchronous, so the
> > > initialising core can't get to 12 or so outstanding memory
> > > transactions.
> > >
> > > Since EFI memory ranges have a flag to state if they are zerod (which
> > > may be a fair assumption for memory on non-bootstrap processor NUMA
> > > nodes), we can probably collapse the RMWs to just writes.
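
For illustration, a userspace model of the RMW-versus-plain-store
distinction (struct page and the bit number below are stand-ins, not the
kernel's definitions):

#include <stddef.h>

struct page { unsigned long flags; };	/* stand-in, not the kernel's */
#define PG_RESERVED_BIT 10		/* illustrative bit number */

/* RMW path: every update reads the old flags word before writing, so
 * each store depends on a load and fewer can be kept in flight. */
static void init_flags_rmw(struct page *pages, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		pages[i].flags |= 1UL << PG_RESERVED_BIT;
}

/* Collapsed path: if the range is known to be zeroed, the final flags
 * value is known up front and can simply be stored. */
static void init_flags_store(struct page *pages, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		pages[i].flags = 1UL << PG_RESERVED_BIT;
}
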
> > >
> > > A normal write will require a coherency cycle, then a fetch and a
> > > writeback when it's evicted from the cache. For this purpose,
> > > non-temporal writes would eliminate the cache line fetch and give a
> > > massive increase in bandwidth. We wouldn't even need a store-fence as
> > > the initialising core is the only one online.
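
A minimal userspace sketch of that non-temporal variant (SSE2; assumes
dst is 16-byte aligned):

#include <emmintrin.h>	/* SSE2 streaming-store intrinsics */
#include <stddef.h>

/* Zero a buffer with non-temporal stores, bypassing the cache and
 * avoiding the read-for-ownership a normal write incurs. */
static void zero_nontemporal(void *dst, size_t n)
{
	__m128i zero = _mm_setzero_si128();
	char *p = dst;
	size_t i;

	for (i = 0; i + 16 <= n; i += 16)
		_mm_stream_si128((__m128i *)(p + i), zero);
	_mm_sfence();	/* not strictly needed while only the boot
			 * CPU is online, as noted above */
}
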
> >
> > Could you elaborate a bit more? or suggest a specific area to look at?
> >
> > After some experiments with trying to just set some fields in the struct
> > page directly I haven't been able to produce any improvements. Of
> > course there is lots about the area which I don't have much experience
> > with.
>
> Any such improvement will at most be in the 10-20% range.
>
> I'd suggest first concentrating on the 1000-fold boot time initialization
> speedup that the buddy allocator delayed initialization can offer, and
> speeding up whatever remains after that stage - in a much more
> development-friendly environment. (You'll be able to run 'perf record
> ./calloc-1TB' after bootup and get meaningful results, etc.)
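
Presumably something along these lines (calloc-1TB is hypothetical, not
an existing tool; the size would be scaled to the machine):

/* calloc-1tb.c - allocate a huge zeroed region and fault it in, so
 * 'perf record ./calloc-1TB' profiles the on-demand initialization
 * path after boot. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t size = 1UL << 40;	/* 1 TB */
	char *buf = calloc(1, size);
	size_t off;

	if (!buf) {
		perror("calloc");
		return 1;
	}
	for (off = 0; off < size; off += 4096)
		buf[off] = 1;		/* touch every page */
	printf("touched %zu bytes\n", size);
	return 0;
}
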
>
> Thanks,
>
> Ingo
I had been focusing on the bigger gains, but my attention was diverted by
the hope of an easy, albeit smaller, win.

I have been experimenting with the patch proper; for the moment I am only
handling 2MB pages. The improvement is vast. I'll worry about proper
numbers once I have a fully working patch.

Some progress is being made on the real patch. I think the memory is
being set up correctly: on aligned pages the struct page is set up as
normal, plus a new PG_ flag is set.

Right now I am trying to sort out free_pages_prepare and free_pages_check.
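
Roughly the shape I am experimenting with (the flag name and helpers
below are placeholders, not the real symbols from the patch):

/* Sketch only: PG_uninitialized and the helpers here are placeholder
 * names. */

static void __init setup_deferred_block(struct page *head)
{
	/* Set up the head page of an aligned 2MB block as normal, then
	 * tag it so we know the trailing struct pages are still raw. */
	init_one_page(head);			/* placeholder helper */
	__SetPageUninitialized(head);		/* placeholder PG_ flag */
}

static inline void maybe_init_deferred(struct page *page)
{
	/* Called from free_pages_prepare(): initialize the trailing
	 * struct pages on first use and clear the flag, so
	 * free_pages_check() doesn't trip over raw flag bits. */
	if (PageUninitialized(page)) {
		init_trailing_pages(page);	/* placeholder helper */
		__ClearPageUninitialized(page);
	}
}
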
Thanks,
Nate
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/