Message-ID: <20130712082756.GA4328@gmail.com>
Date: Fri, 12 Jul 2013 10:27:56 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Robin Holt <holt@....com>, Borislav Petkov <bp@...en8.de>,
Robert Richter <rric@...nel.org>
Cc: "H. Peter Anvin" <hpa@...or.com>, Nate Zimmer <nzimmer@....com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>, Rob Landley <rob@...dley.net>,
Mike Travis <travis@....com>,
Daniel J Blueman <daniel@...ascale-asia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg KH <gregkh@...uxfoundation.org>,
Yinghai Lu <yinghai@...nel.org>, Mel Gorman <mgorman@...e.de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [RFC 0/4] Transparent on-demand struct page initialization
embedded in the buddy allocator

* Robin Holt <holt@....com> wrote:

> [...]
>
> With this patch, we did boot a 16TiB machine. Without the patches, the
> v3.10 kernel with the same configuration took 407 seconds for
> free_all_bootmem. With the patches and operating on 2MiB pages instead
> of 1GiB, it took 26 seconds so performance was improved. I have no feel
> for how the 1GiB chunk size will perform.

That's pretty impressive.

It's still a 15x speedup (407s -> 26s) instead of a 512x speedup
(1 GiB / 2 MiB = 512), so I'd say something else is the current
bottleneck, besides page init granularity.

Can you boot with just a few gigs of RAM and stuff the rest into hotplug
memory, and then hot-add that memory? That would allow easy profiling of
the remaining overhead - a rough sketch of the onlining step follows below.
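
For that onlining step, a minimal userspace sketch - it assumes the memory
blocks are already present (added via ACPI or the optional
/sys/devices/system/memory/probe interface) and only need onlining through
the standard sysfs memory-block files; the block-number range is purely
illustrative:

	/*
	 * Sketch: online already hot-added memory blocks via sysfs.
	 * Assumes CONFIG_MEMORY_HOTPLUG; block numbers are illustrative.
	 */
	#include <stdio.h>
	#include <errno.h>

	static int online_block(int blk)
	{
		char path[128];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/memory%d/state", blk);
		f = fopen(path, "w");
		if (!f)
			return -errno;
		/* writing "online" triggers onlining, incl. struct page init */
		fputs("online", f);
		fclose(f);
		return 0;
	}

	int main(void)
	{
		int blk;

		for (blk = 32; blk < 512; blk++)	/* illustrative range */
			if (online_block(blk) < 0)
				fprintf(stderr, "memory%d: %m\n", blk);
		return 0;
	}

That way plain 'perf record' (or the persistent-events machinery below)
could be pointed at just the onlining phase.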

Side note:

Robert Richter and Boris Petkov are working on 'persistent events' support
for perf, which will eventually allow boot time profiling - I'm not sure
whether the patches and the tooling support are ready enough yet for your
purposes.

Robert, Boris, the following workflow would be pretty intuitive:

 - kernel developer sets boot flag: perf=boot,freq=1khz,size=16MB
   (a rough sketch of parsing such a flag follows this list)

 - we'd get a single (cycles?) event running once the perf subsystem is up
   and running, with a sampling frequency of 1 kHz, sending profiling
   trace events to a sufficiently sized profiling buffer of 16 MB per
   CPU.

 - once the system reaches SYSTEM_RUNNING, profiling is stopped either
   automatically - or the user stops it via a new tooling command.

 - the profiling buffer is extracted into a regular perf.data via a
   special 'perf record' call or some other, new perf tooling
   solution/variant.

   [ Alternatively the kernel could attempt to construct a 'virtual'
     perf.data from the persistent buffer, available via /sys/debug or
     elsewhere in /sys - just like the kernel constructs a 'virtual'
     /proc/kcore, etc. That file could be copied or used directly. ]

 - from that point on this workflow joins the regular profiling workflow:
   perf report, perf script et al can be used to analyze the resulting
   boot profile.
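
To make the first step concrete, a hypothetical sketch of how such a boot
flag could be parsed - the perf= option and the boot_profile_* variables
don't exist anywhere; only __setup(), strsep(), simple_strtoul() and
memparse() are real kernel helpers:

	/*
	 * Hypothetical: parse "perf=boot,freq=1khz,size=16MB". None of
	 * the boot_profile_* names exist in the kernel; just a sketch.
	 */
	#include <linux/init.h>
	#include <linux/kernel.h>
	#include <linux/string.h>

	static bool boot_profile_enabled;
	static unsigned int boot_profile_freq = 1000;		/* Hz */
	static unsigned long long boot_profile_size = 16 << 20;	/* per CPU */

	static int __init perf_boot_setup(char *str)
	{
		char *opt;

		while ((opt = strsep(&str, ",")) != NULL) {
			if (!strcmp(opt, "boot")) {
				boot_profile_enabled = true;
			} else if (!strncmp(opt, "freq=", 5)) {
				char *end;
				unsigned long val;

				val = simple_strtoul(opt + 5, &end, 10);
				if (!strcasecmp(end, "khz"))
					val *= 1000;
				boot_profile_freq = val;
			} else if (!strncmp(opt, "size=", 5)) {
				/* memparse() handles K/M/G suffixes */
				boot_profile_size = memparse(opt + 5, NULL);
			}
		}
		return 1;
	}
	__setup("perf=", perf_boot_setup);

The event itself could then be created once the perf subsystem is up, e.g.
via perf_event_create_kernel_counter() with attr.freq = 1 and
attr.sample_freq = boot_profile_freq - again, only one possible direction,
not a description of the actual persistent-events patches.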

Thanks,

	Ingo