Message-ID: <CAE9FiQUnsENuMGAXgxmJq+u-NnXW4fEXR-JKww8LY_8jOfoeBA@mail.gmail.com>
Date: Fri, 19 Jul 2013 16:51:49 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Robin Holt <holt@....com>
Cc: Sam Ben <sam.bennn@...il.com>, "H. Peter Anvin" <hpa@...or.com>,
Ingo Molnar <mingo@...nel.org>, Nate Zimmer <nzimmer@....com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>, Rob Landley <rob@...dley.net>,
Mike Travis <travis@....com>,
Daniel J Blueman <daniel@...ascale-asia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Greg KH <gregkh@...uxfoundation.org>,
Mel Gorman <mgorman@...e.de>
Subject: Re: [RFC 0/4] Transparent on-demand struct page initialization
embedded in the buddy allocator
On Wed, Jul 17, 2013 at 2:30 AM, Robin Holt <holt@....com> wrote:
> On Wed, Jul 17, 2013 at 01:17:44PM +0800, Sam Ben wrote:
>> >With this patch, we did boot a 16TiB machine. Without the patches,
>> >the v3.10 kernel with the same configuration took 407 seconds for
>> >free_all_bootmem. With the patches and operating on 2MiB pages instead
>> >of 1GiB, it took 26 seconds so performance was improved. I have no feel
>> >for how the 1GiB chunk size will perform.
>>
>> How do you test how much time is spent in free_all_bootmem?
>
> We put a pr_emerg at the beginning and end of free_all_bootmem and
> then used a modified version of a script which records the time in uSecs
> at the beginning of each line of output.
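
For reference, a small standalone stand-in for that kind of timestamping
script (the actual script is not included in this thread, so this is only a
sketch of the idea): read the console output on stdin and prefix each line
with the elapsed time in microseconds, so the delta between the "start" and
"end" pr_emerg lines gives the duration.

#include <stdio.h>
#include <sys/time.h>

/*
 * Stand-in sketch: prefix each input line with the time in microseconds
 * since the program started.  It has to be fed live from the console
 * (e.g. from a serial console capture), since the timestamps reflect
 * when each line is read, not when it was originally printed.
 */
int main(void)
{
        struct timeval start, now;
        char line[4096];
        long usec;

        gettimeofday(&start, NULL);
        while (fgets(line, sizeof(line), stdin)) {
                gettimeofday(&now, NULL);
                usec = (now.tv_sec - start.tv_sec) * 1000000L +
                       (now.tv_usec - start.tv_usec);
                printf("%10ld %s", usec, line);
        }
        return 0;
}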
I used the two attached patches and found that a 3TiB system takes about
100s before slub is ready, in roughly three portions:
1. sparse vmemmap buffer allocation: it goes through the bootmem wrapper,
so clearing those struct page areas takes about 30s.
2. memmap_init_zone: about 25s.
3. mem_init/free_all_bootmem: about 30s.
So I still wonder why 16TiB would need hours.
Also, your patches look like they only address 2 and 3.
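
For reference, a rough sketch of where the same pr_emerg() bracketing could
go to split out portions 1 and 2; the call sites named here (paging_init()
and zone_sizes_init() on v3.10 x86-64) are assumptions, and portion 3 is
already covered by the free_all_bootmem bracketing quoted above.

/*
 * Sketch only, with assumed call sites: bracket the phases with console
 * markers.  sparse_init() allocates and clears the vmemmap struct page
 * area through the bootmem wrapper (portion 1), and zone_sizes_init() ->
 * free_area_init_nodes() ends up in memmap_init_zone() (portion 2).
 */
void __init paging_init(void)
{
        /* ... unrelated setup omitted ... */

        pr_emerg("sparse_init: start\n");
        sparse_init();
        pr_emerg("sparse_init: end\n");

        pr_emerg("zone_sizes_init: start\n");
        zone_sizes_init();
        pr_emerg("zone_sizes_init: end\n");
}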
Yinghai
Download attachment "printk_time_tsc_0.patch" of type "application/octet-stream" (2624 bytes)
Download attachment "printk_time_tsc_1.patch" of type "application/octet-stream" (1201 bytes)