Message-ID: <560FD031.3030909@synopsys.com>
Date: Sat, 3 Oct 2015 18:25:13 +0530
From: Vineet Gupta <Vineet.Gupta1@...opsys.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Robin Holt <robin.m.holt@...il.com>,
Nathan Zimmer <nzimmer@....com>
CC: Jiang Liu <liuj97@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: New helper to free highmem pages in larger chunks
Hi,
I noticed increased boot time when enabling highmem for ARC. It turns out that
freeing highmem pages into the buddy allocator is done one page at a time, whereas
it is batched for low mem pages. The call flows are shown below.
I'm thinking of writing a free_highmem_pages() helper which takes a start and end
pfn, and I want to solicit some ideas on whether to write it from scratch or,
preferably, call the existing __free_pages_memory() to reuse its logic for
converting a pfn range into {pfn, order} tuples (a rough sketch is at the end of
this mail).
For the latter, however, there are semantic differences, visible below, which I'm
not sure about:
- highmem page _count is set to 1, while it is 0 for low mem
- the page Reserved flag is cleared atomically for highmem vs. non-atomically for low mem
mem_init
    for (tmp = min_high_pfn; tmp < max_pfn; tmp++)
        free_highmem_page(pfn_to_page(tmp));
            __free_reserved_page
                ClearPageReserved(page);   <-- atomic
                init_page_count(page);     <-- _count = 1
                __free_page(page);         <-- free SINGLE page

free_all_bootmem
    free_low_memory_core_early
        __free_memory_core(start, end)
            __free_pages_memory(s_pfn, e_pfn)          <-- creates "order" sized batches
                __free_pages_bootmem(pfn, order)
                    __free_pages_boot_core(start_page, start_pfn, order)
                        loops from 0 to (1 << order)
                            __ClearPageReserved(p);    <-- non-atomic
                            set_page_count(p, 0);      <-- _count = 0
                        __free_pages(page, order);     <-- free BATCH
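
To make the discussion concrete, here is a rough, untested sketch of the
"from scratch" variant; the name free_highmem_pages() and the counter updates
are just placeholders for discussion. The chunking loop is modelled on
__free_pages_memory(), and the per-page setup follows __free_pages_boot_core(),
except that the Reserved flag is still cleared atomically as free_highmem_page()
does today:

#include <linux/init.h>
#include <linux/mm.h>
#include <linux/highmem.h>

/*
 * Sketch only, not tested: free highmem pfns in [start_pfn, end_pfn)
 * into the buddy allocator in {pfn, order} chunks instead of one page
 * at a time.
 */
void __init free_highmem_pages(unsigned long start_pfn, unsigned long end_pfn)
{
	while (start_pfn < end_pfn) {
		struct page *page = pfn_to_page(start_pfn);
		unsigned long i, nr;
		int order;

		/* largest order-aligned chunk that still fits in the range */
		order = min_t(int, MAX_ORDER - 1, __ffs(start_pfn));
		while (start_pfn + (1UL << order) > end_pfn)
			order--;
		nr = 1UL << order;

		for (i = 0; i < nr; i++) {
			struct page *p = page + i;

			ClearPageReserved(p);	/* atomic, as highmem does today */
			set_page_count(p, 0);	/* like the low mem batch path */
		}

		init_page_count(page);		/* head page: _count = 1 */
		__free_pages(page, order);	/* free the whole chunk */

		page_zone(page)->managed_pages += nr;
		totalram_pages += nr;
		totalhigh_pages += nr;

		start_pfn += nr;
	}
}

The ARC mem_init() loop above would then collapse to a single
free_highmem_pages(min_high_pfn, max_pfn) call. The two open questions still
apply: whether the atomic ClearPageReserved() is really needed this early in
boot, and whether setting _count to 0 on the non-head pages (as the low mem
path does) is safe for highmem as well.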