Message-ID: <1563861073-47071-3-git-send-email-guohanjun@huawei.com>
Date: Tue, 23 Jul 2019 13:51:13 +0800
From: Hanjun Guo <guohanjun@huawei.com>
To: Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Jia He <hejianet@gmail.com>, Mike Rapoport <rppt@linux.ibm.com>,
	Will Deacon <will@kernel.org>
CC: <linux-arm-kernel@lists.infradead.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>, Hanjun Guo <guohanjun@huawei.com>
Subject: [PATCH v12 2/2] mm: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn
From: Jia He <hejianet@gmail.com>

After skipping some invalid pfns in memmap_init_zone(), there is still
room for improvement.

For example, if pfn and pfn+1 fall in the same memblock region, we can
simply increment pfn instead of redoing the binary search in
memblock_next_valid_pfn(). Furthermore, if pfn falls in the gap between
two memory regions, we can skip straight to the start of the next
region instead of searching for it.
Signed-off-by: Jia He <hejianet@gmail.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
---
 mm/memblock.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index d57ba51bb9cd..95d5916716a0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1256,28 +1256,53 @@ int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size,
 unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
 {
 	struct memblock_type *type = &memblock.memory;
+	struct memblock_region *regions = type->regions;
 	unsigned int right = type->cnt;
 	unsigned int mid, left = 0;
+	unsigned long start_pfn, end_pfn, next_start_pfn;
 	phys_addr_t addr = PFN_PHYS(++pfn);
+	static int early_region_idx __initdata_memblock = -1;
 
+	/* fast path, return pfn+1 if next pfn is in the same region */
+	if (early_region_idx != -1) {
+		start_pfn = PFN_DOWN(regions[early_region_idx].base);
+		end_pfn = PFN_DOWN(regions[early_region_idx].base +
+				regions[early_region_idx].size);
+
+		if (pfn >= start_pfn && pfn < end_pfn)
+			return pfn;
+
+		/* try slow path */
+		if (++early_region_idx == type->cnt)
+			goto slow_path;
+
+		next_start_pfn = PFN_DOWN(regions[early_region_idx].base);
+
+		if (pfn >= end_pfn && pfn <= next_start_pfn)
+			return next_start_pfn;
+	}
+
+slow_path:
+	/* slow path, do the binary searching */
 	do {
 		mid = (right + left) / 2;
 
-		if (addr < type->regions[mid].base)
+		if (addr < regions[mid].base)
 			right = mid;
-		else if (addr >= (type->regions[mid].base +
-				  type->regions[mid].size))
+		else if (addr >= (regions[mid].base + regions[mid].size))
 			left = mid + 1;
 		else {
-			/* addr is within the region, so pfn is valid */
+			early_region_idx = mid;
 			return pfn;
 		}
 	} while (left < right);
 
 	if (right == type->cnt)
 		return -1UL;
-	else
-		return PHYS_PFN(type->regions[right].base);
+
+	early_region_idx = right;
+
+	return PHYS_PFN(regions[early_region_idx].base);
 }
 EXPORT_SYMBOL(memblock_next_valid_pfn);
 #endif /* CONFIG_HAVE_MEMBLOCK_PFN_VALID */
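
For reference, the caller in memmap_init_zone() (reworked by patch 1/2
of this series) consumes the return value roughly like this (a
simplified sketch of that caller, not part of this diff):

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		if (!early_pfn_valid(pfn)) {
			pfn = memblock_next_valid_pfn(pfn) - 1;
			continue;
		}
		/* ... initialize the struct page for pfn ... */
	}

With the cached region index, consecutive pfns inside one region take
the fast path, so the loop only pays for a binary search when it
crosses into a new region or a hole.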
--
2.19.1