Message-ID: <56E2FB5C.1040602@suse.cz>
Date: Fri, 11 Mar 2016 18:07:40 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Joonsoo Kim <js1304@...il.com>,
"Leizhen (ThunderTown)" <thunder.leizhen@...wei.com>
Cc: Laura Abbott <labbott@...hat.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Hanjun Guo <guohanjun@...wei.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sasha Levin <sasha.levin@...cle.com>,
Laura Abbott <lauraa@...eaurora.org>,
qiuxishi <qiuxishi@...wei.com>,
Catalin Marinas <Catalin.Marinas@....com>,
Will Deacon <will.deacon@....com>,
Arnd Bergmann <arnd@...db.de>,
dingtinahong <dingtianhong@...wei.com>, chenjie6@...wei.com,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: Suspicious error for CMA stress test
On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> 2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) <thunder.leizhen@...wei.com>:
>>
>> Hi, Joonsoo:
>> This new patch worked well. Do you plan to upstream it in the near future?
>
> Of course!
> But, I should think about it more because it touches the allocator's fastpath and
> I'd like to avoid that.
> If I fail to find a better solution, I will send it as is, soon.
How about something like this? Just an idea, probably buggy (off-by-one etc.).
It should keep the cost away from the <pageblock_order iterations at the expense
of the relatively fewer >pageblock_order iterations.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff1e3cbc8956..b8005a07b2a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
- unsigned int max_order = MAX_ORDER;
+ unsigned int max_order = pageblock_order + 1;
VM_BUG_ON(!zone_is_initialized(zone));
VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
VM_BUG_ON(migratetype == -1);
- if (is_migrate_isolate(migratetype)) {
- /*
- * We restrict max order of merging to prevent merge
- * between freepages on isolate pageblock and normal
- * pageblock. Without this, pageblock isolation
- * could cause incorrect freepage accounting.
- */
- max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
- } else {
+ if (likely(!is_migrate_isolate(migratetype))) {
__mod_zone_freepage_state(zone, 1 << order, migratetype);
}
@@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);
+continue_merging:
while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
if (!page_is_buddy(page, buddy, order))
- break;
+ goto done_merging;
/*
* Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
* merge with it and move up one order.
@@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
page_idx = combined_idx;
order++;
}
+ if (max_order < MAX_ORDER) {
+ if (IS_ENABLED(CONFIG_CMA) &&
+ unlikely(has_isolate_pageblock(zone))) {
+
+ int buddy_mt;
+
+ buddy_idx = __find_buddy_index(page_idx, order);
+ buddy = page + (buddy_idx - page_idx);
+ buddy_mt = get_pageblock_migratetype(buddy);
+
+ if (migratetype != buddy_mt &&
+ (is_migrate_isolate(migratetype) ||
+ is_migrate_isolate(buddy_mt)))
+ goto done_merging;
+ }
+ max_order++;
+ goto continue_merging;
+ }
+
+done_merging:
set_page_order(page, order);
/*
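(The patch is truncated above; to make the intended control flow easier to see,
here is a minimal userspace sketch of the same idea. It is not the kernel code:
the free-page tracking, helper names and the MAX_ORDER/pageblock_order values
are simplified, hypothetical stand-ins chosen for illustration. The point it
shows is that merging proceeds freely below pageblock_order and only crosses a
pageblock boundary when neither side is an isolated pageblock.)

/*
 * Simplified model of the continue_merging/done_merging flow proposed
 * above.  Merging is first capped at pageblock_order; only after that
 * cap is reached do we check whether the next merge would join an
 * isolated pageblock with a normal one, and stop if it would.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER       11          /* typical kernel default */
#define PAGEBLOCK_ORDER 9           /* typical with 4K pages / 2M pageblocks */
#define NR_PAGES        (1ul << MAX_ORDER)

enum migratetype { MT_MOVABLE, MT_ISOLATE };

/* one migratetype per pageblock (stand-in for the pageblock flags) */
static enum migratetype block_mt[NR_PAGES >> PAGEBLOCK_ORDER];
/* which pages are currently free (stand-in for the buddy freelists) */
static bool page_free[NR_PAGES];

static unsigned long find_buddy_index(unsigned long page_idx, unsigned int order)
{
        return page_idx ^ (1ul << order);
}

static bool page_is_buddy(unsigned long buddy_idx, unsigned int order)
{
        /* crude stand-in: the whole buddy range must be free */
        for (unsigned long i = 0; i < (1ul << order); i++)
                if (!page_free[buddy_idx + i])
                        return false;
        return true;
}

/* return the order the freed page ends up with after merging */
static unsigned int merge_free_page(unsigned long page_idx, unsigned int order)
{
        unsigned int max_order = PAGEBLOCK_ORDER + 1;

        for (unsigned long i = 0; i < (1ul << order); i++)
                page_free[page_idx + i] = true;

continue_merging:
        while (order < max_order - 1) {
                unsigned long buddy_idx = find_buddy_index(page_idx, order);

                if (!page_is_buddy(buddy_idx, order))
                        goto done_merging;
                page_idx &= buddy_idx;          /* combined index */
                order++;
        }

        if (max_order < MAX_ORDER) {
                unsigned long buddy_idx = find_buddy_index(page_idx, order);
                enum migratetype mt = block_mt[page_idx >> PAGEBLOCK_ORDER];
                enum migratetype buddy_mt = block_mt[buddy_idx >> PAGEBLOCK_ORDER];

                /* don't merge an isolated pageblock with a normal one */
                if (mt != buddy_mt &&
                    (mt == MT_ISOLATE || buddy_mt == MT_ISOLATE))
                        goto done_merging;
                max_order++;
                goto continue_merging;
        }

done_merging:
        return order;
}

int main(void)
{
        /* pageblock 0 is isolated, pageblock 1 is movable and entirely free */
        block_mt[0] = MT_ISOLATE;
        block_mt[1] = MT_MOVABLE;
        for (unsigned long i = 1ul << PAGEBLOCK_ORDER;
             i < (2ul << PAGEBLOCK_ORDER); i++)
                page_free[i] = true;

        /* freeing an order-9 chunk of the isolated block must stop at order 9 */
        printf("merged up to order %u\n", merge_free_page(0, PAGEBLOCK_ORDER));
        return 0;
}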