Message-Id: <1423726340-4084-8-git-send-email-iamjoonsoo.kim@lge.com>
Date: Thu, 12 Feb 2015 16:32:11 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...e.de>,
Laura Abbott <lauraa@...eaurora.org>,
Minchan Kim <minchan@...nel.org>,
Heesub Shin <heesub.shin@...sung.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Michal Nazarewicz <mina86@...a86.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Hui Zhu <zhuhui@...omi.com>, Gioh Kim <gioh.kim@....com>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
Ritesh Harjani <ritesh.list@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [RFC 07/16] mm/page_isolation: watch out zone range overlap
In the following patches, a new zone, ZONE_CMA, will be introduced, and
it can overlap with other zones. Currently, many places that iterate
over a pfn range do not consider the possibility of zone overlap, which
can cause problems such as printing wrong statistics. To prevent this,
this patch adds code to handle zone overlap before ZONE_CMA is added.
The pfn range argument provided to test_pages_isolated() should lie
within a single zone. If it does not, the zone lock cannot protect the
free state of the buddy freepage (see the sketch after the diffstat below).
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
mm/page_isolation.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
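[Editor's note: below is a minimal user-space sketch of the guard this
patch adds: while walking a pfn range, stop as soon as a page belongs to
a different zone than the one whose lock is held. The struct page/struct
zone definitions, the static memmap, and the pfn_to_page()/page_zone()
helpers are simplified stand-ins for illustration only, not the kernel's
implementations.]

/*
 * Simplified, user-space illustration of the zone-overlap guard
 * added by this patch. All structs and helpers here are mock-ups,
 * not the kernel's definitions.
 */
#include <stdio.h>
#include <stdbool.h>

struct zone { int id; };
struct page { struct zone *zone; bool isolated; };

/* Mock pfn -> page lookup over a small static "memmap". */
static struct zone zone_a = { .id = 0 }, zone_b = { .id = 1 };
static struct page memmap[8] = {
	{ &zone_a, true }, { &zone_a, true }, { &zone_a, true },
	{ &zone_a, true }, { &zone_b, true }, { &zone_b, true },
	{ &zone_b, true }, { &zone_b, true },
};

static struct page *pfn_to_page(unsigned long pfn) { return &memmap[pfn]; }
static struct zone *page_zone(struct page *page) { return page->zone; }

/*
 * Return true only if every page in [pfn, end_pfn) is isolated and
 * belongs to the zone whose lock the caller is assumed to hold; bail
 * out as soon as the walk crosses into another zone, mirroring the
 * "page_zone(page) != zone" check in the patch.
 */
static bool check_range_in_zone(struct zone *zone, unsigned long pfn,
				unsigned long end_pfn)
{
	for (; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_page(pfn);

		if (page_zone(page) != zone)
			break;	/* overlapping zone: stop early */
		if (!page->isolated)
			break;
	}
	return pfn == end_pfn;
}

int main(void)
{
	printf("[0,4) in zone_a: %d\n", check_range_in_zone(&zone_a, 0, 4));
	/* Range [2,6) crosses from zone_a into zone_b, so it fails. */
	printf("[2,6) in zone_a: %d\n", check_range_in_zone(&zone_a, 2, 6));
	return 0;
}

[The second call walks a range that crosses from zone_a into zone_b, so
it bails out early, just as __test_page_isolated_in_pageblock() now does
when it meets a page from another zone.]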
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c8778f7..883e78d 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -210,8 +210,8 @@ int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
* Returns 1 if all pages in the range are isolated.
*/
static int
-__test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
- bool skip_hwpoisoned_pages)
+__test_page_isolated_in_pageblock(struct zone *zone, unsigned long pfn,
+ unsigned long end_pfn, bool skip_hwpoisoned_pages)
{
struct page *page;
@@ -221,6 +221,9 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
continue;
}
page = pfn_to_page(pfn);
+ if (page_zone(page) != zone)
+ break;
+
if (PageBuddy(page)) {
/*
* If race between isolatation and allocation happens,
@@ -281,7 +284,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
/* Check all pages are free or marked as ISOLATED */
zone = page_zone(page);
spin_lock_irqsave(&zone->lock, flags);
- ret = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
+ ret = __test_page_isolated_in_pageblock(zone, start_pfn, end_pfn,
skip_hwpoisoned_pages);
spin_unlock_irqrestore(&zone->lock, flags);
return ret ? 0 : -EBUSY;
--
1.7.9.5