Message-id: <1362466679-17111-5-git-send-email-m.szyprowski@samsung.com>
Date: Tue, 05 Mar 2013 07:57:58 +0100
From: Marek Szyprowski <m.szyprowski@...sung.com>
To: linux-mm@...ck.org, linaro-mm-sig@...ts.linaro.org,
linux-kernel@...r.kernel.org
Cc: Marek Szyprowski <m.szyprowski@...sung.com>,
Kyungmin Park <kyungmin.park@...sung.com>,
Arnd Bergmann <arnd@...db.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
Michal Nazarewicz <mina86@...a86.com>,
Minchan Kim <minchan@...nel.org>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>
Subject: [RFC/PATCH 4/5] mm: get_user_pages: migrate out CMA pages when
FOLL_DURABLE flag is set
When __get_user_pages() is called with the FOLL_DURABLE flag, ensure that no
page residing in a CMA pageblock gets pinned. This works around the permanent
migration failures caused by get_user_pages() holding a reference on such
pages for a long period of time.
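
For reference, a long-term pinning user (for example a driver setting up a
long-lived DMA mapping on user memory) would request durable pages roughly
as follows. This is only an illustrative sketch: user_addr and NR_PINNED are
placeholders, and the FOLL_DURABLE flag itself is introduced in an earlier
patch of this series.

	struct page *pages[NR_PINNED];
	long ret;

	down_read(&current->mm->mmap_sem);
	ret = __get_user_pages(current, current->mm, user_addr, NR_PINNED,
			       FOLL_GET | FOLL_WRITE | FOLL_DURABLE,
			       pages, NULL, NULL);
	up_read(&current->mm->mmap_sem);
	if (ret <= 0)
		return ret ? ret : -EFAULT;
	/* ... program the DMA engine, then put_page() each page when done ... */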
Signed-off-by: Marek Szyprowski <m.szyprowski@...sung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@...sung.com>
---
mm/internal.h | 12 ++++++++++++
mm/memory.c | 43 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 55 insertions(+)
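
A note for reviewers on the failure path in migrate_replace_cma_page(): when
migrate_replace_page() fails, the new page carries two references, so both
are dropped. A sketch of the accounting (not part of the patch):

	newpage = alloc_page(GFP_HIGHUSER);	/* refcount == 1 */
	get_page_foll(newpage);			/* refcount == 2 */
	...
	put_page(newpage);	/* 2 -> 1, drop the extra pin */
	__free_page(newpage);	/* 1 -> 0, free the unused page */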
diff --git a/mm/internal.h b/mm/internal.h
index 8562de0..a290d04 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -105,6 +105,18 @@ extern void prep_compound_page(struct page *page, unsigned long order);
extern bool is_free_buddy_page(struct page *page);
#endif
+#ifdef CONFIG_CMA
+static inline bool is_cma_page(struct page *page)
+{
+	unsigned mt = get_pageblock_migratetype(page);
+	if (mt == MIGRATE_ISOLATE || mt == MIGRATE_CMA)
+		return true;
+	return false;
+}
+#else
+#define is_cma_page(page) false
+#endif
+
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
/*
diff --git a/mm/memory.c b/mm/memory.c
index 2b9c2dd..f81b273 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1650,6 +1650,45 @@ static inline int stack_guard_page(struct vm_area_struct *vma, unsigned long add
}
/**
+ * migrate_replace_cma_page() - migrate a page out of a CMA pageblock
+ * @page: source page to be migrated
+ *
+ * Returns either the old page (if migration was not possible) or a pointer
+ * to the newly allocated page (with an additional reference taken).
+ *
+ * get_user_pages() may hold a reference to a page for a long period of time,
+ * which prevents that page from being migrated. This defeats the preferred
+ * usage pattern of CMA pageblocks. This function replaces the given user
+ * page with a new one allocated from a non-movable pageblock, so that
+ * pinning a CMA page can be avoided.
+ */
+static inline struct page *migrate_replace_cma_page(struct page *page)
+{
+	struct page *newpage = alloc_page(GFP_HIGHUSER);
+
+	if (!newpage)
+		goto out;
+
+	/*
+	 * Take an additional reference to the new page to ensure it won't
+	 * get freed after the migration procedure ends.
+	 */
+	get_page_foll(newpage);
+
+	if (migrate_replace_page(page, newpage) == 0)
+		return newpage;
+
+	put_page(newpage);
+	__free_page(newpage);
+out:
+	/*
+	 * A migration failure here is not fatal to CMA itself, so it is
+	 * better not to fail the get_user_pages() call because of it.
+	 */
+	return page;
+}
+
+/**
* __get_user_pages() - pin user pages in memory
* @tsk: task_struct of target task
* @mm: mm_struct of target mm
@@ -1884,6 +1923,10 @@ long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
}
if (IS_ERR(page))
return i ? i : PTR_ERR(page);
+
+	if ((gup_flags & FOLL_DURABLE) && is_cma_page(page))
+		page = migrate_replace_cma_page(page);
+
if (pages) {
pages[i] = page;
--
1.7.9.5