Date: Wed, 12 Dec 2018 09:13:57 -0500
From: zhangjun <openzhangj@...il.com>
To: Richard Weinberger <richard@....at>,
Artem Bityutskiy <dedekind1@...il.com>,
Adrian Hunter <adrian.hunter@...el.com>
Cc: linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org,
zhangjun <openzhangj@...il.com>
Subject: ubifs: fix page_count in ->ubifs_migrate_page()
UBIFS gives PagePrivate() a meaning different from the common one, so
alloc_cma() fails whenever a dirty page-cache page sits in a
MIGRATE_CMA region.

Unless 'extra_count' is adjusted for such dirty pages,
ubifs_migrate_page() -> migrate_page_move_mapping() always returns
-EAGAIN because of:

	expected_count += page_has_private(page)

so migration keeps failing until the page cache is written back.

Normally, PagePrivate() indicates that buffer heads are attached to
the page, and page_count() is incremented accordingly. UBIFS, however,
sets the private flag merely to mark the page dirty, without raising
page_count(). The expected_count for a UBIFS page therefore differs
from the generic case.
Signed-off-by: zhangjun <openzhangj@...il.com>
---
fs/ubifs/file.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 1b78f2e..2136a5c 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1480,8 +1480,15 @@ static int ubifs_migrate_page(struct address_space *mapping,
 		struct page *newpage, struct page *page, enum migrate_mode mode)
 {
 	int rc;
+	int extra_count;
-	rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0);
+	/*
+	 * UBIFS sets PagePrivate() without taking a page reference,
+	 * unlike most filesystems, so compensate 'extra_count' here.
+	 */
+	extra_count = 0 - page_has_private(page);
+	rc = migrate_page_move_mapping(mapping, newpage,
+				       page, NULL, mode, extra_count);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
--
2.7.4