Message-ID: <20200304195002.3854765-1-guro@fb.com>
Date: Wed, 4 Mar 2020 11:50:02 -0800
From: Roman Gushchin <guro@...com>
To: Josef Bacik <josef@...icpanda.com>, Chris Mason <clm@...com>,
David Sterba <dsterba@...e.com>
CC: <linux-btrfs@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<kernel-team@...com>, Rik van Riel <riel@...riel.com>,
Roman Gushchin <guro@...com>
Subject: [PATCH] btrfs: implement migratepage callback

Currently btrfs doesn't provide a migratepage callback, so
fallback_migrate_page() is used to migrate btrfs pages.
fallback_migrate_page() cannot move dirty pages; instead it tries to
flush them (in sync mode) or simply fails (in async mode).
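
For reference, the fallback logic in mm/migrate.c looks roughly like
this (paraphrased here for illustration; see the actual source for
the full version):

	static int fallback_migrate_page(struct address_space *mapping,
			struct page *newpage, struct page *page,
			enum migrate_mode mode)
	{
		if (PageDirty(page)) {
			/* Only writeback pages in full synchronous migration */
			switch (mode) {
			case MIGRATE_SYNC:
			case MIGRATE_SYNC_NO_COPY:
				break;
			default:
				return -EBUSY;
			}
			return writeout(mapping, page);
		}

		/* Drop fs-private page state, or give up */
		if (page_has_private(page) &&
		    !try_to_release_page(page, GFP_KERNEL))
			return mode == MIGRATE_SYNC ? -EAGAIN : -EBUSY;

		return migrate_page(mapping, newpage, page, mode);
	}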
In sync mode, pages that are scheduled to be processed by
btrfs_writepage_fixup_worker() can't be effectively flushed by the
migration code, because there is no established way to wait for the
completion of the delayed work.

All this leads to page migration failures.
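
To illustrate the failure mode (a simplified sketch, not code from
this patch): writeout() in mm/migrate.c essentially does

	/* writeout() in mm/migrate.c, heavily abridged */
	if (!clear_page_dirty_for_io(page))
		return -EAGAIN;
	rc = mapping->a_ops->writepage(page, &wbc);

but when btrfs's writepage path finds a page that needs the cow fixup,
it only queues btrfs_writepage_fixup_worker() and the page ends up
dirty again. The one-shot writeback above therefore makes no progress,
and migration has nothing it can wait on.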
To fix this, the patch implements a btrfs-specific migratepage
callback, which is similar to iomap_migrate_page() used by some other
filesystems, except that it also takes care of the PagePrivate2 flag,
which is used for data ordering purposes.

Signed-off-by: Roman Gushchin <guro@...com>
Reviewed-by: Chris Mason <clm@...com>
---
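One possible way to exercise the new path on a NUMA machine (an
illustrative test sketch, not part of the patch; the mount point and
target node are assumptions):

	/* build with: gcc -o migrate-test migrate-test.c -lnuma */
	#include <fcntl.h>
	#include <numaif.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/mnt/btrfs/testfile", O_RDWR | O_CREAT, 0600);
		void *pages[1];
		int node = 1, status = -1;
		char *p;

		if (fd < 0 || ftruncate(fd, 4096))
			return 1;
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, 0);
		if (p == MAP_FAILED)
			return 1;
		memset(p, 'x', 4096);	/* dirty the page */

		pages[0] = p;
		/* ask the kernel to migrate the dirty page to node 1 */
		if (move_pages(0, 1, pages, &node, &status, MPOL_MF_MOVE))
			perror("move_pages");
		printf("status: %d\n", status); /* target node on success */
		return 0;
	}

Without this patch, moving the dirty page forces writeback through
fallback_migrate_page(); with it, the page can be migrated without
being written out first.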
 fs/btrfs/inode.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 7735ce6127c3..f23230b3cbda 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -28,6 +28,7 @@
 #include <linux/magic.h>
 #include <linux/iversion.h>
 #include <linux/swap.h>
+#include <linux/migrate.h>
 #include <linux/sched/mm.h>
 #include <asm/unaligned.h>
 #include "misc.h"
@@ -8323,6 +8324,39 @@ static int btrfs_releasepage(struct page *page, gfp_t gfp_flags)
 	return __btrfs_releasepage(page, gfp_flags);
 }
 
+#ifdef CONFIG_MIGRATION
+static int btrfs_migratepage(struct address_space *mapping,
+			     struct page *newpage, struct page *page,
+			     enum migrate_mode mode)
+{
+	int ret;
+
+	ret = migrate_page_move_mapping(mapping, newpage, page, 0);
+	if (ret != MIGRATEPAGE_SUCCESS)
+		return ret;
+
+	if (page_has_private(page)) {
+		ClearPagePrivate(page);
+		get_page(newpage);
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+		put_page(page);
+		SetPagePrivate(newpage);
+	}
+
+	if (PagePrivate2(page)) {
+		ClearPagePrivate2(page);
+		SetPagePrivate2(newpage);
+	}
+
+	if (mode != MIGRATE_SYNC_NO_COPY)
+		migrate_page_copy(newpage, page);
+	else
+		migrate_page_states(newpage, page);
+	return MIGRATEPAGE_SUCCESS;
+}
+#endif
+
 static void btrfs_invalidatepage(struct page *page, unsigned int offset,
 				 unsigned int length)
 {
@@ -10525,6 +10559,9 @@ static const struct address_space_operations btrfs_aops = {
 	.direct_IO	= btrfs_direct_IO,
 	.invalidatepage = btrfs_invalidatepage,
 	.releasepage	= btrfs_releasepage,
+#ifdef CONFIG_MIGRATION
+	.migratepage	= btrfs_migratepage,
+#endif
 	.set_page_dirty = btrfs_set_page_dirty,
 	.error_remove_page = generic_error_remove_page,
 	.swap_activate	= btrfs_swap_activate,
--
2.24.1