Message-Id: <20170104200446.939635852@linuxfoundation.org>
Date: Wed, 4 Jan 2017 21:06:01 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Filipe Manana <fdmanana@...e.com>,
David Sterba <dsterba@...e.com>
Subject: [PATCH 4.9 09/83] Btrfs: fix qgroup rescan worker initialization
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Filipe Manana <fdmanana@...e.com>
commit 8d9eddad19467b008e0c881bc3133d7da94b7ec1 upstream.
We were setting the qgroup_rescan_running flag to true only after the
rescan worker started (the worker is a task run from a work queue). So
if a user space task starts a rescan and immediately afterwards asks to
wait for the rescan worker to finish, that second call might happen
before the rescan worker task starts running, in which case the rescan
wait ioctl returns immediately, without waiting for the rescan worker
to finish.
This was making the fstest btrfs/022 fail very often.
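To make the window concrete, here is a minimal user-space sketch of the
buggy pattern (plain pthreads, not the btrfs code; "running" stands in
for qgroup_rescan_running and the thread stands in for the queued
worker):

/*
 * Sketch of the race: if the worker itself sets the "running" flag,
 * a waiter that runs before the worker is scheduled sees the flag
 * still false and returns without waiting.
 *
 * Build with: cc -pthread race.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool running;		/* analogue of qgroup_rescan_running */

static void *worker(void *arg)
{
	/* Buggy pattern: flag only becomes true once the worker runs. */
	pthread_mutex_lock(&lock);
	running = true;
	pthread_mutex_unlock(&lock);

	/* ... do the rescan work ... */

	pthread_mutex_lock(&lock);
	running = false;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	/*
	 * The fix mirrors the patch below: set running = true here,
	 * under the lock, *before* starting the worker, so any waiter
	 * already sees the flag.
	 */
	pthread_create(&t, NULL, worker, NULL);

	/* Waiter may run before worker() does: it sees running == false
	 * and "waits" for nothing, the premature ioctl return. */
	pthread_mutex_lock(&lock);
	if (!running)
		printf("wait returned immediately - race hit\n");
	pthread_mutex_unlock(&lock);

	pthread_join(&t, NULL);
	return 0;
}

The patch below applies the same idea to the kernel code: the flag is
set in qgroup_rescan_init(), while qgroup_rescan_lock and qgroup_lock
are held, instead of in the worker itself.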
Fixes: d2c609b834d6 ("btrfs: properly track when rescan worker is running")
Signed-off-by: Filipe Manana <fdmanana@...e.com>
Reviewed-by: David Sterba <dsterba@...e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/btrfs/qgroup.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2335,10 +2335,6 @@ static void btrfs_qgroup_rescan_worker(s
int err = -ENOMEM;
int ret = 0;
- mutex_lock(&fs_info->qgroup_rescan_lock);
- fs_info->qgroup_rescan_running = true;
- mutex_unlock(&fs_info->qgroup_rescan_lock);
-
path = btrfs_alloc_path();
if (!path)
goto out;
@@ -2449,6 +2445,7 @@ qgroup_rescan_init(struct btrfs_fs_info
sizeof(fs_info->qgroup_rescan_progress));
fs_info->qgroup_rescan_progress.objectid = progress_objectid;
init_completion(&fs_info->qgroup_rescan_completion);
+ fs_info->qgroup_rescan_running = true;
spin_unlock(&fs_info->qgroup_lock);
mutex_unlock(&fs_info->qgroup_rescan_lock);