Message-Id: <20170118104651.582905633@linuxfoundation.org>
Date: Wed, 18 Jan 2017 11:46:43 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jeff Mahoney <jeffm@...e.com>,
Liu Bo <bo.li.liu@...cle.com>, David Sterba <dsterba@...e.com>
Subject: [PATCH 4.9 085/120] btrfs: fix locking when we put back a delayed ref that's too new

4.9-stable review patch. If anyone has any objections, please let me know.

------------------

From: Jeff Mahoney <jeffm@...e.com>

commit d0280996437081dd12ed1e982ac8aeaa62835ec4 upstream.

In __btrfs_run_delayed_refs, when we put back a delayed ref that's too
new, we have already dropped the lock on locked_ref when we set
->processing = 0.

This patch keeps the lock to cover that assignment.
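
To make the ordering explicit, after this change the relevant lines in
__btrfs_run_delayed_refs() read roughly as below (an annotated excerpt of
the hunk further down; the comments are explanatory notes, not part of the
patch):

	spin_unlock(&locked_ref->lock);
	spin_lock(&delayed_refs->lock);
	/* btrfs_delayed_ref_unlock() has not run yet, so this store is covered */
	locked_ref->processing = 0;
	delayed_refs->num_heads_ready++;
	spin_unlock(&delayed_refs->lock);
	/* the lock on locked_ref is dropped only after the assignment above */
	btrfs_delayed_ref_unlock(locked_ref);
	locked_ref = NULL;
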
Fixes: d7df2c796d7 ("Btrfs: attach delayed ref updates to delayed ref heads")
Signed-off-by: Jeff Mahoney <jeffm@...e.com>
Reviewed-by: Liu Bo <bo.li.liu@...cle.com>
Signed-off-by: David Sterba <dsterba@...e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/btrfs/extent-tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2537,11 +2537,11 @@ static noinline int __btrfs_run_delayed_
 		if (ref && ref->seq &&
 		    btrfs_check_delayed_seq(fs_info, delayed_refs, ref->seq)) {
 			spin_unlock(&locked_ref->lock);
-			btrfs_delayed_ref_unlock(locked_ref);
 			spin_lock(&delayed_refs->lock);
 			locked_ref->processing = 0;
 			delayed_refs->num_heads_ready++;
 			spin_unlock(&delayed_refs->lock);
+			btrfs_delayed_ref_unlock(locked_ref);
 			locked_ref = NULL;
 			cond_resched();
 			count++;