Message-Id: <20191122054911.1750-68-sashal@kernel.org>
Date: Fri, 22 Nov 2019 00:46:47 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Josef Bacik <jbacik@...com>, Nikolay Borisov <nborisov@...e.com>,
David Sterba <dsterba@...e.com>,
Sasha Levin <sashal@...nel.org>, linux-btrfs@...r.kernel.org
Subject: [PATCH AUTOSEL 4.19 075/219] btrfs: only track ref_heads in delayed_ref_updates
From: Josef Bacik <jbacik@...com>
[ Upstream commit 158ffa364bf723fa1ef128060646d23dc3942994 ]
We use this number to figure out how many delayed refs to run, but
__btrfs_run_delayed_refs really only checks it each time we need a new
delayed ref head, so we always run at least one ref head completely,
regardless of how many items are on it. Fix the accounting so that it is
only adjusted when we add or remove a ref head.

In addition to using this number to limit the number of delayed refs
run, a future patch will also use it to calculate the amount of space
required for the delayed refs space reservation.
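
For illustration, here is a minimal standalone C sketch (hypothetical
types and helpers, not the real btrfs code) of the run-loop shape
described above: the "how many to run" count is consulted only when a
new ref head is picked, so every picked head is processed to completion,
which is why the per-transaction counter should track ref heads rather
than individual refs.

/* Sketch only: simplified, hypothetical stand-ins for the btrfs
 * structures; identifiers do not match the kernel sources. */
struct ref_head {
	struct ref_head *next;		/* next queued ref head */
	int nr_refs;			/* individual refs queued on this head */
};

struct trans {
	struct ref_head *heads;			/* queued ref heads */
	unsigned long delayed_ref_updates;	/* counts heads, not refs */
};

static void run_delayed_refs(struct trans *trans, unsigned long count)
{
	/* 'count' is only checked here, once per head ... */
	while (count--) {
		struct ref_head *head = trans->heads;

		if (!head)
			break;
		trans->heads = head->next;
		trans->delayed_ref_updates--;	/* one head consumed */

		/* ... so every ref on this head is run, no matter how
		 * large 'count' was to begin with. */
		while (head->nr_refs)
			head->nr_refs--;	/* stand-in for running one ref */
	}
}
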
Reviewed-by: Nikolay Borisov <nborisov@...e.com>
Signed-off-by: Josef Bacik <jbacik@...com>
Signed-off-by: David Sterba <dsterba@...e.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 fs/btrfs/delayed-ref.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 62ff545ba1f71..7e5c81e80e15d 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -234,8 +234,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
 	ref->in_tree = 0;
 	btrfs_put_delayed_ref(ref);
 	atomic_dec(&delayed_refs->num_entries);
-	if (trans->delayed_ref_updates)
-		trans->delayed_ref_updates--;
 }
 
 static bool merge_ref(struct btrfs_trans_handle *trans,
@@ -446,7 +444,6 @@ static int insert_delayed_ref(struct btrfs_trans_handle *trans,
 	if (ref->action == BTRFS_ADD_DELAYED_REF)
 		list_add_tail(&ref->add_list, &href->ref_add_list);
 	atomic_inc(&root->num_entries);
-	trans->delayed_ref_updates++;
 	spin_unlock(&href->lock);
 	return ret;
 }
--
2.20.1