Message-Id: <20210916155811.227794182@linuxfoundation.org>
Date: Thu, 16 Sep 2021 17:56:01 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>
Subject: [PATCH 5.14 012/432] btrfs: do not do preemptive flushing if the majority is global rsv
From: Josef Bacik <josef@...icpanda.com>
commit 114623979405abf0b143f9c6688b3ff00ee48338 upstream.
A common characteristic of the bug report where preemptive flushing was
going full tilt was that the vast majority of the free metadata space
was used up by the global reserve. The hard 90% threshold would cover
the majority of these cases, but to be even smarter we should take into
account how much of the outstanding reservations are covered by the
global block reserve. If the global block reserve accounts for the vast
majority of outstanding reservations, skip preemptive flushing, as it
will likely just cause churn and pain.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212185
Signed-off-by: Josef Bacik <josef@...icpanda.com>
Signed-off-by: David Sterba <dsterba@...e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/btrfs/space-info.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
--- a/fs/btrfs/space-info.c
+++ b/fs/btrfs/space-info.c
@@ -741,6 +741,20 @@ static bool need_preemptive_reclaim(stru
 	     global_rsv_size) >= thresh)
 		return false;
 
+	used = space_info->bytes_may_use + space_info->bytes_pinned;
+
+	/* The total flushable belongs to the global rsv, don't flush. */
+	if (global_rsv_size >= used)
+		return false;
+
+	/*
+	 * 128MiB is 1/4 of the maximum global rsv size. If we have less than
+	 * that devoted to other reservations then there's no sense in flushing,
+	 * we don't have a lot of things that need flushing.
+	 */
+	if (used - global_rsv_size <= SZ_128M)
+		return false;
+
 	/*
 	 * We have tickets queued, bail so we don't compete with the async
 	 * flushers.
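
For illustration, here is a minimal stand-alone userspace sketch of the
new heuristic. The helper name and the sample numbers are made up for
this example; only the two checks mirror what the patch adds to
need_preemptive_reclaim():

#include <stdbool.h>
#include <stdio.h>

#define SZ_128M (128ULL << 20)

/*
 * Stand-alone version of the two new checks: skip preemptive flushing
 * when the global reserve covers all, or nearly all, of the flushable
 * reservations.
 */
static bool worth_preemptive_flush(unsigned long long used,
				   unsigned long long global_rsv_size)
{
	/* The total flushable belongs to the global rsv, don't flush. */
	if (global_rsv_size >= used)
		return false;

	/* Less than 128MiB devoted to other reservations, don't flush. */
	if (used - global_rsv_size <= SZ_128M)
		return false;

	return true;
}

int main(void)
{
	/* Hypothetical sample: 600MiB flushable, 512MiB global reserve. */
	unsigned long long used = 600ULL << 20;
	unsigned long long global_rsv = 512ULL << 20;

	/* 600MiB - 512MiB = 88MiB <= 128MiB, so no preemptive flush. */
	printf("flush: %s\n",
	       worth_preemptive_flush(used, global_rsv) ? "yes" : "no");
	return 0;
}

With these sample numbers only 88MiB of reservations fall outside the
global reserve, under the 128MiB slack (1/4 of the 512MiB maximum
global rsv size), so the sketch returns false just as the kernel check
would.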