Message-Id: <20201103012039.183672-6-sashal@kernel.org>
Date: Mon, 2 Nov 2020 20:20:34 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Gabriel Krisman Bertazi <krisman@...labora.com>,
Tejun Heo <tj@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Sasha Levin <sashal@...nel.org>, cgroups@...r.kernel.org,
linux-block@...r.kernel.org
Subject: [PATCH AUTOSEL 4.19 06/11] blk-cgroup: Pre-allocate tree node on blkg_conf_prep
From: Gabriel Krisman Bertazi <krisman@...labora.com>
[ Upstream commit f255c19b3ab46d3cad3b1b2e1036f4c926cb1d0c ]
Similarly to commit 457e490f2b741 ("blkcg: allocate struct blkcg_gq
outside request queue spinlock"), blkg_create can also trigger
occasional -ENOMEM failures at the radix tree insertion, because every
allocation inside blkg_create has to be non-blocking and is therefore
more likely to fail. This causes trouble for userspace tools that try
to configure I/O weights, which then have to deal with this condition.
This patch reduces the occurrence of -ENOMEM on this path by preloading
the radix tree node in a GFP_KERNEL context, which guarantees that the
later non-blocking insertion cannot fail.
A similar solution exists in blkcg_init_queue for the same situation.
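
For readers unfamiliar with the idiom, here is a minimal sketch of how
radix tree preloading is typically used (a simplified illustration of
the general kernel pattern, not the exact blkg_conf_prep code; root,
index, item, and lock are placeholder names):

	/*
	 * Preload while we may still sleep: radix_tree_preload(GFP_KERNEL)
	 * stashes enough tree nodes in a per-CPU cache and returns with
	 * preemption disabled, keeping the nodes reserved for this task.
	 */
	if (radix_tree_preload(GFP_KERNEL))
		return -ENOMEM;

	spin_lock_irq(lock);
	/*
	 * We cannot sleep here, but the insertion draws on the preloaded
	 * nodes, so it will not fail with -ENOMEM.
	 */
	err = radix_tree_insert(root, index, item);
	spin_unlock_irq(lock);

	/*
	 * Re-enable preemption; unused preloaded nodes remain in the
	 * per-CPU cache.
	 */
	radix_tree_preload_end();
	return err;

The diff below follows the same shape: the preload happens before
rcu_read_lock()/spin_lock_irq(), radix_tree_preload_end() runs once
blkg_create (which performs the actual insertion) has finished, and the
new fail_preloaded label unwinds the preload on the error paths.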
Acked-by: Tejun Heo <tj@...nel.org>
Signed-off-by: Gabriel Krisman Bertazi <krisman@...labora.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
block/blk-cgroup.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 51fc803c999d7..85bd46e0a745f 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -876,6 +876,12 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 			goto fail;
 		}
 
+		if (radix_tree_preload(GFP_KERNEL)) {
+			blkg_free(new_blkg);
+			ret = -ENOMEM;
+			goto fail;
+		}
+
 		rcu_read_lock();
 		spin_lock_irq(q->queue_lock);
 
@@ -883,7 +889,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		if (IS_ERR(blkg)) {
 			ret = PTR_ERR(blkg);
 			blkg_free(new_blkg);
-			goto fail_unlock;
+			goto fail_preloaded;
 		}
 
 		if (blkg) {
@@ -892,10 +898,12 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 			blkg = blkg_create(pos, q, new_blkg);
 			if (unlikely(IS_ERR(blkg))) {
 				ret = PTR_ERR(blkg);
-				goto fail_unlock;
+				goto fail_preloaded;
 			}
 		}
 
+		radix_tree_preload_end();
+
 		if (pos == blkcg)
 			goto success;
 	}
@@ -905,6 +913,8 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 	ctx->body = body;
 	return 0;
 
+fail_preloaded:
+	radix_tree_preload_end();
 fail_unlock:
 	spin_unlock_irq(q->queue_lock);
 	rcu_read_unlock();
--
2.27.0