Message-Id: <20180618081210.419522594@linuxfoundation.org>
Date: Mon, 18 Jun 2018 10:12:04 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jiang Biao <jiang.biao2@....com.cn>,
Wen Yang <wen.yang99@....com.cn>, Tejun Heo <tj@...nel.org>,
Jens Axboe <axboe@...nel.dk>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 4.14 028/189] blkcg: don't hold blkcg lock when deactivating policy
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jiang Biao <jiang.biao2@....com.cn>
[ Upstream commit 946b81da114b8ba5c74bb01e57c0c6eca2bdc801 ]
As described in the comment of blkcg_activate_policy(),
*Update of each blkg is protected by both queue and blkcg locks so
that holding either lock and testing blkcg_policy_enabled() is
always enough for dereferencing policy data.*
with the queue lock held, there is no need to also hold the blkcg lock in
blkcg_deactivate_policy(). The same reasoning applies to
blkcg_activate_policy(), which dropped its blkcg locking in
commit 4c55f4f9ad3001ac1fefdd8d8ca7641d18558e23.
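As a rough illustration of the invariant quoted above, here is a minimal
user-space sketch (not kernel code; the names queue_lock, blkcg_lock and pd
are made up for the example). Writers take both locks before changing the
shared pointer, so a reader holding either lock alone cannot race with an
update -- which is why the reader side can drop one of the two locks:

/* Hypothetical model of "writers hold both locks, readers need either". */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t blkcg_lock = PTHREAD_MUTEX_INITIALIZER;
static int *pd;				/* stands in for blkg->pd[pol->plid] */

/* Writer path: analogous to (de)activating a policy - both locks held. */
static void update_pd(int *new_pd)
{
	pthread_mutex_lock(&queue_lock);
	pthread_mutex_lock(&blkcg_lock);
	free(pd);
	pd = new_pd;
	pthread_mutex_unlock(&blkcg_lock);
	pthread_mutex_unlock(&queue_lock);
}

/* Reader path: holding only the "queue" lock is already sufficient. */
static void read_pd_under_queue_lock(void)
{
	pthread_mutex_lock(&queue_lock);
	if (pd)
		printf("pd = %d (queue lock only)\n", *pd);
	pthread_mutex_unlock(&queue_lock);
}

/* Reader path: holding only the "blkcg" lock works just as well. */
static void read_pd_under_blkcg_lock(void)
{
	pthread_mutex_lock(&blkcg_lock);
	if (pd)
		printf("pd = %d (blkcg lock only)\n", *pd);
	pthread_mutex_unlock(&blkcg_lock);
}

int main(void)
{
	int *v = malloc(sizeof(*v));

	*v = 42;
	update_pd(v);
	read_pd_under_queue_lock();
	read_pd_under_blkcg_lock();
	update_pd(NULL);		/* "deactivate": readers now see NULL */
	return 0;
}
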
Signed-off-by: Jiang Biao <jiang.biao2@....com.cn>
Signed-off-by: Wen Yang <wen.yang99@....com.cn>
CC: Tejun Heo <tj@...nel.org>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
block/blk-cgroup.c | 5 -----
1 file changed, 5 deletions(-)
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1374,17 +1374,12 @@ void blkcg_deactivate_policy(struct requ
 	__clear_bit(pol->plid, q->blkcg_pols);
 
 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
-		/* grab blkcg lock too while removing @pd from @blkg */
-		spin_lock(&blkg->blkcg->lock);
-
 		if (blkg->pd[pol->plid]) {
 			if (pol->pd_offline_fn)
 				pol->pd_offline_fn(blkg->pd[pol->plid]);
 			pol->pd_free_fn(blkg->pd[pol->plid]);
 			blkg->pd[pol->plid] = NULL;
 		}
-
-		spin_unlock(&blkg->blkcg->lock);
 	}
 
 	spin_unlock_irq(q->queue_lock);