Message-ID: <lsq.1581185940.432706977@decadent.org.uk>
Date: Sat, 08 Feb 2020 18:19:44 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, Denis Kirjanov <kda@...ux-powerpc.org>,
"Jens Axboe" <axboe@...com>, "Ming Lei" <tom.leiming@...il.com>,
"Akinobu Mita" <akinobu.mita@...il.com>,
"Christoph Hellwig" <hch@....de>,
"Wanpeng Li" <wanpeng.li@...mail.com>
Subject: [PATCH 3.16 045/148] blk-mq: fix deadlock when reading cpu_list

3.16.82-rc1 review patch. If anyone has any objections, please let me know.

------------------

From: Akinobu Mita <akinobu.mita@...il.com>

commit 60de074ba1e8f327db19bc33d8530131ac01695d upstream.

CPU hotplug handling for blk-mq (blk_mq_queue_reinit) acquires
all_q_mutex in blk_mq_queue_reinit_notify() and then removes sysfs
entries via blk_mq_sysfs_unregister(). Removing a sysfs entry blocks
until the active reference count of its kernfs_node drops to zero.

On the other hand, reading the blk_mq_hw_sysfs_cpu sysfs entry (e.g.
/sys/block/nullb0/mq/0/cpu_list) acquires all_q_mutex in
blk_mq_hw_sysfs_cpus_show().

If these happen at the same time, a deadlock can occur: one path holds
all_q_mutex and waits for the active reference to drop to zero, while
the other holds the active reference and tries to acquire all_q_mutex.

The reason all_q_mutex is acquired in blk_mq_hw_sysfs_cpus_show() is
to avoid reading an incomplete hctx->cpumask. Since reading a blk-mq
sysfs entry already requires holding q->sysfs_lock, we can avoid both
the deadlock and the incomplete read by holding q->sysfs_lock while
hctx->cpumask is being updated.
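
For reference, the common show path for blk-mq sysfs attributes
already serializes on q->sysfs_lock. An abbreviated sketch of
blk_mq_hw_sysfs_show() in block/blk-mq-sysfs.c (helper macro names
elided; this function is unchanged by the patch):

	static ssize_t blk_mq_hw_sysfs_show(struct kobject *kobj,
					    struct attribute *attr, char *page)
	{
		struct blk_mq_hw_ctx_sysfs_entry *entry =
			container_of(attr, struct blk_mq_hw_ctx_sysfs_entry, attr);
		struct blk_mq_hw_ctx *hctx =
			container_of(kobj, struct blk_mq_hw_ctx, kobj);
		struct request_queue *q = hctx->queue;
		ssize_t res = -ENOENT;

		if (!entry->show)
			return -EIO;

		/* every per-hctx show method runs under q->sysfs_lock */
		mutex_lock(&q->sysfs_lock);
		if (!blk_queue_dying(q))
			res = entry->show(hctx, page);	/* e.g. ..._cpus_show() */
		mutex_unlock(&q->sysfs_lock);
		return res;
	}

So once blk_mq_map_swqueue() updates hctx->cpumask under the same
q->sysfs_lock, readers cannot observe a half-built mask, and the
all_q_mutex acquisition in the show method becomes unnecessary.
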
Signed-off-by: Akinobu Mita <akinobu.mita@...il.com>
Reviewed-by: Ming Lei <tom.leiming@...il.com>
Cc: Ming Lei <tom.leiming@...il.com>
Cc: Wanpeng Li <wanpeng.li@...mail.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Signed-off-by: Jens Axboe <axboe@...com>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 block/blk-mq-sysfs.c | 4 ----
 block/blk-mq.c       | 7 +++++++
 2 files changed, 7 insertions(+), 4 deletions(-)

--- a/block/blk-mq-sysfs.c
+++ b/block/blk-mq-sysfs.c
@@ -229,8 +229,6 @@ static ssize_t blk_mq_hw_sysfs_cpus_show
 	unsigned int i, first = 1;
 	ssize_t ret = 0;
 
-	blk_mq_disable_hotplug();
-
 	for_each_cpu(i, hctx->cpumask) {
 		if (first)
 			ret += sprintf(ret + page, "%u", i);
@@ -240,8 +238,6 @@ static ssize_t blk_mq_hw_sysfs_cpus_show
 		first = 0;
 	}
 
-	blk_mq_enable_hotplug();
-
 	ret += sprintf(ret + page, "\n");
 	return ret;
 }
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1645,6 +1645,11 @@ static void blk_mq_map_swqueue(struct re
 	struct blk_mq_ctx *ctx;
 	struct blk_mq_tag_set *set = q->tag_set;
 
+	/*
+	 * Avoid others reading incomplete hctx->cpumask through sysfs
+	 */
+	mutex_lock(&q->sysfs_lock);
+
 	queue_for_each_hw_ctx(q, hctx, i) {
 		cpumask_clear(hctx->cpumask);
 		hctx->nr_ctx = 0;
@@ -1664,6 +1669,8 @@ static void blk_mq_map_swqueue(struct re
 		hctx->ctxs[hctx->nr_ctx++] = ctx;
 	}
 
+	mutex_unlock(&q->sysfs_lock);
+
 	queue_for_each_hw_ctx(q, hctx, i) {
 		/*
 		 * If no software queues are mapped to this hardware queue,