Message-Id: <20180411183628.557715299@linuxfoundation.org>
Date: Wed, 11 Apr 2018 20:34:41 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Christoph Hellwig <hch@....de>,
Yi Zhang <yi.zhang@...hat.com>, Ming Lei <ming.lei@...hat.com>,
Jens Axboe <axboe@...nel.dk>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 4.9 142/310] blk-mq: fix race between updating nr_hw_queues and switching io sched
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming Lei <ming.lei@...hat.com>
[ Upstream commit fb350e0ad99359768e1e80b4784692031ec340e4 ]
Both elevator_switch_mq() and blk_mq_update_nr_hw_queues() may allocate sched tags and read q->nr_hw_queues, so a race between them is inevitable. For example, blk_mq_init_sched() may trigger a use-after-free on an hctx that blk_mq_realloc_hw_ctxs() has already freed when nr_hw_queues is decreased.

This patch fixes the race by holding q->sysfs_lock.
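For illustration only (not part of the patch): a minimal user-space analogue of the race and the fix, built on pthreads. The names update_nr_hw_queues()/switch_io_sched() and the mutex standing in for q->sysfs_lock are hypothetical stand-ins for blk_mq_update_nr_hw_queues() and elevator_switch_mq(); the point is just that serializing both paths on one lock keeps the scheduler-switch path from walking freed hctx entries. Build with "gcc -pthread".

/*
 * Minimal user-space analogue (pthreads), NOT the kernel code: one
 * thread shrinks the hctx array when the queue count drops, the other
 * walks every live hctx as if setting up sched tags.  Serializing both
 * on one mutex (the stand-in for q->sysfs_lock) prevents the walker
 * from dereferencing an entry the other thread has already freed.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_QUEUES 8

static pthread_mutex_t sysfs_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_hw_queues = MAX_QUEUES;
static int *hctxs[MAX_QUEUES];

/* hypothetical analogue of the nr_hw_queues update path */
static void *update_nr_hw_queues(void *arg)
{
	pthread_mutex_lock(&sysfs_lock);
	for (int i = 4; i < nr_hw_queues; i++) {
		free(hctxs[i]);		/* hctx freed when queue count drops */
		hctxs[i] = NULL;
	}
	nr_hw_queues = 4;
	pthread_mutex_unlock(&sysfs_lock);
	return NULL;
}

/* hypothetical analogue of the io scheduler switch path */
static void *switch_io_sched(void *arg)
{
	pthread_mutex_lock(&sysfs_lock);
	for (int i = 0; i < nr_hw_queues; i++)
		*hctxs[i] = i;		/* use-after-free without the lock */
	pthread_mutex_unlock(&sysfs_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	for (int i = 0; i < MAX_QUEUES; i++)
		hctxs[i] = calloc(1, sizeof(int));

	pthread_create(&a, NULL, update_nr_hw_queues, NULL);
	pthread_create(&b, NULL, switch_io_sched, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	printf("nr_hw_queues is now %d\n", nr_hw_queues);
	for (int i = 0; i < nr_hw_queues; i++)
		free(hctxs[i]);
	return 0;
}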
Reviewed-by: Christoph Hellwig <hch@....de>
Reported-by: Yi Zhang <yi.zhang@...hat.com>
Tested-by: Yi Zhang <yi.zhang@...hat.com>
Signed-off-by: Ming Lei <ming.lei@...hat.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
block/blk-mq.c | 4 ++++
1 file changed, 4 insertions(+)
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1907,6 +1907,9 @@ static void blk_mq_realloc_hw_ctxs(struc
 	struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx;
 
 	blk_mq_sysfs_unregister(q);
+
+	/* protect against switching io scheduler */
+	mutex_lock(&q->sysfs_lock);
 	for (i = 0; i < set->nr_hw_queues; i++) {
 		int node;
@@ -1956,6 +1959,7 @@ static void blk_mq_realloc_hw_ctxs(struc
 		}
 	}
 	q->nr_hw_queues = i;
+	mutex_unlock(&q->sysfs_lock);
 	blk_mq_sysfs_register(q);
 }