Message-Id: <20170605153105.353143927@linuxfoundation.org>
Date: Mon, 5 Jun 2017 18:17:28 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Zhang Yi <yizhan@...hat.com>,
Keith Busch <keith.busch@...el.com>,
Johannes Thumshirn <jthumshirn@...e.de>,
Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>
Subject: [PATCH 4.9 52/94] nvme: avoid to use blk_mq_abort_requeue_list()
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ming Lei <ming.lei@...hat.com>
commit 986f75c876dbafed98eba7cb516c5118f155db23 upstream.
NVMe may add a request to the requeue list without kicking off the
requeue work when the hardware queues are stopped. To deal with these
parked requests, blk_mq_abort_requeue_list() is then called from both
nvme_kill_queues() and nvme_ns_remove().
Unfortunately blk_mq_abort_requeue_list() is inherently racy: for
example, a request may be requeued while the list is being aborted.
So this patch instead calls blk_mq_kick_requeue_list() in
nvme_kill_queues(), mirroring what nvme_start_queues() already does.
Now any request sitting on the requeue list while the queues are
stopped is handled by blk_mq_kick_requeue_list() when the queues are
restarted, either in nvme_start_queues() or in nvme_kill_queues().
Reported-by: Zhang Yi <yizhan@...hat.com>
Reviewed-by: Keith Busch <keith.busch@...el.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@...e.de>
Signed-off-by: Ming Lei <ming.lei@...hat.com>
Signed-off-by: Christoph Hellwig <hch@....de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/nvme/host/core.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1725,7 +1725,6 @@ static void nvme_ns_remove(struct nvme_n
sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
&nvme_ns_attr_group);
del_gendisk(ns->disk);
- blk_mq_abort_requeue_list(ns->queue);
blk_cleanup_queue(ns->queue);
}
@@ -2048,7 +2047,6 @@ void nvme_kill_queues(struct nvme_ctrl *
continue;
revalidate_disk(ns->disk);
blk_set_queue_dying(ns->queue);
- blk_mq_abort_requeue_list(ns->queue);
/*
* Forcibly start all queues to avoid having stuck requests.
@@ -2056,6 +2054,9 @@ void nvme_kill_queues(struct nvme_ctrl *
* when the final removal happens.
*/
blk_mq_start_hw_queues(ns->queue);
+
+ /* draining requests in requeue list */
+ blk_mq_kick_requeue_list(ns->queue);
}
mutex_unlock(&ctrl->namespaces_mutex);
}