Message-ID: <20170601193304.GA19500@dhcp-216.srv.tuxera.com>
Date: Thu, 1 Jun 2017 22:33:04 +0300
From: Rakesh Pandit <rakesh@...era.com>
To: Ming Lei <ming.lei@...hat.com>
CC: Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...com>,
Sagi Grimberg <sagi@...mberg.me>,
<linux-kernel@...r.kernel.org>, <linux-nvme@...ts.infradead.org>,
Keith Busch <keith.busch@...el.com>,
"Andy Lutomirski" <luto@...nel.org>
Subject: Re: [PATCH V2] nvme: fix nvme_remove going to uninterruptible sleep
	for ever

On Thu, Jun 01, 2017 at 10:56:10PM +0800, Ming Lei wrote:
> On Thu, Jun 01, 2017 at 02:46:32PM +0200, Christoph Hellwig wrote:
> > On Thu, Jun 01, 2017 at 03:36:50PM +0300, Rakesh Pandit wrote:
> > > Sagi also pointed out that a user-space set_features ioctl, fired
> > > in the window after nvme removal, can hit the same problem, which
> > > seems correct. I would prefer to keep this as it is and introduce a
> > > similar check higher up in nvme_ioctl instead, so that we don't
> > > send sync commands if the queues are already killed.
> > >
> > > Would you prefer a patch? Thanks,
> >
> > If we want to kill everyone we probably should do it in ->queue_rq.
>
> Looks like ->queue_rq already handles that, by checking nvmeq->cq_vector.
>
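If memory serves, that check sits early in nvme_queue_rq() in the PCIe
driver and looks roughly like this (paraphrased from memory, so the
exact error handling around it may differ):

	spin_lock_irq(&nvmeq->q_lock);
	if (unlikely(nvmeq->cq_vector < 0)) {
		/* queue has been killed: fail the request right away */
		ret = BLK_MQ_RQ_QUEUE_ERROR;
		spin_unlock_irq(&nvmeq->q_lock);
		goto out_cleanup_iod;
	}
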
> > Or is the block layer blocking you somewhere else?
>
> blk-mq doesn't handle dying in the I/O path.
>
> Maybe it is similar to commit 806f026f9b901eaf1a ("nvme: use
> blk_mq_start_hw_queues() in nvme_kill_queues()"); it seems we need to
> do the same for admin_q too.
>
> Can the following change fix the issue?
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index e44326d5cf19..360758488124 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2438,6 +2438,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
> struct nvme_ns *ns;
>
> mutex_lock(&ctrl->namespaces_mutex);
> + blk_mq_start_hw_queues(ctrl->admin_q);
> list_for_each_entry(ns, &ctrl->namespaces, list) {
> /*
> * Revalidating a dead namespace sets capacity to 0. This will
>
>
Yes, the change fixes the issue.
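
For the ioctl path, the check I had in mind would sit at the top of
nvme_ioctl() and bail out early. An untested sketch (whether the
controller state machine is the right thing to test here is exactly the
open question):

	static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
			unsigned int cmd, unsigned long arg)
	{
		struct nvme_ns *ns = bdev->bd_disk->private_data;

		/*
		 * Sketch: refuse to issue sync commands once the
		 * controller is going away and the queues are dead.
		 */
		if (ns->ctrl->state == NVME_CTRL_DELETING ||
		    ns->ctrl->state == NVME_CTRL_DEAD)
			return -ENODEV;

		/* existing switch (cmd) follows unchanged */

If the admin queue fix above goes in, this may not be strictly needed
any more, but it would still fail such ioctls faster.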