Message-ID: <20220215191456.GB1934598@dhcp-10-100-145-180.wdc.com>
Date: Tue, 15 Feb 2022 11:14:56 -0800
From: Keith Busch <kbusch@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: Markus Blöchl <markus.bloechl@...tronik.com>,
Jens Axboe <axboe@...nel.dk>, Sagi Grimberg <sagi@...mberg.me>,
linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, Stefan Roese <sr@...x.de>
Subject: Re: [RFC PATCH] nvme: prevent hang on surprise removal of NVMe disk
On Tue, Feb 15, 2022 at 07:47:04PM +0100, Christoph Hellwig wrote:
> On Tue, Feb 15, 2022 at 07:22:40AM -0800, Keith Busch wrote:
> > I can't actually tell if not checking the DYING flag check was
> > intentional or not, since the comments in blk_queue_start_drain() say
> > otherwise.
> >
> > Christoph, do you know the intention here? Should __bio_queue_enter()
> > check the queue DYING flag, or do you prefer drivers explicity set the
> > disk state like this? It looks to me the queue flags should be checked
> > since that's already tied to the freeze wait_queue_head_t.
>
> It was intentional but maybe not fully thought out. Do you remember why
> we're doing the manual setting of the dying flag instead of just calling
> del_gendisk early on in nvme? Because calling del_gendisk is supposed
> to be all that a driver needs to do.
When the driver concludes that new requests can't ever succeed, we had
been setting the queue to DYING first so that new requests can't enter
the queue; if they did enter, they would block and prevent forward
progress.
AFAICT, just calling del_gendisk() is fine for a graceful removal: it
calls fsync_bdev() to flush out pending writes before setting the
GD_DEAD disk state.
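
From memory, the relevant ordering inside del_gendisk() at the time is
roughly the following (paraphrased, with q == disk->queue; see
block/genhd.c for the authoritative version):

	fsync_bdev(disk->part0);          /* flush dirty pages while I/O still works */
	__invalidate_device(disk->part0, true);
	set_bit(GD_DEAD, &disk->state);   /* fail new I/O from here on */
	set_capacity(disk, 0);
	blk_queue_start_drain(q);         /* stop new bios at bio_queue_enter() */
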
Setting the queue to DYING first also "freezes" the queue, which is why
fsync_bdev() ends up blocked on a surprise removal. We had been relying
on the queue DYING flag check in the submission path to fail those bios
instead of letting them wait on the frozen queue.
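
Concretely, the check I have in mind would look something like this in
__bio_queue_enter()'s wait loop (paraphrased from memory and untested,
just to show the shape of the change):

	wait_event(q->mq_freeze_wq,
		   (!q->mq_freeze_depth && blk_pm_resume_queue(false, q)) ||
		   test_bit(GD_DEAD, &disk->state) ||
		   blk_queue_dying(q));
	if (test_bit(GD_DEAD, &disk->state) || blk_queue_dying(q)) {
		bio_io_error(bio);
		return -ENODEV;
	}
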
Perhaps another way to do this is to remove the queue DYING setting and
let the driver internally fail new requests instead? IIRC there may be
some issues with doing it that way, but blk-mq has evolved quite a bit
from where we started, so I'd need to test it out to confirm.
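
By "fail new requests internally" I mean something along these lines in
the driver's ->queue_rq() (a hypothetical sketch; the "dead" flag and
the example_* names are illustrative, not existing nvme fields):

	static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
					     const struct blk_mq_queue_data *bd)
	{
		struct example_ctrl *ctrl = hctx->queue->queuedata;

		/*
		 * Controller is gone: return an error status and let the
		 * blk-mq core end the request, instead of relying on the
		 * queue-level DYING flag to stop submissions earlier.
		 */
		if (unlikely(test_bit(EXAMPLE_CTRL_DEAD, &ctrl->flags)))
			return BLK_STS_IOERR;

		/* ... normal submission path ... */
		return BLK_STS_OK;
	}
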