Message-ID: <20220216125937.d6brzu7labgywxcg@ipetronik.com>
Date:   Wed, 16 Feb 2022 13:59:37 +0100
From:   Markus Blöchl <Markus.Bloechl@...tronik.com>
To:     Christoph Hellwig <hch@....de>, Hannes Reinecke <hare@...e.de>
Cc:     Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
        Sagi Grimberg <sagi@...mberg.me>,
        linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, Stefan Roese <sr@...x.de>
Subject: Re: [RFC PATCH] nvme: prevent hang on surprise removal of NVMe disk

On Tue, Feb 15, 2022 at 09:17:38PM +0100, Christoph Hellwig wrote:
> So, I think in the short run setting GD_DEAD is the right thing as a
> non-invasive fix, but I'd rather do it in blk_set_queue_dying to also
> cover the other drivers with a similar pattern.  blk_set_queue_dying
> is never used for cases where q->disk isn't valid so that should be
> fine.

blk_set_queue_dying() is called from blk_cleanup_queue(), which in turn
is called for all kinds of queues without an associated disk, even ones
whose allocation failed.
But judging from disk_release(), a disk that was once associated should
always outlive its queue, so q->disk should indeed always be safe to
dereference.


It's a little unfortunate that the existing doc on request queues was
so outdated that it had to be removed.

So I was trying to figure out whether blk_queue_dying(q) should always
imply (!q->disk || test_bit(GD_DEAD, &q->disk->state)).
Assuming there is a 1:1 relationship between q and q->disk, and that
blk_queue_dying(q) was previously checked where GD_DEAD is now, I do
hope so, since GD_DEAD should just be a weaker form of QUEUE_FLAG_DYING
that allows killing the disk well before the queue.
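
For reference, that implication written out as a purely illustrative
helper (not actual kernel code, just the invariant spelled out):

```
/*
 * Illustration only: the invariant discussed above.  Once a queue has
 * been marked dying, its disk (if it ever had one) should already be
 * marked dead.
 */
static inline bool queue_dying_implies_disk_dead(struct request_queue *q)
{
	if (!blk_queue_dying(q))
		return true;	/* nothing to check yet */

	return !q->disk || test_bit(GD_DEAD, &q->disk->state);
}
```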


> 
> In the long run I think we just need to remove the fsync_bdev in
> del_gendisk or at least make it conditional.

But wouldn't that require passing a flag like SURPRISE_REMOVAL into
del_gendisk() and plumbing it through delete_partition()?
You would still need GD_DEAD, since someone might remove the disk while
you are syncing, so to me this only sounds like an error-prone
optimization. After all, syncing on graceful removal seems like a
sensible thing to do.
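
Just to make the point concrete, here is a purely hypothetical sketch of
a conditional sync; the function name and flag are made up, and the flag
would still have to be plumbed through delete_partition() and every
caller of del_gendisk():

```
/*
 * Hypothetical variant of del_gendisk(), illustration only.  On
 * surprise removal the sync is pointless (the device is already gone);
 * on graceful removal we still want it.
 */
void __del_gendisk(struct gendisk *disk, bool surprise_removal)
{
	if (!surprise_removal)
		fsync_bdev(disk->part0);	/* sync only while the device is reachable */

	/* ... rest of del_gendisk(), including delete_partition() ... */
}
```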


Fiddling with qemu will probably take me too long unless Hannes already
has something up his sleeve.


So I'll submit V2 once I have regained access to the test setup.

It will probably just look something like this:

```
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -286,6 +286,8 @@ void blk_queue_start_drain(struct request_queue *q)

 void blk_set_queue_dying(struct request_queue *q)
 {
+       if (q->disk)
+               set_bit(GD_DEAD, &q->disk->state);
        blk_queue_flag_set(QUEUE_FLAG_DYING, q);
        blk_queue_start_drain(q);
 }
```
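
If that looks right, drivers that already mark their queues dying on
surprise removal (nvme does so per namespace, if I'm reading the code
right) would get GD_DEAD set for free, and, as far as I can tell, a
writer stuck waiting on the frozen queue then sees GD_DEAD and gets an
I/O error instead of hanging in fsync. From the driver's point of view
it stays a one-liner (illustrative only, not the actual nvme code):

```
/*
 * Illustration of the driver-side effect, not actual nvme code:
 * marking the queue dying now also marks the associated disk dead.
 */
static void my_surprise_removal(struct request_queue *q)
{
	/* sets QUEUE_FLAG_DYING and, with the hunk above, GD_DEAD */
	blk_set_queue_dying(q);
}
```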


Thanks for all your input,

Markus
