Message-ID: <ZFBf/CXN2ktVYL/N@ovpn-8-16.pek2.redhat.com>
Date: Tue, 2 May 2023 08:57:32 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Christoph Hellwig <hch@....de>
Cc: Theodore Ts'o <tytso@....edu>, Baokun Li <libaokun1@...wei.com>,
Matthew Wilcox <willy@...radead.org>,
linux-ext4@...r.kernel.org,
Andreas Dilger <adilger.kernel@...ger.ca>,
linux-block@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
Dave Chinner <dchinner@...hat.com>,
Eric Sandeen <sandeen@...hat.com>,
Zhang Yi <yi.zhang@...hat.com>,
yangerkun <yangerkun@...wei.com>, ming.lei@...hat.com
Subject: Re: [ext4 io hang] buffered write io hang in balance_dirty_pages
On Mon, May 01, 2023 at 06:47:44AM +0200, Christoph Hellwig wrote:
> On Sat, Apr 29, 2023 at 01:10:49PM +0800, Ming Lei wrote:
> > Not sure if it is needed for non s_bdev
>
> So you don't want this to work at all for btrfs? Or for the XFS log device,
> or ..
Basically, the filesystem layer can provide one generic shutdown_filesystem()
API that does the generic shutdown work and, in addition, calls each
filesystem's ->shutdown() method to deal with filesystem-specific shutdown.
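
Just to illustrate what I mean, a rough, untested sketch (not code from this
thread; shutdown_filesystem() and a ->shutdown() member in super_operations
are assumed additions, not an existing kernel API):

#include <linux/fs.h>
#include <linux/blkdev.h>

static void shutdown_filesystem(struct super_block *sb)
{
	/* generic part, e.g. stop issuing new writes (details omitted) */

	/* fs-specific part, e.g. XFS would force-shutdown its log */
	if (sb->s_op->shutdown)
		sb->s_op->shutdown(sb);
}

/* called when the underlying disk is dead or removed */
static void bdev_shutdown_filesystem(struct block_device *bdev)
{
	struct super_block *sb = get_super(bdev);

	if (!sb)
		return;		/* no FS mounted with this bdev as s_bdev */

	shutdown_filesystem(sb);
	drop_super(sb);
}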
If there is no superblock attached to a bdev, can you explain a bit what the
filesystem code could do? The same question applies to a bare block-layer bdev.
The current bio->bi_status, together with disk_live() (maybe a bdev_live()
helper is needed), should be enough for filesystem code to handle the
non-s_bdev case.
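
For example, something along these lines in a filesystem's bio completion
path (again only a sketch; bdev_live() does not exist today, so the check
below uses the existing disk_live()):

#include <linux/bio.h>
#include <linux/blkdev.h>

static void fs_handle_bio_error(struct bio *bio)
{
	if (bio->bi_status == BLK_STS_OK)
		return;

	if (!disk_live(bio->bi_bdev->bd_disk)) {
		/* disk is gone: no point in retrying, shut the fs down */
		return;
	}

	/* otherwise handle it as an ordinary I/O error (retry, report, ...) */
}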
>
> > , because in that case the FS sits directly on top of a stackable device.
> > A stackable device has its own logic for handling underlying disks that die
> > or are deleted, and then decides whether its own disk needs to be deleted;
> > for example, from the user's viewpoint raid1 keeps working fine if one
> > underlying disk is deleted.
>
> We still need to propagate the event that the device has been removed upwards.
> Right now some file systems (especially XFS) are good at just propagating
> it from an I/O error. An explicit call would be much better.
It depends on the answer to the above question about how filesystem code
handles deletion/death of a non-s_bdev device.
Thanks,
Ming