Message-ID: <20230913151651.gzmyjvqwan3euhwi@quack3>
Date: Wed, 13 Sep 2023 17:16:51 +0200
From: Jan Kara <jack@...e.cz>
To: Chunhai Guo <guochunhai@...o.com>
Cc: jack@...e.cz, chao@...nel.org, jaegeuk@...nel.org,
brauner@...nel.org, viro@...iv.linux.org.uk,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fs-writeback: writeback_sb_inodes: Do not increase
'total_wrote' when nothing is written
On Wed 13-09-23 07:15:01, Chunhai Guo wrote:
> > On Wed 13-09-23 10:42:21, Christian Brauner wrote:
> > > [+Cc Jan]
> >
> > Thanks!
> >
> > > On Tue, Sep 12, 2023 at 08:20:43AM -0600, Chunhai Guo wrote:
> > > > I am encountering a deadlock issue, as shown below. Commit
> > > > 344150999b7f ("f2fs: fix to avoid potential deadlock") can fix this
> > > > issue. However, from log analysis, it appears that this is more likely
> > > > a fake-progress issue similar to the one fixed by commit 68f4c6eba70d
> > > > ("fs-writeback: writeback_sb_inodes: Recalculate 'wrote' according
> > > > skipped pages").
> > > > In each writeback iteration, nothing is written, while
> > > > writeback_sb_inodes() increases 'total_wrote' each time, causing an
> > > > infinite loop. This patch fixes this issue by not increasing
> > > > 'total_wrote' when nothing is written.
> > > >
> > > > wb_writeback                               fsync (inode-Y)
> > > > blk_start_plug(&plug)
> > > > for (;;) {
> > > > iter i-1: some reqs with page-X added into plug->mq_list
> > > >           // f2fs node page-X with PG_writeback
> > > >                                            filemap_fdatawrite
> > > >                                             __filemap_fdatawrite_range // write inode-Y with sync_mode WB_SYNC_ALL
> > > >                                              do_writepages
> > > >                                               f2fs_write_data_pages
> > > >                                                __f2fs_write_data_pages // wb_sync_req[DATA]++ for WB_SYNC_ALL
> > > >                                                 f2fs_write_cache_pages
> > > >                                                  f2fs_write_single_data_page
> > > >                                                   f2fs_do_write_data_page
> > > >                                                    f2fs_outplace_write_data
> > > >                                                     f2fs_update_data_blkaddr
> > > >                                                      f2fs_wait_on_page_writeback
> > > >                                                       wait_on_page_writeback // wait for f2fs node page-X
> > > > iter i:
> > > > progress = __writeback_inodes_wb(wb, work)
> > > > . writeback_sb_inodes
> > > > . __writeback_single_inode // write inode-Y with sync_mode WB_SYNC_NONE
> > > > . . do_writepages
> > > > . . f2fs_write_data_pages
> > > > . . . __f2fs_write_data_pages // skip writepages due to (wb_sync_req[DATA]>0)
> > > > . . . wbc->pages_skipped += get_dirty_pages(inode) // wbc->pages_skipped = 1
> > > > . if (!(inode->i_state & I_DIRTY_ALL)) // i_state = I_SYNC | I_SYNC_QUEUED
> > > > . total_wrote++; // total_wrote = 1
> > > > . requeue_inode // requeue inode-Y to wb->b_dirty queue due to non-zero pages_skipped
> > > > if (progress) // progress = 1
> > > >     continue;
> > > > iter i+1:
> > > >     queue_io
> > > >     // similar process with iter i, infinite for-loop !
> > > > }
> > > > blk_finish_plug(&plug) // flush plug won't be called
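> > > >
> > > > To make the accounting problem concrete, here is a tiny stand-alone
> > > > userspace sketch (illustrative only - the names and structures are made
> > > > up, this is not kernel code) of why counting a fully skipped inode as
> > > > progress keeps the retry loop spinning:
> > > >
> > > > #include <stdbool.h>
> > > > #include <stdio.h>
> > > >
> > > > struct toy_inode {
> > > > 	int dirty_pages;	/* pages we would like to write back */
> > > > 	bool writes_blocked;	/* models wb_sync_req[DATA] > 0 in f2fs */
> > > > };
> > > >
> > > > /* Models writeback_sb_inodes(): returns how much "progress" was made. */
> > > > static long toy_writeback(struct toy_inode *inode, int *pages_skipped)
> > > > {
> > > > 	if (inode->writes_blocked) {
> > > > 		/* Models __f2fs_write_data_pages() skipping the writeout. */
> > > > 		*pages_skipped += inode->dirty_pages;
> > > > 		/*
> > > > 		 * The problem being described: only pages were skipped, the
> > > > 		 * inode itself is not marked dirty, so this still gets
> > > > 		 * counted as one unit of progress.
> > > > 		 */
> > > > 		return 1;
> > > > 	}
> > > > 	inode->dirty_pages = 0;
> > > > 	return 1;
> > > > }
> > > >
> > > > int main(void)
> > > > {
> > > > 	struct toy_inode inode = { .dirty_pages = 1, .writes_blocked = true };
> > > > 	int iterations = 0;
> > > >
> > > > 	/* Models the for (;;) loop in wb_writeback(). */
> > > > 	for (;;) {
> > > > 		int pages_skipped = 0;
> > > > 		long progress = toy_writeback(&inode, &pages_skipped);
> > > >
> > > > 		if (++iterations > 5) {		/* cut the demo short */
> > > > 			printf("still 'making progress' after %d iterations, "
> > > > 			       "%d page(s) skipped each time\n",
> > > > 			       iterations, pages_skipped);
> > > > 			break;
> > > > 		}
> > > > 		if (progress)	/* "wrote something, try for more" */
> > > > 			continue;
> > > > 		break;
> > > > 	}
> > > > 	return 0;
> > > > }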
> > > >
> > > > Signed-off-by: Chunhai Guo <guochunhai@...o.com>
> >
> > Thanks for the patch, but did you verify that this patch fixes your deadlock?
> > Because the patch seems like a noop to me. Look:
>
> Yes. I have tested this patch and it does indeed fix this deadlock issue as well.
OK, thanks for letting me know!
> > > > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> > > > index 969ce991b0b0..54cdee906be9 100644
> > > > --- a/fs/fs-writeback.c
> > > > +++ b/fs/fs-writeback.c
> > > > @@ -1820,6 +1820,7 @@ static long writeback_sb_inodes(struct super_block *sb,
> > > >  		struct inode *inode = wb_inode(wb->b_io.prev);
> > > >  		struct bdi_writeback *tmp_wb;
> > > >  		long wrote;
> > > > +		bool is_dirty_before;
> > > >
> > > >  		if (inode->i_sb != sb) {
> > > >  			if (work->sb) {
> > > > @@ -1881,6 +1882,7 @@ static long writeback_sb_inodes(struct super_block *sb,
> > > >  			continue;
> > > >  		}
> > > >  		inode->i_state |= I_SYNC;
> > > > +		is_dirty_before = inode->i_state & I_DIRTY_ALL;
> >
> > is_dirty_before is going to be set if there's anything dirty - inode, page,
> > timestamp. So it can be unset only if there are no dirty pages, in which
> > case there are no pages that can be skipped during page writeback, which
> > means that requeue_inode() will go and remove the inode from the
> > b_io/b_dirty lists, and it will not participate in writeback anymore.
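> >
> > (For reference, the flag grouping - roughly, from include/linux/fs.h, quoted
> > from memory, so please double-check the exact definitions there:
> >
> > #define I_DIRTY_INODE	(I_DIRTY_SYNC | I_DIRTY_DATASYNC)
> > #define I_DIRTY		(I_DIRTY_INODE | I_DIRTY_PAGES)
> > #define I_DIRTY_ALL	(I_DIRTY | I_DIRTY_TIME)
> >
> > so "anything dirty" above covers inode metadata, pages, and timestamps.)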
> >
> > So I don't see how this patch can be helping anything... Please correct me
> > if I'm missing anything.
> > Honza
>
> From the dump info, there are only two pages, as shown below. One has been
> updated and the other is under writeback. Maybe f2fs counts the writeback
> page as a dirty one, so get_dirty_pages() returned 1. As you said, maybe
> this is unreasonable.
>
> Jaegeuk & Chao, what do you think of this?
>
>
> crash_32> files -p 0xE5A44678
>     INODE  NRPAGES
> e5a44678         2
>
>     PAGE    PHYSICAL   MAPPING   INDEX  CNT  FLAGS
> e8d0e338    641de000  e5a44810       0    5  a095 locked,waiters,uptodate,lru,private,writeback
> e8ad59a0    54528000  e5a44810       1    2  2036 referenced,uptodate,lru,active,private
Indeed, incrementing pages_skipped when there's no dirty page is a bit odd.
That being said, we could also harden requeue_inode() - in particular, we
could do the following there:
	if (wbc->pages_skipped) {
		/*
		 * Writeback is not making progress due to locked buffers.
		 * Skip this inode for now. Although having skipped pages
		 * is odd for clean inodes, it can happen for some
		 * filesystems so handle that gracefully.
		 */
		if (inode->i_state & I_DIRTY_ALL)
			redirty_tail_locked(inode, wb);
		else
			inode_cgwb_move_to_attached(inode, wb);
	}
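For context, this is only a small change relative to what requeue_inode()
does today - currently the pages_skipped branch just redirties the inode
unconditionally, roughly (quoting from memory, so the exact surrounding code
may differ a bit):

	if (wbc->pages_skipped) {
		/*
		 * writeback is not making progress due to locked
		 * buffers. Skip this inode for now.
		 */
		redirty_tail_locked(inode, wb);
		return;
	}

so the only behavioral difference is that a clean inode with skipped pages
gets parked on the attached list instead of being put back on b_dirty over
and over.
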
Does this fix your problem as well?
Honza
>
> Thanks,
>
> >
> >
> > > >  		wbc_attach_and_unlock_inode(&wbc, inode);
> > > >
> > > >  		write_chunk = writeback_chunk_size(wb, work);
> > > > @@ -1918,7 +1920,7 @@ static long writeback_sb_inodes(struct super_block *sb,
> > > >  		 */
> > > >  		tmp_wb = inode_to_wb_and_lock_list(inode);
> > > >  		spin_lock(&inode->i_lock);
> > > > -		if (!(inode->i_state & I_DIRTY_ALL))
> > > > +		if (!(inode->i_state & I_DIRTY_ALL) && is_dirty_before)
> > > >  			total_wrote++;
> > > >  		requeue_inode(inode, tmp_wb, &wbc);
> > > >  		inode_sync_complete(inode);
> > > > --
> > > > 2.25.1
> > > >
> > --
> > Jan Kara <jack@...e.com>
> > SUSE Labs, CR
>
--
Jan Kara <jack@...e.com>
SUSE Labs, CR