Message-ID: <20090921134511.GG1099@duck.suse.cz>
Date: Mon, 21 Sep 2009 15:45:11 +0200
From: Jan Kara <jack@...e.cz>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Jan Kara <jack@...e.cz>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
"chris.mason@...cle.com" <chris.mason@...cle.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH] fs: Fix busyloop in wb_writeback()
On Mon 21-09-09 09:08:59, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 01:43:56AM +0800, Jan Kara wrote:
> > On Sun 20-09-09 10:35:28, Wu Fengguang wrote:
> > > On Thu, Sep 17, 2009 at 01:22:48AM +0800, Jan Kara wrote:
> > > > If all inodes are under writeback (e.g. when there's only one inode with
> > > > dirty pages), wb_writeback() doing WB_SYNC_NONE work basically degrades to
> > > > busylooping until the I_SYNC flag of the inode is cleared. Fix the problem
> > > > by waiting on the I_SYNC flag of an inode on the b_more_io list in case we
> > > > failed to write anything.
> > >
> > > Sorry, I realized that inode_wait_for_writeback() waits for I_SYNC.
> > > But inodes in b_more_io are not expected to have I_SYNC set. So your
> > > patch looks like a big no-op?
> > Hmm, I don't think so. writeback_single_inode() does:
> > 	if (inode->i_state & I_SYNC) {
> > 		/*
> > 		 * If this inode is locked for writeback and we are not doing
> > 		 * writeback-for-data-integrity, move it to b_more_io so that
> > 		 * writeback can proceed with the other inodes on s_io.
> > 		 *
> > 		 * We'll have another go at writing back this inode when we
> > 		 * completed a full scan of b_io.
> > 		 */
> > 		if (!wait) {
> > 			requeue_io(inode);
> > 			return 0;
> > 		}
> >
> > So when we see an inode under writeback, we put it on b_more_io. So I think
> > my patch really fixes the issue where two threads are racing to write back
> > the same inode.
>
> Ah OK. So it busy loops when there are more syncing threads than dirty
> files. For example, one bdi flush thread plus one process running
> balance_dirty_pages().
Yes.
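  For reference, the core of the fix being discussed is roughly the following
(a sketch reconstructed from the changelog above, not the exact posted patch;
locking and list handling may differ, identifiers are as in fs/fs-writeback.c
of that time). When a pass wrote nothing but b_more_io is non-empty, wait on
the inode that is under writeback instead of spinning:

	/*
	 * Nothing was written in this pass, but b_more_io may hold an
	 * inode that is under writeback by somebody else (I_SYNC set).
	 * Wait for that writeback to finish instead of busylooping.
	 */
	spin_lock(&inode_lock);
	if (!list_empty(&wb->b_more_io)) {
		inode = list_entry(wb->b_more_io.prev,
				   struct inode, i_list);
		inode_wait_for_writeback(inode);
	}
	spin_unlock(&inode_lock);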
> > > The busy loop does exist when the bdi is congested.
> > > In this case, write_cache_pages() will refuse to write anything;
> > > we used to call congestion_wait() to take a breath, but now
> > > wb_writeback() has purged that call and thus created a busy loop.
> > I don't think congestion is an issue here. The device needn't be
> > congested for the busyloop to happen.
>
> bdi congestion is a different case. When there is only one syncing
> thread, b_more_io inodes won't have I_SYNC, so your patch is a no-op.
> wb_writeback() or one of its sub-routines must wait/yield for a while
> to avoid busy-looping on the congestion. Where is the wait in Jens'
> new code?
I agree someone must wait when we bail out due to congestion. But we bail
out only when wbc->nonblocking is set. So I'd feel that callers setting
this flag should handle it when we stop the writeback due to congestion.
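(The bail-out referred to here follows the usual congestion-check pattern in
the writeback paths; a simplified sketch of that pattern, whose exact
placement has moved around with the per-bdi writeback rework:)

	if (wbc->nonblocking && bdi_write_congested(bdi)) {
		/*
		 * The device queue is congested and the caller asked us
		 * not to block: record the congestion and stop here. The
		 * caller can then decide to wait, e.g. via
		 * congestion_wait(), before retrying.
		 */
		wbc->encountered_congestion = 1;
		break;
	}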
> Another question: why can wbc.more_io be ignored for kupdate syncs?
> I guess that would lead to slow writeback of large files.
>
> This patch reflects my concerns on the two problems.
>
> Thanks,
> Fengguang
> ---
> fs/fs-writeback.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> --- linux.orig/fs/fs-writeback.c 2009-09-20 10:44:25.000000000 +0800
> +++ linux/fs/fs-writeback.c 2009-09-21 08:53:09.000000000 +0800
> @@ -818,8 +818,10 @@ static long wb_writeback(struct bdi_writ
> /*
> * If we ran out of stuff to write, bail unless more_io got set
> */
> - if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
> - if (wbc.more_io && !wbc.for_kupdate)
> + if (wbc.nr_to_write > 0) {
> + if (wbc.encountered_congestion)
> + congestion_wait(BLK_RW_ASYNC, HZ);
> + if (wbc.more_io)
> continue;
> break;
> }
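(With this hunk applied, the tail of the retry loop in wb_writeback() would
read roughly as follows; reconstructed from the diff above.)

	/*
	 * If we ran out of stuff to write, bail unless more_io got set
	 */
	if (wbc.nr_to_write > 0) {
		/* Take a breath if the device reported congestion... */
		if (wbc.encountered_congestion)
			congestion_wait(BLK_RW_ASYNC, HZ);
		/* ...and retry the loop if there is more IO pending. */
		if (wbc.more_io)
			continue;
		break;
	}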
OK, this change looks reasonable, but I think we'll have to revisit
the writeback logic in more detail, as we discussed in the other thread.
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR