Message-ID: <20090921134042.GY23126@kernel.dk>
Date:	Mon, 21 Sep 2009 15:40:42 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Jan Kara <jack@...e.cz>, LKML <linux-kernel@...r.kernel.org>,
	Theodore Tso <tytso@....edu>
Subject: Re: [PATCH] fs: Fix busyloop in wb_writeback()

On Mon, Sep 21 2009, Wu Fengguang wrote:
> On Mon, Sep 21, 2009 at 09:06:34PM +0800, Jens Axboe wrote:
> > On Mon, Sep 21 2009, Wu Fengguang wrote:
> > > On Thu, Sep 17, 2009 at 02:41:06AM +0800, Jens Axboe wrote:
> > > > On Wed, Sep 16 2009, Jan Kara wrote:
> > > > > If all inodes are under writeback (e.g. when there is only one inode
> > > > > with dirty pages), wb_writeback() doing WB_SYNC_NONE work basically
> > > > > degrades to busylooping until the I_SYNC flag of the inode is cleared.
> > > > > Fix the problem by waiting on the I_SYNC flag of an inode on the
> > > > > b_more_io list in case we failed to write anything.
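
A minimal sketch of the approach described above (not the actual patch): when
a WB_SYNC_NONE pass writes nothing and b_more_io is not empty, sleep until the
I_SYNC bit of one of those inodes clears instead of retrying immediately. The
nothing_written flag here is just a stand-in for "the last pass made no
progress", and the sketch assumes a helper along the lines of
inode_wait_for_writeback() that blocks until I_SYNC is cleared:

        /* Sketch only, not the actual patch. */
        if (nothing_written && !list_empty(&wb->b_more_io)) {
                struct inode *inode = list_entry(wb->b_more_io.prev,
                                                 struct inode, i_list);

                spin_lock(&inode_lock);
                if (inode->i_state & I_SYNC)
                        inode_wait_for_writeback(inode); /* sleep, don't spin */
                spin_unlock(&inode_lock);
        }
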
> > > > 
> > > > Interesting, so this will happen if the dirtier and flush thread end up
> > > > "fighting" each other over the same inode. I'll throw this into the
> > > > testing mix.
> > > > 
> > > > How did you notice?
> > > 
> > > Jens, I found another busy loop. Not sure about the solution, but here
> > > are the quick facts.
> > > 
> > > The tested git head is 1ef7d9aa32a8ee054c4d4fdcd2ea537c04d61b2f, which
> > > seems to be the last writeback patch in the linux-next tree. I cannot
> > > run the plain head of linux-next because it simply refuses to boot.
> > > 
> > > On top of that, Jan Kara's I_SYNC waiting patch and the attached
> > > debugging patch are applied.
> > > 
> > > Test commands are:
> > > 
> > >         # mount /mnt/test # ext4 fs
> > >         # echo 1 > /proc/sys/fs/dirty_debug
> > > 
> > >         # cp /dev/zero /mnt/test/zero0
> > > 
> > > After that the box locks up; the system is busy doing this:
> > > 
> > > [   54.740295] requeue_io() +457: inode=79232
> > > [   54.740300] mm/page-writeback.c +539 balance_dirty_pages(): comm=cp pid=3327 n=0
> > > [   54.740303] global dirty=60345 writeback=10145 nfs=0 flags=_M towrite=1536 skipped=0
> > > [   54.740317] requeue_io() +457: inode=79232
> > > [   54.740322] mm/page-writeback.c +539 balance_dirty_pages(): comm=cp pid=3327 n=0
> > > [   54.740325] global dirty=60345 writeback=10145 nfs=0 flags=_M towrite=1536 skipped=0
> > > [   54.740339] requeue_io() +457: inode=79232
> > > [   54.740344] mm/page-writeback.c +539 balance_dirty_pages(): comm=cp pid=3327 n=0
> > > [   54.740347] global dirty=60345 writeback=10145 nfs=0 flags=_M towrite=1536 skipped=0
> > > ......
> > > 
> > > Basically the traces show that balance_dirty_pages() is busy looping.
> > > It cannot write anything because the inode is always requeued by this code:
> > > 
> > >         if (inode->i_state & I_SYNC) {
> > >                 if (!wait) {
> > >                         requeue_io(inode);
> > >                         return 0;
> > >                 }
> > > 
> > > This seems to happen when the partition is FULL.
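
To make the failure mode concrete, here is a simplified model of what the
throttled task ends up doing (this is not the actual mm/page-writeback.c code;
over_dirty_threshold() and try_to_writeback_inodes() are hypothetical
stand-ins):

        /* Simplified model only, not the real balance_dirty_pages(). */
        while (over_dirty_threshold()) {                 /* hypothetical check */
                /*
                 * The only dirty inode has I_SYNC set (the flusher thread
                 * holds it), so every attempt hits the requeue_io() branch
                 * quoted above and writes nothing.  The partition is full,
                 * nothing completes, the dirty counters never drop, and the
                 * loop spins forever.
                 */
                if (try_to_writeback_inodes() == 0)      /* hypothetical helper */
                        continue;                        /* retry immediately */
        }
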
> > 
> > OK, I'll have to look into these issues soonish. I'll be travelling next
> > week though, so I cannot do more about it right now. If you have time to
> > work on tested patches, then that would be MUCH appreciated!
> 
> Jens, your bug fix patches arrived just before I submitted this bug report :)

Good!

-- 
Jens Axboe
