Message-ID: <20090923134039.GA1196@localhost>
Date:	Wed, 23 Sep 2009 21:40:39 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <jens.axboe@...cle.com>, Jan Kara <jack@...e.cz>,
	Theodore Tso <tytso@....edu>,
	Dave Chinner <david@...morbit.com>,
	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/6] writeback: don't delay inodes redirtied by a fast
	dirtier

On Wed, Sep 23, 2009 at 09:23:51PM +0800, Christoph Hellwig wrote:
> On Wed, Sep 23, 2009 at 09:20:08PM +0800, Wu Fengguang wrote:
> > I noticed that
> > - the write chunk size of balance_dirty_pages() is 12, which is pretty
> >   small and inefficient.
> > - during copy, the inode is sometimes redirty_tail (old behavior) and
> >   sometimes requeue_io (new behavior).
> > - during copy, the directory inode will always be synced and then
> >   redirty_tail.
> > - after copy, the inode will be redirtied after sync.
> 
> Yeah, XFS uses generic_file_buffered_write and the heuristics in there
> for balance_dirty_pages turned out to be really bad.  So far we didn't
> manage to successfully get that fixed, though.

Ah, sorry. That's because of the first patch: it does not always "bump up"
the write chunk. In your case the chunk is obviously decreased (the original
ratelimit_pages=4096 is a much larger value).  I'll fix it.

> > It should not be a problem to use requeue_io for XFS, because whether
> > it is requeue_io or redirty_tail, write_inode() will be called once
> > for every 4MB.
> > 
> > It would be inefficient if XFS really tried to write the inode's and
> > the directory inode's metadata every time it synced 4MB of pages. If
> > that write attempt were turned into _real_ IO, that would be bad
> > and kill performance. Increasing MAX_WRITEBACK_PAGES may help
> > reduce the frequency of write_inode(), though.
> 
> The way we call write_inode is extremely inefficient for XFS.  As
> you noticed, XFS tends to redirty the inode on I/O completion, and we
> also cluster inode writeouts.  For XFS we'd really prefer not to
> intermix data and inode writeout, but first do the data writeout and
> then later push out the inodes, preferably with as many inodes as
> possible to sweep out in one go.

I guess the difficult part would be the possible policy requirements
on the max batch size (the max number of inodes or pages to write before
switching to metadata writeout) and on the delay time (between syncing
the data and the metadata). It may take a long time to make a full scan
of the dirty list.
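
Roughly the kind of two-phase loop I have in mind (a toy userspace
sketch with made-up types and knobs, just to show where the batch-size
and delay parameters would sit; it is not real writeback code):

#include <stdio.h>
#include <unistd.h>

struct toy_inode {
	int	dirty_pages;	/* data pages still to write */
	int	meta_dirty;	/* inode metadata dirty? */
};

#define NR_INODES	3
#define MAX_BATCH_PAGES	2048	/* knob: data pages before switching to metadata */
#define META_DELAY_SECS	1	/* knob: delay between data and metadata sync */

static struct toy_inode inodes[NR_INODES] = {
	{ 1500, 1 }, { 800, 1 }, { 3000, 1 },
};

int main(void)
{
	int i, written = 0;

	/* Phase 1: data writeout only, up to the batch limit. */
	for (i = 0; i < NR_INODES && written < MAX_BATCH_PAGES; i++) {
		int n = inodes[i].dirty_pages;

		if (written + n > MAX_BATCH_PAGES)
			n = MAX_BATCH_PAGES - written;
		inodes[i].dirty_pages -= n;
		written += n;
		printf("data: inode %d, %d pages\n", i, n);
	}

	sleep(META_DELAY_SECS);

	/* Phase 2: sweep out all inodes with dirty metadata in one go. */
	for (i = 0; i < NR_INODES; i++) {
		if (inodes[i].meta_dirty) {
			inodes[i].meta_dirty = 0;
			printf("meta: inode %d\n", i);
		}
	}
	return 0;
}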

Thanks,
Fengguang
