lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20090520124856.GY11363@kernel.dk>
Date:	Wed, 20 May 2009 14:48:56 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Jan Kara <jack@...e.cz>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, chris.mason@...cle.com,
	david@...morbit.com, akpm@...ux-foundation.org,
	yanmin_zhang@...ux.intel.com
Subject: Re: [PATCH 02/11] writeback: switch to per-bdi threads for
	flushing data

On Wed, May 20 2009, Christoph Hellwig wrote:
> On Wed, May 20, 2009 at 02:16:30PM +0200, Jens Axboe wrote:
> > It's a fine rule, I agree ;-)
> > 
> > I'll take another look at this when splitting the sync paths.
> 
> Btw, there has been quite a bit of work on the higher level sync code in
> the VFS tree, and I have some TODO list items for the lower level sync
> code.  The most important one would be splitting data and metadata
> writeback.
> 
> Currently __sync_single_inode first calls do_writepages to write back
> the data, then write_inode to potentially write the metadata and then
> finally filemap_fdatawait to wait for the inode to be completed.
> 
> Now for one thing doing the data wait after the metadata writeout is
> wrong for all those filesystems performing some kind of metadata updates
> in the I/O completion handler, and e.g. XFS has to work around this
> by doing a wait by itself in it's write_inode handler.
> 
> Second inodes are usually clustered together, so if a filesystem can
> issue multiple dirty inodes at the same time performance will be much
> better.
> 
> So an optimal sync could would first issue data I/O for all inodes it
> wants to write back, then wait for the data I/O to finish and finally
> write out the inodes in big clusters.
> 
> I'm not quite sure when we'll get to that, just making sure we don't
> work against this direction anywhere.
> 
> And yeah, I really need to take a detailed look at the current
> incarnation of your patchset :)

Please do, I'm particularly interested in the possibility of having
multiple inode placements. Would it be feasible to have the inode
backing be differentiated by type (eg data or meta-data)?

-- 
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ