Message-ID: <20101129144436.GT2767@thunk.org>
Date:	Mon, 29 Nov 2010 09:44:36 -0500
From:	Ted Ts'o <tytso@....edu>
To:	Jonathan Nieder <jrnieder@...il.com>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: Bug#605009: serious performance regression with ext4

On Mon, Nov 29, 2010 at 01:29:30AM -0600, Jonathan Nieder wrote:
> 
> >                        sync_file_range() is a Linux-specific system
> > call that has been around for a while.  It allows a program to control
> > when writeback happens in a very low-level fashion.  The first set of
> > sync_file_range() system calls causes the system to start writing back
> > each file once it has finished being extracted.  It doesn't actually
> > wait for the write to finish; it just starts the writeback.
> 
> True, using sync_file_range(..., SYNC_FILE_RANGE_WRITE) for each file
> makes later fsync() much faster.  But why?  Is this a matter of allowing
> writeback to overlap with write() or is something else going on?

So what's going on is this.  dpkg is writing a series of files.
fsync() causes the following to happen: 

	* force the file specified to be written to disk; in the case
		of ext4 with delayed allocation, this means blocks
		have to be allocated, so the block bitmap gets
		dirtied, etc.
	* force a journal commit.  This causes the block bitmap,
		the inode table block for the inode, etc., to be written
		to the journal, followed by a barrier operation to make
		sure all of the file system metadata, as well as the
		data blocks from the previous step, are written to disk.

If you call fsync() for each file, these two steps get done for each
file.  This means we have to do a journal commit for each and every
file.

Calling sync_file_range() first, for all of the files, forces the
delayed allocation to be resolved, so all of the block bitmaps, inode
data structures, etc., are updated.  Then on the first fdatasync(),
the resulting journal commit writes out all of the block bitmaps and
all of the inode table blocks, and we're done.  The subsequent
fdatasync() calls become no-ops --- which the ftrace shell script will
show.

We could imagine a new kernel interface which took an array of file
descriptors, say call it fsync_array(), which would force writeback on
all of the specified file descriptors, as well as forcing the journal
commit that would guarantee the metadata had been written to disk.
But calling sync_file_range() for each file, and then calling
fdatasync() for all of them, is something that exists today with
currently shipping kernels (and sync_file_range() has been around for
over four years; whereas a new system call wouldn't see wide
deployment for at least 2-3 years).

						- Ted

