Message-ID: <20090327215454.GH31071@duck.suse.cz>
Date:	Fri, 27 Mar 2009 22:54:54 +0100
From:	Jan Kara <jack@...e.cz>
To:	Theodore Tso <tytso@....edu>, Chris Mason <chris.mason@...cle.com>,
	Ric Wheeler <rwheeler@...hat.com>,
	Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
	Ext4 Developers List <linux-ext4@...r.kernel.org>,
	jack@...e.cz
Subject: Re: [PATCH 0/3] Ext3 latency improvement patches

On Fri 27-03-09 17:30:52, Theodore Tso wrote:
> On Fri, Mar 27, 2009 at 05:03:38PM -0400, Chris Mason wrote:
> > > Ric had asked me about a test program that would show the worst-case
> > > ext3 behavior.  So I've modified your ext3 program a little.  It now
> > > creates an 8G file and forks off another process to do random I/O to
> > > that file.
> > > 
> > 
> > My understanding of ext4 delalloc is that once blocks are allocated to
> > a file, we go back to data=ordered.
> 
> Yes, that's correct.
> 
> > Ext4 is going pretty slowly for this fsync test (slower than ext3), it
> > looks like we're going for a very long time in
> > jbd2_journal_commit_transaction -> write_cache_pages.
> 
> One of the things that we can do to optimize this case for ext4 (and
> ext3) is that if a block has already been written out to disk once, we
> don't have to flush it to disk a second time.  So if we add a new
> buffer_head flag which can distinguish between blocks that have been
> newly allocated (and not yet flushed to disk) versus blocks that
> have already been flushed to disk at least once, we wouldn't need to
> force I/O for blocks in the latter case.
>
> After all, most of the applications which do random I/O to a file
> normally will use fsync() appropriately such that they are rewriting
> already allocated blocks.  So there really is no reason to flush those
> blocks out to disk even in data=ordered mode.
> 
> We currently flush *all* blocks out to disk in data=ordered mode
> because we don't have a good way of telling the difference between the
> two cases.
  Yes, but OTOH this will make the "my data was lost after a crash"
problem worse... Although rewrites aren't that common, so maybe it won't
be that bad.

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
