Date:	Mon, 30 Mar 2009 08:41:26 -0400
From:	Chris Mason <chris.mason@...cle.com>
To:	Måns Rullgård <mans@...sr.com>
Cc:	linux-kernel@...r.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: Zero length files - an alternative approach?

On Sun, 2009-03-29 at 12:22 +0100, Måns Rullgård wrote:
> Graham Murray <graham@...rray.org.uk> writes:
> 
> > Just a thought on the ongoing discussion of data loss with ext4 vs ext3.
> >
> > Taking the common scenario:
> > Read oldfile
> > create newfile
> > write newfile data
> > close newfile
> > rename newfile to oldfile
> >
> > When using this scenario, the application writer wants to ensure that
> > either the old or the new content is present. With delayed allocation, this
> > can lead to zero length files. Most of the suggestions on how to address
> > this have involved syncing the data either before the rename or making
> > the rename sync the data.
> >
> > What if, instead of 'bringing forward' the allocation and flushing of
> > the data, the rename were delayed until after the blocks for newfile
> > have been allocated and the data buffers flushed?
> > This would keep the performance benefits of delayed allocation, etc., and
> > also satisfy application developers' apparent dislike of using
> > fsync(). It would give better performance than syncing the data at
> > rename time (either using fsync() or automatically) and satisfy the
> > requirements that either the old or new content is present.
> 
> Consider this scenario:
> 
> 1. Create/write/close newfile
> 2. Rename newfile to oldfile

2a. create oldfile again
2b. fsync oldfile

> 3. Open/read oldfile.  This must return the new contents.
> 4. System crash and reboot before delayed allocation/flush complete
> 5. Open/read oldfile.  Old contents now returned.
> 

What happens to the new generation of oldfile?  We could insert
dependency tracking so that we know the fsync of oldfile is supposed to
also fsync the renamed new file.  But then picture a loop of operations
doing renames and creating files in place of the old one... that
dependency tracking gets ugly in a hurry.
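
For comparison, here is a minimal sketch of the application-side
write/fsync/rename sequence that makes the ordering explicit instead of
asking the filesystem to track the dependency.  It is only an
illustration: replace_file() is a hypothetical helper, the ".tmp" naming
is arbitrary, and error handling is abbreviated.

/*
 * Illustrative sketch only: write the new contents to a temp file,
 * force them to disk, then atomically rename over the old name.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int replace_file(const char *path, const char *data, size_t len)
{
	char tmp[4096];
	int fd;

	snprintf(tmp, sizeof(tmp), "%s.tmp", path);

	fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;

	if (write(fd, data, len) != (ssize_t)len) {
		close(fd);
		unlink(tmp);
		return -1;
	}

	/*
	 * Make sure the data is on disk before the rename can make it
	 * visible under the old name.  This is the fsync application
	 * authors would like to avoid.
	 */
	if (fsync(fd) < 0) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);

	/* Atomically swap the new file into place. */
	return rename(tmp, path);
}

(A fully durable replace would also fsync() the parent directory after
the rename; the sketch above only orders the data ahead of the name
change.)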

Databases know how to do all of this, but filesystems don't implement
most of the database transactional features.

-chris


