Message-ID: <alpine.DEB.1.10.0904011507590.28893@asgard.lang.hm>
Date: Wed, 1 Apr 2009 15:28:03 -0700 (PDT)
From: david@...g.hm
To: Harald Arnesen <skogtun.harald@...il.com>
cc: Bill Davidsen <davidsen@....com>, linux-kernel@...r.kernel.org
Subject: Re: Linux 2.6.29
On Thu, 2 Apr 2009, Harald Arnesen wrote:
> david@...g.hm writes:
>
>>> Understood that it's not deliberate just careless. The two behaviors
>>> which are reported are (a) updating a record in an existing file and
>>> having the entire file content vanish, and (b) finding someone
>>> else's old data in my file - a serious security issue. I haven't
>>> seen any report of the case where a process unlinks or truncates a
>>> file, the disk space gets reused, and then the system fails before
>>> the metadata is updated, leaving the data written by some other
>>> process in the file where it can be read - another possible security
>>> issue.
>>
>> ext3 eliminates this security issue by writing the data before the
>> metadata. ext4 (and I think XFS) eliminate this security issue by not
>> allocating the blocks until it goes to write the data out. I don't
>> know how other filesystems deal with this.
>
> I've been wondering about that during the last days. How about JFS and
> data loss (files containing zeroes after a crash), as compared to ext3,
> ext4, ordered and writeback journal modes? Is it safe?
if you don't do an fsync you can (and will) lose data if there is a crash.
period, end of statement, with all filesystems.
for all filesystems except ext3 in data=ordered or data=journaled modes,
journaling does _not_ mean that your files will have valid data in them.
all it means is that your metadata will not be inconsistent (things like
one block on disk showing up as being part of two different files).
this guarantee means that a crash is not likely to scramble your entire
disk, but any data written shortly before the crash may not have made it
to disk (and the files may contain garbage in the space that was allocated
but not written). as such it is not necessary to do an fsck after every
crash (it's still a good idea to do so every once in a while).
that's _ALL_ that journaling is protecting you from.
delayed allocation and data=ordered are ways to address the security
problem that the garbage data that could end up as part of the file could
contain sensitive data that had been part of other files in the past.
data=ordered and data=journaled address this security risk by writing the
data before they write the metadata (at the cost of long delays in writing
the metadata out, and therefore long fsync times).
XFS and ext4 solve the problem by not allocating the data blocks until
they are actually ready to write the data.
David Lang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/