Message-ID: <4B9EDE7D.4040809@codemonkey.ws>
Date:	Mon, 15 Mar 2010 20:27:25 -0500
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	Christoph Hellwig <hch@...radead.org>
CC:	Chris Webb <chris@...chsys.com>, Avi Kivity <avi@...hat.com>,
	balbir@...ux.vnet.ibm.com,
	KVM development list <kvm@...r.kernel.org>,
	Rik van Riel <riel@...riel.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH][RFC/T/D] Unmapped page cache control - via boot parameter

On 03/15/2010 07:43 PM, Christoph Hellwig wrote:
> On Mon, Mar 15, 2010 at 06:43:06PM -0500, Anthony Liguori wrote:
>    
>> I knew someone would do this...
>>
>> This really gets down to your definition of "safe" behaviour.  As it
>> stands, if you suffer a power outage, it may lead to guest corruption.
>>
>> While we are correct in advertising a write-cache, write-caches are
>> volatile and should a drive lose power, it could lead to data
>> corruption.  Enterprise disks tend to have battery backed write caches
>> to prevent this.
>>
>> In the set up you're emulating, the host is acting as a giant write
>> cache.  Should your host fail, you can get data corruption.
>>
>> cache=writethrough provides a much stronger data guarantee.  Even in the
>> event of a host failure, data integrity will be preserved.
>>      
> Actually cache=writeback is as safe as any normal host is with a
> volatile disk cache, except that in this case the disk cache is
> actually a lot larger.  With a properly implemented filesystem this
> will never cause corruption.

That protects against metadata corruption, but not necessarily against 
corruption of the data stored in a file.

>    You will lose recent updates after
> the last sync/fsync/etc up to the size of the cache, but filesystem
> metadata should never be corrupted, and data that has been forced to
> disk using fsync/O_SYNC should never be lost either.

Not all software uses fsync as often as it should, and oftentimes for 
good reason (fsync on ext3 can be extremely expensive).  This is 
mitigated by the fact that there's usually only a short window of time 
before metadata is flushed to disk.  Adding another layer of caching 
increases that delay.
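
To make that concrete, here's a minimal sketch (not code from qemu or 
from the patch under discussion; the filenames and the helper name are 
made up) of the write/fsync/rename/fsync-the-directory dance an 
application has to do if it wants an update to survive a power failure:

/* Illustrative sketch only -- not from qemu or the kernel.  The classic
 * write / fsync / rename / fsync-the-directory pattern an application
 * must follow if an update is to survive a power failure.  Filenames
 * and the helper name are made up for the example. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int save_file(const char *dir, const char *name,
	      const char *buf, size_t len)
{
	char tmp[4096], path[4096];
	int fd, dfd;

	snprintf(tmp, sizeof(tmp), "%s/.%s.tmp", dir, name);
	snprintf(path, sizeof(path), "%s/%s", dir, name);

	fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;
	/* fsync forces the new file's data out to stable storage */
	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0) {
		close(fd);
		return -1;
	}
	close(fd);

	if (rename(tmp, path) < 0)	/* atomically replace the old copy */
		return -1;

	dfd = open(dir, O_RDONLY | O_DIRECTORY);
	if (dfd < 0)
		return -1;
	if (fsync(dfd) < 0) {		/* make the rename itself durable */
		close(dfd);
		return -1;
	}
	close(dfd);
	return 0;
}

Most software skips some or all of those steps and simply relies on the 
kernel writing dirty data back within a few seconds.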

IIUC, with cache=writeback an O_DIRECT write from the guest is not 
actually on the spindle when the write() completes; an explicit fsync() 
would be required to get it there.  After a host failure that will 
cause data corruption in many applications (databases, for example), 
regardless of whether the filesystem suffers metadata corruption.
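
For illustration, this is roughly what a guest application has to do 
(sketch only; the device name and block size are made up) to make sure 
an O_DIRECT write is actually stable when there is a volatile write 
cache behind the "disk":

/* Illustrative sketch only.  O_DIRECT bypasses the guest page cache,
 * but with a volatile write cache behind the disk the data may still
 * not be on stable storage when pwrite() returns, so an explicit flush
 * is needed.  Device name and block size are made up for the example. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;	/* O_DIRECT I/O must be block aligned */
	void *buf;
	int fd;

	fd = open("/dev/vda", O_WRONLY | O_DIRECT);
	if (fd < 0)
		return 1;
	if (posix_memalign(&buf, 4096, len))	/* buffer must be aligned too */
		return 1;
	memset(buf, 0xab, len);

	if (pwrite(fd, buf, len, 0) != (ssize_t)len)
		return 1;

	/* Without this the write may still be sitting in a write cache. */
	if (fdatasync(fd) < 0)
		return 1;

	free(buf);
	close(fd);
	return 0;
}

fdatasync() is enough here because only the data has to reach stable 
storage; fsync() would also flush inode metadata.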

You could argue that the software should disable writeback caching on 
the virtual disk, but we don't currently support that, so even if the 
application tried, it wouldn't help.
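
For what it's worth, on an emulated IDE disk the guest-side knob would 
look roughly like this (illustrative sketch only, essentially what 
hdparm -W0 does; the device name is made up, and as said above the 
setting isn't honoured for the virtual disks in question):

/* Illustrative sketch only -- roughly what hdparm -W0 does on an IDE
 * disk in the guest.  The device name is made up, and as noted above
 * qemu does not currently honour this for the virtual disks discussed. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/hda", O_RDONLY);

	if (fd < 0)
		return 1;
	/* 0 = disable the drive's write-back cache, 1 = enable it */
	if (ioctl(fd, HDIO_SET_WCACHE, 0UL) < 0)
		return 1;
	close(fd);
	return 0;
}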

Regards,

Anthony Liguori

>    If it is that's
> a bug somewhere in the stack, but in my powerfail testing we never did
> so using xfs or ext3/4 after I fixed up the fsync code in the latter
> two.
>
>    

