Message-ID: <47BC2D4B.6070000@emc.com>
Date:	Wed, 20 Feb 2008 08:38:19 -0500
From:	Ric Wheeler <ric@....com>
To:	Jeremy Higdon <jeremy@....com>
CC:	David Chinner <dgc@....com>, Michael Tokarev <mjt@....msk.ru>,
	device-mapper development <dm-devel@...hat.com>,
	Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org
Subject: Re: [dm-devel] Re: [PATCH] Implement barrier support for single device
 DM devices

Jeremy Higdon wrote:
> On Tue, Feb 19, 2008 at 09:16:44AM +1100, David Chinner wrote:
>> On Mon, Feb 18, 2008 at 04:24:27PM +0300, Michael Tokarev wrote:
>>> First, I still don't understand why on earth barriers are "working"
>>> while regular cache flushes are not.  Almost no consumer-grade hard drive
>>> supports write barriers, but they all support regular cache flushes, and
>>> the latter should be enough (though not the most speed-optimal) to ensure
>>> data safety.  Why require disabling the write cache (as the XFS FAQ
>>> suggests) instead of flushing the cache when appropriate (as opposed to
>>> issuing a write barrier when appropriate)?
>> Devil's advocate:
>>
>> Why should we need to support multiple different block layer APIs
>> to do the same thing? Surely any hardware that doesn't support barrier
>> operations can emulate them with cache flushes when it receives a
>> barrier I/O from the filesystem....
>>
>> Also, given that disabling the write cache still allows CTQ/NCQ to
>> operate effectively, and that in most cases WCD+CTQ (write cache
>> disabled plus tagged queuing) is as fast as WCE+barriers (write cache
>> enabled plus barriers), the simplest thing to do is to turn off
>> volatile write caches and not require any extra software kludges for
>> safe operation.
> 
> 
> I'll put it even more strongly.  My experience is that disabling the
> write cache plus disabling barriers is often much faster than enabling
> both barriers and the write cache when doing metadata-intensive
> operations, as long as you have a drive that is good at CTQ/NCQ.
> 
> The only time write cache + barriers is significantly faster is when
> doing single-threaded data writes, such as direct I/O, or when CTQ/NCQ
> is not enabled or the drive does a poor job of it.
> 
> jeremy
> 

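For concreteness, the emulation David mentions boils down to the classic
flush-write-flush pattern: flush the cache so everything before the
barrier is durable, write the ordered record, then flush again. A minimal
user-space sketch (the function name and arguments are made up for
illustration, and it assumes fsync() on the stack in question really does
issue a FLUSH CACHE to the drive):

/*
 * Sketch: emulating a write barrier with cache flushes (flush-write-flush).
 * Everything written before the barrier must be durable before the
 * commit record lands; with no barrier primitive, two flushes give the
 * same ordering.
 */
#include <unistd.h>

int commit_with_flushes(int log_fd, const void *blocks, size_t blen,
			const void *commit_rec, size_t clen)
{
	if (write(log_fd, blocks, blen) != (ssize_t)blen)
		return -1;
	if (fsync(log_fd) < 0)		/* flush #1: journal blocks durable */
		return -1;
	if (write(log_fd, commit_rec, clen) != (ssize_t)clen)
		return -1;
	return fsync(log_fd);		/* flush #2: commit record durable */
}
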
It would be interesting to compare numbers.

In the large, single-threaded write case, what I have measured is
roughly 2x faster writes with barriers + write cache enabled on
S-ATA/ATA class drives. I think this case alone is a fairly common one.

For very small file sizes, I have seen write cache off beat barriers + 
write cache enabled as well, but barriers start outperforming 
write-cache-disabled once you get up to moderate sizes (I need to rerun 
the tests to get precise numbers/cross-over data).

The type of workload is also important. In the test cases that I ran, 
the application needs to fsync() each file, so we beat up on the barrier 
code pretty heavily (a sketch of that pattern is below).
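
A minimal sketch of that fsync-per-file pattern (the file count, size,
and names are invented parameters; the point is just that every file
forces a journal commit and, with barriers on, a cache flush):

/*
 * Sketch of the fsync-per-file workload described above: write a small
 * file, fsync() it, move on.  Every iteration forces a journal commit
 * and, with barriers enabled, a cache flush.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char name[64], buf[4096];
	int i, fd;

	memset(buf, 'x', sizeof(buf));
	for (i = 0; i < 1000; i++) {
		snprintf(name, sizeof(name), "f%04d", i);
		fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd < 0)
			return 1;
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			return 1;
		if (fsync(fd) < 0)	/* per-file barrier/flush */
			return 1;
		close(fd);
	}
	return 0;
}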

ric
