Message-ID: <480CE6E1.20302@emc.com>
Date:	Mon, 21 Apr 2008 15:11:29 -0400
From:	Ric Wheeler <ric@....com>
To:	Matthew Wilcox <matthew@....cx>
CC:	Eric Sandeen <sandeen@...hat.com>,
	Andi Kleen <andi@...stfloor.org>, Theodore Tso <tytso@....edu>,
	Alexey Zaytsev <alexey.zaytsev@...il.com>,
	linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Rik van Riel <riel@...riel.com>
Subject: Re: Mentor for a GSoC application wanted (Online ext2/3 filesystem
 checker)


Matthew Wilcox wrote:
> On Mon, Apr 21, 2008 at 02:44:45PM -0400, Ric Wheeler wrote:
>> Turning the drive write cache off is the default case for most RAID 
>> products (including our mid and high end arrays).
>>
>> I have not seen an issue with drives wearing out with either setting (cache 
>> disabled or enabled with barriers).
>>
>> The theory does make some sense, but does not map into my experience ;-)
> 
> To be fair though, the gigabytes of NVRAM on the array perform the job
> that the drive's cache would do on a lower-end system.

The population I deal with personally is a huge number of 1U Centera nodes, each 
of which has four high-capacity ATA or SATA drives (no NVRAM). We run with 
barriers (and write cache) enabled, and I have not seen anything that leads me to 
think that this is an issue.

One way to think about this is that even with barriers, relatively few 
operations actually turn into cache flushes (fsyncs, journal syncs, unmounts?).
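
To make that concrete, here is a minimal sketch in plain C (nothing ext3- or 
Centera-specific; the filename is just a placeholder) of the one thing an 
ordinary application does that ends up as a cache flush when barriers are on: 
an explicit fsync(), which forces a journal commit and, with barriers, a flush 
to the drive. A buffered write on its own does not.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char buf[] = "some data\n";
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Buffered write: lands in the page cache, no flush yet. */
	if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1)) {
		perror("write");
		return 1;
	}

	/*
	 * fsync() is one of the few calls that, on a barrier-enabled
	 * filesystem, ends up as a cache flush to the drive (via the
	 * journal commit on ext3). Until here the data may still sit
	 * in the drive's volatile write cache.
	 */
	if (fsync(fd) < 0) {
		perror("fsync");
		return 1;
	}

	close(fd);
	return 0;
}

Journal syncs and unmounts go through the same commit path; plain writeback of 
dirty pages does not issue a flush by itself.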

Another thing to keep in mind is that drives are constantly writing and moving 
heads anyway; disabling the write cache or issuing a flush just adds an incremental 
number of writes and head movements.

Using barriers or disabling the write cache matters only when you are running a 
write-intensive load; read-intensive loads are not affected (and random, 
cache-miss reads move the heads often anyway).

I just don't see it being an issue for any normal user (laptop or desktop) 
since the write workload most people have is a small fraction of what we 
run into in production data centers.

Running your drives moderately will probably help them last longer, but I 
am just not convinced that the write cache/barrier load makes much of a 
difference...

ric

