Date:	Fri, 14 Mar 2008 13:04:39 -0400
From:	Ric Wheeler <ric@....com>
To:	Theodore Tso <tytso@....edu>, Ric Wheeler <ric@....com>,
	Benny Amorsen <benny+usenet@...rsen.dk>,
	linux-kernel@...r.kernel.org
Subject: Re: [ANNOUNCE] Ramback: faster than a speeding bullet

Theodore Tso wrote:
> On Fri, Mar 14, 2008 at 11:47:04AM -0400, Ric Wheeler wrote:
>> The ingest rate at the time of a power hit makes a huge difference as well 
>> - basically, pulling the power cord when a box is idle is normally not 
>> harmful. Try that when you are really pounding on the disks and you will 
>> see corruptions aplenty without barriers ;-)
> 
> Oh, no question.  But the fact that it mostly works when the box is
> idle means the hard drive firmware is reasonably aggressive about
> pushing data from the write cache out to the platters when it can.
> 
>> One note - the barrier hit for apps that use fsync() is just half an order 
>> of magnitude (say 35 files/sec instead of 120 files/sec). If you don't 
>> fsync() each file, the impact is lower still.
>>
>> Still expensive, but might be reasonable for home users on a box with 
>> family photos, etc.
> 
> It depends on the workload, obviously.  I thought I remembered someone
> on this thread talking about a benchmark where they went from ~2000 to
> ~20 ops/sec once they added fsync().  I'm sure that was an extreme
> benchmarking workload that isn't at all representative of real-life
> usage, where you're usually doing something other than modifying the
> metadata of many tiny files over and over again.  :-)

I think those were the numbers comparing a ramdisk, an S-ATA drive and a 
CLARiiON, all doing barriers ;-)

I just reran some quick tests on a home box with an S-ATA drive, writing 
50k files single-threaded:

barriers off &    fsync:  133 files/sec
barriers off & no fsync: 2306 files/sec
barriers on  & no fsync: 2312 files/sec
barriers on  &    fsync:   22 files/sec

So barriers cost essentially nothing without fsync (2306 vs 2312 files/sec) 
and roughly a 6x slowdown when you fsync every write (133 down to 22 files/sec).

Doing the fsync is the only way to be (mostly) sure that all the data is 
on the platter, but you can write the files in a batch and then go back 
and reopen/fsync/close all of the files afterwards (sketched below).

That helps a lot:

barriers on & bulk fsync (in order written)      : 218 files/sec
barriers on & bulk fsync (reverse order written) : 340 files/sec
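
In case it helps to see the pattern, here is a minimal C sketch of the 
batched approach (this is not fs_mark itself -- the directory name, file 
count and file size are made up for illustration):

/*
 * Sketch of "write a batch, then reopen/fsync/close afterwards".
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define NFILES   1000
#define FILESIZE 4096

int main(void)
{
        static char buf[FILESIZE];
        char name[64];
        int i, fd;

        memset(buf, 'x', sizeof(buf));
        mkdir("test", 0755);            /* ignore EEXIST */

        /* Phase 1: create and write every file, no fsync yet. */
        for (i = 0; i < NFILES; i++) {
                snprintf(name, sizeof(name), "test/file-%d", i);
                fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd < 0 || write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                        perror(name);
                        exit(1);
                }
                close(fd);
        }

        /* Phase 2: go back and reopen/fsync/close each file
         * (reverse order here, like the -S 2 run above). */
        for (i = NFILES - 1; i >= 0; i--) {
                snprintf(name, sizeof(name), "test/file-%d", i);
                fd = open(name, O_WRONLY);
                if (fd < 0 || fsync(fd) < 0) {
                        perror(name);
                        exit(1);
                }
                close(fd);
        }
        return 0;
}

The win presumably comes from all of the writes being issued before any 
of the cache flushes, so most of the data is already on its way to the 
platter by the time the fsync()s run.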

All of this was measured with my fs_mark tool (it is also on 
sourceforge) with variations of the following:

fs_mark -d /home/ric/test -n 10000 -D 20 -N 500

(-S 0 no fsync, -S 1 fsync per file as written, -S 2 bulk reverse order, 
-S 3 bulk in order).
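
So, for what it's worth, the runs above correspond to invocations along 
the lines of:

fs_mark -d /home/ric/test -n 10000 -D 20 -N 500 -S 0   (no fsync)
fs_mark -d /home/ric/test -n 10000 -D 20 -N 500 -S 1   (fsync per file as written)
fs_mark -d /home/ric/test -n 10000 -D 20 -N 500 -S 2   (bulk fsync, reverse order)
fs_mark -d /home/ric/test -n 10000 -D 20 -N 500 -S 3   (bulk fsync, in order)

with barriers themselves toggled at mount time (e.g. barrier=1 on ext3), 
not by fs_mark.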

> It's also the case that a home user's fileserver is generally
> quiescent, which is probably why we aren't hearing lots of stories
> about home NAS servers (which I bet probably don't enable write
> barriers) trashing vast amounts of user data.....
> 
> 						- Ted

A lot of NAS boxes and storage boxes in general disable all write cache 
on drives just to be safe. It would be interesting to benchmark the nfs 
server with and without barriers from a client ;-)

ric
