Date:	Fri, 27 Mar 2009 14:11:37 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Theodore Tso <tytso@....edu>,
	Matthew Garrett <mjg59@...f.ucam.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Rees <drees76@...il.com>, Jesper Krogh <jesper@...gh.cc>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.29

Theodore Tso wrote:
> When I was growing up we were trained to *always* check error returns
> from *all* system calls, and to *always* fsync() if it was critical
> that the data survive a crash.  That was what competent Unix
> programmers did.  And if you are always checking error returns, the
> difference in the Lines of Code between doing it right and doing it
> wrong really wasn't that big --- and again, back then fsync() wasn't
> expensive.  Making fsync expensive was ext3's data=ordered mode's
> fault.
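
(For concreteness, the "do it right" pattern described above looks roughly
like this in C -- just a sketch, with a made-up helper name, no retry of
short writes, and errors handled by bailing out:)

	#include <err.h>
	#include <fcntl.h>
	#include <unistd.h>

	/* Write a buffer to a file, checking every return value and
	 * fsync()ing before close so the data survives a crash. */
	static void write_carefully(const char *path, const void *buf, size_t len)
	{
		int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
		if (fd < 0)
			err(1, "open %s", path);

		ssize_t n = write(fd, buf, len);
		if (n < 0)
			err(1, "write %s", path);
		if ((size_t)n != len)
			errx(1, "short write to %s", path);

		if (fsync(fd) < 0)		/* force the data to stable storage */
			err(1, "fsync %s", path);
		if (close(fd) < 0)
			err(1, "close %s", path);
	}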

This is a fairly narrow view of correct and possible.  How can you make 
"cat" fsync? grep? sort?  How do they know they're not dealing with 
critical data?  Apps in general don't know, because "criticality" is a 
property of the data itself and how it's used, not the tools operating on it.

My point isn't that "there should be a way of doing fsync from a shell 
script" (which is probably true anyway), but that authors can't 
generally anticipate when their program is going to be dealing with 
something important.  The conservative approach would be to fsync all 
data on every close, but that's almost certainly the wrong thing for 
everyone.

If the filesystem has reasonably strong inherent data-preserving 
properties, then that's much better than scattering fsync everywhere.

fsync obviously makes sense in specific applications; it makes sense to 
fsync when you're guaranteeing that a database commit hits stable 
storage, etc.  But generic tools can't reasonably perform fsyncs, and 
it's not reasonable to say that "important data is always handled by 
special important data tools".
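
(Concretely, the database-commit case is the familiar write-new-copy,
fsync, rename dance -- sketched below with made-up file names and only
minimal error handling:)

	#include <err.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/*
	 * Commit an update durably: write a fresh copy, fsync it, rename it
	 * over the old file, then fsync the directory so the rename itself
	 * is on stable storage.
	 */
	static void commit_file(const void *buf, size_t len)
	{
		int fd = open("db.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
		if (fd < 0)
			err(1, "open db.tmp");
		if (write(fd, buf, len) != (ssize_t)len || fsync(fd) < 0 || close(fd) < 0)
			err(1, "writing db.tmp");

		if (rename("db.tmp", "db") < 0)		/* atomic replacement */
			err(1, "rename");

		int dirfd = open(".", O_RDONLY);
		if (dirfd < 0 || fsync(dirfd) < 0)	/* make the rename durable */
			err(1, "fsync .");
		close(dirfd);
	}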

    J