Message-ID: <20121207212743.GE29435@thunk.org>
Date:	Fri, 7 Dec 2012 16:27:43 -0500
From:	Theodore Ts'o <tytso@....edu>
To:	Chris Mason <chris.mason@...ionio.com>,
	Chris Mason <clmason@...ionio.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ric Wheeler <rwheeler@...hat.com>,
	Ingo Molnar <mingo@...nel.org>,
	Christoph Hellwig <hch@...radead.org>,
	Martin Steigerwald <Martin@...htvoll.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Dave Chinner <david@...morbit.com>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH, 3.7-rc7, RESEND] fs: revert commit bbdd6808 to fallocate
 UAPI

On Fri, Dec 07, 2012 at 04:09:32PM -0500, Chris Mason wrote:
> Persistent trim is what I had in mind, but there are other ideas that do
> imply a change in behavior as well.  Can we safely assume this feature
> won't matter on spinning media?  New features like persistent
> trim do make it much easier to solve securely, and using a bit for it
> means we can toss back an error to the app if the underlying storage
> isn't safe.

We originally implemented no hide stale for spinning media.  Some
folks have claimed that for XFS their superior technology means that
no hide stale doesn't buy them anything for HDD's.  I'm not entirely
sure I buy this, since if you need to update metadata, it means at
least one extra seek for each random write into 4k preallocated space,
and 7200 RPM disks only have about 200 seeks per second.
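A back-of-the-envelope sketch of that claim (a toy model, using the ~200 seeks/sec figure from the paragraph above; real drives vary): if each random 4k write into preallocated space costs one extra metadata seek, the write budget is halved.

```python
# Rough seek-budget model for random 4k writes on a 7200 RPM disk.
# SEEKS_PER_SEC is the ~200 figure from the text, not a measured value.

SEEKS_PER_SEC = 200

def max_random_write_iops(seeks_per_write: int) -> float:
    """Random-write IOPS when each write costs a fixed number of seeks."""
    return SEEKS_PER_SEC / seeks_per_write

# Data-only write (no metadata update at write time, as with no-hide-stale):
data_only = max_random_write_iops(1)      # full 200 writes/sec budget

# Data write plus one metadata seek into the preallocated extent:
with_metadata = max_random_write_iops(2)  # budget drops to 100 writes/sec

print(data_only, with_metadata)
```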

One of the problems that I've seen is that as disks get bigger, the
number of seeks per second has remained constant, and so an
application which required N TB spread out over a large number of
disks might now only require a fraction of the number of disks --- so
it's very easy for a cluster file system to become seek constrained by
the number of spindles that you have, and not capacity constrained.
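To make that concrete, here is a toy sizing model (the workload numbers and per-spindle seek budget are illustrative assumptions, not from any real deployment): as per-disk capacity grows, the capacity-driven disk count shrinks but the seek-driven count does not, so seeks become the binding constraint.

```python
import math

SEEKS_PER_SPINDLE = 200  # roughly constant per disk, regardless of capacity

def disks_needed(capacity_tb: float, iops: float, disk_tb: float):
    """Disks required by capacity, by seek budget, and the binding maximum."""
    by_capacity = math.ceil(capacity_tb / disk_tb)
    by_seeks = math.ceil(iops / SEEKS_PER_SPINDLE)
    return by_capacity, by_seeks, max(by_capacity, by_seeks)

# Hypothetical workload: 10 TB of data, 2000 random IOPS.
print(disks_needed(10, 2000, 1))   # 1 TB disks: capacity and seeks agree
print(disks_needed(10, 2000, 10))  # 10 TB disks: now purely seek-constrained
```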

This to me seems to be a fundamental problem, and I don't think it's
possible to wave one's hands to get rid of it.  All you can say is
that the people who care about this are crazy (that's OK, I don't mind
when Christoph or Dave call me crazy :-), and that their workload
doesn't matter.  But if you are trying to optimize out every last
seek, because you desperately care about latency and seeks are a
precious and scarce resource[1], then I don't see a way around the
technique of not requiring an update to the metadata at the time that
you write the data block, and that kinda implies no-hide-stale.

Regards,

					- Ted

[1] Even if you don't care about the latency of the write operation,
the fact that the write operation has to do two seeks and not one can
very well slow down a subsequent high priority read request, where you
*do* care about latency.  The problem is that you only have about 200
seeks per spindle.
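The footnote's point as arithmetic (again a sketch, reusing the ~200 seeks/sec figure; the 80 writes/sec background rate is a made-up example): writes costing two seeks instead of one drain a shared per-spindle seek budget, leaving fewer seeks for latency-sensitive reads.

```python
# Seeks left over for high-priority reads on one spindle, given a
# background random-write rate and a per-write seek cost.

SEEKS_PER_SEC = 200

def read_seeks_left(write_iops: float, seeks_per_write: int) -> float:
    return SEEKS_PER_SEC - write_iops * seeks_per_write

# With 80 background writes/sec:
print(read_seeks_left(80, 1))  # one seek per write leaves 120 for reads
print(read_seeks_left(80, 2))  # two seeks per write leaves only 40
```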

