Message-ID: <20170509163612.32fdm2z5g3jqwkd2@thunk.org>
Date: Tue, 9 May 2017 12:36:12 -0400
From: Theodore Ts'o <tytso@....edu>
To: "Darrick J. Wong" <darrick.wong@...cle.com>
Cc: Eric Biggers <ebiggers3@...il.com>, linux-ext4@...r.kernel.org,
Eric Biggers <ebiggers@...gle.com>
Subject: Re: [PATCH] misc: fix 'zero_hugefiles = false' regression
On Mon, May 08, 2017 at 06:56:36PM -0700, Darrick J. Wong wrote:
>
> Soooo... I don't know that I like the potential for stale data exposure,
> which predisposes me not to like this patch.
We like causing mke2fs to take 24+ hours per high-capacity HDD even less. :-)
Especially when the only user of the huge file is a trusted storage
server where all data is stored encrypted at rest with multiple layers
of encryption[1]; forcing a zero-write pass there is really pointless.
[1] https://cloud.google.com/security/encryption-at-rest/default-encryption/
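
For reference, the knob in question is the zero_hugefiles boolean in
mke2fs.conf. A minimal sketch of how it might be set is below; the
fs_type name "hugedisk" is made up for illustration, and only the
relations shown are assumed (see mke2fs.conf(5) for the full set of
hugefiles_* options):

    [fs_types]
        hugedisk = {
            # Pre-create huge files at mkfs time
            make_hugefiles = true
            # Skip the block-zeroing pass; only safe when stale data
            # exposure is not a concern, e.g. an encrypted-at-rest
            # storage server as described above
            zero_hugefiles = false
        }

Something like "mke2fs -T hugedisk /dev/sdX" would then select this
fs_type and create the huge files without zeroing their blocks.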
Cheers,
- Ted