Date:   Thu, 12 Apr 2018 04:19:48 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Jeff Layton <jlayton@...hat.com>
Cc:     Andres Freund <andres@...razel.de>,
        Andreas Dilger <adilger@...ger.ca>,
        20180410184356.GD3563@...nk.org,
        "Theodore Y. Ts'o" <tytso@....edu>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Linux FS Devel <linux-fsdevel@...r.kernel.org>,
        "Joshua D. Drake" <jd@...mandprompt.com>
Subject: Re: fsync() errors is unsafe and risks data loss

On Thu, Apr 12, 2018 at 07:09:14AM -0400, Jeff Layton wrote:
> On Wed, 2018-04-11 at 20:02 -0700, Matthew Wilcox wrote:
> > At the moment, when we open a file, we sample the current state of the
> > writeback error and only report new errors.  We could set it to zero
> > instead, and report the most recent error as soon as anything happens
> > which would report an error.  That way err = close(open("file")); would
> > report the most recent error.
> > 
> > That's not going to persist once the in-memory data structure for that
> > inode is evicted; we'd need filesystem support to persist it.  But
> > maybe it's "good enough" to only support it for recent files.
> > 
> > Jeff, what do you think?
> 
> I hate it :). We could do that, but....yecchhhh.
> 
> Reporting errors only in the case where the inode happened to stick
> around in the cache seems too unreliable for real-world usage, and might
> be problematic for some use cases. I'm also not sure it would really be
> helpful.

Yeah, it's definitely half-arsed.  We could make further changes to
improve the situation, but they'd have wider impact.  For example, we
can tell whether the error has been sampled by any existing fd, so we
could bias inode reaping to keep inodes with unreported errors in the
cache longer.
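
To make that concrete, here's roughly what the check would look like
from userspace under the proposed semantics (purely illustrative;
"file" is a stand-in path, and this shows the *proposed* behaviour,
not what current kernels do):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Illustrative sketch of the proposed semantics: open() followed by
 * an immediate close() would report the most recent writeback error
 * recorded against the inode.  Current kernels do NOT behave this
 * way: a fresh open() samples the errseq_t state and only sees
 * errors that occur after that point.
 */
static int check_recent_wb_error(const char *path)
{
        int fd = open(path, O_RDONLY);

        if (fd < 0)
                return -errno;
        if (close(fd) < 0)      /* proposal: returns the last error */
                return -errno;
        return 0;
}

int main(void)
{
        int err = check_recent_wb_error("file");  /* stand-in path */

        if (err)
                fprintf(stderr, "writeback error: %s\n", strerror(-err));
        return 0;
}

Even then it only works for as long as the inode stays in the cache,
which is why the reaping bias above would matter.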

> I think the crux of the matter here is not really about error reporting,
> per se. I asked this at LSF last year, and got no real answer:
> 
> When there is a writeback error, what should be done with the dirty
> page(s)? Right now, we usually just mark them clean and carry on. Is
> that the right thing to do?

I suspect it isn't.  If there's a transient error then we should reattempt
the write.  OTOH if the error is permanent then reattempting the write
isn't going to do any good and it's just going to cause the drive to go
through the whole error handling dance again.  And what do we do if we're
low on memory and need these pages back to avoid going OOM?  There are
a lot of options here, all of them bad in one situation or another.
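
Userspace already faces the same judgement call today.  Something like
the following sketch (the choice of which errnos count as transient is
purely the caller's assumption; the kernel promises nothing here, and
the helper name is made up):

#include <errno.h>
#include <unistd.h>

/*
 * Rough sketch: retry fsync() only on errors that are plausibly
 * transient.  Treating EINTR as the one retryable case is an
 * assumption; on EIO/ENOSPC the dirty pages have typically already
 * been marked clean, so retrying can't bring the data back.
 */
static int fsync_with_retry(int fd, int max_tries)
{
        int i;

        for (i = 0; i < max_tries; i++) {
                if (fsync(fd) == 0)
                        return 0;
                if (errno != EINTR)
                        break;          /* treat as permanent: give up */
        }
        return -errno;
}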

> One possibility would be to invalidate the range that failed to be
> written (or the whole file) and force the pages to be faulted in again
> on the next access. It could be surprising for some applications to not
> see the results of their writes on a subsequent read after such an
> event.
> 
> Maybe that's ok in the face of a writeback error though? IDK.

I don't know either.  It'd force the application to face up immediately
to the fact that the data is gone, rather than only discovering it after
a reboot.  Again, though, that might cause more problems than it solves.
It's hard to know what the right thing to do is.
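
For what it's worth, the surprise looks like this from userspace
(made-up path and data).  Today the final read sees "new" even though
it never reached stable storage; with invalidate-on-error it would
fault the old on-disk bytes back in:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch of the read-after-failed-writeback surprise.  The failed
 * pages are currently marked clean but kept in cache, so the read
 * below observes data that never hit disk.  If the kernel instead
 * invalidated the range on error, the same read would fault the old
 * on-disk contents back in.
 */
int main(void)
{
        char buf[4] = "";
        int fd = open("testfile", O_RDWR | O_CREAT, 0644); /* made-up path */

        if (fd < 0)
                return 1;
        if (pwrite(fd, "new", 3, 0) != 3)
                return 1;
        if (fsync(fd) < 0)
                perror("fsync");                /* writeback failed */
        if (pread(fd, buf, 3, 0) == 3)
                printf("read back: %s\n", buf); /* "new" today */
        close(fd);
        return 0;
}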
