Date:   Thu, 12 Apr 2018 07:09:14 -0400
From:   Jeff Layton <>
To:     Matthew Wilcox <>,
        Andres Freund <>
Cc:     Andreas Dilger <>,
        "Theodore Y. Ts'o" <>,
        Ext4 Developers List <>,
        Linux FS Devel <>,
        "Joshua D. Drake" <>
Subject: Re: fsync() errors is unsafe and risks data loss

On Wed, 2018-04-11 at 20:02 -0700, Matthew Wilcox wrote:
> On Wed, Apr 11, 2018 at 07:17:52PM -0700, Andres Freund wrote:
> > > > While there's some differing opinions on the referenced postgres thread,
> > > > the fundamental problem isn't so much that a retry won't fix the
> > > > problem, it's that we might NEVER see the failure.  If writeback happens
> > > > in the background, encounters an error, undirties the buffer, we will
> > > > happily carry on because we've never seen that. That's when we're
> > > > majorly screwed.
> > > 
> > > I think there are two issues here - "fsync() on an fd that was just opened"
> > > and "persistent error state (without keeping dirty pages in memory)".
> > > 
> > > If there is background data writeback *without an open file descriptor*,
> > > there is no mechanism for the kernel to return an error to any application
> > > which may exist, or may not ever come back.
> > 
> > And that's *horrible*. If I cp a file, and writeback fails in the
> > background, and I then cat that file before restarting, I should be able
> > to see that that failed. Instead of returning something bogus.

What are you expecting to happen in this case? Are you expecting a read
error due to a writeback failure? Or are you just saying that we should
be invalidating pages that failed to be written back, so that they can
be re-read?

> At the moment, when we open a file, we sample the current state of the
> writeback error and only report new errors.  We could set it to zero
> instead, and report the most recent error as soon as anything happens
> which would report an error.  That way err = close(open("file")); would
> report the most recent error.
> That's not going to be persistent across the data structure for that inode
> being removed from memory; we'd need filesystem support for persisting
> that.  But maybe it's "good enough" to only support it for recent files.
> Jeff, what do you think?

I hate it :). We could do that, but....yecchhhh.

Reporting errors only in the case where the inode happened to stick
around in the cache seems too unreliable for real-world usage, and might
be problematic for some use cases. I'm also not sure it would really be
helpful.

I think the crux of the matter here is not really about error reporting,
per-se. I asked this at LSF last year, and got no real answer:

When there is a writeback error, what should be done with the dirty
page(s)? Right now, we usually just mark them clean and carry on. Is
that the right thing to do?
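
To spell out the two options, here is a toy model of a single cached
page (purely illustrative; none of this is real kernel code or real
page-cache structures), contrasting "mark clean and carry on" with
"invalidate and force a re-read":

```c
#include <stdbool.h>
#include <string.h>

/* Toy model of one cached page; illustrative only, not kernel code. */
struct toy_page {
	char data[16];  /* cached contents (may not match the disk) */
	bool dirty;
	bool valid;     /* if false, the next read re-fetches from disk */
};

/* Current behavior: on writeback failure the page is marked clean, so
 * later reads keep returning data that was never persisted. */
static void writeback_fail_mark_clean(struct toy_page *pg)
{
	pg->dirty = false;
}

/* Alternative: invalidate the page, so the next access faults it back
 * in and the application sees what is actually on stable storage. */
static void writeback_fail_invalidate(struct toy_page *pg)
{
	pg->dirty = false;
	pg->valid = false;
}

static const char *toy_read(struct toy_page *pg, const char *disk)
{
	if (!pg->valid) {       /* simulate re-faulting from backing store */
		strcpy(pg->data, disk);
		pg->valid = true;
	}
	return pg->data;
}
```

With "mark clean", a read after the failure still returns the new (lost)
data; with "invalidate", it returns the old on-disk contents.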

One possibility would be to invalidate the range that failed to be
written (or the whole file) and force the pages to be faulted in again
on the next access. It could be surprising for some applications to not
see the results of their writes on a subsequent read after such an
event.
Maybe that's ok in the face of a writeback error though? IDK.
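
For application writers, the practical takeaway from all of this is to
hold the same file descriptor from write() through fsync(), and to treat
the first fsync() failure as data loss rather than retrying (a later
retry can "succeed" once the pages have been marked clean). A sketch of
that defensive pattern:

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <stddef.h>

/* Defensive durable-write pattern: keep the fd open across write() and
 * fsync(), and report the *first* fsync() failure as fatal for this
 * data instead of retrying and trusting a later success. */
static int durable_write(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	const char *p = buf;

	if (fd < 0)
		return -errno;

	while (len > 0) {
		ssize_t n = write(fd, p, len);

		if (n < 0) {
			if (errno == EINTR)
				continue;
			int err = -errno;
			close(fd);
			return err;
		}
		p += n;
		len -= (size_t)n;
	}

	if (fsync(fd) < 0) {
		int err = -errno;
		close(fd);
		return err;  /* do NOT retry fsync() and trust success */
	}
	return close(fd) < 0 ? -errno : 0;
}
```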
-- 
Jeff Layton <>
