Date:   Sat, 21 Apr 2018 20:14:29 +0200
From:   Jan Kara <jack@...e.cz>
To:     Jeff Layton <jlayton@...hat.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Andres Freund <andres@...razel.de>,
        Andreas Dilger <adilger@...ger.ca>,
        20180410184356.GD3563@...nk.org,
        "Theodore Y. Ts'o" <tytso@....edu>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Linux FS Devel <linux-fsdevel@...r.kernel.org>,
        "Joshua D. Drake" <jd@...mandprompt.com>
Subject: Re: fsync() errors is unsafe and risks data loss

On Thu 12-04-18 07:09:14, Jeff Layton wrote:
> On Wed, 2018-04-11 at 20:02 -0700, Matthew Wilcox wrote:
> > At the moment, when we open a file, we sample the current state of the
> > writeback error and only report new errors.  We could set it to zero
> > instead, and report the most recent error as soon as anything happens
> > which would report an error.  That way err = close(open("file")); would
> > report the most recent error.
> > 
> > That's not going to be persistent across the data structure for that inode
> > being removed from memory; we'd need filesystem support for persisting
> > that.  But maybe it's "good enough" to only support it for recent files.
> > 
> > Jeff, what do you think?
> 
> I hate it :). We could do that, but....yecchhhh.
> 
> Reporting errors only in the case where the inode happened to stick
> around in the cache seems too unreliable for real-world usage, and might
> be problematic for some use cases. I'm also not sure it would really be
> helpful.
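
The open-time sampling being discussed above can be modeled in userspace. The sketch below is a toy loosely inspired by the kernel's errseq_t mechanism; all names and details here are simplified assumptions for illustration, not the real API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of per-inode writeback error reporting. An inode carries a
 * counter bumped on every new writeback error; a struct file samples it
 * and reports an error when the counter has advanced past the sample. */

struct inode_err {
	unsigned seq;		/* bumped on each new writeback error */
};

struct filp {
	struct inode_err *inode;
	unsigned sampled;	/* error seq sampled at open() time */
};

/* Current behavior: sample at open, so only errors that happen *after*
 * the open are reported to this file description. */
static void open_sample(struct filp *f, struct inode_err *i)
{
	f->inode = i;
	f->sampled = i->seq;
}

/* The alternative proposed above: start from zero, so the most recent
 * error (even one predating the open) is reported once. */
static void open_zero(struct filp *f, struct inode_err *i)
{
	f->inode = i;
	f->sampled = 0;
}

static void writeback_error(struct inode_err *i)
{
	i->seq++;
}

/* fsync()/close() path: report an error iff the seq advanced past our
 * sample, then re-sample so each error is reported only once. */
static bool check_error(struct filp *f)
{
	if (f->inode->seq != f->sampled) {
		f->sampled = f->inode->seq;
		return true;
	}
	return false;
}
```

With open_sample(), an error that struck before the open stays invisible; with open_zero(), the err = close(open("file")) idiom would surface it exactly once.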

So this is never going to be perfect, but I think we could do well enough
by:
1) Mark inodes that hit an IO error.
2) If the inode gets evicted from memory, we store the fact that we hit an
error for this inode in a more space-efficient data structure (sparse
bitmap, radix tree, extent tree, whatever).
3) If the underlying device gets destroyed, we can just switch the whole SB
to an error state and forget the per-inode info.
4) If there's too much per-inode error info (probably a per-fs configurable
limit in terms of number of inodes), we would yell in the kernel log,
switch the whole fs to the error state, and forget the per-inode info.

This way there won't be silent loss of IO errors, and memory usage would be
reasonably limited. The whole fs could switch to the error state
"prematurely", but if that's a problem for a particular machine, the admin
could tune the limit on the number of inodes to keep IO errors for...
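
The scheme above can be sketched as a toy userspace model. The "space-efficient structure" is just a fixed array of inode numbers here (a real implementation would use a sparse bitmap or radix tree), and every name below is made up for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define ERR_LIMIT 4	/* stand-in for the per-fs configurable limit */

struct toy_sb {
	bool fs_error;			/* step 3/4: whole-SB error state */
	unsigned long evicted_err[ERR_LIMIT];
	size_t nr_evicted;
};

/* Step 2: on eviction of an inode marked with an IO error, remember its
 * inode number in the compact structure. Step 4: if the limit is hit,
 * degrade the whole fs to the error state and drop the per-inode info. */
static void evict_errored_inode(struct toy_sb *sb, unsigned long ino)
{
	if (sb->fs_error)
		return;			/* already degraded, nothing to track */
	if (sb->nr_evicted == ERR_LIMIT) {
		sb->fs_error = true;	/* yell in the log in a real fs */
		sb->nr_evicted = 0;	/* forget per-inode info */
		return;
	}
	sb->evicted_err[sb->nr_evicted++] = ino;
}

/* The invariant that prevents silent loss: an error is either recorded
 * against the inode, or the whole fs reports the error state. */
static bool inode_had_error(const struct toy_sb *sb, unsigned long ino)
{
	if (sb->fs_error)
		return true;
	for (size_t i = 0; i < sb->nr_evicted; i++)
		if (sb->evicted_err[i] == ino)
			return true;
	return false;
}
```

Crossing the limit makes every inode report an error, which is the "premature" fs-wide error state traded off above against bounded memory usage.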

> I think the crux of the matter here is not really about error reporting,
> per-se.

I think this is related but a different question.

> I asked this at LSF last year, and got no real answer:
> 
> When there is a writeback error, what should be done with the dirty
> page(s)? Right now, we usually just mark them clean and carry on. Is
> that the right thing to do?
> 
> One possibility would be to invalidate the range that failed to be
> written (or the whole file) and force the pages to be faulted in again
> on the next access. It could be surprising for some applications to not
> see the results of their writes on a subsequent read after such an
> event.
> 
> Maybe that's ok in the face of a writeback error though? IDK.

I can see an admin preferring to kill the machine with OOM rather than deal
with data loss due to IO errors (e.g. with HA server failover set up). Or
to retry for some time before dropping the dirty data. Or to do what we do
now (possibly invalidating the pages, as you say). As Dave said elsewhere,
there's no one strategy that's going to please everybody, so it might be
beneficial to make this configurable, as XFS does for metadata.
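
A configurable policy like that might look like the following sketch. These enums and the helper are hypothetical, in the spirit of XFS's metadata error configuration; none of the names exist in the kernel:

```c
#include <assert.h>

/* Hypothetical per-fs knob for what to do with dirty pages whose
 * writeback failed. */
enum wb_err_policy {
	WB_ERR_KEEP_DIRTY,	/* keep pages dirty; may OOM, no silent loss */
	WB_ERR_RETRY,		/* retry writeback a while before giving up */
	WB_ERR_DROP,		/* current behavior: mark clean, carry on */
};

enum wb_action { WB_REDIRTY, WB_RETRY_LATER, WB_MARK_CLEAN };

/* Decide the fate of a page after a writeback error under the chosen
 * policy; a real implementation would also invalidate on MARK_CLEAN if
 * that variant were chosen. */
static enum wb_action handle_wb_error(enum wb_err_policy policy,
				      int retries_left)
{
	switch (policy) {
	case WB_ERR_KEEP_DIRTY:
		return WB_REDIRTY;
	case WB_ERR_RETRY:
		return retries_left > 0 ? WB_RETRY_LATER : WB_MARK_CLEAN;
	case WB_ERR_DROP:
	default:
		return WB_MARK_CLEAN;
	}
}
```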

OTOH, if I look at the problem from an application developer's POV, most
apps will just declare game over in the face of IO errors (if they take
care to check for them at all). And the sophisticated apps that do try some
kind of error recovery have to be prepared for the data to be just gone
(since depending on exactly what the kernel does is rather fragile), so I'm
not sure how much practical value configurable behavior on writeback errors
would bring.
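
The conservative application-side strategy described above amounts to treating any fsync() failure as "the data is gone" and rebuilding the file from a known-good source, never assuming the dirty pages survived. durable_write() below is a made-up helper sketching that pattern:

```c
#include <fcntl.h>
#include <unistd.h>

/* Write buf to path and make it durable. Returns 0 on success, -1 if
 * the data must be considered lost. On any write/fsync error we delete
 * the file and report failure: retrying fsync() in the hope that the
 * kernel kept the pages dirty is exactly the fragile dependence on
 * kernel behavior discussed above. */
static int durable_write(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
		close(fd);
		unlink(path);	/* game over for this copy; regenerate it */
		return -1;
	}
	return close(fd);	/* close() can report errors too */
}
```

The caller's recovery path then re-runs durable_write() from its own copy of the data, rather than poking at the kernel's page cache state.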

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
