Message-ID: <1523545730.4532.82.camel@redhat.com>
Date:   Thu, 12 Apr 2018 11:08:50 -0400
From:   Jeff Layton <jlayton@...hat.com>
To:     Dave Chinner <david@...morbit.com>,
        lsf-pc <lsf-pc@...ts.linuxfoundation.org>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Andres Freund <andres@...razel.de>,
        Andreas Dilger <adilger@...ger.ca>,
        "Theodore Y. Ts'o" <tytso@....edu>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Linux FS Devel <linux-fsdevel@...r.kernel.org>,
        "Joshua D. Drake" <jd@...mandprompt.com>
Subject: Re: fsync() errors is unsafe and risks data loss

On Thu, 2018-04-12 at 22:01 +1000, Dave Chinner wrote:
> On Thu, Apr 12, 2018 at 07:09:14AM -0400, Jeff Layton wrote:
> > When there is a writeback error, what should be done with the dirty
> > page(s)? Right now, we usually just mark them clean and carry on. Is
> > that the right thing to do?
> 
> There isn't a right thing. Whatever we do will be wrong for someone.
> 
> > One possibility would be to invalidate the range that failed to be
> > written (or the whole file) and force the pages to be faulted in again
> > on the next access. It could be surprising for some applications to not
> > see the results of their writes on a subsequent read after such an
> > event.
> 
> Not to mention a POSIX IO ordering violation. Seeing stale data
> after a "successful" write is simply not allowed.
> 

I'm not so sure here, given that we're dealing with an error condition.
Are we really obligated to preserve read-after-write semantics for pages
that we can't write back?

Given that the pages are marked clean after these failures, we don't
provide that guarantee even today:

Suppose we're unable to do writes but can do reads vs. the backing
store. After a wb failure, the page has the dirty bit cleared. If it
gets kicked out of the cache before the read occurs, it'll have to be
faulted back in. Poof -- your write just disappeared.

That can even happen before you get the chance to call fsync, so even a
write()+read()+fsync() is not guaranteed to be safe in this regard
today, given sufficient memory pressure.

I think the current situation is fine from a "let's avoid OOMing at all
costs" standpoint, but not so good for application predictability. We
should really consider ways to do better here.


> > Maybe that's ok in the face of a writeback error though? IDK.
> 
> No matter what we do for async writeback error handling, it will be
> slightly different from filesystem to filesystem, not to mention OS
> to OS. There is no magic bullet here, so I'm not sure we should worry
> too much. There's direct IO for anyone who needs to know about the
> completion status of every single write IO....

I think we have an opportunity here to come up with better-defined and
hopefully more useful behavior for buffered I/O in the face of
writeback errors. The first step would be to hash out what we'd want it
to look like.

Maybe we need a plenary session at LSF/MM?
-- 
Jeff Layton <jlayton@...hat.com>
