Date:   Tue, 04 Sep 2018 07:09:34 -0400
From:   Jeff Layton <jlayton@...hat.com>
To:     焦晓冬 <milestonejxd@...il.com>,
        R.E.Wolff@...wizard.nl
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: POSIX violation by writeback error

On Tue, 2018-09-04 at 16:58 +0800, 焦晓冬 wrote:
> On Tue, Sep 4, 2018 at 3:53 PM Rogier Wolff <R.E.Wolff@...wizard.nl> wrote:
> 
> ...
> > > 
> > > Jlayton's patch is a simple but wonderful idea towards correct error
> > > reporting. It seems one crucial thing still remains to be fixed. Does
> > > anyone have any ideas?
> > > 
> > > The crucial thing may be that a read() after a successful
> > > open()-write()-close() may return old data.
> > > 
> > > That may happen where an async writeback error occurs after close()
> > > and the inode/mapping get evicted before read().
> > 
> > Suppose I have 1GB of RAM. Suppose I open a file, write 0.5GB to it
> > and then close it. Then I repeat this 9 times.
> > 
> > Now, when writing those files to storage fails, there is 5GB of data
> > to remember and only 1GB of RAM.
> > 
> > I can choose any part of that 5GB and try to read it.
> > 
> > Please make a suggestion about where we should store that data?
> 
> That is certainly not possible. But at least, shall we report an error on
> read()? Silently returning wrong data may cause further damage, such as
> removing the wrong files because the stale data marked them as garbage.
> 

Is the data wrong though? You tried to write and then that failed.
Eventually we want to be able to get at the data that's actually in the
file -- at what point is that?

If I get an error back on a read, why should I think that it has
anything at all to do with writes that previously failed? It may even
have been written by a completely separate process that I had nothing at
all to do with.
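
To make the scenario concrete, here's a minimal userspace sketch of the
sequence under discussion (illustrative only -- the path and sizes are made
up, error handling is trimmed, and whether the final read() returns stale
data, new data, or an error after an async writeback failure is exactly the
open question):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[16] = { 0 };
        int fd;

        /* Writer side: open-write-close, no fsync().  write() and close()
         * both succeed here; the actual writeback happens later, async. */
        fd = open("/tmp/example", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0 || write(fd, "new data", 8) != 8)
                perror("write side");
        if (fd >= 0)
                close(fd);

        /* ... suppose the async writeback now fails, and memory pressure
         * evicts the inode/mapping before anyone calls fsync() ... */

        /* Reader side: re-open and read.  With the dirty pages gone, the
         * kernel must fetch from storage, which still holds the previous
         * contents -- so this read can silently return old data. */
        fd = open("/tmp/example", O_RDONLY);
        if (fd >= 0 && read(fd, buf, sizeof(buf) - 1) > 0)
                printf("read back: %s\n", buf);
        if (fd >= 0)
                close(fd);
        return 0;
}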

> As I can see, that is all about error reporting.
> 
> As for a suggestion: maybe the error flag of the inode/mapping, or the entire
> inode, should not be evicted if there was an error. That hopefully won't take
> much memory. In extreme conditions, where too many error-marked inodes would
> have to stay in memory, maybe we should panic rather than spread the error.
> 
> > 
> > In the easy case, where the data easily fits in RAM, you COULD write a
> > solution. But when the hardware fails, the SYSTEM will not be able to
> > follow the POSIX rules.
> 
> Nope, we are able to follow the rules. The above is one way that follows the
> POSIX rules.
> 

This is something we discussed at LSF this year.

We could attempt to keep dirty data around for a little while, at least
long enough to ensure that reads reflect earlier writes until the errors
can be scraped out by fsync. That would sort of redefine fsync from
being "ensure that my writes are flushed" to "synchronize my cache with
the current state of the file".
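
A minimal sketch of the "write, then scrape errors out with fsync" pattern
that the above assumes (path is illustrative; the comment describes, roughly,
the per-fd error reporting the errseq work introduced):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/tmp/example", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
                return 1;

        if (write(fd, "new data", 8) != 8)
                perror("write");

        /*
         * fsync() is where the error gets "scraped out": it pushes the
         * dirty pages to storage and reports any writeback error that has
         * occurred since this fd was opened.  Under the idea above, it
         * would also become the point where the page cache is brought back
         * in sync with what actually made it to the file.
         */
        if (fsync(fd) < 0)
                perror("fsync");

        close(fd);
        return 0;
}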

The problem of course is that applications are not required to do fsync
at all. At what point do we give up on it, and toss out the pages that
can't be cleaned?

We could allow for a tunable that does a kernel panic if writebacks fail
and the errors are never fetched via fsync, and we run out of memory. I
don't think that is something most users would want though.

Another thought: maybe we could OOM kill any process that has the file
open and then toss out the page data in that situation?

I'm wide open to (good) ideas here.
-- 
Jeff Layton <jlayton@...hat.com>
