Message-ID: <486f6105fd4076c1af67dae7fdfe6826019f7ff4.camel@redhat.com>
Date:   Tue, 04 Sep 2018 06:56:53 -0400
From:   Jeff Layton <jlayton@...hat.com>
To:     焦晓冬 <milestonejxd@...il.com>,
        linux-fsdevel@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, Rogier Wolff <R.E.Wolff@...Wizard.nl>
Subject: Re: POSIX violation by writeback error

On Tue, 2018-09-04 at 13:42 +0800, 焦晓冬 wrote:
> Hi,
> 
> After reading several articles on writeback error handling from LWN,
> I have become concerned about how writeback errors are handled.
> 
> Jlayton's patch is a simple but wonderful idea toward correct error
> reporting. It seems one crucial thing still remains to be fixed. Does
> anyone have any ideas?
> 
> The crucial thing may be that a read() after a successful open()-
> write()-close() may return old data. 
> That may happen when an async writeback error occurs after close()
> and the inode/mapping are evicted before read().
>
> That violates POSIX, as POSIX requires that a read() that can be
> proved to occur after a write() has returned must return the new data.

That can happen even before a close(), and it varies by filesystem. Most
filesystems just pretend the page is clean after writeback failure. It's
quite possible to do:

write()
kernel attempts to write back page and fails
page is marked clean and evicted from the cache
read()

Now your write is gone and there were no calls between the write and
read.
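From user space the sequence looks like the sketch below (path and helper
name are my own, for illustration). On a healthy disk the read simply
returns the new data; the point is that nothing in this syscall sequence
gives the kernel a chance to report a writeback failure to the process,
so after a failed writeback the same sequence can legally return stale
data in current kernels:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write data, then immediately read it back through the page cache,
 * with no fsync() in between. A writeback error occurring between the
 * write and the read is invisible to this process: the page may be
 * marked clean and dropped, and the read can see the old on-disk data.
 * Returns 0 on success, -1 on a syscall failure. */
int write_then_read(const char *path, char *out, size_t outlen)
{
	int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;

	const char *msg = "new data";
	if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
		close(fd);
		return -1;
	}

	/* seek back and read; no error-reporting call sits between
	 * the write() above and this read() */
	if (lseek(fd, 0, SEEK_SET) < 0) {
		close(fd);
		return -1;
	}
	ssize_t n = read(fd, out, outlen - 1);
	close(fd);
	if (n < 0)
		return -1;
	out[n] = '\0';
	return 0;
}
```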

The question we still need to answer is this:

When we attempt to write back some data from the cache and that fails,
what should happen to the dirty pages?

Unfortunately, there are no good answers given the write/fsync/read
model for I/O. I tend to think that in the long run we may need new
interfaces to handle this better.
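Until such interfaces exist, the usual hedge under the current model is
to force writeback with fsync() and check its return value before
trusting the data to be durable. A minimal sketch (helper name and path
are illustrative, not from the thread):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write data and force it to stable storage before returning.
 * fsync() triggers writeback now and reports any I/O error (e.g. EIO)
 * that an async writeback after close() would otherwise swallow.
 * Returns 0 on success, -1 on any failure. */
int write_durably(const char *path, const char *data)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -1;

	ssize_t len = (ssize_t)strlen(data);
	if (write(fd, data, len) != len) {
		close(fd);
		return -1;
	}

	/* the one point where a writeback error is reliably reported */
	if (fsync(fd) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
```

Note this only reports the error; it does not answer the question above
of what should happen to the dirty pages afterward.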
-- 
Jeff Layton <jlayton@...hat.com>
