Date:   Tue, 1 Nov 2016 18:38:26 -0700 (PDT)
From:   Hugh Dickins <hughd@...gle.com>
To:     Dave Chinner <david@...morbit.com>
cc:     Hugh Dickins <hughd@...gle.com>,
        Jakob Unterwurzacher <jakobunt@...il.com>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: tmpfs returns incorrect data on concurrent pread() and
 truncate()

On Wed, 2 Nov 2016, Dave Chinner wrote:
> On Tue, Nov 01, 2016 at 04:51:30PM -0700, Hugh Dickins wrote:
> > On Wed, 26 Oct 2016, Jakob Unterwurzacher wrote:
> > 
> > > tmpfs seems to be incorrectly returning 0-bytes when reading from
> > > a file that is concurrently being truncated.
> > 
> > That is an interesting observation, and you got me worried;
> > but in fact, it is not a tmpfs problem: if we call it a
> > problem at all, it's a VFS problem or a userspace problem.
> > 
> > You chose a ratio of 3 preads to 1 ftruncate in your program below:
> > let's call that the Unterwurzacher Ratio, 3 for tmpfs; YMMV, but for
> > me 4 worked well to show the same issue on ramfs, and 15 on ext4.
> > 
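For illustration, here is a minimal sketch of this sort of reproducer (purely hypothetical; Jakob's original program is referenced in the quoted mail but not reproduced here). One thread repeatedly truncates a one-page file to zero and rewrites it with 'x' bytes, so in every stable state the file is either empty or all 'x'; the main thread pread()s and complains if any byte within the returned length is not an 'x'. File path and sizes are arbitrary choices for the sketch.

/*
 * Sketch of a concurrent pread()/ftruncate() race checker.
 * Stable states of the file are "empty" or "SIZE bytes of 'x'",
 * so a non-'x' byte inside the length pread() claims to have read
 * is the anomaly under discussion.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SIZE 4096

static int fd;
static char data[SIZE];

static void *writer(void *arg)
{
        (void)arg;
        for (;;) {
                /* shrink to nothing, then rewrite the whole page of 'x' */
                if (ftruncate(fd, 0) < 0 ||
                    pwrite(fd, data, SIZE, 0) != SIZE) {
                        perror("writer");
                        _exit(1);
                }
        }
        return NULL;
}

int main(void)
{
        char buf[SIZE];
        pthread_t t;

        memset(data, 'x', SIZE);
        /* path is assumed to be on tmpfs; adjust as needed */
        fd = open("/tmp/pread-vs-truncate", O_CREAT | O_RDWR | O_TRUNC, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        pthread_create(&t, NULL, writer, NULL);

        for (;;) {
                ssize_t i, n = pread(fd, buf, SIZE, 0);

                if (n < 0) {
                        perror("pread");
                        return 1;
                }
                /* n == 0 just means we caught the file while empty */
                for (i = 0; i < n; i++) {
                        if (buf[i] != 'x') {
                                fprintf(stderr,
                                        "pread said %zd bytes, but byte %zd is 0x%02x\n",
                                        n, i, (unsigned char)buf[i]);
                                return 1;
                        }
                }
        }
}

Build with something like: cc -O2 -pthread reproducer.c
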
> > The Linux VFS does not serialize reads against writes or truncation
> > very strictly:
> 
> Which is fine, because...
> 
> > it's unusual to need that serialization, and most
> 
> .... many filesystems need more robust serialisation as hole punching
> (and other fallocate-based extent manipulations) have much stricter
> serialisation requirements than truncate and these ....
> 
> > users prefer maximum speed to the additional locking, or intermediate
> > buffering, that would be required to avoid the issue you've seen.
> 
> .... require additional locking to be done at the filesystem level
> to avoid race conditions.
> 
> Throw in the fact that we already have to do this serialisation in
> the filesystem for direct IO as there are no page locks to serialise
> direct IO against truncate.  And we need to lock out page faults
> from refaulting while we are doing things like punching holes (to
> avoid data coherency and corruption bugs), so we need more
> filesystem level locks to serialise mmap against fallocate().
> 
> And DAX has similar issues - there are no struct pages to serialise
> read or mmap access against truncate, so again we need filesystem
> level serialisation for this.
> 
> Put simply: page locks are insufficient as a generic mechanism for
> serialising filesystem operations. The locking required for this is
> generally deeply filesystem implementation specific, so it's fine
> that the VFS doesn't attempt to provide anything stricter than it
> currently does....

I think you are saying that: xfs already provides the extra locking
that avoids this issue; most other filesystems do not; but more can
be expected to add that extra locking in the coming months?
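
For concreteness, the extra locking being described can be modelled in
userspace roughly as below: a per-file lock taken shared around reads and
exclusive around truncate (and hole punch), so a reader sees either the
pre-truncate or the post-truncate state, never a length that disagrees
with the contents. This is only an illustrative model, not the actual
locking of xfs or any other filesystem.

/*
 * Userspace model of "filesystem level" read-vs-truncate serialisation.
 * Readers take the lock shared (and so still run concurrently with each
 * other); truncate takes it exclusive.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define CAP 4096

struct memfile {
        pthread_rwlock_t lock;          /* plays the per-inode I/O lock */
        size_t size;
        char data[CAP];
};

static ssize_t memfile_pread(struct memfile *f, char *buf, size_t len, size_t off)
{
        ssize_t n = 0;

        pthread_rwlock_rdlock(&f->lock);        /* shared */
        if (off < f->size) {
                n = f->size - off;
                if ((size_t)n > len)
                        n = len;
                memcpy(buf, f->data + off, n);
        }
        pthread_rwlock_unlock(&f->lock);
        return n;
}

static void memfile_truncate(struct memfile *f, size_t newsize)
{
        pthread_rwlock_wrlock(&f->lock);        /* exclusive: no reads in flight */
        if (newsize > CAP)
                newsize = CAP;
        if (newsize > f->size)
                memset(f->data + f->size, 0, newsize - f->size);
        f->size = newsize;
        pthread_rwlock_unlock(&f->lock);
}

int main(void)
{
        static struct memfile f;
        char buf[16];

        pthread_rwlock_init(&f.lock, NULL);
        memset(f.data, 'x', CAP);
        f.size = CAP;

        /* a concurrent memfile_pread() would see either CAP 'x' bytes
         * or 0 bytes, never a size/contents mismatch */
        memfile_truncate(&f, 0);
        printf("after truncate: pread returns %zd bytes\n",
               memfile_pread(&f, buf, sizeof(buf), 0));
        return 0;
}

The cost, of course, is that every buffered read now takes that lock,
which is exactly the speed-versus-locking trade-off mentioned above.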

Hugh
