Message-ID: <20250128212316.2bba477e@pumpkin>
Date: Tue, 28 Jan 2025 21:23:15 +0000
From: David Laight <david.laight.linux@...il.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Dave Chinner <david@...morbit.com>, Chi Zhiling <chizhiling@....com>,
Brian Foster <bfoster@...hat.com>, "Darrick J. Wong" <djwong@...nel.org>,
Amir Goldstein <amir73il@...il.com>, cem@...nel.org,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org, Chi Zhiling
<chizhiling@...inos.cn>, John Garry <john.g.garry@...cle.com>
Subject: Re: [PATCH] xfs: Remove i_rwsem lock in buffered read
On Mon, 27 Jan 2025 21:15:41 -0800
Christoph Hellwig <hch@...radead.org> wrote:
> On Tue, Jan 28, 2025 at 07:49:17AM +1100, Dave Chinner wrote:
> > > As for why an exclusive lock is needed for append writes, it's because
> > > we don't want the EOF to be modified during the append write.
> >
> > We don't care if the EOF moves during the append write at the
> > filesystem level. We set kiocb->ki_pos = i_size_read() from
> > generic_write_checks() under shared locking, and if we then race
> > with another extending append write there are two cases:
> >
> > 1. the other task has already extended i_size; or
> > 2. we have two IOs at the same offset (i.e. at i_size).
> >
> > In either case, we don't need exclusive locking for the IO because
> > the worst thing that happens is that two IOs hit the same file
> > offset. IOWs, it has always been left up to the application to
> > serialise RWF_APPEND writes on XFS, not the filesystem.
>
> I disagree. O_APPEND (RWF_APPEND is just the Linux-specific
> per-I/O version of that) is extensively used for things like
> multi-thread loggers where you have multiple threads doing O_APPEND
> writes to a single log file, and they expect to not lose data
> that way. The fact that we currently don't do that for O_DIRECT
> is a bug, which is just papered over by the fact that barely anyone
> uses O_DIRECT | O_APPEND, as that's not a very natural use case for
> most applications (in fact NFS got away with never allowing it
> at all). But extending racy O_APPEND to buffered writes would
> break a lot of applications.
It is broken in Windows :-)
You get two writes to the same file offset and then (IIRC) two advances of
EOF, (usually) giving a block of '\0' bytes in the file.
You might get away with doing an atomic update of EOF and then writing
into the gap.
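To make that concrete, here is a rough user-space sketch of the idea; the
shared_eof counter and append_write() helper are invented for illustration,
this is not the kernel/XFS code path:

/* Each appender atomically reserves [off, off + len) by advancing a
 * shared "eof" counter, then writes into its own reserved gap, so no
 * two writers ever target the same file offset.
 */
#include <stdatomic.h>
#include <unistd.h>

static _Atomic long long shared_eof;	/* stands in for i_size */

ssize_t append_write(int fd, const void *buf, size_t len)
{
	/* atomic "advance EOF first" reservation */
	long long off = atomic_fetch_add(&shared_eof, (long long)len);

	/* a short write or fault here leaves a hole of '\0' bytes */
	return pwrite(fd, buf, len, off);
}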
But you have to decide what to do if there is a seg fault on the user buffer:
it could be a multi-TB write from an mmap-ed file (maybe even over NFS) that
hits a disk read error.
Actually I suspect that if you let two such writes proceed in parallel you
can't let the later one complete first.
If the returns are sequenced, a later write can then be redone if an earlier
write got shortened.
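Purely to illustrate that ordering constraint, a ticket-style gate on the
completion side might look like this (the ticket/serving counters are made
up here, not existing kernel code):

#include <pthread.h>

static pthread_mutex_t seq_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t seq_cond = PTHREAD_COND_INITIALIZER;
static unsigned long now_serving;	/* completions retired in issue order */

/* Called with the ticket handed out when the write reserved its range;
 * blocks until every earlier append has returned, so a shortened earlier
 * write can still be detected and this one redone before reporting success.
 */
static void complete_in_order(unsigned long ticket)
{
	pthread_mutex_lock(&seq_lock);
	while (ticket != now_serving)
		pthread_cond_wait(&seq_cond, &seq_lock);
	now_serving++;
	pthread_cond_broadcast(&seq_cond);
	pthread_mutex_unlock(&seq_lock);
}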
David