Message-ID: <ZG/KH9cQluA5e30N@moria.home.lan>
Date: Thu, 25 May 2023 16:50:39 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jan Kara <jack@...e.cz>, cluster-devel@...hat.com,
"Darrick J . Wong" <djwong@...nel.org>,
linux-kernel@...r.kernel.org, dhowells@...hat.com,
linux-bcachefs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Kent Overstreet <kent.overstreet@...il.com>
Subject: Re: [Cluster-devel] [PATCH 06/32] sched: Add
task_struct->faults_disabled_mapping
On Thu, May 25, 2023 at 01:58:13AM -0700, Christoph Hellwig wrote:
> On Wed, May 24, 2023 at 04:09:02AM -0400, Kent Overstreet wrote:
> > > Well, it seems like you are talking about something else than the
> > > existing cases in gfs2 and btrfs, that is you want full consistency
> > > between direct I/O and buffered I/O. That's something nothing in the
> > > kernel has ever provided, so I'd be curious why you think you need it
> > > and want different semantics from everyone else?
> >
> > Because I like code that is correct.
>
> Well, start with explaining your definition of correctness, why everyone
> else is "not correct", and how you can help fix this correctness
> problem in the existing kernel. Thanks for your cooperation!
A cache that isn't actually consistent is a _bug_. You're being
obtuse. Any time this has come up in previous discussions (including
at LSF), that was never up for debate; the only question has been
whether it was even practical to fix.
The DIO code recognizes cache incoherency as something to be avoided by
shooting down the page cache both at the beginning of the IO _and again
at the end_. That's an obvious hack around a race condition, exactly the
kind of thing we'd like to avoid.
The consequences of this kind of bug: stale data exposed to userspace,
stale data possibly overwriting a write we already acked, and, worse,
any filesystem state that hangs off the page cache becoming
inconsistent with the data on disk.
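To spell out one interleaving (assuming a buffered reader and writer
running concurrently with the DIO write, with no shared locking between
the paths):

  1. the DIO write invalidates the page cache over the target range
  2. a buffered read faults the old on-disk contents back into the
     cache while the DIO write is in flight
  3. the DIO write completes and is acked; the cached pages are now
     stale
  4. subsequent buffered reads return the stale data
  5. if a buffered write then dirties one of those stale pages,
     writeback writes the whole page back, clobbering part of the
     acked DIO write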
And look, we've been over all this before, so I don't see what this adds
to the discussion.