Message-ID: <aRYXuwtSQUz6buBs@redhat.com>
Date: Thu, 13 Nov 2025 18:39:07 +0100
From: Kevin Wolf <kwolf@...hat.com>
To: Christoph Hellwig <hch@....de>
Cc: Jan Kara <jack@...e.cz>, Keith Busch <kbusch@...nel.org>,
Dave Chinner <david@...morbit.com>,
Carlos Maiolino <cem@...nel.org>,
Christian Brauner <brauner@...nel.org>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-raid@...r.kernel.org,
linux-block@...r.kernel.org
Subject: Re: fall back from direct to buffered I/O when stable writes are
required

On 03.11.2025 at 13:21, Christoph Hellwig wrote:
> On Mon, Nov 03, 2025 at 12:14:06PM +0100, Jan Kara wrote:
> > I also think the performance cost of the unconditional bounce buffering is
> > so heavy that it's just a polite way of pushing the app to do proper IO
> > buffer synchronization itself (assuming it cares about IO performance but
> > given it bothered with direct IO it presumably does).
> >
> > So the question is how to get out of this mess with the least disruption
> > possible which IMO also means providing easy way for well-behaved apps to
> > avoid the overhead.
>
> Remember, the cases where this matters are checksumming and parity, where
> we touch all the cache lines anyway and consume the DRAM bandwidth,
> although bounce buffering upgrades this from pure reads to also writes.
> So the overhead is heavy, but if we handle it the right way, that is
> doing the checksum/parity calculation while the cache line is still hot
> it should not be prohibitive. And getting this right in the direct
> I/O code means that the low-level code could stop bounce buffering
> for buffered I/O, providing a major speedup there.
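
To spell out what "doing the checksum while the cache line is still
hot" buys us: a fused copy+checksum pass reads every byte exactly once,
instead of memcpy() followed by a second pass over the copy. A minimal
userspace sketch, with a placeholder checksum standing in for the real
PI/parity code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Two passes: memcpy() reads the data, the checksum reads the copy again. */
static uint32_t bounce_then_checksum(void *dst, const void *src, size_t len)
{
        const uint8_t *p = dst;
        uint32_t sum = 0;

        memcpy(dst, src, len);
        for (size_t i = 0; i < len; i++)
                sum = sum * 31 + p[i];
        return sum;
}

/* One pass: each byte is copied and folded into the checksum while hot. */
static uint32_t bounce_and_checksum(void *dst, const void *src, size_t len)
{
        const uint8_t *s = src;
        uint8_t *d = dst;
        uint32_t sum = 0;

        for (size_t i = 0; i < len; i++) {
                d[i] = s[i];
                sum = sum * 31 + d[i];
        }
        return sum;
}
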
>
> I've been thinking a bit more on how to better get the copy close to the
> checksumming at least for PI, and to avoid the extra copies for RAID5
> buffered I/O. Maybe a better way is to mark a bio as trusted/untrusted
> so that the checksumming/raid code can bounce buffer it, and I'm
> starting to like that idea.

This feels like the right idea to me. It's also what I thought of after
reading your problem description.

The problem is not that RAID5 uses bounce buffers. That's the correct
and safe thing to do as long as you can't be sure that the buffer won't
change underneath you. I'd argue that changing this would be a RAID5
bug, and the corruption you showed earlier in the thread is not a sign
of a buggy filesystem or application [1], but of telling the device to
operate incorrectly.

The actual problem is that it still uses bounce buffers when you do
know that the buffer can't change. Then the copy is just wasteful and
doesn't contribute anything to correctness.

Passing down a flag to the device so that it can decide whether the
bounce buffer is needed seems like the obvious solution for that.
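
Just to illustrate the shape of it, here is a minimal userspace sketch
of that decision. The 'stable' flag and the helper are made up for the
example and are not real kernel interfaces:

/*
 * Toy model: the submitter promises that the buffer stays unchanged
 * until completion, and the layer that actually needs stable data
 * (parity/checksum calculation) decides whether it has to bounce.
 */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct io_request {
        void *buf;
        size_t len;
        bool stable;    /* submitter won't touch buf until completion */
};

/*
 * RAID5/PI side: return a buffer that is guaranteed not to change while
 * parity/checksums are computed and the data is written out.
 */
static void *get_stable_payload(struct io_request *req)
{
        void *copy;

        if (req->stable)
                return req->buf;        /* trust the submitter, no copy */

        /*
         * Snapshot the data so that the parity/checksum calculation and
         * the media write see the same bytes even if buf changes.
         */
        copy = malloc(req->len);
        if (copy)
                memcpy(copy, req->buf, req->len);
        return copy;
}

The interesting part is only that the copy becomes conditional on the
submitter's promise instead of being unconditional.
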
> A complication is that PI could relax that requirement if we support
> PI passthrough from userspace (currently only for block device, but I
> plan to add file system support), where the device checks it, but we
> can't do that for parity RAID.

I'm not sure I understand the problem here. If it's passed through from
userspace, isn't its validity the problem of userspace, too? I'd expect
that you only need a bounce buffer in the kernel if the kernel itself
does something like a checksum calculation?

Kevin

[1] For a QEMU developer like me, not blaming the application may sound
like an excuse, but we're really only in the same position as the
kernel here for anything that comes from the guest. Whenever we rely
on stable buffers, we already have to use bounce buffers, too.