Message-ID: <20251031130050.GA15719@lst.de>
Date: Fri, 31 Oct 2025 14:00:50 +0100
From: Christoph Hellwig <hch@....de>
To: Dave Chinner <david@...morbit.com>
Cc: Christoph Hellwig <hch@....de>, Carlos Maiolino <cem@...nel.org>,
	Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
	"Martin K. Petersen" <martin.petersen@...cle.com>,
	linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-raid@...r.kernel.org,
	linux-block@...r.kernel.org
Subject: Re: fall back from direct to buffered I/O when stable writes are
 required

On Fri, Oct 31, 2025 at 10:18:46AM +1100, Dave Chinner wrote:
> I'm not asking about btrfs - I'm asking about actual, real world
> problems reported in production XFS environments.

The same thing applies once we have checksums with PI.  But it seems
like you don't want to listen anyway.

> > For RAID you probably won't see too many reports, as with RAID the
> > problem will only show up as silent corruption long after a rebuild
> > happened that made use of the racy data.
> 
> Yet we are not hearing about this, either. Nobody is reporting that
> their data is being found to be corrupt days/weeks/months/years down
> the track.
> 
> This is important, because software RAID5 is pretty much the -only-
> common usage of BLK_FEAT_STABLE_WRITES that users are exposed to.

RAID5 bounce buffers by default.  It has a tunable to disable that,
and once that tunable was turned on it pretty much immediately caused
data corruption:

https://sbsfaq.com/qnap-fails-to-reveal-data-corruption-bug-that-affects-all-4-bay-and-higher-nas-devices/
https://sbsfaq.com/synology-nas-confirmed-to-have-same-data-corruption-bug-as-qnap/
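
To illustrate what that race looks like from userspace, here is a
minimal sketch (not from this patch set; the file path and sizes are
made up): one thread has an O_DIRECT write in flight while another
keeps modifying the very same buffer, which is exactly the pattern
that goes wrong once the lower layers stop bouncing.

/*
 * Minimal sketch: overwrite an O_DIRECT write buffer while the write
 * is still in flight.  On a device that needs stable writes (e.g.
 * md RAID5 with bouncing disabled, or PI) the data used for
 * parity/checksum generation can then differ from what ends up on
 * the media.  Path and sizes are purely illustrative.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUFSZ 65536

static char *buf;
static int fd;

static void *dio_writer(void *arg)
{
	/* direct write in flight while the main thread scribbles on buf */
	if (pwrite(fd, buf, BUFSZ, 0) != BUFSZ)
		perror("pwrite");
	return NULL;
}

int main(void)
{
	pthread_t t;
	int i;

	fd = open("/mnt/test/file", O_RDWR | O_CREAT | O_DIRECT, 0644);
	if (fd < 0 || posix_memalign((void **)&buf, 4096, BUFSZ))
		return 1;

	memset(buf, 'A', BUFSZ);
	pthread_create(&t, NULL, dio_writer, NULL);

	/* racy buffer modification - this is the "application bug" */
	for (i = 0; i < 1000000; i++)
		buf[i % BUFSZ] = 'B';

	pthread_join(t, NULL);
	close(fd);
	return 0;
}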

> This patch set is effectively disallowing direct IO for anyone
> using software RAID5. That is simply not an acceptable outcome here.

Quite the contrary: fixing this properly allows STABLE_WRITES to
actually work without bouncing in the lower layers, while at least
getting efficient buffered I/O.

> 
> > With checksums
> > it is much easier to reproduce and trivially shown by various xfstests.
> 
> Such as? 

Basically anything running fsstress for long enough, plus a few others.

> 
> > With increasing storage capacities checksums are becoming more and
> > more important, and I'm trying to get Linux in general and XFS
> > specifically to use them well.
> 
> So when XFS implements checksums and that implementation is
> incompatible with Direct IO, then we can talk about disabling Direct
> IO on XFS when that feature is enabled. But right now, that feature
> does not exist, and ....

Every Linux file system supports checksums with a PI-capable device.
I've been trying to make that actually work for all cases and perform
well for a while now.
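
For reference, whether the kernel generates and verifies PI for a
given device shows up in the block integrity sysfs attributes (layout
as described in Documentation/block/data-integrity.rst; "sda" below is
just a placeholder).  A trivial check could look like:

/*
 * Rough check of the PI setup for a device via sysfs.  Assumes the
 * attribute names from Documentation/block/data-integrity.rst; the
 * device name is an example only.
 */
#include <stdio.h>
#include <string.h>

static void show(const char *attr)
{
	char path[256], val[64] = "";
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/sda/integrity/%s", attr);
	f = fopen(path, "r");
	if (!f) {
		printf("%s: <not available>\n", attr);
		return;
	}
	if (fgets(val, sizeof(val), f))
		val[strcspn(val, "\n")] = 0;
	fclose(f);
	printf("%s: %s\n", attr, val);
}

int main(void)
{
	show("format");		/* e.g. T10-DIF-TYPE1-CRC, or "none" */
	show("write_generate");	/* kernel adds PI on writes */
	show("read_verify");	/* kernel verifies PI on reads */
	return 0;
}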

> 
> > Right now I don't think anyone is
> > using PI with XFS or any Linux file system given the amount of work
> > I had to put in to make it work well, and how often I see regressions
> > with it.
> 
> .... as you say, "nobody is using PI with XFS".
> 
> So this patchset is a "fix" for a problem that no-one is actually
> right now.

I'm making it work.

> Modifying an IO buffer whilst a DIO is in flight on that buffer has
> -always- been an application bug.

Says who?

> It is a vector for torn writes
> that don't get detected until the next read. It is a vector for
> in-memory data corruption of read buffers.

That assumes that particular use case cares about torn writes.  We've
never ever documented any such requirement.  We can't just make that
up 20+ years later.

> Indeed, it does not matter if the underlying storage asserts
> BLK_FEAT_STABLE_WRITES or not, modifying DIO buffers that are under
> IO will (eventually) result in data corruption.

It doesn't if that's not your assumption.  But more importantly, with
RAID5, if you modify the buffers you do not primarily corrupt your own
data, but other data in the stripe.  It is a way for a malicious user
to corrupt other users' data.
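
To spell that out with toy numbers (plain XOR arithmetic, not md
code): parity is the XOR of the data chunks, so if a chunk changes
after parity was computed but before it reaches the media, a later
rebuild reconstructs a *different* chunk incorrectly.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t d0 = 0xAA, d1 = 0x55;
	uint8_t parity = d0 ^ d1;	/* parity computed from the buffer */

	d0 ^= 0x0F;	/* buffer modified while the write is in flight */
	/* the modified d0 and the stale parity are what reach the disks */

	uint8_t rebuilt_d1 = d0 ^ parity;	/* rebuild after losing d1 */
	printf("original d1 = %02x, rebuilt d1 = %02x\n", d1, rebuilt_d1);
	return 0;
}

The racy writer only ever touched d0, but it is d1 that comes back
wrong after the rebuild.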

> Hence, by your
> logic, we should disable Direct IO for everyone.

That's your weird logic, not mine.

