Message-ID: <aF3IPcneKbUe9IdH@dread.disaster.area>
Date: Fri, 27 Jun 2025 08:22:53 +1000
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Yafang Shao <laoar.shao@...il.com>, Jeff Layton <jlayton@...nel.org>,
	djwong@...nel.org, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-xfs@...r.kernel.org,
	yc1082463@...il.com
Subject: Re: [PATCH] xfs: report a writeback error on a read() call

On Thu, Jun 26, 2025 at 03:25:21AM -0700, Christoph Hellwig wrote:
> On Thu, Jun 26, 2025 at 01:57:59PM +1000, Dave Chinner wrote:
> > writeback errors. Because scientists and data analysts that wrote
> > programs to chew through large amounts of data didn't care about
> > persistence of their data mid-processing. They just wanted what they
> > wrote to be there the next time the processing pipeline read it.
> 
> That's only going to work if your RAM is as large as your permanent
> storage :)

No, the old behaviour worked just fine with data sets larger than
RAM. When there was a random writeback error in a big data stream,
only those pages remained dirty and so never got tossed out of RAM.
Hence when a re-read of that file range occurred, the data was
already in RAM and the read succeeded, regardless of the fact that
writeback had been failing.
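
(Purely illustrative, not from this thread: a minimal userspace
sketch of the write-then-reread pipeline pattern described above,
with a made-up file name and chunk size. There is no fsync() in the
loop - the program relies on the written data still being available,
either from the still-dirty page cache or from successful writeback,
when it comes back to read it.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)         /* 1 MiB per processing chunk */

int main(void)
{
        char *wbuf = malloc(CHUNK), *rbuf = malloc(CHUNK);
        int fd = open("scratch.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (!wbuf || !rbuf || fd < 0)
                return 1;

        memset(wbuf, 'x', CHUNK);

        /* stage 1: dump intermediate results */
        if (pwrite(fd, wbuf, CHUNK, 0) != CHUNK)
                return 1;

        /* ... hours of other processing; writeback of scratch.dat
         * happens (or fails) somewhere in here ... */

        /* stage 2: read the intermediate results back */
        if (pread(fd, rbuf, CHUNK, 0) != CHUNK)
                return 1;

        if (memcmp(wbuf, rbuf, CHUNK))
                fprintf(stderr, "re-read does not match what was written\n");

        close(fd);
        free(wbuf);
        free(rbuf);
        return 0;
}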

IOWs the behavioural problems that the user is reporting are present
because we got rid of the historic XFS writeback error handling
(leave the dirty pages in RAM and retry again later) and replaced it
with the historic Linux behaviour (toss the data out and mark the
mapping with an error).
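
(Schematically, and hedged heavily - this is not the actual XFS or
iomap code, just a sketch of the two strategies using the old
page-based kernel helpers, with made-up function names; it will not
build outside a kernel tree:)

/* Historic XFS style: keep the data, try writeback again later. */
static void error_keep_and_retry(struct writeback_control *wbc,
                                 struct page *page)
{
        /*
         * The page stays dirty, so it cannot be reclaimed, a later
         * re-read is satisfied from the page cache, and the flusher
         * will attempt the write again.
         */
        redirty_page_for_writepage(wbc, page);
        unlock_page(page);
}

/* Historic Linux style: toss the data, latch the error. */
static void error_toss_and_flag(struct address_space *mapping,
                                struct page *page, int error)
{
        /*
         * The error is recorded on the mapping so the next
         * fsync()/close() reports it once; the page is left clean,
         * so it can be reclaimed and a later re-read goes back to
         * disk, possibly returning zeroes or stale data.
         */
        mapping_set_error(mapping, error);
        unlock_page(page);
}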

The result of this change is exactly what the OP is having problems
with - a reread of a range that had a writeback failure returns
zeroes or garbage, not the original data. If we had kept the
original XFS behaviour, these user applications would handle such
flaky writeback failures just fine...
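
(Also just an illustration, not what the patch under discussion
does: with the current semantics, the only way the pipeline above
can notice the failure before trusting a re-read is to fsync() and
check the result, since that is where the latched mapping error gets
reported. A minimal helper, with a hypothetical name:)

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Force writeback of everything written so far and report whether
 * any of it failed. With current semantics the error shows up here
 * (once), not on the subsequent read().
 */
int flush_or_fail(int fd)
{
        if (fsync(fd) < 0) {
                fprintf(stderr, "writeback failed: %s\n", strerror(errno));
                return -1;      /* a re-read may now see zeroes/stale data */
        }
        return 0;
}

(It would slot in between stage 1 and stage 2 of the earlier sketch.)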

Put simply: we used to have more robust writeback failure handling
than we do now. That could (and probably should) be considered a
regression....

-Dave.
-- 
Dave Chinner
david@...morbit.com
