Date:	Tue, 26 Jan 2010 17:19:54 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	tytso@....edu, Ric Wheeler <rwheeler@...hat.com>,
	Anton Altaparmakov <aia21@....ac.uk>, Jan Kara <jack@...e.cz>,
	Hidehiro Kawai <hidehiro.kawai.ez@...achi.com>,
	linux-kernel@...r.kernel.org, linux-ext4@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andreas Dilger <adilger@....com>,
	Satoshi OSHIMA <satoshi.oshima.fk@...achi.com>,
	linux-fsdevel@...r.kernel.org
Subject: Re: IO error semantics

On Tue, Jan 26, 2010 at 04:55:30AM +1100, Nick Piggin wrote:
> On Mon, Jan 25, 2010 at 12:47:23PM -0500, tytso@....edu wrote:
> > On Mon, Jan 25, 2010 at 10:23:57AM -0500, Ric Wheeler wrote:
> > > 
> > > For permanent write errors, I would expect any modern drive to do a
> > > sector remapping internally. We should never need to track this kind
> > > of information for any modern device that I know of (S-ATA, SAS,
> > > SSD's and raid arrays should all handle this).
> > 
> > ... and if the device has run out of blocks in its spare block
> > pool, it's probably well past time to replace said disk.
> > 
> > BTW, I really liked Dave Chinner's summary of the issues involved; I
> > ran into Kawai-san last week at Linux.conf.au, and we discussed pretty
> > much the same thing over lunch.  (i.e., that it's a hard problem, and
> > in some cases we need to retry the writes, such as a transient FC path
> > problem --- but some kind of write throttling is critical or we could
> > end up choking the VM due to too many pages getting dirtied and no way
> > of cleaning them.)
> 
> Well I just don't think we can ever discard them by default.

We have done this for a long time in XFS: if we can't issue the IO
on the page (e.g. allocation fails or we are already in a shutdown
situation), we invalidate the page immediately, clear the page's
uptodate flag and return an error so that the address space gets
marked with it. See xfs_page_state_convert() for more detail.
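
Roughly, the pattern is the following (a simplified sketch using the
generic page cache helpers, not the actual XFS code; fail_writepage()
here is a made-up stand-in for the real error path):

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/buffer_head.h>

/*
 * Hypothetical ->writepage error path: throw the page away and
 * record the failure on the mapping so a later fsync() can see it.
 * The page is locked on entry, as in any ->writepage call.
 */
static int fail_writepage(struct page *page, int err)
{
	/* toss the buffers/delalloc state backing the page */
	block_invalidatepage(page, 0);

	/* force a re-read from disk on the next access */
	ClearPageUptodate(page);

	/* sets AS_EIO (or AS_ENOSPC for -ENOSPC) in mapping->flags */
	mapping_set_error(page->mapping, err);

	unlock_page(page);
	return err;
}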

And besides, if there is an error of some kind sufficient to shut
down the filesystem, the last thing you want to do is write more
data to it and potentially make the problem worse, especially if
async transactions that the data write might rely on were cancelled
by the shutdown rather than pushed to disk....

> Therefore
> we must default to not discarding them, and so we need to solve or
> work around the dirty page congestion problem somehow.

Agreed. XFS treats data IO errors the way it does because that's the
only thing we can do right now if we want the system to keep
functioning in the face of IO errors....
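
For what it's worth, the error recorded on the mapping is also what
eventually reaches userspace: a data-integrity sync tests and clears
the AS_EIO/AS_ENOSPC bits, so the application's fsync() fails even
though the dirty data itself is already gone. A minimal sketch
(example_fsync() is a hypothetical caller):

#include <linux/fs.h>

/*
 * Where the saved mapping error pops out: fsync-style callers flush
 * and wait, and filemap_write_and_wait() returns the -EIO/-ENOSPC
 * previously recorded by mapping_set_error().
 */
static int example_fsync(struct inode *inode)
{
	return filemap_write_and_wait(inode->i_mapping);
}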

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com