Message-ID: <4B5DD9DB.7070300@redhat.com>
Date: Mon, 25 Jan 2010 12:50:19 -0500
From: Ric Wheeler <rwheeler@...hat.com>
To: tytso@....edu, Anton Altaparmakov <aia21@....ac.uk>,
Nick Piggin <npiggin@...e.de>,
Dave Chinner <david@...morbit.com>, Jan Kara <jack@...e.cz>,
Hidehiro Kawai <hidehiro.kawai.ez@...achi.com>,
linux-kernel@...r.kernel.org, linux-ext4@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Andreas Dilger <adilger@....com>,
Satoshi OSHIMA <satoshi.oshima.fk@...achi.com>,
linux-fsdevel@...r.kernel.org
Subject: Re: IO error semantics
On 01/25/2010 12:47 PM, tytso@....edu wrote:
> On Mon, Jan 25, 2010 at 10:23:57AM -0500, Ric Wheeler wrote:
>>
>> For permanent write errors, I would expect any modern drive to do a
>> sector remapping internally. We should never need to track this kind
>> of information for any modern device that I know of (S-ATA, SAS,
>> SSD's and raid arrays should all handle this).
>
> ... and if the device has run out of all of its blocks in its spare
> blocks pool, it's probably well past time to replace said disk.
>
> BTW, I really liked Dave Chinner's summary of the issues involved; I
> ran into Kawai-san last week at Linux.conf.au, and we discussed pretty
> much the same thing over lunch. (i.e., that it's a hard problem, and
> in some cases we need to retry the writes, such as a transient FC path
> problem --- but some kind of write throttling is critical or we could
> end up choking the VM due to too many pages getting dirtied and no way
> of cleaning them.)
>
> - Ted
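To make the throttling point concrete, here is a rough user-space sketch of
the idea only -- this is not the kernel's actual balance_dirty_pages()
machinery, and the names (DIRTY_LIMIT, nr_dirty, page_written_back) are made
up for illustration. The point is simply that a writer wanting to dirty
another page must wait until writeback has brought the dirty count back under
a limit, so unwritable dirty memory cannot grow without bound:

/*
 * Conceptual sketch of dirty-page throttling (not kernel code).
 * A writer must wait until writeback has pushed the number of dirty
 * pages back under a limit before it may dirty another one.
 */
#include <pthread.h>

#define DIRTY_LIMIT 1024            /* hypothetical cap on dirty pages */

static unsigned long nr_dirty;      /* pages dirtied but not yet written */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cleaned = PTHREAD_COND_INITIALIZER;

/* Called by a writer before it dirties a page. */
void throttle_then_dirty_page(void)
{
	pthread_mutex_lock(&lock);
	while (nr_dirty >= DIRTY_LIMIT)
		pthread_cond_wait(&cleaned, &lock); /* block until writeback catches up */
	nr_dirty++;
	pthread_mutex_unlock(&lock);
}

/* Called by writeback after it has written a page out (or given up on it). */
void page_written_back(void)
{
	pthread_mutex_lock(&lock);
	if (nr_dirty > 0)
		nr_dirty--;
	pthread_cond_signal(&cleaned);
	pthread_mutex_unlock(&lock);
}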
Also note that retrying writes (or reads, for that matter) is often counter
productive. For those of us who have suffered through trying to migrate data off of
an old, failing disk onto a new, shiny one, excessive retries can be painful...
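As a rough illustration of what a bounded retry policy might look like, here
is a minimal user-space C sketch -- not kernel code; submit_write(),
MAX_WRITE_RETRIES and the backoff values are hypothetical -- showing the
shape of it: retry a few times on errors that look transient, but give up
and surface the error rather than hammering a dying disk forever:

/*
 * Sketch of a bounded retry policy for writes (illustration only).
 * Transient-looking errors get a few retries with backoff; anything
 * that looks permanent is surfaced to the caller immediately.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_WRITE_RETRIES 3  /* arbitrary cap; the point is that one exists */

/* Stand-in for a single write attempt; returns 0 or a negative errno. */
static int submit_write(int fd, const void *buf, size_t len, off_t off)
{
	ssize_t ret = pwrite(fd, buf, len, off);

	if (ret == (ssize_t)len)
		return 0;
	return ret < 0 ? -errno : -EIO;  /* short write treated as I/O error */
}

static int is_transient(int err)
{
	/* Treat "try again" style errors as transient; treat -EIO
	 * (media error) as permanent. */
	return err == -EAGAIN || err == -EINTR || err == -ETIMEDOUT;
}

static int write_with_bounded_retries(int fd, const void *buf,
				      size_t len, off_t off)
{
	int err = 0;

	for (int attempt = 0; attempt <= MAX_WRITE_RETRIES; attempt++) {
		err = submit_write(fd, buf, len, off);
		if (err == 0)
			return 0;
		if (!is_transient(err))
			break;                   /* permanent: don't hammer the disk */
		usleep(10000u << attempt);       /* brief backoff before retrying */
	}

	fprintf(stderr, "write failed after retries: %d\n", err);
	return err;                              /* surface the error to the caller */
}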
ric