Message-ID: <45C0B0DC.8030501@rtr.ca>
Date: Wed, 31 Jan 2007 10:08:12 -0500
From: Mark Lord <liml@....ca>
To: Ric Wheeler <ric@....com>
Cc: "Eric D. Mudama" <edmudama@...il.com>,
James Bottomley <James.Bottomley@...senpartnership.com>,
linux-kernel@...r.kernel.org,
IDE/ATA development list <linux-ide@...r.kernel.org>,
linux-scsi <linux-scsi@...r.kernel.org>, dougg@...que.net
Subject: Re: [PATCH] scsi_lib.c: continue after MEDIUM_ERROR
Ric Wheeler wrote:
> Mark Lord wrote:
>> Eric D. Mudama wrote:
>>> Actually, it's possibly worse, since each failure in libata will
>>> generate 3-4 retries.
(note: libata does *not* generate retries for medium errors;
the looping is driven by the SCSI mid-layer code).
>> It really beats the alternative of a forced reboot
>> due to, say, superblock I/O failing because it happened
>> to get merged with an unrelated I/O which then failed..
>> Etc..
>>
>> Definitely an improvement.
>>
>> The number of retries is an entirely separate issue.
>> If we really care about it, then we should fix SD_MAX_RETRIES.
>>
>> The current value of 5 is *way* too high. It should be zero or one.
..
> I think that drives retry enough; we should leave retry at zero for
> normal (non-removable) drives. Should this be a policy we can set,
> like we do with NCQ queue depth, via /sys?
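(As an aside: a per-device sysfs knob along those lines might look
roughly like the sketch below. The attribute name and the
sdev->medium_error_retries field are invented purely for illustration;
no such field exists today.)

/*
 * Rough sketch of a per-device retry knob, in the style of the
 * existing queue_depth attribute.  "medium_error_retries" and the
 * sdev->medium_error_retries field are hypothetical.
 */
#include <linux/kernel.h>
#include <linux/device.h>
#include <scsi/scsi_device.h>

static ssize_t
sdev_show_medium_error_retries(struct device *dev,
                               struct device_attribute *attr, char *buf)
{
        struct scsi_device *sdev = to_scsi_device(dev);

        return snprintf(buf, 20, "%d\n", sdev->medium_error_retries);
}

static ssize_t
sdev_store_medium_error_retries(struct device *dev,
                                struct device_attribute *attr,
                                const char *buf, size_t count)
{
        struct scsi_device *sdev = to_scsi_device(dev);

        sdev->medium_error_retries = simple_strtoul(buf, NULL, 10);
        return count;
}

static DEVICE_ATTR(medium_error_retries, S_IRUGO | S_IWUSR,
                   sdev_show_medium_error_retries,
                   sdev_store_medium_error_retries);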
Or perhaps we could have the mid-layer always "early-exit"
without retries for "MEDIUM_ERROR", and still do retries for the rest.
When libata reports a MEDIUM_ERROR to us, we *know* it's non-recoverable,
as the drive itself has already done internal retries (libata uses the
"with retry" ATA opcodes for this).
But meanwhile we still have the original issue, where a single stray
bad sector can blow a system out of the water because the mid-layer
currently aborts everything after it in a large merged request.
Thus the original patch from this thread. :)
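(To restate the idea as a rough sketch, not the patch itself: use the
bad-LBA information field from the sense data to work out how much of
a merged request actually succeeded, and hand that part back instead
of failing the lot. The helper name below is made up;
scsi_get_sense_info_fld() is real.)

/*
 * Rough sketch, not the actual patch: on a MEDIUM_ERROR, use the sense
 * data's information field (the first bad LBA) to work out how many
 * bytes at the front of a merged request completed successfully, so
 * only the I/O from the bad sector onward needs to fail.
 */
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_eh.h>

static unsigned int good_bytes_before_bad_lba(struct scsi_cmnd *cmd,
                                              u64 request_start_lba,
                                              unsigned int sector_size)
{
        u64 bad_lba;

        if (!scsi_get_sense_info_fld(cmd->sense_buffer,
                                     SCSI_SENSE_BUFFERSIZE, &bad_lba))
                return 0;       /* no valid info field: fail the whole request */

        if (bad_lba <= request_start_lba)
                return 0;       /* the very first sector is the bad one */

        return (unsigned int)((bad_lba - request_start_lba) * sector_size);
}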
Cheers