lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <45C00AEE.1090708@emc.com>
Date:	Tue, 30 Jan 2007 22:20:14 -0500
From:	Ric Wheeler <ric@....com>
To:	Mark Lord <liml@....ca>
CC:	"Eric D. Mudama" <edmudama@...il.com>,
	James Bottomley <James.Bottomley@...senpartnership.com>,
	linux-kernel@...r.kernel.org,
	IDE/ATA development list <linux-ide@...r.kernel.org>,
	linux-scsi <linux-scsi@...r.kernel.org>
Subject: Re: [PATCH] scsi_lib.c: continue after MEDIUM_ERROR



Mark Lord wrote:

> Eric D. Mudama wrote:
>
>>
>> Actually, it's possibly worse, since each failure in libata will 
>> generate 3-4 retries.  With existing ATA error recovery in the 
>> drives, that's about 3 seconds per retry on average, or 12 seconds 
>> per failure.  Multiply that by the number of blocks past the error to 
>> complete the request..
>
>
> It really beats the alternative of a forced reboot
> due to, say, superblock I/O failing because it happened
> to get merged with an unrelated I/O which then failed..
> Etc..
>
> Definitely an improvement.
>
> The number of retries is an entirely separate issue.
> If we really care about it, then we should fix SD_MAX_RETRIES.
>
> The current value of 5 is *way* too high.  It should be zero or one.
>
> Cheers
>
I think that drives retry enough, we should leave retry at zero for 
normal (non-removable) drives. Should this  be a policy we can set like 
we do with NCQ queue depth via /sys ?

We need to be able to layer things like MD on top of normal drive errors 
in a way that will produce a system that provides reasonable response 
time despite any possible IO error on a single component.  Another case 
that we end up doing on a regular basis is drive recovery. Errors need 
to be limited in scope to just the impacted area and dispatched up to 
the application layer as quickly as we can so that you don't spend days 
watching a copy of  huge drive (think 750GB or more) ;-)

ric

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ