Date:	Sat, 21 Mar 2009 11:20:21 -0400
From:	Mark Lord <liml@....ca>
To:	James Bottomley <James.Bottomley@...senPartnership.com>
Cc:	Mark Lord <lkml@....ca>, Norman Diamond <n0diamond@...oo.co.jp>,
	linux-kernel@...r.kernel.org, linux-ide@...r.kernel.org
Subject: Re: Overaggressive failing of disk reads, both LIBATA and IDE

James Bottomley wrote:
> On Sat, 2009-03-21 at 10:55 -0400, Mark Lord wrote:
..
>> The patch *does* use the disk-supplied data about the error,
>> and returns success for sectors up to that point.  Where it differs
>> from mainline SCSI is that it then continues attempting the remaining
>> 2000 sectors (or whatever) of the request, hoping that not all of
>> them are bad.
> 
> Um, but so does SCSI without your patch ... that was my point.
..

Does it?  I thought it still just failed everything after the first
bad sector?  Kudos are due if that's working now.
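
To make the distinction concrete, here is a little standalone C model of
the two policies.  All names here are invented for illustration; this is
not the actual libata/SCSI code.  Fail-fast gives up on everything past
the first bad sector, while the patched behaviour skips each bad sector
and keeps salvaging the rest:

#include <stdbool.h>
#include <stdio.h>

#define REQ_SECTORS 8

/* Simulated medium: sectors 3 and 5 are unreadable. */
static bool read_sector(unsigned lba)
{
    return lba != 3 && lba != 5;
}

/* Old behaviour: fail the whole remainder after the first bad sector. */
static unsigned read_fail_fast(unsigned start)
{
    unsigned done = 0;
    for (unsigned lba = start; lba < start + REQ_SECTORS; lba++) {
        if (!read_sector(lba))
            return done;        /* everything past here is given up on */
        done++;
    }
    return done;
}

/* Patched behaviour: complete good sectors, skip each bad one, keep going. */
static unsigned read_skip_bad(unsigned start)
{
    unsigned done = 0;
    for (unsigned lba = start; lba < start + REQ_SECTORS; lba++) {
        if (read_sector(lba))
            done++;             /* good data returned to the caller */
        else
            printf("media error at LBA %u, skipping\n", lba);
    }
    return done;
}

int main(void)
{
    printf("fail-fast recovered %u/%d sectors\n", read_fail_fast(0), REQ_SECTORS);
    printf("skip-bad  recovered %u/%d sectors\n", read_skip_bad(0), REQ_SECTORS);
    return 0;
}

With two bad sectors in an eight-sector request, fail-fast hands back
three sectors; skip-bad hands back six.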

..
> I don't really think we'd do that.  The problem, as you say is request
> combination.  I think if we really wanted to do this, we'd have block do
> it.  Each separate request that's merged gets a separate bio, and block
> already has capabilities to pick up per bio errors, so we'd do the
> partial completion of the failing bio then skip to the next one in the
> request to try.  That would completely solve both readahead problems and
> request merging ones.
..

Yeah, that's a reasonable way to tackle it.  And you're right, we *did*
discuss this two years ago.  It just never made it as far as new code.  :)
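
For the archives, a toy userspace sketch of that per-bio completion loop
(struct and field names are made up here; a real struct bio carries far
more state than this): walk the merged request's bios in order, fail only
the bio that contains the bad sector, and keep completing the rest:

#include <stdbool.h>
#include <stdio.h>

struct bio {
    unsigned start;     /* first sector covered by this bio */
    unsigned nsect;     /* number of sectors */
    int error;          /* 0 = success, negative = failed */
};

/* Simulated medium: one bad sector. */
#define BAD_LBA 6
static bool sector_ok(unsigned lba) { return lba != BAD_LBA; }

/* Complete a merged request one bio at a time.  A media error inside a
 * bio fails only that bio; later bios in the request are still tried. */
static void complete_request(struct bio *bios, int nbios)
{
    for (int i = 0; i < nbios; i++) {
        struct bio *b = &bios[i];
        b->error = 0;
        for (unsigned s = b->start; s < b->start + b->nsect; s++) {
            if (!sector_ok(s)) {
                b->error = -5;  /* -EIO, but for this bio only */
                break;
            }
        }
        printf("bio %d (sectors %u-%u): %s\n", i, b->start,
               b->start + b->nsect - 1, b->error ? "failed" : "ok");
    }
}

int main(void)
{
    /* Three originally separate requests merged into one big request. */
    struct bio bios[] = { {0, 4, 0}, {4, 4, 0}, {8, 4, 0} };
    complete_request(bios, 3);  /* only the middle bio should fail */
    return 0;
}

That way a readahead bio merged behind a synchronous read can fail
without taking the synchronous read down with it.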

Something else that might be good here would be to have the md layer
pass down a (per-bio?) flag indicating whether it has redundancy capability
for the I/O.  E.g. a healthy RAID1/4/5/10 etc. would set the flag,
and SCSI could then just abort immediately on a bad sector, with NO retries
beyond the first bad one.

On RAID0, or a degraded (no spares) RAID1, etc., it would not set the flag,
so SCSI would try harder to recover the data, as we're discussing above.

This sounds like FAST_FAIL, but it is different.  And the hint needs to
come from the upper layer that is performing the redundancy.


