Date:	Thu, 4 Mar 2010 21:23:09 -0500
From:	s ponnusa <foosaa@...il.com>
To:	Mike Hayward <hayward@...p.net>
Cc:	akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-ide@...r.kernel.org, jens.axboe@...cle.com,
	linux-mm@...ck.org
Subject: Re: Linux kernel - Libata bad block error handling to user mode program

On Thu, Mar 4, 2010 at 7:42 PM, Mike Hayward <hayward@...p.net> wrote:
>  > The write cache is turned off at the hdd level. I am using O_DIRECT
>  > mode with aligned buffers of the 4k page size. I have turned off the
>  > page cache and read ahead during read as well using the fadvise
>  > function.
> If O_DIRECT and no write cache, either the sector finally was
> remapped, or the successful return is very disturbing.  Doesn't matter
> what operating system, it should not silently corrupt with write cache
> off.  Test by writing nonzero random data on one of these 'retry'
> sectors.  Reread to see if data returned after successful write.  If
> so, you'll know it's just slow to remap.
>
> Because timeouts can take a while, if you have many bad blocks I
> imagine this could be a very painful process :-) It's one thing to
> wipe a functioning drive, another to wipe a failed one.  If drive
> doesn't have a low level function to do it more quickly (cut out the
> long retries), after a couple of hours I'd give up on that, literally
> disassemble and destroy the platters.  It is probably faster and
> cheaper than spending a week trying to chew through the bad section.
> Keep in mind, zeroing the drive is not going to erase the data all
> that well anyway.  Might as well skip regions when finding a bad
> sequence and scrub as much of the rest as you can without getting hung
> up on 5% of the data, then mash it to bits or take a nasty magnet or
> some equally destructive thing to it!
>
> If it definitely isn't storing the data you write after it returns
> success (reread it to check), I'd definitely call that a write-read
> corruption, either in the kernel or in the drive.  If in kernel it
> should be fixed as that is seriously broken to silently ignore data
> corruption and I think we'd all like to trust the kernel if not the
> drive :-)
>
> Please let me know if you can prove data corruption.  I'm writing a
> sophisticated storage app and would like to know if kernel has such a
> defect.  My bet is it's just a drive that is slowly remapping.
>
> - Mike
>
Mike,

The data written through Linux cannot be read back by any other means.
Doesn't that prove data corruption? I wrote a signature pattern onto a
bad drive (with all the previously mentioned permutations and
combinations). The program reported 0 (zero) errors and said the data
had been successfully written to every sector of the drive; it took
5 hrs (the drive is 20 GB). I then tried to verify it using another
program on Linux, which produced read errors across a couple of
million sectors after almost 13 hours of grinding the hdd.
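Roughly, the write-then-verify pass looks like this (a minimal sketch
with a hypothetical helper name; on the real device you would also open
with os.O_DIRECT and use page-aligned buffers, which I've left out here
so the logic stays visible):

```python
import hashlib
import os

SECTOR = 4096  # write in 4 KiB chunks (page size)

def make_pattern(lba: int) -> bytes:
    # Deterministic nonzero pattern derived from the sector number,
    # so a reread can detect a silently dropped or misplaced write.
    h = hashlib.sha256(lba.to_bytes(8, "little")).digest()
    return (h * (SECTOR // len(h) + 1))[:SECTOR]

def verify(path: str, nsectors: int, flags: int = 0) -> list:
    """Write a unique pattern to each sector, reread everything, and
    return the list of sectors whose data came back wrong. For a real
    device, pass flags=os.O_DIRECT and use page-aligned buffers."""
    bad = []
    fd = os.open(path, os.O_RDWR | os.O_CREAT | flags, 0o600)
    try:
        for lba in range(nsectors):
            os.pwrite(fd, make_pattern(lba), lba * SECTOR)
        os.fsync(fd)  # with the write cache off, this should hit the platter
        for lba in range(nsectors):
            if os.pread(fd, SECTOR, lba * SECTOR) != make_pattern(lba):
                bad.append(lba)
    finally:
        os.close(fd)
    return bad
```

On a healthy drive this returns an empty list; on mine, the reread half
is where the millions of errors showed up.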

I can understand a slow remapping process during write operations. But
what if the drive has used up all the sectors available for remapping
and is slowly dying? The SMART data displays thousands of seek, read,
and CRC errors, and still Linux does not notify the program that asked
it to write the data. ????
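Part of the problem may be where the error surfaces: a successful
write() only means the kernel accepted the request, and with buffered
I/O a media error can come back later, from fsync(), as EIO. A small
sketch of checking both (hypothetical helper name):

```python
import errno
import os

def checked_write(fd: int, buf: bytes, offset: int) -> None:
    # With O_DIRECT a media error should surface as EIO from pwrite()
    # itself; with buffered I/O it may only show up at write-back time.
    # Checking only one of the two can miss the notification entirely.
    n = os.pwrite(fd, buf, offset)
    if n != len(buf):
        raise OSError(errno.EIO, "short write of %d bytes at offset %d" % (n, offset))
    os.fsync(fd)  # a deferred write-back error surfaces here as OSError(EIO)
```

If neither call reports EIO and the data still can't be read back, then
something between the syscall boundary and the platter is lying.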

I don't know how one can guarantee data integrity / protection with
this behavior. The data might just be surviving because of personal
vigilance (a constant watch on SMART data / HDD health) and probably
due to existing redundancy options! :)
