Message-ID: <48CEC76E.7020101@kernel.org>
Date: Mon, 15 Sep 2008 13:37:02 -0700
From: Tejun Heo <tj@...nel.org>
To: Mark Lord <liml@....ca>
CC: Bruno Prémont <bonbons@...ux-vserver.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
linux-ide@...r.kernel.org, Jeff Garzik <jgarzik@...ox.com>
Subject: Re: XFS shutting down due to IO timeout on SATA disk (pata_via for
CX700)
Mark Lord wrote:
>> Timeout on FLUSH_EXT. That's a bad sign. Patch to retry FLUSH is
>> pending but at any rate FLUSH failure is often accompanied by loss of
>> data and XFS is doing the right thing of giving up on it.
> ..
>
> Tejun, are we *sure* that's really a timeout?
> The status shows 0x40 "drive ready" there, aka. "command complete".
Heh... on timeout, libata EH doesn't touch the status register, as some
controllers lock the whole machine up on a read there, so the 0x40 is
just the fill value libata used during qc initialization. It definitely
requires clarification.
> I have a client who is also seeing this exact scenario on 750GB drives,
> using a patched SLES10 kernel (2.6.16 + libata from 2.6.18 or so).
Hmm.. most of the FLUSH timeouts I've seen have been caused by either a
dying drive or a bad PSU. There just isn't much that can go wrong on
the driver side. IIRC, there was a problem where the unused part of the
TF wasn't cleared, but that was the only one.
> Smartctl output is clean (no logged errors), and the drives themselves
> are fine after a reboot -- necessary since libata/scsi kicked the drive out
> of the RAID array.
>
> Something strange is going on here.
Any chance you can trick the client into hooking the drive up to a
separate PSU?
Thanks.
--
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/