Message-ID: <51f3faa70910261807h54f05f73j48b67067ee3dde70@mail.gmail.com>
Date: Mon, 26 Oct 2009 19:07:18 -0600
From: Robert Hancock <hancockrwd@...il.com>
To: David Miller <davem@...emloft.net>
Cc: phdm@...qel.be, linux-ide@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH ide] : Increase WAIT_DRQ to support slow CF cards
On Mon, Oct 26, 2009 at 6:45 PM, David Miller <davem@...emloft.net> wrote:
> From: Robert Hancock <hancockrwd@...il.com>
> Date: Mon, 26 Oct 2009 18:34:57 -0600
>
>> This has come up before:
>>
>> http://marc.info/?l=linux-ide&m=123064513313466&w=2
>>
>> I think this timeout should not even exist. libata has no such timeout
>> (only the overall command completion timeout), and I can't find any
>> reference in current ATA specs to the device being required to raise
>> DRQ in any particular amount of time.
>
> So is the issue that, whilst we should wait for BUSY to clear,
> waiting around for DRQ is unreasonable?
>
> It seems that WAIT_DRQ is passed to ide_wait_stat(), but that
> only controls how long we wait for BUSY to clear; the ATA_DRQ
> 'bad' bit we pass there only gets probed in a fixed-limit loop:
>
>	for (i = 0; i < 10; i++) {
>		udelay(1);
>		stat = tp_ops->read_status(hwif);
>
>		if (OK_STAT(stat, good, bad)) {
>			*rstat = stat;
>			return 0;
>		}
>	}
>	*rstat = stat;
>	return -EFAULT;
>
> Therefore, if increasing WAIT_DRQ helps things for people, it's
> because the BUSY bit needs that much time to clear in these
> cases.
>
> The discussion in that thread seems to state that the ATA layer
> waits only for BUSY to clear, and does not wait for DRQ. But
> from the data we're seeing here, it is in fact BUSY which needs
> so much more time to clear, so removing the DRQ bit probe to
> be more like libata won't fix anything.
Hmm, I think you're right... it seems BSY is expected to be de-asserted
within 100ms when issuing a write, which is fairly ridiculous. Maybe
not a problem for a hard drive in typical cases, but if a CF card or
SSD is in the middle of an erase cycle or something, it's quite
possible for this not to work.
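For reference, the constant in question is defined along these lines in
include/linux/ide.h (quoting from memory, so the exact expression and
comment may differ slightly):

	#define WAIT_DRQ	(HZ/10)		/* 100msec - spec allows 50msec */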
Of course, just jacking up the timeout may make the problem alluded to
in the comment in __ide_wait_stat more evident ("This routine should
get fixed to not hog the cpu during extra long waits"), as it just
does a tight loop polling the status with no sleeps.
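In other words, the wait amounts to something like this (paraphrased,
not the actual source; read_status() here just stands in for reading
the status register):

	/* Spin on the status register until BSY clears or the
	 * timeout expires, without ever yielding the CPU. */
	while ((stat = read_status()) & ATA_BUSY) {
		if (time_after(jiffies, timeout))
			return -EBUSY;	/* timed out */
	}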
libata only busy-waits for 50 microseconds; if BSY is still set, it
sleeps for 2ms and then polls for another 10 microseconds, and if it's
still set after that, it retries the whole sequence at 16ms intervals.
Only after (typically) 30 seconds does it give up.
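A rough user-space sketch of that schedule (just to illustrate the
shape of it; read_status() is a placeholder again, and the real libata
code is structured differently):

	#include <stdbool.h>
	#include <time.h>
	#include <unistd.h>

	#define ATA_BUSY 0x80

	extern unsigned char read_status(void);	/* placeholder */

	/* Poll roughly once a microsecond for up to 'us' microseconds,
	 * returning true as soon as BSY is clear. */
	static bool poll_not_busy(long us)
	{
		for (long i = 0; i < us; i++) {
			if (!(read_status() & ATA_BUSY))
				return true;
			usleep(1);
		}
		return false;
	}

	static bool wait_not_busy(void)
	{
		time_t deadline = time(NULL) + 30;	/* ~30s overall */

		if (poll_not_busy(50))		/* short initial busy-wait */
			return true;

		usleep(2000);			/* back off for 2ms ... */
		if (poll_not_busy(10))		/* ... then poll briefly */
			return true;

		while (time(NULL) < deadline) {	/* slow path: 16ms steps */
			usleep(16000);
			if (!(read_status() & ATA_BUSY))
				return true;
		}
		return false;			/* never became ready */
	}

The point being that all the long waits are actual sleeps, so the CPU
isn't pinned for seconds while a slow CF card finishes an erase cycle.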