Message-ID: <CACRpkdYhGCBX5soSY6Dn5jK3jjZ3A0XXX=_Ffr+3auHJHWphvA@mail.gmail.com>
Date: Mon, 5 Sep 2011 15:11:25 +0200
From: Linus Walleij <linus.walleij@...aro.org>
To: NamJae Jeon <linkinjeon@...il.com>
Cc: linux-mmc@...r.kernel.org, linux-kernel@...r.kernel.org,
Chris Ball <cjb@...top.org>,
Kyungmin Park <kmpark@...radead.org>,
Sebastian Rasmussen <Sebastian.Rasmussen@...ricsson.com>,
Ulf Hansson <Ulf.Hansson@...ricsson.com>
Subject: Re: RFC : mmc : Use wait_for_completion_timeout() instead of
wait_for_completion in case of multiple write blocks.
On Mon, Sep 5, 2011 at 2:30 AM, NamJae Jeon <linkinjeon@...il.com> wrote:
>>> The host controller can only tell whether the card has finished
>>> programming by watching the busy line. If an unstable card keeps
>>> holding the busy line during a DMA write, wait_for_completion()
>>> will hang. So I think the mmc driver needs some kind of exception
>>> handling to avoid this problem.
>>
>> Yes, you can add a timeout in the driver itself. Just set up
>> a common timer, no big deal.
>
> I haven't decided how long the timeout should be. First I thought of
> using timeout_ns * the number of blocks, but if timeout_ns is 1.6 sec
> and there are 512 blocks, that timeout gets very long. Or is a plain
> 10*HZ (10 sec) appropriate?
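
Regarding the "common timer" I suggested: a hand-waved sketch below.
All the my_* names are made up for illustration, they are not from
any in-tree driver.

#include <linux/timer.h>
#include <linux/jiffies.h>

struct my_host {
	struct timer_list timer;
	/* ... controller and request state ... */
};

static void my_data_timeout(unsigned long data)
{
	struct my_host *host = (struct my_host *)data;

	/*
	 * The card never released the busy line: reset the
	 * controller, tear down the DMA job, fail the request.
	 */
}

static void my_init(struct my_host *host)
{
	/* Done once, e.g. at probe time. */
	setup_timer(&host->timer, my_data_timeout, (unsigned long)host);
}

static void my_start_data(struct my_host *host)
{
	/* Arm the watchdog before kicking off the transfer. */
	mod_timer(&host->timer, jiffies + msecs_to_jiffies(10000));
	/* ... program the controller and start the DMA job ... */
}

static void my_data_done(struct my_host *host)
{
	/* The card released busy in time: disarm the watchdog. */
	del_timer_sync(&host->timer);
	/* ... complete the request ... */
}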
As for the value: timeout_ns comes from the card CSD, bits 112 through
119, the "data read access-time 1" (TAAC) field. According to the spec
this "defines the asynchronous part of the data access time."
I don't know exactly how to interpret this, but it says nothing about
the number of blocks involved, and nor do the drivers we have
interpret it that way. AFAICT this time is for the entire transaction,
not per block.
In the mmci driver we calculate the clock cycles required for
timeout_ns, add the clock cycles given in timeout_clks, and that's
it. That's the timeout for the entire operation. This is how all
drivers work, I think.
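
In code, roughly this (simplified from mmci_start_data() in
drivers/mmc/host/mmci.c; pulling it out into a helper and the cclk
parameter are my doing, the real code does this inline):

#include <linux/mmc/core.h>
#include <asm/div64.h>

static unsigned int data_timeout_clks(struct mmc_data *data,
				      unsigned int cclk)
{
	unsigned long long clks;

	/*
	 * Asynchronous part: convert timeout_ns to cycles of the
	 * current card clock (cclk, in Hz).
	 */
	clks = (unsigned long long)data->timeout_ns * cclk;
	do_div(clks, 1000000000UL);

	/* Synchronous part is already given in clock cycles. */
	return data->timeout_clks + (unsigned int)clks;
}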
Yours,
Linus Walleij
--