Message-ID: <53A16517.7050705@realsil.com.cn>
Date: Wed, 18 Jun 2014 18:08:23 +0800
From: micky <micky_ching@...lsil.com.cn>
To: Ulf Hansson <ulf.hansson@...aro.org>
CC: Samuel Ortiz <sameo@...ux.intel.com>,
Lee Jones <lee.jones@...aro.org>,
Chris Ball <chris@...ntf.net>, <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Dan Carpenter <dan.carpenter@...cle.com>,
Roger <rogerable@...ltek.com>, Wei WANG <wei_wang@...lsil.com.cn>
Subject: Re: [PATCH 2/2] mmc: rtsx: add support for async request
On 06/18/2014 03:39 PM, Ulf Hansson wrote:
> On 18 June 2014 03:17, micky <micky_ching@...lsil.com.cn> wrote:
>> On 06/17/2014 03:45 PM, Ulf Hansson wrote:
>>> On 17 June 2014 03:04, micky <micky_ching@...lsil.com.cn> wrote:
>>>> On 06/16/2014 08:40 PM, Ulf Hansson wrote:
>>>>> On 16 June 2014 11:09, micky <micky_ching@...lsil.com.cn> wrote:
>>>>>> On 06/16/2014 04:42 PM, Ulf Hansson wrote:
>>>>>>>> @@ -36,7 +37,10 @@ struct realtek_pci_sdmmc {
>>>>>>>>> struct rtsx_pcr *pcr;
>>>>>>>>> struct mmc_host *mmc;
>>>>>>>>> struct mmc_request *mrq;
>>>>>>>>> + struct workqueue_struct *workq;
>>>>>>>>> +#define SDMMC_WORKQ_NAME "rtsx_pci_sdmmc_workq"
>>>>>>>>>
>>>>>>>>> + struct work_struct work;
>>>>>>> I am trying to understand why you need a work/workqueue to implement
>>>>>>> this feature. Is that really the case?
>>>>>>>
>>>>>>> Could you elaborate on the reasons?
>>>>>> Hi Uffe,
>>>>>>
>>>>>> we need to return as fast as possible from the mmc_host_ops
>>>>>> request (ops->request) callback,
>>>>>> so the mmc core can continue handling the next request.
>>>>>> when everything for the next request is ready, it will wait for the
>>>>>> previous one to complete (if not done yet),
>>>>>> then call ops->request().
>>>>>>
>>>>>> we can't use atomic context, because we use mutex_lock() to protect
>>>>> ops->request should never be executed in atomic context. Is that your
>>>>> concern?
>>>> Yes.
>>> Okay. Unless I missed your point, I don't think you need the
>>> work/workqueue.
>> is there any other method?
>>
>>> Because ops->request isn't ever executed in atomic context. That's
>>> because the mmc core, which handles the async mechanism, is waiting
>>> for a completion variable in process context before it invokes the
>>> ops->request() callback.
>>>
>>> That completion variable will be kicked, from your host driver, when
>>> you invoke mmc_request_done().
>> Sorry, I don't understand here; how is it kicked?
> mmc_request_done()
>   ->mrq->done()
>     ->mmc_wait_done()
>       ->complete(&mrq->completion);
>
>> I think the flow is:
>> - do not wait for the first req
>>       - init mrq->done
>>       - ops->request()             --- A. rtsx: queue the work
>> - continue fetching the next req
>> - prepare the next req ok,
>>       - wait for previous done     --- B. (mmc_request_done() may be called
>>                                           at any time from A to B)
>>       - init mrq->done
>>       - ops->request()             --- C. rtsx: queue the next work
>> ...
>> and there seems to be no problem.
> Right, I don't think there are any _problems_ with using the workqueue as
> you have implemented it, but I am questioning whether it's correct. Simply
> because I don't think there is any reason why you need a
> workqueue; it doesn't solve any problem for you - it just adds
> overhead.
Hi Uffe,
we have two drivers under the mfd parent, rtsx-mmc and rtsx-ms,
and we use a mutex (pcr_mutex) to protect the shared resource.
When we handle an mmc request, we need to hold the mutex until we finish
the request, so it will not be interrupted by an rtsx-ms request.
If we did not use the workqueue, then once a request holds the mutex we
would have to wait until the request finishes before releasing it,
so the mmc core would be blocked here.
To implement nonblocking requests, we have to use the workqueue.
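
For reference, a minimal sketch of that split (not the actual patch; the
helper name sd_request_work() and the exact pcr_mutex locking are
illustrative assumptions based on the struct fields quoted above):

/*
 * Minimal sketch, not the actual patch. ops->request() only queues the
 * work and returns; the mutex-protected transfer runs later in process
 * context on the driver's own workqueue.
 */
static void sd_request_work(struct work_struct *work)
{
	struct realtek_pci_sdmmc *host =
		container_of(work, struct realtek_pci_sdmmc, work);
	struct mmc_request *mrq = host->mrq;

	mutex_lock(&host->pcr->pcr_mutex);	/* shared with rtsx-ms */
	/* ... issue the command and transfer the data ... */
	mutex_unlock(&host->pcr->pcr_mutex);

	/* Kicks mrq->completion via mrq->done() in the mmc core. */
	mmc_request_done(host->mmc, mrq);
}

static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
	struct realtek_pci_sdmmc *host = mmc_priv(mmc);

	host->mrq = mrq;
	/* Return immediately so the core can prepare the next request. */
	queue_work(host->workq, &host->work);
}

With this split, ops->request() never sleeps on pcr_mutex itself, so the
mmc core's preparation of the next request is not blocked by an ongoing
rtsx-ms transfer.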
Best Regards.
micky.
>
> Kind regards
> Ulf Hansson
> .
>