Message-ID: <539EB43B.8070707@realsil.com.cn>
Date: Mon, 16 Jun 2014 17:09:15 +0800
From: micky <micky_ching@...lsil.com.cn>
To: Ulf Hansson <ulf.hansson@...aro.org>
CC: Samuel Ortiz <sameo@...ux.intel.com>,
Lee Jones <lee.jones@...aro.org>,
Chris Ball <chris@...ntf.net>, <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Dan Carpenter <dan.carpenter@...cle.com>,
Roger <rogerable@...ltek.com>, Wei WANG <wei_wang@...lsil.com.cn>
Subject: Re: [PATCH 2/2] mmc: rtsx: add support for async request
On 06/16/2014 04:42 PM, Ulf Hansson wrote:
>> @@ -36,7 +37,10 @@ struct realtek_pci_sdmmc {
>> > struct rtsx_pcr *pcr;
>> > struct mmc_host *mmc;
>> > struct mmc_request *mrq;
>> >+ struct workqueue_struct *workq;
>> >+#define SDMMC_WORKQ_NAME "rtsx_pci_sdmmc_workq"
>> >
>> >+ struct work_struct work;
> I am trying to understand why you need a work/workqueue to implement
> this feature. Is that really the case?
>
> Could you elaborate on the reasons?
Hi Uffe,
we need to return as quickly as possible from the mmc_host_ops request
callback (ops->request), so that the mmc core can go on preparing the
next request. Once the next request is prepared, the core waits for the
previous one to finish (if it has not finished yet) and then calls
ops->request() again.

We cannot handle the transfer in atomic context, because we use
mutex_lock() to protect the resource and have to hold the lock for the
whole request. So I use a workqueue: ops->request() just queues a work
item and returns, and the mmc core can continue without blocking in
ops->request(). A rough sketch of this pattern is below.
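To make it concrete, here is a rough sketch of the pattern (not the
exact patch; the function names, the trimmed-down struct and the
host_mutex field are only placeholders for illustration):

#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/mmc/host.h>

/* trimmed-down private data, just what the sketch needs */
struct realtek_pci_sdmmc {
	struct mmc_host		*mmc;
	struct mmc_request	*mrq;
	struct workqueue_struct	*workq;
	struct work_struct	work;
	struct mutex		host_mutex;
};

/* runs in process context, so it may sleep and take the mutex */
static void sd_request_work(struct work_struct *work)
{
	struct realtek_pci_sdmmc *host =
		container_of(work, struct realtek_pci_sdmmc, work);
	struct mmc_request *mrq = host->mrq;

	mutex_lock(&host->host_mutex);
	/* ... send the command and move the data over the PCI-E bridge ... */
	mutex_unlock(&host->host_mutex);

	mmc_request_done(host->mmc, mrq);	/* tell the core we are done */
}

/* ops->request: only queue the work and return, never block here */
static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
	struct realtek_pci_sdmmc *host = mmc_priv(mmc);

	host->mrq = mrq;
	queue_work(host->workq, &host->work);
}

(probe() is assumed to have done INIT_WORK(&host->work, sd_request_work)
and created host->workq with create_workqueue(SDMMC_WORKQ_NAME).)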
Best Regards.
micky.