Message-ID: <CADBw62qp0=4iq5yZNQzB7jCd0LCa4Jy7Xi7ErBpuDcj10DSdxQ@mail.gmail.com>
Date: Wed, 8 Apr 2020 10:18:11 +0800
From: Baolin Wang <baolin.wang7@...il.com>
To: Adrian Hunter <adrian.hunter@...el.com>
Cc: Ulf Hansson <ulf.hansson@...aro.org>,
Orson Zhai <orsonzhai@...il.com>,
Chunyan Zhang <zhang.lyra@...il.com>,
Arnd Bergmann <arnd@...db.de>,
linux-mmc <linux-mmc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 1/3] mmc: host: Introduce the request_atomic() for the host
On Tue, Apr 7, 2020 at 6:15 PM Adrian Hunter <adrian.hunter@...el.com> wrote:
>
> On 7/04/20 10:21 am, Baolin Wang wrote:
> > On Tue, Apr 7, 2020 at 2:38 PM Adrian Hunter <adrian.hunter@...el.com> wrote:
> >>
> >> On 3/04/20 10:05 am, Baolin Wang wrote:
> >>> The SD host controller can process one request in atomic context if
> >>> the card is nonremovable, which means we can submit the next request
> >>> from the IRQ hard handler when using the MMC host software queue, to
> >>> reduce the latency. Thus this patch adds a new API request_atomic()
> >>> for the host controller, as well as adding support for the host
> >>> software queue to submit a request via the new request_atomic() API.
> >>>
> >>> Moreover, there is an unusual case where the card is busy when trying
> >>> to send a command, and we cannot poll the card status in interrupt
> >>> context when using request_atomic() to dispatch requests. Thus we
> >>> should queue a work item to try again in non-atomic context, in case
> >>> the card releases the busy signal later.
> >>>
> >>> Suggested-by: Adrian Hunter <adrian.hunter@...el.com>
> >>> Signed-off-by: Baolin Wang <baolin.wang7@...il.com>
> >>
> >>
> >> One minor point below, otherwise:
> >>
> >> Acked-by: Adrian Hunter <adrian.hunter@...el.com>
> >>
> >>> ---
> >>> drivers/mmc/host/mmc_hsq.c | 29 ++++++++++++++++++++++++++++-
> >>> drivers/mmc/host/mmc_hsq.h | 1 +
> >>> include/linux/mmc/host.h | 3 +++
> >>> 3 files changed, 32 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c
> >>> index b90b2c9..a57f802 100644
> >>> --- a/drivers/mmc/host/mmc_hsq.c
> >>> +++ b/drivers/mmc/host/mmc_hsq.c
> >>> @@ -16,11 +16,20 @@
> >>> #define HSQ_NUM_SLOTS 64
> >>> #define HSQ_INVALID_TAG HSQ_NUM_SLOTS
> >>>
> >>> +static void mmc_hsq_retry_handler(struct work_struct *work)
> >>> +{
> >>> + struct mmc_hsq *hsq = container_of(work, struct mmc_hsq, retry_work);
> >>> + struct mmc_host *mmc = hsq->mmc;
> >>> +
> >>> + mmc->ops->request(mmc, hsq->mrq);
> >>> +}
> >>> +
> >>> static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
> >>> {
> >>> struct mmc_host *mmc = hsq->mmc;
> >>> struct hsq_slot *slot;
> >>> unsigned long flags;
> >>> + int ret = 0;
> >>>
> >>> spin_lock_irqsave(&hsq->lock, flags);
> >>>
> >>> @@ -42,7 +51,24 @@ static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
> >>>
> >>> spin_unlock_irqrestore(&hsq->lock, flags);
> >>>
> >>> - mmc->ops->request(mmc, hsq->mrq);
> >>> + if (mmc->ops->request_atomic)
> >>> + ret = mmc->ops->request_atomic(mmc, hsq->mrq);
> >>> + else
> >>> + mmc->ops->request(mmc, hsq->mrq);
> >>> +
> >>> + /*
> >>> + * If request_atomic() returns -EBUSY, the card may be busy
> >>> + * now, and we should switch to non-atomic context to try
> >>> + * again for this unusual case, to avoid time-consuming
> >>> + * operations in the atomic context.
> >>> + *
> >>> + * Note: we just give a warning for other error cases, since
> >>> + * the host driver will handle them.
> >>> + */
> >>> + if (ret == -EBUSY)
> >>> + schedule_work(&hsq->retry_work);
> >>> + else
> >>> + WARN_ON_ONCE(ret && ret != -EBUSY);
> >>
> >> 'ret != -EBUSY' is redundant because it is always true in the 'else' clause.
> >
> > Ah, yes, thanks for pointing this out. I will fix it in the next version.
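> >
> > i.e., something like this (untested):
> >
> >	if (ret == -EBUSY)
> >		schedule_work(&hsq->retry_work);
> >	else
> >		WARN_ON_ONCE(ret);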
> >
> > By the way, could you help review patches 2 and 3 in this patch set? Thanks.
> >
>
> I'd like to handle the inhibit wait differently. I will make some patches
> for that and send them out.
OK, great. I'd like to test them. Thanks.
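
For reference, a host driver's request_atomic() could look roughly like the
sketch below. This is only a minimal illustration, not part of this patch
set: it assumes an sdhci-style controller, the function name is made up, and
it calls sdhci_send_command() directly even though that helper is currently
static in sdhci.c.

	static int my_sdhci_request_atomic(struct mmc_host *mmc,
					   struct mmc_request *mrq)
	{
		struct sdhci_host *host = mmc_priv(mmc);
		u32 inhibit = SDHCI_CMD_INHIBIT | SDHCI_DATA_INHIBIT;
		unsigned long flags;
		int ret = 0;

		spin_lock_irqsave(&host->lock, flags);

		/*
		 * The card may still be signalling busy on DAT0. We must
		 * not poll for that in atomic context, so return -EBUSY
		 * and let mmc_hsq_retry_handler() retry the request from
		 * a workqueue instead.
		 */
		if (sdhci_readl(host, SDHCI_PRESENT_STATE) & inhibit)
			ret = -EBUSY;
		else
			sdhci_send_command(host, mrq->cmd);

		spin_unlock_irqrestore(&host->lock, flags);
		return ret;
	}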
--
Baolin Wang