Message-ID: <adecb267-0013-9eb2-42c3-89c660724176@intel.com>
Date: Thu, 7 May 2020 20:21:55 +0300
From: Adrian Hunter <adrian.hunter@...el.com>
To: Veerabhadrarao Badiganti <vbadigan@...eaurora.org>,
ulf.hansson@...aro.org
Cc: stummala@...eaurora.org, linux-mmc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-msm@...r.kernel.org,
Sarthak Garg <sartgarg@...eaurora.org>, stable@...r.kernel.org,
Yoshihiro Shimoda <yoshihiro.shimoda.uh@...esas.com>,
Baolin Wang <baolin.wang@...aro.org>,
Kate Stewart <kstewart@...uxfoundation.org>,
Allison Randal <allison@...utok.net>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Walleij <linus.walleij@...aro.org>
Subject: Re: [PATCH V2] mmc: core: Fix recursive locking issue in CQE recovery path
On 7/05/20 7:15 pm, Veerabhadrarao Badiganti wrote:
> From: Sarthak Garg <sartgarg@...eaurora.org>
>
> Consider the following stack trace:
>
> -001|raw_spin_lock_irqsave
> -002|mmc_blk_cqe_complete_rq
> -003|__blk_mq_complete_request(inline)
> -003|blk_mq_complete_request(rq)
> -004|mmc_cqe_timed_out(inline)
> -004|mmc_mq_timed_out
>
> mmc_mq_timed_out() acquires the queue_lock first. Further down the
> same call path, mmc_blk_cqe_complete_rq() tries to acquire the same
> queue_lock, resulting in recursive locking: the task spins on a lock
> it already holds, which eventually triggers a watchdog bark.
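
For anyone less familiar with the failure mode, here is a minimal
self-contained sketch of the pattern (demo_lock, timeout_handler and
complete_request are hypothetical names, not the actual mmc code): the
timeout handler takes a spinlock and then, still holding it, calls a
completion helper that tries to take the same lock.

/*
 * Hypothetical illustration of the recursive-lock bug described
 * above; not the mmc code itself.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void complete_request(void)
{
	unsigned long flags;

	/* Deadlock: the caller already holds demo_lock. */
	spin_lock_irqsave(&demo_lock, flags);
	/* ... complete the request ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}

static void timeout_handler(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	complete_request();	/* spins forever -> watchdog bark */
	spin_unlock_irqrestore(&demo_lock, flags);
}

Since acquiring a spinlock the CPU already holds can never succeed,
the task spins until the watchdog fires.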
>
> Fix this issue by holding the lock only for the required critical section.
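
The shape of the fix, again as a hypothetical sketch rather than the
mmc code itself (the real change is in the diff below): read the
shared state under the lock, drop the lock, and only then call into
the completion path, which is then free to take the lock:

static bool recovery_in_progress;	/* hypothetical shared state */

static void timeout_handler_fixed(void)
{
	unsigned long flags;
	bool ignore;

	spin_lock_irqsave(&demo_lock, flags);
	ignore = recovery_in_progress;	/* only the check needs the lock */
	spin_unlock_irqrestore(&demo_lock, flags);

	if (!ignore)
		complete_request();	/* safe: demo_lock is not held here */
}

This keeps the critical section to the bare minimum and lets the
completion path do its own locking.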
>
> Cc: <stable@...r.kernel.org>
> Fixes: 1e8e55b67030 ("mmc: block: Add CQE support")
> Suggested-by: Sahitya Tummala <stummala@...eaurora.org>
> Signed-off-by: Sarthak Garg <sartgarg@...eaurora.org>
Acked-by: Adrian Hunter <adrian.hunter@...el.com>
> ---
> drivers/mmc/core/queue.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 25bee3d..b5fd3bc 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -107,7 +107,7 @@ static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
> case MMC_ISSUE_DCMD:
> if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
> if (recovery_needed)
> - __mmc_cqe_recovery_notifier(mq);
> + mmc_cqe_recovery_notifier(mrq);
> return BLK_EH_RESET_TIMER;
> }
> /* No timeout (XXX: huh? comment doesn't make much sense) */
> @@ -127,18 +127,13 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
> struct mmc_card *card = mq->card;
> struct mmc_host *host = card->host;
> unsigned long flags;
> - int ret;
> + bool ignore_tout;
>
> spin_lock_irqsave(&mq->lock, flags);
> -
> - if (mq->recovery_needed || !mq->use_cqe || host->hsq_enabled)
> - ret = BLK_EH_RESET_TIMER;
> - else
> - ret = mmc_cqe_timed_out(req);
> -
> + ignore_tout = mq->recovery_needed || !mq->use_cqe || host->hsq_enabled;
> spin_unlock_irqrestore(&mq->lock, flags);
>
> - return ret;
> + return ignore_tout ? BLK_EH_RESET_TIMER : mmc_cqe_timed_out(req);
> }
>
> static void mmc_mq_recovery_handler(struct work_struct *work)
>