Message-ID: <20200819090637.GE2639@vkoul-mobl>
Date: Wed, 19 Aug 2020 14:36:37 +0530
From: Vinod Koul <vkoul@...nel.org>
To: Bard Liao <yung-chuan.liao@...ux.intel.com>
Cc: alsa-devel@...a-project.org, linux-kernel@...r.kernel.org,
tiwai@...e.de, broonie@...nel.org, gregkh@...uxfoundation.org,
jank@...ence.com, srinivas.kandagatla@...aro.org,
rander.wang@...ux.intel.com, ranjani.sridharan@...ux.intel.com,
hui.wang@...onical.com, pierre-louis.bossart@...ux.intel.com,
sanyog.r.kale@...el.com, mengdong.lin@...el.com,
bard.liao@...el.com
Subject: Re: [PATCH] soundwire: cadence: fix race condition between suspend
and Slave device alerts
On 18-08-20, 06:23, Bard Liao wrote:
> From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
>
> In system suspend stress cases, the SOF CI reports timeouts. The root
> cause is that an alert is generated while the system suspends. The
> interrupt handling generates transactions on the bus that will never
> be handled because the interrupts are disabled in parallel.
>
> As a result, the transaction never completes and times out on resume.
> This error doesn't seem too problematic since it happens in a work
> queue, and the system recovers without issues.
>
> Nevertheless, this race condition should not happen. When doing a
> system suspend, or when disabling interrupts, we should make sure the
> current transaction can complete, and prevent new work from being
> queued.
>
> BugLink: https://github.com/thesofproject/linux/issues/2344
> Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
> Reviewed-by: Ranjani Sridharan <ranjani.sridharan@...ux.intel.com>
> Reviewed-by: Rander Wang <rander.wang@...ux.intel.com>
> Signed-off-by: Bard Liao <yung-chuan.liao@...ux.intel.com>
> ---
> drivers/soundwire/cadence_master.c | 24 +++++++++++++++++++++++-
> drivers/soundwire/cadence_master.h | 1 +
> 2 files changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
> index 24eafe0aa1c3..1330ffc47596 100644
> --- a/drivers/soundwire/cadence_master.c
> +++ b/drivers/soundwire/cadence_master.c
> @@ -791,7 +791,16 @@ irqreturn_t sdw_cdns_irq(int irq, void *dev_id)
> CDNS_MCP_INT_SLAVE_MASK, 0);
>
> int_status &= ~CDNS_MCP_INT_SLAVE_MASK;
> - schedule_work(&cdns->work);
> +
> + /*
> + * Deal with possible race condition between interrupt
> + * handling and disabling interrupts on suspend.
> + *
> + * If the master is in the process of disabling
> + * interrupts, don't schedule a workqueue
> + */
> + if (cdns->interrupt_enabled)
> + schedule_work(&cdns->work);
Would it not make sense to mask the interrupts first and then cancel the
work? That way you are guaranteed that after this call you don't have
interrupts firing or work scheduled.
> }
>
> cdns_writel(cdns, CDNS_MCP_INTSTAT, int_status);
> @@ -924,6 +933,19 @@ int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns, bool state)
> slave_state = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1);
> cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT1, slave_state);
> }
> + cdns->interrupt_enabled = state;
> +
> + /*
> + * Complete any on-going status updates before updating masks,
> + * and cancel queued status updates.
> + *
> + * There could be a race with a new interrupt thrown before
> + * the 3 mask updates below are complete, so in the interrupt
> + * we use the 'interrupt_enabled' status to prevent new work
> + * from being queued.
> + */
> + if (!state)
> + cancel_work_sync(&cdns->work);
>
> cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, slave_intmask0);
> cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, slave_intmask1);
> diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h
> index fdec62b912d3..4d1aab5b5ec2 100644
> --- a/drivers/soundwire/cadence_master.h
> +++ b/drivers/soundwire/cadence_master.h
> @@ -133,6 +133,7 @@ struct sdw_cdns {
>
> bool link_up;
> unsigned int msg_count;
> + bool interrupt_enabled;
>
> struct work_struct work;
>
> --
> 2.17.1
--
~Vinod