Message-ID: <f11f3370-0b72-cb1f-21cf-c574ef03fd72@linux.intel.com>
Date: Tue, 23 May 2023 10:00:29 -0500
From: Pierre-Louis Bossart <pierre-louis.bossart@...ux.intel.com>
To: "Mukunda,Vijendar" <vijendar.mukunda@....com>, broonie@...nel.org
Cc: alsa-devel@...a-project.org, Basavaraj.Hiregoudar@....com,
Sunil-kumar.Dommati@....com, Mastan.Katragadda@....com,
Arungopal.kondaveeti@....com, mario.limonciello@....com,
Liam Girdwood <lgirdwood@...il.com>,
Jaroslav Kysela <perex@...ex.cz>,
Takashi Iwai <tiwai@...e.com>,
Syed Saba Kareem <Syed.SabaKareem@....com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V2 5/9] ASoC: amd: ps: add support for SoundWire DMA
interrupts
On 5/23/23 02:36, Mukunda,Vijendar wrote:
> On 22/05/23 23:42, Pierre-Louis Bossart wrote:
>>
>> On 5/22/23 08:31, Vijendar Mukunda wrote:
>>> Initialize a workqueue for handling SoundWire DMA interrupts.
>>> Whenever audio data equal to the SoundWire FIFO watermark level
>>> is produced/consumed, an interrupt is generated.
>>> Acknowledge the interrupt and schedule the workqueue.
>> It would help to explain why a work queue is needed in the first place,
>> as opposed to handling periods in the interrupt thread.
> For the SoundWire DAI link, we are setting the nonatomic flag to true.
> If we call period elapsed from the hard irq handler instead of the
> workqueue, a soft lockup is observed during stream closure.
>
> We could use an interrupt thread as well. To keep symmetry with the
> SoundWire manager work queues, we have used a workqueue for the
> DMA interrupts.
Oh, I completely missed the model here.
If you are using the bottom half/hard irq handler to read status
information, the natural thing to do would be to have an irq thread, no?
Not sure I see the benefit of aligning with the manager work queues -
unless it makes your life simpler to avoid race conditions with
cancel_work_sync()?
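
For reference, the kind of split I had in mind - a rough, untested
sketch; the acp63_sdw_* helper names below are made up for illustration
and are not functions from this patch:

static irqreturn_t acp63_sdw_irq_handler(int irq, void *dev_id)
{
	struct acp63_dev_data *adata = dev_id;

	/* read/ack the SoundWire DMA interrupt status in hard irq context */
	if (!acp63_sdw_check_and_ack_dma_irq(adata))
		return IRQ_NONE;

	/* defer snd_pcm_period_elapsed() to the irq thread (process context) */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t acp63_sdw_irq_thread(int irq, void *dev_id)
{
	struct acp63_dev_data *adata = dev_id;

	/* same per-stream loop as acp63_sdw_dma_workthread(), just run here */
	acp63_sdw_process_dma_streams(adata);

	return IRQ_HANDLED;
}

/* at probe time, instead of request_irq() + INIT_WORK() */
ret = devm_request_threaded_irq(dev, irq, acp63_sdw_irq_handler,
				acp63_sdw_irq_thread, IRQF_SHARED,
				"acp63-sdw-dma", adata);

That gives you process context for the period handling without the
extra flag array or the cancel_work_sync() on close.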
>>> +static void acp63_sdw_dma_workthread(struct work_struct *work)
>>> +{
>>> + struct acp63_dev_data *adata = container_of(work, struct acp63_dev_data,
>>> + acp_sdw_dma_work);
>>> + struct sdw_dma_dev_data *sdw_dma_data;
>>> + u32 stream_index;
>>> + u16 pdev_index;
>>> +
>>> + pdev_index = adata->sdw_dma_dev_index;
>>> + sdw_dma_data = dev_get_drvdata(&adata->pdev[pdev_index]->dev);
>>> +
>>> + for (stream_index = 0; stream_index < ACP63_SDW0_DMA_MAX_STREAMS; stream_index++) {
>>> + if (adata->sdw0_dma_intr_stat[stream_index]) {
>>> + if (sdw_dma_data->sdw0_dma_stream[stream_index])
>>> + snd_pcm_period_elapsed(sdw_dma_data->sdw0_dma_stream[stream_index]);
>>> + adata->sdw0_dma_intr_stat[stream_index] = 0;
>>> + }
>>> + }
>>> + for (stream_index = 0; stream_index < ACP63_SDW1_DMA_MAX_STREAMS; stream_index++) {
>>> + if (adata->sdw1_dma_intr_stat[stream_index]) {
>>> + if (sdw_dma_data->sdw1_dma_stream[stream_index])
>>> + snd_pcm_period_elapsed(sdw_dma_data->sdw1_dma_stream[stream_index]);
>>> + adata->sdw1_dma_intr_stat[stream_index] = 0;
>>> + }
>>> + }
>> I am not clear on the benefits of the workqueue which only tests a flag
>> that's set ...
> In the top half, we are checking all stream irq masks and setting the
> corresponding stream id index in the interrupt status array when the
> dma irq is raised.
>
> Our intention is to handle snd_pcm_period_elapsed() in process context.
> If the flag is set, we call period elapsed for the substream based on
> the stream id in the work queue.
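
IIUC the flow is something like this - sketch only, the register offset
and bit layout below are placeholders, not the real ACP63 definitions:

static irqreturn_t acp63_sdw_dma_irq_handler(int irq, void *dev_id)
{
	struct acp63_dev_data *adata = dev_id;
	u32 stat, stream_index;
	bool sched = false;

	/* placeholder register/offset, only to show the structure */
	stat = readl(adata->acp63_base + ACP_SDW_DMA_IRQ_STAT);
	if (!stat)
		return IRQ_NONE;

	for (stream_index = 0; stream_index < ACP63_SDW0_DMA_MAX_STREAMS; stream_index++) {
		if (stat & BIT(stream_index)) {
			adata->sdw0_dma_intr_stat[stream_index] = 1;
			sched = true;
		}
	}

	/* ack the interrupt and kick the bottom half */
	writel(stat, adata->acp63_base + ACP_SDW_DMA_IRQ_STAT);
	if (sched)
		schedule_work(&adata->acp_sdw_dma_work);

	return IRQ_HANDLED;
}

So the top half only sets per-stream flags and the workqueue calls
snd_pcm_period_elapsed() - which is exactly what an irq thread would
give you without the intermediate flags.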