Message-ID: <61498763c60e488a825e8dd270732b62@skidata.com>
Date: Mon, 17 Aug 2020 07:28:31 +0000
From: Benjamin Bara - SKIDATA <Benjamin.Bara@...data.com>
To: Robin Gong <yibin.gong@....com>
CC: "timur@...nel.org" <timur@...nel.org>,
"nicoleotsuka@...il.com" <nicoleotsuka@...il.com>,
"vkoul@...nel.org" <vkoul@...nel.org>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
dl-linux-imx <linux-imx@....com>,
"shawnguo@...nel.org" <shawnguo@...nel.org>,
"kernel@...gutronix.de" <kernel@...gutronix.de>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alsa-devel@...a-project.org" <alsa-devel@...a-project.org>,
Richard Leitner - SKIDATA <Richard.Leitner@...data.com>
Subject: RE: pcm|dmaengine|imx-sdma race condition on i.MX6
We think this is not an i.MX6-specific problem, but a problem with how the PCM layer uses the dmaengine API.
In case of an XRUN, the DMA channel is never closed; instead, a SNDRV_PCM_TRIGGER_STOP is issued first,
followed by a SNDRV_PCM_TRIGGER_START.
The SNDRV_PCM_TRIGGER_STOP handler simply executes dmaengine_terminate_async() [1]
but never waits for the termination to complete by calling dmaengine_synchronize(),
which is required as stated in the documentation [2].
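For reference, the pattern required by [2] looks roughly like this (a minimal sketch;
"chan" stands for the DMA channel used by the PCM substream):

    /* sketch of the documented termination pattern from [2] */
    dmaengine_terminate_async(chan);    /* stop the channel, may return before HW is idle */
    /* the old descriptors and their memory must not be reused yet */
    dmaengine_synchronize(chan);        /* may sleep: non-atomic context only */
    /* only now is it safe to prepare and submit a new cyclic descriptor */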
However, we are not able to fix this in the pcm_dmaengine layer, neither at the end of
SNDRV_PCM_TRIGGER_STOP (called from the DMA completion interrupt handler)
nor at the beginning of SNDRV_PCM_TRIGGER_START (called from a PCM ioctl),
since dmaengine_synchronize() requires a non-atomic context.
Based on my understanding, most DMA drivers don't even implement device_synchronize,
and if they do, it might not really be necessary since their terminate_all operation is synchronous.
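This is also why the call cannot simply be done from the trigger: if I read
include/linux/dmaengine.h correctly, dmaengine_synchronize() is essentially just
(quoted from memory, so take it as a sketch):

    static inline void dmaengine_synchronize(struct dma_chan *chan)
    {
        might_sleep();  /* complains when called from atomic context */

        if (chan->device->device_synchronize)
            chan->device->device_synchronize(chan);
    }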
With the i.MX6, it looks a bit different:
since [4], the terminate_all operation schedules a worker which waits the required ~1 ms and
only then frees the channel context.
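Simplified, the flow introduced by [4] looks roughly like this (a sketch from reading the
commit, with details elided):

    /* drivers/dma/imx-sdma.c, simplified */
    static int sdma_disable_channel_async(struct dma_chan *chan)
    {
        struct sdma_channel *sdmac = to_sdma_chan(chan);

        sdma_disable_channel(chan);               /* stop the HW channel */
        schedule_work(&sdmac->terminate_worker);  /* defer the cleanup   */
        return 0;
    }

    static void sdma_channel_terminate_work(struct work_struct *work)
    {
        /* give the SDMA engine time to finish the in-flight descriptor */
        usleep_range(1000, 2000);
        /* ... collect and free the vchan descriptors ... */
    }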
Now, the ioctl(SNDRV_PCM_IOCTL_PREPARE) and the following ioctl(SNDRV_PCM_IOCTL_READI_FRAMES),
which are called from userspace to handle/recover from an XRUN, race with the terminate_worker.
If the terminate_worker finishes first, everything is fine.
Otherwise, sdma_prep_dma_cyclic() is called and sets up everything, and
as soon as the reader is scheduled out to wait for data, the terminate_worker runs and kills the new descriptor.
In this case, we wait in [5] until the timeout is reached and return -EIO.
Based on our understanding, there are two possible approaches to fix this:
We thought that the pcm_dmaengine layer should handle this by either synchronizing the DMA on a trigger or
terminating it synchronously.
However, as we are in an atomic context, we either have to give up the atomic context of the PCM
to finish the termination, or we have to design a synchronous terminate variant which is callable
from an atomic context.
For the first option, which is potentially more performant, we have to leave the atomic PCM context,
and we are not sure if we are allowed to do that.
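A rough sketch of what we have in mind for the first option (assuming it is legal to sleep
there, which is exactly what we are unsure about; the helper name is made up):

    /* hypothetical helper in sound/core/pcm_dmaengine.c, to be called from a
     * non-atomic PCM callback (e.g. prepare) after a TRIGGER_STOP;
     * substream_to_prtd() is the existing static helper in that file */
    void snd_dmaengine_pcm_sync_stop(struct snd_pcm_substream *substream)
    {
        struct dmaengine_pcm_runtime_data *prtd = substream_to_prtd(substream);

        dmaengine_synchronize(prtd->dma_chan);
    }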
For the second option, we would have to split the dma_device terminate_all into an atomic, synchronous one and
an asynchronous one, which would align with the dmaengine API, giving it the option to guarantee termination
in an atomic context.
Based on my understanding, most of them are synchronous anyway; for the currently asynchronous ones we
would have to implement busy waits.
However, with this approach we reach the WARN_ON [6] inside an atomic context,
indicating that we might not be doing the right thing.
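Purely as an illustration of the second option (all names invented, not an existing API):

    /* hypothetical new dma_device callback: stop the channel and guarantee
     * completion without sleeping, e.g. by busy-waiting on the HW status bits */
    int (*device_terminate_all_sync)(struct dma_chan *chan);

    /* hypothetical atomic-safe terminate variant */
    static inline int dmaengine_terminate_sync_atomic(struct dma_chan *chan)
    {
        if (chan->device->device_terminate_all_sync)
            return chan->device->device_terminate_all_sync(chan);
        /* for most drivers device_terminate_all is synchronous already */
        return dmaengine_terminate_async(chan);
    }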
Regarding the failure log (at the bottom):
I haven't included the ioctl syscalls, but this is basically the output with additional prints added
to be able to follow the code execution path.
An XRUN (buffer size is 480 but 960 frames are available) leads to a SNDRV_PCM_TRIGGER_STOP.
This leads to terminate_async, which starts the terminate_worker.
Next, the XRUN recovery triggers SNDRV_PCM_TRIGGER_START, which calls sdma_prep_dma_cyclic
and then waits for the DMA in wait_for_avail().
Next we see the two freeings, first the old descriptor, then the newly added one;
so the terminate_worker is back at work.
Now the DMA is terminated while we are still waiting on data from it.
What do you think about this? Is either of the proposed solutions practicable?
If you need further information or additional logging, feel free to ask.
Best regards
Benjamin
[1] https://elixir.bootlin.com/linux/latest/source/sound/core/pcm_dmaengine.c#L209
[2] https://www.kernel.org/doc/html/latest/driver-api/dmaengine/client.html#further-apis
[3] https://elixir.bootlin.com/linux/latest/source/sound/core/pcm_dmaengine.c#L189
[4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b8603d2a5795c42f78998e70dc792336e0dc20c9
[5] https://elixir.bootlin.com/linux/v5.8/source/sound/core/pcm_lib.c#L1875
[6] https://elixir.bootlin.com/linux/latest/source/kernel/dma/mapping.c#L306
*Failure Log from latest 5.4 LTS kernel:*
[ 535.201598] imx-sgtl5000 sound: snd_pcm_period_elapsed()
[ 535.201610] imx-sgtl5000 sound: snd_pcm_period_elapsed: calls snd_pcm_update_hw_ptr0()
[ 535.201626] imx-sdma 20ec000.sdma: sdma_tx_status channel: 2
[ 535.201640] snd_pcm_capture_avail: hw_ptr: 960, appl_ptr: 0, avail: 960, boundary: 2013265920
[ 535.201655] imx-sgtl5000 sound: snd_dmaengine_pcm_trigger command: 0
[ 535.201664] imx-sdma 20ec000.sdma: sdma_disable_channel_async channel: 2
[ 535.201672] imx-sdma 20ec000.sdma: sdma_disable_channel channel: 2
[ 535.201752] imx-sgtl5000 sound: wait_for_avail: tout=999, state=4
[ 535.201760] imx-sdma 20ec000.sdma: sdma_channel_terminate_work channel: 2
[ 535.201877] imx-sgtl5000 sound: snd_pcm_do_reset: ioctl SNDRV_PCM_IOCTL1_RESET
[ 535.201888] imx-sgtl5000 sound: snd_pcm_lib_ioctl_reset: calls snd_pcm_update_hw_ptr()
[ 535.201912] imx-sgtl5000 sound: snd_dmaengine_pcm_trigger command: 1
[ 535.201922] imx-sdma 20ec000.sdma: sdma_prep_dma_cyclic channel: 2
[ 535.201931] imx-sdma 20ec000.sdma: sdma_config_write channel: 1
[ 535.201939] imx-sdma 20ec000.sdma: sdma_config_channel channel: 2
[ 535.201948] imx-sdma 20ec000.sdma: sdma_disable_channel channel: 2
[ 535.201959] imx-sdma 20ec000.sdma: sdma_load_context channel: 2
[ 535.201967] imx-sdma 20ec000.sdma: sdma_transfer_init channel: 2
[ 535.201983] imx-sdma 20ec000.sdma: sdma_load_context channel: 2
[ 535.201995] imx-sdma 20ec000.sdma: entry 0: count: 256 dma: 0x4a300000 intr
[ 535.202005] imx-sdma 20ec000.sdma: entry 1: count: 256 dma: 0x4a300100 intr
[ 535.202014] imx-sdma 20ec000.sdma: entry 2: count: 256 dma: 0x4a300200 intr
[ 535.202023] imx-sdma 20ec000.sdma: entry 3: count: 256 dma: 0x4a300300 intr
[ 535.202033] imx-sdma 20ec000.sdma: entry 4: count: 256 dma: 0x4a300400 intr
[ 535.202042] imx-sdma 20ec000.sdma: entry 5: count: 256 dma: 0x4a300500 intr
[ 535.202050] imx-sdma 20ec000.sdma: entry 6: count: 256 dma: 0x4a300600 intr
[ 535.202059] imx-sdma 20ec000.sdma: entry 7: count: 256 dma: 0x4a300700 intr
[ 535.202067] imx-sdma 20ec000.sdma: entry 8: count: 256 dma: 0x4a300800 intr
[ 535.202077] imx-sdma 20ec000.sdma: entry 9: count: 256 dma: 0x4a300900 intr
[ 535.202086] imx-sdma 20ec000.sdma: entry 10: count: 256 dma: 0x4a300a00 intr
[ 535.202094] imx-sdma 20ec000.sdma: entry 11: count: 256 dma: 0x4a300b00 intr
[ 535.202103] imx-sdma 20ec000.sdma: entry 12: count: 256 dma: 0x4a300c00 intr
[ 535.202111] imx-sdma 20ec000.sdma: entry 13: count: 256 dma: 0x4a300d00 intr
[ 535.202120] imx-sdma 20ec000.sdma: entry 14: count: 256 dma: 0x4a300e00 wrap intr
[ 535.202135] imx-sdma 20ec000.sdma: vchan 8aa58994: txd 0a262722[8]: submitted
[ 535.202145] imx-sdma 20ec000.sdma: sdma_issue_pending channel: 2
[ 535.202181] snd_pcm_capture_avail: hw_ptr: 0, appl_ptr: 0, avail: 0, boundary: 2013265920
[ 535.202192] snd_pcm_capture_avail: hw_ptr: 0, appl_ptr: 0, avail: 0, boundary: 2013265920
[ 535.202202] imx-sgtl5000 sound: wait_for_avail: avail=0, state=3, twake=64
[ 535.203182] imx-sdma 20ec000.sdma: txd 19499aa8: freeing
[ 535.203207] imx-sdma 20ec000.sdma: txd 0a262722: freeing
[ 545.766059] imx-sgtl5000 sound: wait_for_avail: tout=0, state=3
[ 545.766075] imx-sgtl5000 sound: capture write error (DMA or IRQ trouble?)