Message-ID: <VE1PR04MB66386A43E2BCC5B758D1A71A895B0@VE1PR04MB6638.eurprd04.prod.outlook.com>
Date: Fri, 21 Aug 2020 09:52:00 +0000
From: Robin Gong <yibin.gong@....com>
To: Sascha Hauer <s.hauer@...gutronix.de>,
Lars-Peter Clausen <lars@...afoo.de>
CC: Benjamin Bara - SKIDATA <Benjamin.Bara@...data.com>,
"alsa-devel@...a-project.org" <alsa-devel@...a-project.org>,
"timur@...nel.org" <timur@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"nicoleotsuka@...il.com" <nicoleotsuka@...il.com>,
"vkoul@...nel.org" <vkoul@...nel.org>,
dl-linux-imx <linux-imx@....com>,
"kernel@...gutronix.de" <kernel@...gutronix.de>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"shawnguo@...nel.org" <shawnguo@...nel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Richard Leitner - SKIDATA <Richard.Leitner@...data.com>
Subject: RE: pcm|dmaengine|imx-sdma race condition on i.MX6
On 2020/08/20 14:52 Sascha Hauer <s.hauer@...gutronix.de> wrote:
> On Wed, Aug 19, 2020 at 01:08:29PM +0200, Lars-Peter Clausen wrote:
> > > For the first option, which is potentially more performant, we have
> > > to leave the atomic PCM context, and we are not sure if we are allowed to.
> > > For the second option, we would have to divide the dma_device
> > > terminate_all into an atomic sync variant and an async one, which would
> > > align with the dmaengine API, giving it the option to ensure termination
> > > in an atomic context. Based on my understanding, most of them are
> > > synchronous anyway; for the currently async ones we would have to
> > > implement busy waits. However, with this approach, we reach the
> > > WARN_ON [6] inside an atomic context, indicating we might not be doing
> > > the right thing.
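(Side note: that split already exists at the dmaengine API level as
dmaengine_terminate_async() plus dmaengine_synchronize(). A minimal consumer
sketch, assuming the caller eventually reaches a non-atomic context; the
function names around the calls are made up for illustration:

#include <linux/dmaengine.h>

static void stop_stream_in_trigger(struct dma_chan *chan)
{
	/*
	 * dmaengine_terminate_async() may be called from atomic
	 * context, e.g. the PCM trigger callback.
	 */
	dmaengine_terminate_async(chan);
}

static void stop_stream_cleanup(struct dma_chan *chan)
{
	/*
	 * dmaengine_synchronize() may sleep and must be called from
	 * non-atomic context, before freeing memory that the
	 * completion callbacks might still touch.
	 */
	dmaengine_synchronize(chan);
}
)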
> >
> > I don't know how feasible this is to implement in the SDMA dmaengine
> > driver. But I think what it should do is to have some flag to indicate
> > whether a terminate is in progress. If a new transfer is issued while a
> > terminate is in progress, the transfer should go on a list. Once the
> > terminate finishes, it should check the list and start the transfers if
> > there are any on the list.
>
> The list is already there in form of the vchan helpers the driver uses.
It seems Lars' major concern is the race condition between the next descriptor
and sdma_channel_terminate_work, which frees the last terminated descriptor,
not the ability of vchan to support multiple descriptors. But anyway, I think
we should take care with vchan_get_all_descriptors when freeing descriptors
during the terminate phase, in case the freeing is done in a worker such as
sdma_channel_terminate_work, since the worker may otherwise wrongly free the
next descriptor. That's what my patch does (simplified sketch below); it is
attached as 0001-dmaengine-imx-sdma-add-terminated-list-for-freed-des.patch:
https://www.spinics.net/lists/arm-kernel/msg829972.html
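
The core of that patch, roughly. This is a simplified and untested sketch; the
structs are shortened and some names are illustrative rather than exact:

#include <linux/dmaengine.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/delay.h>
#include "virt-dma.h"

struct sdma_desc {
	struct virt_dma_desc	vd;
	/* ... buffer descriptors etc. ... */
};

struct sdma_channel {
	struct virt_dma_chan	vc;
	struct sdma_desc	*desc;
	/* descriptors collected at terminate time, freed by the worker;
	 * INIT_LIST_HEAD() at channel allocation time assumed */
	struct list_head	terminated;
	struct work_struct	terminate_worker;
	/* ... */
};

static inline struct sdma_channel *to_sdma_chan(struct dma_chan *chan)
{
	return container_of(chan, struct sdma_channel, vc.chan);
}

static int sdma_terminate_all(struct dma_chan *chan)
{
	struct sdma_channel *sdmac = to_sdma_chan(chan);
	unsigned long flags;

	spin_lock_irqsave(&sdmac->vc.lock, flags);

	if (sdmac->desc) {
		vchan_terminate_vdesc(&sdmac->desc->vd);
		sdmac->desc = NULL;
		schedule_work(&sdmac->terminate_worker);
	}

	/*
	 * Move everything onto a private list now, so a descriptor
	 * issued after terminate returns can never be freed by the
	 * delayed worker below.
	 */
	vchan_get_all_descriptors(&sdmac->vc, &sdmac->terminated);

	spin_unlock_irqrestore(&sdmac->vc.lock, flags);

	return 0;
}

static void sdma_channel_terminate_work(struct work_struct *work)
{
	struct sdma_channel *sdmac = container_of(work, struct sdma_channel,
						  terminate_worker);
	unsigned long flags;
	LIST_HEAD(head);

	/* Give the interrupt handler of the aborted transfer time to run. */
	usleep_range(1000, 2000);

	spin_lock_irqsave(&sdmac->vc.lock, flags);
	list_splice_tail_init(&sdmac->terminated, &head);
	spin_unlock_irqrestore(&sdmac->vc.lock, flags);

	/* Frees only what was collected at terminate time. */
	vchan_dma_desc_free_list(&sdmac->vc, &head);
}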
>
> I think the big mistake the driver makes is to configure fields in struct
> sdma_channel and also the hardware directly in sdma_prep_memcpy(),
> sdma_prep_slave_sg() and sdma_prep_dma_cyclic(). All information should be
> stored in the struct sdma_desc allocated in the prep functions and only be used
> when it's time to fire that specific descriptor.
Sorry Sascha, that seems to be another topic. Your intention is that only
software state is touched in sdma_prep_* and that all hardware access is moved
into one function, sdma_start_desc. I agree that would make the code cleaner,
but my concern is that sdma_start_desc is protected by a spinlock, which should
be held as briefly as possible, while some hardware accesses such as the
context load may cost some time (rough sketch of my understanding below).
Anyway, that's another topic; maybe we can refine it in the future.
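
Just to confirm I read the proposal correctly, it would look roughly like
this. An untested sketch, with 'direction' assumed to move into struct
sdma_desc; to_sdma_chan()/to_sdma_desc(), sdma_transfer_init() and the
config/context helpers are meant to be the existing driver ones, matched
only loosely:

/* 'direction' moves from struct sdma_channel into struct sdma_desc. */
struct sdma_desc {
	struct virt_dma_desc		vd;
	enum dma_transfer_direction	direction;
	/* ... buffer descriptors, counts, ... */
};

static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction direction,
		unsigned long flags, void *context)
{
	struct sdma_channel *sdmac = to_sdma_chan(chan);
	struct sdma_desc *desc;

	desc = sdma_transfer_init(sdmac, direction, sg_len);
	if (!desc)
		return NULL;

	desc->direction = direction;
	/*
	 * Fill buffer descriptors here: software state only, no
	 * sdma_config_write()/sdma_config_ownership() calls, no
	 * register writes.
	 */

	return vchan_tx_prep(&sdmac->vc, &desc->vd, flags);
}

/* Called under sdmac->vc.lock. */
static void sdma_start_desc(struct sdma_channel *sdmac)
{
	struct virt_dma_desc *vd = vchan_next_desc(&sdmac->vc);
	struct sdma_desc *desc;

	if (!vd) {
		sdmac->desc = NULL;
		return;
	}
	sdmac->desc = desc = to_sdma_desc(&vd->tx);
	list_del(&vd->node);

	/*
	 * All hardware setup for this descriptor happens here. This is
	 * my concern: the context load etc. now run inside the spinlock.
	 */
	sdma_config_write(&sdmac->vc.chan, &sdmac->slave_config,
			  desc->direction);
	sdma_load_context(sdmac);
	sdma_enable_channel(sdmac->sdma, sdmac->channel);
}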
>
> More specifically, sdma_config_write() may not be called from
> sdma_prep_slave_sg() or sdma_prep_dma_cyclic(), but instead must be called
> from sdma_start_desc(). sdma_config_ownership() also must be called later
> in sdma_start_desc(). 'direction' must be a member of struct sdma_desc, not of
> struct sdma_channel.
>
> Overall this sounds like a fair amount of work to do, but should be feasible and
> IMO is a step in the right direction.
>
> Sascha
>
> --