Message-ID: <c7e4d99a-f02f-e7a2-a4c2-81496ee54d24@nvidia.com>
Date: Wed, 19 Jun 2019 13:22:14 +0100
From: Jon Hunter <jonathanh@...dia.com>
To: Dmitry Osipenko <digetx@...il.com>,
Ben Dooks <ben.dooks@...ethink.co.uk>,
Laxman Dewangan <ldewangan@...dia.com>,
Vinod Koul <vkoul@...nel.org>,
Thierry Reding <thierry.reding@...il.com>
CC: <dmaengine@...r.kernel.org>, <linux-tegra@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] dmaengine: tegra-apb: Support per-burst residue granularity

On 19/06/2019 12:10, Dmitry Osipenko wrote:
> 19.06.2019 13:55, Jon Hunter wrote:
>>
>> On 19/06/2019 11:27, Dmitry Osipenko wrote:
>>> 19.06.2019 13:04, Jon Hunter wrote:
>>>>
>>>> On 19/06/2019 00:27, Dmitry Osipenko wrote:
>>>>> 19.06.2019 1:22, Ben Dooks wrote:
>>>>>> On 13/06/2019 22:08, Dmitry Osipenko wrote:
>>>>>>> Tegra's APB DMA engine updates its words counter after each transferred
>>>>>>> burst of data, hence it can report the transfer's residual with more
>>>>>>> fidelity, which may be required in cases like audio playback. In
>>>>>>> particular, this fixes audio stuttering during playback in the Chromium
>>>>>>> web browser. The patch is based on the original work by Ben Dooks [1].
>>>>>>> It was tested on Tegra20 and Tegra30 devices.
>>>>>>>
>>>>>>> [1] https://lore.kernel.org/lkml/20190424162348.23692-1-ben.dooks@codethink.co.uk/
>>>>>>>
>>>>>>> Inspired-by: Ben Dooks <ben.dooks@...ethink.co.uk>
>>>>>>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>>>>>>> ---
>>>>>>> drivers/dma/tegra20-apb-dma.c | 35 ++++++++++++++++++++++++++++-------
>>>>>>> 1 file changed, 28 insertions(+), 7 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
>>>>>>> index 79e9593815f1..c5af8f703548 100644
>>>>>>> --- a/drivers/dma/tegra20-apb-dma.c
>>>>>>> +++ b/drivers/dma/tegra20-apb-dma.c
>>>>>>> @@ -797,12 +797,36 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
>>>>>>> return 0;
>>>>>>> }
>>>>>>> +static unsigned int tegra_dma_update_residual(struct tegra_dma_channel *tdc,
>>>>>>> + struct tegra_dma_sg_req *sg_req,
>>>>>>> + struct tegra_dma_desc *dma_desc,
>>>>>>> + unsigned int residual)
>>>>>>> +{
>>>>>>> + unsigned long status, wcount = 0;
>>>>>>> +
>>>>>>> + if (!list_is_first(&sg_req->node, &tdc->pending_sg_req))
>>>>>>> + return residual;
>>>>>>> +
>>>>>>> + if (tdc->tdma->chip_data->support_separate_wcount_reg)
>>>>>>> + wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
>>>>>>> +
>>>>>>> + status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
>>>>>>> +
>>>>>>> + if (!tdc->tdma->chip_data->support_separate_wcount_reg)
>>>>>>> + wcount = status;
>>>>>>> +
>>>>>>> + if (status & TEGRA_APBDMA_STATUS_ISE_EOC)
>>>>>>> + return residual - sg_req->req_len;
>>>>>>> +
>>>>>>> + return residual - get_current_xferred_count(tdc, sg_req, wcount);
>>>>>>> +}
>>>>>>
>>>>>> I am unfortunately nowhere near my notes, so can't completely
>>>>>> review this. I think the complexity of my patch series is due
>>>>>> to an issue with the count being updated before the EOC IRQ
>>>>>> is actually flagged (and most definitely before it gets to the
>>>>>> CPU IRQ handler).
>>>>>>
>>>>>> The test system I was using, which I've not really got any
>>>>>> access to at the moment, would show these inconsistent internal
>>>>>> states every few hours; however, it was moving 48 kHz 8-channel
>>>>>> 16-bit TDM data.
>>>>>>
>>>>>> Thanks for looking into this, I am not sure if I am going to
>>>>>> get any time to look into this within the next couple of
>>>>>> months.
>>>>>
>>>>> I'll try to add some debug checks to try to catch the case where the count is updated
>>>>> before EOC is set. Thank you very much for the clarification of the problem. So far I
>>>>> haven't spotted anything going wrong.
>>>>>
>>>>> Jon / Laxman, are you aware of the possibility of such an inconsistency between the words
>>>>> count and EOC? Assuming the cyclic transfer mode.
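
For illustration only, a debug check along the lines Dmitry mentions could be
sketched as below, reusing the helpers visible in the patch above (tdc_read,
get_current_xferred_count); the function itself and the static last_xferred
slot are hypothetical, not tested code:

static void tegra_dma_debug_check_wcount(struct tegra_dma_channel *tdc,
					 struct tegra_dma_sg_req *sg_req)
{
	/* Sketch only: a single static slot, so this only covers one channel. */
	static unsigned int last_xferred;
	unsigned long status, wcount = 0;
	unsigned int xferred;

	if (tdc->tdma->chip_data->support_separate_wcount_reg)
		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);

	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);

	if (!tdc->tdma->chip_data->support_separate_wcount_reg)
		wcount = status;

	xferred = get_current_xferred_count(tdc, sg_req, wcount);

	/*
	 * Within one period the transferred count should only grow; if it
	 * went backwards while EOC is still clear, the counter was reloaded
	 * before the completion flag became visible.
	 */
	if (!(status & TEGRA_APBDMA_STATUS_ISE_EOC))
		WARN_ON_ONCE(xferred < last_xferred);

	last_xferred = (status & TEGRA_APBDMA_STATUS_ISE_EOC) ? 0 : xferred;
}
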
>>>>
>>>> I can't say that I am. However, for the case of a cyclic transfer, given
>>>> that the next transfer is always programmed into the registers before
>>>> the last one completes, I could see that, by the time the interrupt is
>>>> serviced, the DMA has moved on to the next transfer (which I assume
>>>> would reset the count).
>>>>
>>>> Interestingly, our downstream kernel implemented a change [0] to avoid
>>>> the count appearing to move backwards. I am curious if this also works,
>>>> as it would be a lot simpler than what Ben has implemented and may
>>>> mitigate the race condition that Ben is describing.
>>>>
>>>> Cheers
>>>> Jon
>>>>
>>>> [0]
>>>> https://nv-tegra.nvidia.com/gitweb/?p=linux-4.4.git;a=commit;h=c7bba40c6846fbf3eaad35c4472dcc7d8bbc02e5
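
For illustration, the gist of that backwards check, as Jon describes it above,
might look something like the sketch below; this is not the linked commit's
code, and both the helper name and the last_residual field are hypothetical
(a real version would also need to reset last_residual when a period
completes):

static unsigned int tegra_dma_clamp_residual(struct tegra_dma_desc *dma_desc,
					     unsigned int residual)
{
	/*
	 * Never report a position that has moved backwards within a period.
	 * last_residual is a hypothetical field, not part of the driver.
	 */
	if (residual > dma_desc->last_residual)
		return dma_desc->last_residual;

	dma_desc->last_residual = residual;
	return residual;
}
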
>>>>
>>>
>>> The downstream patch doesn't check for EOC and has no comments about it, so it's hard to
>>> tell whether that's intentional. Secondly, it looks like the downstream patch is mucked up
>>> because it doesn't check whether the dma_desc is *the active* transfer rather than a
>>> pending one!
>>
>> I agree that it should check to see if it is active. I assume that what
>> this patch is doing is not updating the DMA position if it appears to
>> have gone backwards, implying we have moved on to the next buffer. Yes,
>> this is still probably not as accurate as Ben's implementation, because
>> most likely we have finished that transfer and this patch would report
>> that it is not quite finished.
>>
>> If Ben's patch works for you then why not go with this?
>
> Because I'm doubtful that this is really the case and not something else. It would be very odd
> if the hardware updated the words count and set EOC asynchronously; I'd call that a faulty
> design and thus a bug that needs to be worked around in software, if that's really happening.
I don't see it that way. Probably, as soon as the EOC happens, if there
is another transfer queued up, the next transfer will start and the count
gets reset. So if you happen to asynchronously read the count at the
very end of the transfer, then it is possible you are doing so at the
same time that the EOC occurs, but before the ISR has been triggered.
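
If that is what happens, one way to narrow the window, sketched here purely as
an assumption and only for the separate word-count register case, is to sample
the status on both sides of the counter read and treat the period as complete
if EOC becomes visible at either point:

	/* Sketch only, reusing the names from the patch quoted above. */
	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
	wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
	status |= tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);

	/* EOC seen before or after sampling the counter: the period is done. */
	if (status & TEGRA_APBDMA_STATUS_ISE_EOC)
		return residual - sg_req->req_len;

	return residual - get_current_xferred_count(tdc, sg_req, wcount);
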
Jon
--
nvpublic