Message-ID: <607af3f8-9fb2-da00-1867-5ab59ce9d3e8@gmail.com>
Date: Sat, 10 Sep 2022 21:57:20 +0300
From: Péter Ujfalusi <peter.ujfalusi@...il.com>
To: Vaishnav Achath <vaishnav.a@...com>, vkoul@...nel.org,
broonie@...nel.org, dmaengine@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-spi@...r.kernel.org
Cc: vigneshr@...com, kishon@...com
Subject: Re: [PATCH 1/2] dmaengine: ti: k3-udma: Respond TX done if
DMA_PREP_INTERRUPT is not requested
On 05/09/2022 06:02, Vaishnav Achath wrote:
>> Let me think about it over the weekend... Do you have performance
>> numbers for this change?
>>
> Thank you, yes, we tested mainly the SPI cases (Master and Slave mode);
> there we saw a peak delay of 400 ms for transaction completion, and it
> varied with CPU load. After adding the patch to not wait for DMA TX
> completion and to use the EOW interrupt instead, the peak latency was
> reduced to 2 ms.
Thank you for the details.
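To make it explicit what that means on the client side, here is a minimal
sketch, assuming the usual dmaengine slave API; the helper name
queue_tx_no_irq() and its parameters are only illustrative, the point is
that the client leaves DMA_PREP_INTERRUPT out of the prep flags and
relies on the peripheral's own EOW interrupt for completion:

#include <linux/dmaengine.h>

/*
 * Illustrative only: queue a MEM_TO_DEV transfer without
 * DMA_PREP_INTERRUPT, so the k3-udma driver does not need to wait for
 * the peer byte counter and the peripheral itself (e.g. the SPI EOW
 * interrupt) signals completion.
 */
static int queue_tx_no_irq(struct dma_chan *chan, dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *desc;

	/* DMA_CTRL_ACK only; DMA_PREP_INTERRUPT deliberately omitted */
	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_CTRL_ACK);
	if (!desc)
		return -EIO;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}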
>> If we make sure that this only affects non-cyclic transfers, with an
>> in-code comment to explain the expectations from the user, I think this
>> can be safe.
>>
> Sure, I will add this in the next revision.
You can add my Acked-by when you send the next version:
Acked-by: Peter Ujfalusi <peter.ujfalusi@...il.com>
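For the in-code comment, something along these lines would work (only a
sketch of the intent, the exact wording is up to you; it would go with
the check in udma_is_desc_really_done()):

	/*
	 * If the client did not ask for DMA_PREP_INTERRUPT it is
	 * expected to have its own way of knowing when the data has
	 * reached the peripheral (for example the SPI EOW interrupt),
	 * so do not wait for the peer byte counter to catch up. This
	 * shortcut is only meant for non-cyclic MEM_TO_DEV slave
	 * transfers.
	 */
	if (uc->config.ep_type == PSIL_EP_NATIVE ||
	    uc->config.dir != DMA_MEM_TO_DEV ||
	    !(uc->config.tx_flags & DMA_PREP_INTERRUPT))
		return true;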
>>>
>>>>>
>>>>> Signed-off-by: Vaishnav Achath <vaishnav.a@...com>
>>>>> ---
>>>>> drivers/dma/ti/k3-udma.c | 5 ++++-
>>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
>>>>> index 39b330ada200..03d579068453 100644
>>>>> --- a/drivers/dma/ti/k3-udma.c
>>>>> +++ b/drivers/dma/ti/k3-udma.c
>>>>> @@ -263,6 +263,7 @@ struct udma_chan_config {
>>>>> enum udma_tp_level channel_tpl; /* Channel Throughput Level */
>>>>> u32 tr_trigger_type;
>>>>> + unsigned long tx_flags;
>>>>> /* PKDMA mapped channel */
>>>>> int mapped_channel_id;
>>>>> @@ -1057,7 +1058,7 @@ static bool udma_is_desc_really_done(struct udma_chan *uc, struct udma_desc *d)
>>>>> /* Only TX towards PDMA is affected */
>>>>> if (uc->config.ep_type == PSIL_EP_NATIVE ||
>>>>> - uc->config.dir != DMA_MEM_TO_DEV)
>>>>> + uc->config.dir != DMA_MEM_TO_DEV || !(uc->config.tx_flags & DMA_PREP_INTERRUPT))
>>>>> return true;
>>>>> peer_bcnt = udma_tchanrt_read(uc, UDMA_CHAN_RT_PEER_BCNT_REG);
>>>>> @@ -3418,6 +3419,8 @@ udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>>>>> if (!burst)
>>>>> burst = 1;
>>>>> + uc->config.tx_flags = tx_flags;
>>>>> +
>>>>> if (uc->config.pkt_mode)
>>>>> d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags, context);
>>>>
>>>
>>
>
--
Péter