Date:   Thu, 4 Jul 2019 13:49:56 +0300
From:   Dmitry Osipenko <digetx@...il.com>
To:     Jon Hunter <jonathanh@...dia.com>,
        Laxman Dewangan <ldewangan@...dia.com>,
        Vinod Koul <vkoul@...nel.org>,
        Thierry Reding <thierry.reding@...il.com>,
        Ben Dooks <ben.dooks@...ethink.co.uk>
Cc:     dmaengine@...r.kernel.org, linux-tegra@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] dmaengine: tegra-apb: Support per-burst residue
 granularity

04.07.2019 10:10, Jon Hunter wrote:
> 
> On 03/07/2019 18:00, Dmitry Osipenko wrote:
>> 03.07.2019 19:37, Jon Hunter wrote:
>>>
>>> On 03/07/2019 02:28, Dmitry Osipenko wrote:
>>>> Tegra's APB DMA engine updates words counter after each transferred burst
>>>> of data, hence it can report transfer's residual with more fidelity which
>>>> may be required in cases like audio playback. In particular this fixes
>>>> audio stuttering during playback in a chromium web browser. The patch is
>>>> based on the original work that was made by Ben Dooks and a patch from
>>>> downstream kernel. It was tested on Tegra20 and Tegra30 devices.
>>>>
>>>> Link: https://lore.kernel.org/lkml/20190424162348.23692-1-ben.dooks@codethink.co.uk/
>>>> Link: https://nv-tegra.nvidia.com/gitweb/?p=linux-4.4.git;a=commit;h=c7bba40c6846fbf3eaad35c4472dcc7d8bbc02e5
>>>> Inspired-by: Ben Dooks <ben.dooks@...ethink.co.uk>
>>>> Reviewed-by: Jon Hunter <jonathanh@...dia.com>
>>>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>>>> ---
>>>>
>>>> Changelog:
>>>>
>>>> v4: The words_xferred is now also reset on a new iteration of a cyclic
>>>>     transfer by ISR, so that dmaengine_tx_status() won't produce a
>>>>     misleading warning splat on TX status re-checking after a cycle
>>>>     completion when cyclic transfer consists of a single SG.
>>>>
>>>> v3: Added workaround for a hardware design shortcoming that results
>>>>     in a words counter wraparound before end-of-transfer bit is set
>>>>     in a cyclic mode.
>>>>
>>>> v2: Addressed review comments made by Jon Hunter to v1. We won't try
>>>>     to get words count if dma_desc is on free list as it will result
>>>>     in a NULL dereference because this case wasn't handled properly.
>>>>
>>>>     The residual value is now updated properly, avoiding potential
>>>>     integer overflow by adding the "bytes" to the "bytes_transferred"
>>>>     instead of the subtraction.
>>>>
>>>>  drivers/dma/tegra20-apb-dma.c | 72 +++++++++++++++++++++++++++++++----
>>>>  1 file changed, 65 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
>>>> index 79e9593815f1..148d136191d7 100644
>>>> --- a/drivers/dma/tegra20-apb-dma.c
>>>> +++ b/drivers/dma/tegra20-apb-dma.c
>>>> @@ -152,6 +152,7 @@ struct tegra_dma_sg_req {
>>>>  	bool				last_sg;
>>>>  	struct list_head		node;
>>>>  	struct tegra_dma_desc		*dma_desc;
>>>> +	unsigned int			words_xferred;
>>>>  };
>>>>  
>>>>  /*
>>>> @@ -496,6 +497,7 @@ static void tegra_dma_configure_for_next(struct tegra_dma_channel *tdc,
>>>>  	tdc_write(tdc, TEGRA_APBDMA_CHAN_CSR,
>>>>  				nsg_req->ch_regs.csr | TEGRA_APBDMA_CSR_ENB);
>>>>  	nsg_req->configured = true;
>>>> +	nsg_req->words_xferred = 0;
>>>>  
>>>>  	tegra_dma_resume(tdc);
>>>>  }
>>>> @@ -511,6 +513,7 @@ static void tdc_start_head_req(struct tegra_dma_channel *tdc)
>>>>  					typeof(*sg_req), node);
>>>>  	tegra_dma_start(tdc, sg_req);
>>>>  	sg_req->configured = true;
>>>> +	sg_req->words_xferred = 0;
>>>>  	tdc->busy = true;
>>>>  }
>>>>  
>>>> @@ -638,6 +641,8 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
>>>>  		list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
>>>>  	dma_desc->cb_count++;
>>>>  
>>>> +	sgreq->words_xferred = 0;
>>>> +
>>>>  	/* If not last req then put at end of pending list */
>>>>  	if (!list_is_last(&sgreq->node, &tdc->pending_sg_req)) {
>>>>  		list_move_tail(&sgreq->node, &tdc->pending_sg_req);
>>>> @@ -797,6 +802,62 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
>>>>  	return 0;
>>>>  }
>>>>  
>>>> +static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,
>>>> +					       struct tegra_dma_sg_req *sg_req)
>>>> +{
>>>> +	unsigned long status, wcount = 0;
>>>> +
>>>> +	if (!list_is_first(&sg_req->node, &tdc->pending_sg_req))
>>>> +		return 0;
>>>> +
>>>> +	if (tdc->tdma->chip_data->support_separate_wcount_reg)
>>>> +		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
>>>> +
>>>> +	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
>>>> +
>>>> +	if (!tdc->tdma->chip_data->support_separate_wcount_reg)
>>>> +		wcount = status;
>>>> +
>>>> +	if (status & TEGRA_APBDMA_STATUS_ISE_EOC)
>>>> +		return sg_req->req_len;
>>>> +
>>>> +	wcount = get_current_xferred_count(tdc, sg_req, wcount);
>>>> +
>>>> +	if (!wcount) {
>>>> +		/*
>>>> +		 * If wcount wasn't ever polled for this SG before, then
>>>> +		 * simply assume that transfer hasn't started yet.
>>>> +		 *
>>>> +		 * Otherwise it's the end of the transfer.
>>>> +		 *
>>>> +		 * The alternative would be to poll the status register
>>>> +		 * until EOC bit is set or wcount goes UP. That's so
>>>> +		 * because EOC bit is getting set only after the last
>>>> +		 * burst's completion and counter is less than the actual
>>>> +		 * transfer size by 4 bytes. The counter value wraps around
>>>> +		 * in a cyclic mode before EOC is set(!), so we can't easily
>>>> +		 * distinguish start of transfer from its end.
>>>> +		 */
>>>> +		if (sg_req->words_xferred)
>>>> +			wcount = sg_req->req_len - 4;
>>>> +
>>>> +	} else if (wcount < sg_req->words_xferred) {
>>>> +		/*
>>>> +		 * This case shall not ever happen because EOC bit
>>>> +		 * must be set once next cyclic transfer is started.
>>>
>>> Should this still be cyclic here?
>>
>> Do you mean the "comment" by "here"?
>>
>> It will be absolutely terrible if this case happens for oneshot transfer, assume
>> kernel/hardware is on fire.
> 
> Or more likely a SW bug :-)
> 
> Yes should never happen for either sg or cyclic, but there is no mention
> of sg transfers. Maybe the sg case is more obvious but in general this
> case should never happen for any transfer.

Alright, so what is the change you are proposing? Or is it fine now?

I can certainly change the comment's wording; just tell me what you want it to be.

/*
 * This case shall not ever happen because EOC bit
 * must be set once transfer is actually finished.

Does this sound better?
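
For context (not part of the patch or the thread above), here is a minimal sketch of
where a dmaengine client observes the improved residue granularity. The helper name
and its parameters are made up for illustration; only dmaengine_tx_status(), struct
dma_tx_state and the related types are the real dmaengine API:

#include <linux/dmaengine.h>

/*
 * Hypothetical helper: report the current position inside a cyclic DMA
 * buffer of buf_size bytes. With per-burst residue reporting, state.residue
 * moves in burst-sized steps while a segment is still in flight, instead of
 * jumping only at SG/period boundaries.
 */
static size_t query_cyclic_position(struct dma_chan *chan,
				    dma_cookie_t cookie, size_t buf_size)
{
	struct dma_tx_state state;
	enum dma_status status;

	status = dmaengine_tx_status(chan, cookie, &state);
	if (status == DMA_ERROR)
		return 0;

	/* ALSA-style clients derive the playback pointer from the residue. */
	return buf_size - state.residue;
}

The finer-grained residue is what the commit message refers to for audio: the
pointer seen by the audio stack advances between period interrupts instead of
only at segment completion, which avoids the reported playback stuttering.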
