Message-ID: <CABb+yY3TcVr=0Po0N7TLCkQ9TmMJuvQhKJj3aQ1bP9+68T+Ogw@mail.gmail.com>
Date: Tue, 25 Sep 2012 18:47:09 +0530
From: Jassi Brar <jassisinghbrar@...il.com>
To: Inderpal Singh <inderpal.singh@...aro.org>
Cc: linux-samsung-soc@...r.kernel.org, linux-kernel@...r.kernel.org,
boojin.kim@...sung.com, vinod.koul@...el.com, patches@...aro.org,
kgene.kim@...sung.com
Subject: Re: [PATCH 3/3] DMA: PL330: Balance module remove function with probe
On Tue, Sep 25, 2012 at 2:27 PM, Inderpal Singh
<inderpal.singh@...aro.org> wrote:
> Since peripheral channel resources are not being allocated at probe,
> no need to flush the channels and free the resources in remove function.
>
> Signed-off-by: Inderpal Singh <inderpal.singh@...aro.org>
> ---
> drivers/dma/pl330.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
> index 04d83e6..6f06080 100644
> --- a/drivers/dma/pl330.c
> +++ b/drivers/dma/pl330.c
> @@ -3012,16 +3012,10 @@ static int __devexit pl330_remove(struct amba_device *adev)
>
> /* Idle the DMAC */
> list_for_each_entry_safe(pch, _p, &pdmac->ddma.channels,
> - chan.device_node) {
> -
> + chan.device_node)
> /* Remove the channel */
> list_del(&pch->chan.device_node);
>
> - /* Flush the channel */
> - pl330_control(&pch->chan, DMA_TERMINATE_ALL, 0);
> - pl330_free_chan_resources(&pch->chan);
> - }
> -
> while (!list_empty(&pdmac->desc_pool)) {
> desc = list_entry(pdmac->desc_pool.next,
> struct dma_pl330_desc, node);
I am not sure about this patch. DMA_TERMINATE_ALL is only issued
by the client, so if the pl330 module is force-unloaded while some
client still has transfers queued, we would have to issue
DMA_TERMINATE_ALL manually ourselves.
A better option could be to simply fail pl330_remove() if some client
is still queued on the DMAC.
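
The "fail removal while busy" idea could look roughly like the sketch
below. This is not actual pl330.c code; it is a simplified userspace
model with invented names (struct dmac, struct chan, dmac_remove) that
only illustrates the pattern of returning -EBUSY from the remove path
instead of tearing down channels a client still holds.

```c
#include <assert.h>
#include <stddef.h>

#define EBUSY 16

/* Hypothetical stand-ins for the driver's channel bookkeeping. */
struct chan {
	int in_use;          /* a client currently holds this channel */
	struct chan *next;   /* next channel on the controller */
};

struct dmac {
	struct chan *channels;
};

/*
 * Refuse to remove the controller while any channel is in use,
 * mirroring the suggestion to fail pl330_remove() when a client
 * is still queued on the DMAC.
 */
static int dmac_remove(struct dmac *d)
{
	struct chan *c;

	for (c = d->channels; c; c = c->next)
		if (c->in_use)
			return -EBUSY;

	/* No users left: safe to unlink and free channels here. */
	d->channels = NULL;
	return 0;
}
```

With this shape, a forced unload simply bounces with -EBUSY until the
last client releases its channel, so no manual DMA_TERMINATE_ALL is
needed in the remove path.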
-jassi