Message-ID: <1314619909.1606.130.camel@vkoul-udesk3>
Date: Mon, 29 Aug 2011 17:41:49 +0530
From: Vinod Koul <vinod.koul@...ux.intel.com>
To: Guennadi Liakhovetski <g.liakhovetski@....de>
Cc: linux-kernel@...r.kernel.org, linux-sh@...r.kernel.org,
Dan Williams <dan.j.williams@...el.com>,
Paul Mundt <lethal@...ux-sh.org>
Subject: Re: [PATCH] dma: shdma: transfer based runtime PM
On Fri, 2011-08-26 at 01:11 +0200, Guennadi Liakhovetski wrote:
> On Thu, 25 Aug 2011, Koul, Vinod wrote:
> > Won't it be easy to do:
> > - pm_runtime_get() in each submit
> > - pm_runtime_put() in each callback
> > The normal case above would work just fine
> > - In the terminate case, count the number of issued transactions, and call
> >   pm_runtime_put() for each cancelled transaction
> >   (I am assuming that for each timeout error, the client will call
> >   terminate)
>
> As I said, it won't be very easy to do this in a robust way. You'd have
> to scan your list of DMA blocks and see which of them belong to one
> descriptor, and once you reach the end of that descriptor, issue a put().
> Perhaps this can be done, but my choice went to the currently presented
> solution.
If you count the number of descriptors in your submitted list and call
_put for each, I see no reason why it won't be simpler and better than
the current approach.
Something like:

/* Since the callback is set only for the last descriptor of a chain,
 * call runtime put for that descriptor alone.
 */
list_for_each_entry_safe(desc, __desc, &sh_chan->ld_queue, node) {
	if (desc->async_tx.callback)
		pm_runtime_put(device);
}
If I read shdma correctly, descriptors are put into the channel's
ld_queue, so any pending descriptors should be checked in that list
alone. For the normal case you again check for the callback to decide
whether pm_runtime_put() needs to be called.
--
~Vinod