Message-ID: <4F9626AF.9030203@st.com>
Date: Tue, 24 Apr 2012 09:36:07 +0530
From: Viresh Kumar <viresh.kumar@...com>
To: Vinod Koul <vinod.koul@...ux.intel.com>
Cc: "ciminaghi@...dd.com" <ciminaghi@...dd.com>,
Linus Walleij <linus.walleij@...aro.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dan.j.williams@...el.com" <dan.j.williams@...el.com>,
"pgeninatti@...t-in.com" <pgeninatti@...t-in.com>,
"acolosimo@...t-in.com" <acolosimo@...t-in.com>,
"alarosa@...nintellect.eu" <alarosa@...nintellect.eu>
Subject: Re: [PATCH] dmaengine/amba-pl08x : reset phychan_hold on terminate all
On 4/23/2012 6:12 PM, Vinod Koul wrote:
> On Thu, 2012-04-19 at 12:20 +0200, ciminaghi@...dd.com wrote:
>> > From: Davide Ciminaghi <ciminaghi@...dd.com>
>> >
>> > When a client calls pl08x_control with DMA_TERMINATE_ALL, it is correct
>> > to terminate and release the physical channel currently in use (if any),
>> > but the phychan_hold counter must also be reset, otherwise it can be
>> > left in an unbalanced state.
>> >
>> > Signed-off-by: Davide Ciminaghi <ciminaghi@...dd.com>
>> > ---
>> > drivers/dma/amba-pl08x.c | 1 +
>> > 1 files changed, 1 insertions(+), 0 deletions(-)
>> >
>> > diff --git a/drivers/dma/amba-pl08x.c b/drivers/dma/amba-pl08x.c
>> > index c301a8e..3d704ab 100644
>> > --- a/drivers/dma/amba-pl08x.c
>> > +++ b/drivers/dma/amba-pl08x.c
>> > @@ -1429,6 +1429,7 @@ static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
>> > * signal
>> > */
>> > release_phy_channel(plchan);
>> > + plchan->phychan_hold = 0;
>> > }
>> > /* Dequeue jobs and free LLIs */
>> > if (plchan->at) {
> Linus, Viresh... any Tested-by for this before I apply it to fixes?
Looks good.
Reviewed-by: Viresh Kumar <viresh.kumar@...com>
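
For anyone who wants to exercise this path, here is a minimal client-side
sketch (example_abort_transfers() and its caller are hypothetical, not part
of the patch): dmaengine_terminate_all() ends up in the driver's
device_control hook, i.e. pl08x_control() with DMA_TERMINATE_ALL, which is
where phychan_hold could previously be left unbalanced.

/*
 * Hypothetical usage sketch, not part of this patch: abort all pending
 * transfers on a slave channel that was obtained via dma_request_channel()
 * and configured with dmaengine_slave_config().
 */
#include <linux/dmaengine.h>

static void example_abort_transfers(struct dma_chan *chan)
{
	/* Issues DMA_TERMINATE_ALL to the pl08x driver's control hook. */
	dmaengine_terminate_all(chan);
}
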
--
viresh