Message-Id: <1457391064-6660-164-git-send-email-kamal@canonical.com>
Date: Mon, 7 Mar 2016 14:49:14 -0800
From: Kamal Mostafa <kamal@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...ts.ubuntu.com
Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Vinod Koul <vinod.koul@...el.com>,
Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 4.2.y-ckt 163/273] dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer
4.2.8-ckt5 -stable review patch. If anyone has any objections, please let me know.
---8<------------------------------------------------------------
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
commit ee1cdcdae59563535485a5f56ee72c894ab7d7ad upstream.
Commit 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks") re-enabled BLOCK interrupts in order to make cyclic transfers work. However, that change is a regression for non-cyclic transfers: under stress testing the interrupt counters grew enormously (roughly one interrupt per 4-5 bytes in the UART loopback test).

Taking the above into consideration, enable BLOCK interrupts if and only if the channel is programmed to perform a cyclic transfer.
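In short, after this change the BLOCK interrupt is unmasked only on the cyclic path. As a reading aid (not part of the patch itself), a condensed view of where the mask bit is now touched, using the same calls as the diff below:

	/* dwc_initialize(): only XFER and ERROR are unmasked for every channel */
	channel_set_bit(dw, MASK.XFER, dwc->mask);
	channel_set_bit(dw, MASK.ERROR, dwc->mask);

	/* dw_dma_cyclic_start(): unmask BLOCK only when a cyclic transfer is started */
	channel_set_bit(dw, MASK.BLOCK, dwc->mask);

	/* dwc_handle_cyclic(): re-enable BLOCK after the cyclic interrupt is handled */
	channel_set_bit(dw, MASK.BLOCK, dwc->mask);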
Fixes: 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Acked-by: Mans Rullgard <mans@...sr.com>
Tested-by: Mans Rullgard <mans@...sr.com>
Acked-by: Viresh Kumar <viresh.kumar@...aro.org>
Signed-off-by: Vinod Koul <vinod.koul@...el.com>
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
drivers/dma/dw/core.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index a00d72f..f1c9e21 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -156,7 +156,6 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
 
 	/* Enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dwc->mask);
-	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 	channel_set_bit(dw, MASK.ERROR, dwc->mask);
 
 	dwc->initialized = true;
@@ -588,6 +587,9 @@ static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
 
 		spin_unlock_irqrestore(&dwc->lock, flags);
 	}
+
+	/* Re-enable interrupts */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 }
 
 /* ------------------------------------------------------------------------- */
@@ -618,11 +620,8 @@ static void dw_dma_tasklet(unsigned long data)
 			dwc_scan_descriptors(dw, dwc);
 	}
 
-	/*
-	 * Re-enable interrupts.
-	 */
+	/* Re-enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
-	channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
 	channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
 }
 
@@ -1256,6 +1255,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
 int dw_dma_cyclic_start(struct dma_chan *chan)
 {
 	struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+	struct dw_dma *dw = to_dw_dma(chan->device);
 	unsigned long flags;
 
 	if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) {
@@ -1264,7 +1264,12 @@ int dw_dma_cyclic_start(struct dma_chan *chan)
 	}
 
 	spin_lock_irqsave(&dwc->lock, flags);
+
+	/* Enable interrupts to perform cyclic transfer */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
+
 	dwc_dostart(dwc, dwc->cdesc->desc[0]);
+
 	spin_unlock_irqrestore(&dwc->lock, flags);
 
 	return 0;
--
2.7.0