Message-Id: <20160301234535.959931468@linuxfoundation.org>
Date: Tue, 01 Mar 2016 23:55:07 +0000
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: <linux-kernel@...r.kernel.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
<stable@...r.kernel.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Mans Rullgard <mans@...sr.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vinod Koul <vinod.koul@...el.com>
Subject: [PATCH 4.4 249/342] dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
commit ee1cdcdae59563535485a5f56ee72c894ab7d7ad upstream.
Commit 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
re-enabled BLOCK interrupts in order to make cyclic transfers work. However,
this change introduced a regression for non-cyclic transfers: under stress
testing the interrupt counters grew enormously (roughly one interrupt per
4-5 bytes in the UART loopback test).
Taking the above into consideration, enable BLOCK interrupts if and only if
the channel is programmed to perform a cyclic transfer.
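In outline, the interrupt policy after this change is: unmask XFER and ERROR
unconditionally, and unmask BLOCK only for cyclic channels. The sketch below
is a simplified illustration, not the driver's actual code; the struct and
field names are hypothetical stand-ins for the real driver's
channel_set_bit() calls on MASK.XFER/MASK.BLOCK/MASK.ERROR.

#include <stdbool.h>

/* Hypothetical per-channel IRQ mask state, for illustration only. */
struct chan {
	bool cyclic;	/* channel programmed for a cyclic transfer? */
	bool xfer_irq;	/* transfer-complete IRQ unmasked */
	bool block_irq;	/* per-block IRQ unmasked */
	bool error_irq;	/* error IRQ unmasked */
};

static void chan_enable_irqs(struct chan *c)
{
	c->xfer_irq  = true;
	c->error_irq = true;
	/*
	 * Unmask BLOCK if and only if the transfer is cyclic; otherwise
	 * a non-cyclic transfer would take one interrupt per hardware
	 * block, as seen in the UART loopback stress test.
	 */
	c->block_irq = c->cyclic;
}

(A cyclic channel then re-arms BLOCK from its interrupt handler, as the
dwc_handle_cyclic() hunk below shows.)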
Fixes: 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Acked-by: Mans Rullgard <mans@...sr.com>
Tested-by: Mans Rullgard <mans@...sr.com>
Acked-by: Viresh Kumar <viresh.kumar@...aro.org>
Signed-off-by: Vinod Koul <vinod.koul@...el.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/dma/dw/core.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -156,7 +156,6 @@ static void dwc_initialize(struct dw_dma
 
 	/* Enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dwc->mask);
-	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 	channel_set_bit(dw, MASK.ERROR, dwc->mask);
 
 	dwc->initialized = true;
@@ -588,6 +587,9 @@ static void dwc_handle_cyclic(struct dw_
 
 		spin_unlock_irqrestore(&dwc->lock, flags);
 	}
+
+	/* Re-enable interrupts */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 }
 
 /* ------------------------------------------------------------------------- */
@@ -618,11 +620,8 @@ static void dw_dma_tasklet(unsigned long
 			dwc_scan_descriptors(dw, dwc);
 	}
 
-	/*
-	 * Re-enable interrupts.
-	 */
+	/* Re-enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
-	channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
 	channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
 }
 
@@ -1256,6 +1255,7 @@ static void dwc_free_chan_resources(stru
 int dw_dma_cyclic_start(struct dma_chan *chan)
 {
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
+	struct dw_dma		*dw = to_dw_dma(chan->device);
 	unsigned long		flags;
 
 	if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) {
@@ -1264,7 +1264,12 @@ int dw_dma_cyclic_start(struct dma_chan
 	}
 
 	spin_lock_irqsave(&dwc->lock, flags);
+
+	/* Enable interrupts to perform cyclic transfer */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
+
 	dwc_dostart(dwc, dwc->cdesc->desc[0]);
+
 	spin_unlock_irqrestore(&dwc->lock, flags);
 
 	return 0;