Message-Id: <20150919171748.638925350@linuxfoundation.org>
Date: Sat, 19 Sep 2015 10:28:23 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Krzysztof Kozlowski <k.kozlowski@...sung.com>,
Robert Baldyga <r.baldyga@...sung.com>
Subject: [PATCH 4.1 071/102] serial: samsung: fix DMA for FIFO smaller than cache line size
4.1-stable review patch. If anyone has any objections, please let me know.
------------------
From: Robert Baldyga <r.baldyga@...sung.com>
commit 736cd79f483fd7a1e0b71e6eaddf01d8d87fbbbb upstream.
So far, DMA mode was activated only when the number of bytes to send was
equal to or greater than min_dma_size. Because the DMA transfer buffer has
to be aligned to the cache line size, the excess bytes were written to the
FIFO before starting the DMA transaction. The problem occurred when the
FIFO was smaller than the cache alignment, because writing all of the
excess bytes to the FIFO could fail. This happened in DMA mode with PIO
interrupts disabled, which caused the driver to hang.
The solution is to check whether the buffer is aligned to the cache line
size before activating DMA mode and, if it is not, to run PIO mode to
align the buffer and only then start the DMA transaction. In PIO mode,
with interrupts enabled, lack of space in the FIFO is not a problem, so
aligning the buffer always completes successfully.
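
[Editorial aside, not part of the patch: the decision described above boils
down to a small check on the circular-buffer tail. Below is a minimal,
self-contained user-space C sketch of that check; the cache-alignment and
min_dma_size values and the helper name pick_tx_mode() are made-up
assumptions for illustration only, not values or code from the driver.]

    #include <stdio.h>

    /* Hypothetical values, for illustration only. */
    #define CACHE_ALIGN   64u  /* pretend dma_get_cache_alignment() returned 64 */
    #define MIN_DMA_SIZE  16u  /* pretend ourport->min_dma_size is 16 */

    /*
     * Rough mirror of the check the patch adds to
     * s3c24xx_serial_start_next_tx(): fall back to PIO when there is too
     * little data for DMA or when the circular-buffer tail is not aligned
     * to the cache line size.  (The real code also checks that a DMA
     * channel exists.)
     */
    static const char *pick_tx_mode(unsigned int tail, unsigned int count)
    {
            if (count < MIN_DMA_SIZE)
                    return "PIO (too little data for DMA)";
            if (tail & (CACHE_ALIGN - 1))
                    return "PIO first (tail not cache-line aligned)";
            return "DMA";
    }

    int main(void)
    {
            printf("%s\n", pick_tx_mode(12, 100));  /* unaligned tail -> PIO first */
            printf("%s\n", pick_tx_mode(64, 100));  /* aligned, enough data -> DMA */
            printf("%s\n", pick_tx_mode(64, 4));    /* aligned, too little -> PIO  */
            return 0;
    }
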
Reported-by: Krzysztof Kozlowski <k.kozlowski@...sung.com>
Signed-off-by: Robert Baldyga <r.baldyga@...sung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/tty/serial/samsung.c | 36 +++++++++++++++++++++---------------
1 file changed, 21 insertions(+), 15 deletions(-)
--- a/drivers/tty/serial/samsung.c
+++ b/drivers/tty/serial/samsung.c
@@ -295,15 +295,6 @@ static int s3c24xx_serial_start_tx_dma(s
if (ourport->tx_mode != S3C24XX_TX_DMA)
enable_tx_dma(ourport);
- while (xmit->tail & (dma_get_cache_alignment() - 1)) {
- if (rd_regl(port, S3C2410_UFSTAT) & ourport->info->tx_fifofull)
- return 0;
- wr_regb(port, S3C2410_UTXH, xmit->buf[xmit->tail]);
- xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
- port->icount.tx++;
- count--;
- }
-
dma->tx_size = count & ~(dma_get_cache_alignment() - 1);
dma->tx_transfer_addr = dma->tx_addr + xmit->tail;
@@ -343,7 +334,8 @@ static void s3c24xx_serial_start_next_tx
}
if (!ourport->dma || !ourport->dma->tx_chan ||
- count < ourport->min_dma_size)
+ count < ourport->min_dma_size ||
+ xmit->tail & (dma_get_cache_alignment() - 1))
s3c24xx_serial_start_tx_pio(ourport);
else
s3c24xx_serial_start_tx_dma(ourport, count);
@@ -737,7 +729,7 @@ static irqreturn_t s3c24xx_serial_tx_cha
struct uart_port *port = &ourport->port;
struct circ_buf *xmit = &port->state->xmit;
unsigned long flags;
- int count;
+ int count, dma_count = 0;
spin_lock_irqsave(&port->lock, flags);
@@ -745,8 +737,12 @@ static irqreturn_t s3c24xx_serial_tx_cha
if (ourport->dma && ourport->dma->tx_chan &&
count >= ourport->min_dma_size) {
- s3c24xx_serial_start_tx_dma(ourport, count);
- goto out;
+ int align = dma_get_cache_alignment() -
+ (xmit->tail & (dma_get_cache_alignment() - 1));
+ if (count-align >= ourport->min_dma_size) {
+ dma_count = count-align;
+ count = align;
+ }
}
if (port->x_char) {
@@ -767,14 +763,24 @@ static irqreturn_t s3c24xx_serial_tx_cha
/* try and drain the buffer... */
- count = port->fifosize;
- while (!uart_circ_empty(xmit) && count-- > 0) {
+ if (count > port->fifosize) {
+ count = port->fifosize;
+ dma_count = 0;
+ }
+
+ while (!uart_circ_empty(xmit) && count > 0) {
if (rd_regl(port, S3C2410_UFSTAT) & ourport->info->tx_fifofull)
break;
wr_regb(port, S3C2410_UTXH, xmit->buf[xmit->tail]);
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
port->icount.tx++;
+ count--;
+ }
+
+ if (!count && dma_count) {
+ s3c24xx_serial_start_tx_dma(ourport, dma_count);
+ goto out;
}
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) {
--
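
[Editorial aside: for readers following the interrupt-path hunk above, here
is a self-contained user-space C walk-through of the same split between the
PIO bytes that align the tail and the remaining DMA transfer. The cache
alignment, FIFO size and buffer offsets are assumptions chosen only to make
the arithmetic concrete, not values taken from the driver.]

    #include <stdio.h>

    /* Assumed values, not taken from the driver. */
    #define CACHE_ALIGN   64u   /* pretend dma_get_cache_alignment() == 64 */
    #define MIN_DMA_SIZE  16u   /* pretend ourport->min_dma_size == 16 */
    #define FIFO_SIZE     16u   /* a FIFO smaller than a cache line, as in the bug */

    int main(void)
    {
            unsigned int tail = 12, count = 200, dma_count = 0;

            /* Bytes of PIO needed to bring the tail to the next cache-line boundary. */
            unsigned int align = CACHE_ALIGN - (tail & (CACHE_ALIGN - 1));

            /* Split only if what is left after aligning is still worth a DMA transfer. */
            if (count - align >= MIN_DMA_SIZE) {
                    dma_count = count - align;
                    count = align;          /* PIO sends just the aligning bytes */
            }

            /* The PIO drain never tries to push more than one FIFO's worth at once. */
            if (count > FIFO_SIZE) {
                    count = FIFO_SIZE;
                    dma_count = 0;          /* DMA is deferred to a later interrupt */
            }

            printf("PIO bytes this pass: %u, DMA bytes afterwards: %u\n",
                   count, dma_count);
            /* With tail = 12: align = 52, which exceeds the 16-byte FIFO, so this
             * pass sends 16 bytes by PIO and DMA waits for a later interrupt. */
            return 0;
    }
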