Message-Id: <1358503572-5057-3-git-send-email-sebastian@breakpoint.cc>
Date: Fri, 18 Jan 2013 11:06:11 +0100
From: Sebastian Andrzej Siewior <sebastian@...akpoint.cc>
To: netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Thomas Gleixner <tglx@...utronix.de>,
Rakesh Ranjan <rakesh.ranjan@....in>,
Bruno Bittner <Bruno.Bittner@...k.com>,
Holger Dengler <dengler@...utronix.de>,
Jan Altenberg <jan@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool
From: Thomas Gleixner <tglx@...utronix.de>
Split the buffer pool into an RX and a TX block so that neither
channel can starve the other. Otherwise it is possible to fill up
the whole pool by sending a lot of large packets on a slow
half-duplex link, leaving no descriptors for RX.
Cc: Rakesh Ranjan <rakesh.ranjan@....in>
Cc: Bruno Bittner <Bruno.Bittner@...k.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
[dengler: patch description]
Signed-off-by: Holger Dengler <dengler@...utronix.de>
[jan: forward ported]
Signed-off-by: Jan Altenberg <jan@...utronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
drivers/net/ethernet/ti/davinci_cpdma.c | 35 +++++++++++++++++++++++++++---
1 files changed, 31 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 709c437..70325cd 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -217,16 +217,41 @@ desc_from_phys(struct cpdma_desc_pool *pool, dma_addr_t dma)
}
static struct cpdma_desc __iomem *
-cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc)
+cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc, bool is_rx)
{
unsigned long flags;
int index;
struct cpdma_desc __iomem *desc = NULL;
+ static int last_index = 4096;
spin_lock_irqsave(&pool->lock, flags);
- index = bitmap_find_next_zero_area(pool->bitmap, pool->num_desc, 0,
- num_desc, 0);
+ /*
+ * The pool is split into two areas rx and tx. So we make sure
+ * that we can't run out of pool buffers for RX when TX has
+ * tons of stuff queued.
+ */
+ if (is_rx) {
+ index = bitmap_find_next_zero_area(pool->bitmap,
+ pool->num_desc/2, 0, num_desc, 0);
+ } else {
+ if (last_index >= pool->num_desc)
+ last_index = pool->num_desc / 2;
+
+ index = bitmap_find_next_zero_area(pool->bitmap,
+ pool->num_desc, last_index, num_desc, 0);
+
+ if (!(index < pool->num_desc)) {
+ index = bitmap_find_next_zero_area(pool->bitmap,
+ pool->num_desc, pool->num_desc/2, num_desc, 0);
+ }
+
+ if (index < pool->num_desc)
+ last_index = index + 1;
+ else
+ last_index = pool->num_desc / 2;
+ }
+
if (index < pool->num_desc) {
bitmap_set(pool->bitmap, index, num_desc);
desc = pool->iomap + pool->desc_size * index;
@@ -660,6 +685,7 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
unsigned long flags;
u32 mode;
int ret = 0;
+ bool is_rx;
spin_lock_irqsave(&chan->lock, flags);
@@ -668,7 +694,8 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
goto unlock_ret;
}
- desc = cpdma_desc_alloc(ctlr->pool, 1);
+ is_rx = (chan->rxfree != 0);
+ desc = cpdma_desc_alloc(ctlr->pool, 1, is_rx);
if (!desc) {
chan->stats.desc_alloc_fail++;
ret = -ENOMEM;
--
1.7.6.5