Message-Id: <e03e0613e6333d3cd5c322f6486273b98e007882.1457082108.git.jslaby@suse.cz>
Date: Fri, 4 Mar 2016 10:00:58 +0100
From: Jiri Slaby <jslaby@...e.cz>
To: stable@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
Siva Reddy Kallam <siva.kallam@...adcom.com>,
Michael Chan <mchan@...adcom.com>,
"David S . Miller" <davem@...emloft.net>,
Jiri Slaby <jslaby@...e.cz>
Subject: [PATCH 3.12 013/116] tg3: Fix for tg3 transmit queue 0 timed out when too many gso_segs

From: Siva Reddy Kallam <siva.kallam@...adcom.com>

3.12-stable review patch. If anyone has any objections, please let me know.

===============

[ Upstream commit b7d987295c74500b733a0ba07f9a9bcc4074fa83 ]

tg3_tso_bug() can hit a condition where the entire tx ring is not big
enough to segment the GSO packet. For example, if MSS is very small,
gso_segs can exceed the tx ring size. When we hit the condition, it
will cause a tx timeout.

tg3_tso_bug() is called to handle TSO and DMA hardware bugs.
For TSO bugs, if tg3_tso_bug() cannot succeed, we have to drop the packet.
For DMA bugs, we can still fall back to linearize the SKB and let the
hardware transmit the TSO packet.

This patch adds a function tg3_tso_bug_gso_check() to check if there
are enough tx descriptors for GSO before calling tg3_tso_bug().
The caller will then handle the error appropriately - drop or
linearize the SKB.

v2: Corrected patch description to avoid confusion.
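
As a rough illustration (not part of the patch), the arithmetic behind the
new check can be tried in a stand-alone user-space sketch. The 64 KB payload
and MSS values below are made-up example numbers, and 511 is assumed here as
the ring size because it matches the driver's default TG3_DEF_TX_RING_PENDING:

#include <stdio.h>

/* DIV_ROUND_UP(payload_len, mss): number of segments GSO will produce */
static unsigned int gso_segs(unsigned int payload_len, unsigned int mss)
{
	return (payload_len + mss - 1) / mss;
}

int main(void)
{
	unsigned int tx_pending = 511;		/* assumed default tx ring size */
	unsigned int payload = 64 * 1024;	/* one 64 KB GSO payload */
	unsigned int mss[] = { 1448, 536, 88 };
	unsigned int i;

	for (i = 0; i < sizeof(mss) / sizeof(mss[0]); i++) {
		unsigned int segs = gso_segs(payload, mss[i]);

		/* Same shape as tg3_tso_bug_gso_check(): only let
		 * tg3_tso_bug() segment the packet when the ring can
		 * plausibly hold all of the segments.
		 */
		printf("mss %4u -> gso_segs %3u: %s\n", mss[i], segs,
		       segs < tx_pending / 3 ? "use tg3_tso_bug()" : "drop/linearize");
	}
	return 0;
}

With mss = 88 the packet needs roughly 745 segments, far above
tx_pending / 3 = 170, which is the situation that previously led to the
tx timeout.
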
Signed-off-by: Siva Reddy Kallam <siva.kallam@...adcom.com>
Signed-off-by: Michael Chan <mchan@...adcom.com>
Acked-by: Prashant Sreedharan <prashant@...adcom.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
---
 drivers/net/ethernet/broadcom/tg3.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index fe601e264f94..ea86c17541b3 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7809,6 +7809,14 @@ static int tigon3_dma_hwbug_workaround(struct tg3_napi *tnapi,
 	return ret;
 }
 
+static bool tg3_tso_bug_gso_check(struct tg3_napi *tnapi, struct sk_buff *skb)
+{
+	/* Check if we will never have enough descriptors,
+	 * as gso_segs can be more than current ring size
+	 */
+	return skb_shinfo(skb)->gso_segs < tnapi->tx_pending / 3;
+}
+
 static netdev_tx_t tg3_start_xmit(struct sk_buff *, struct net_device *);
 
 /* Use GSO to workaround a rare TSO bug that may be triggered when the
@@ -7910,8 +7918,11 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		 * vlan encapsulated.
 		 */
 		if (skb->protocol == htons(ETH_P_8021Q) ||
-		    skb->protocol == htons(ETH_P_8021AD))
-			return tg3_tso_bug(tp, skb);
+		    skb->protocol == htons(ETH_P_8021AD)) {
+			if (tg3_tso_bug_gso_check(tnapi, skb))
+				return tg3_tso_bug(tp, skb);
+			goto drop;
+		}
 
 		if (!skb_is_gso_v6(skb)) {
 			iph->check = 0;
@@ -7919,8 +7930,11 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 
 		if (unlikely((ETH_HLEN + hdr_len) > 80) &&
-		    tg3_flag(tp, TSO_BUG))
-			return tg3_tso_bug(tp, skb);
+		    tg3_flag(tp, TSO_BUG)) {
+			if (tg3_tso_bug_gso_check(tnapi, skb))
+				return tg3_tso_bug(tp, skb);
+			goto drop;
+		}
 
 		base_flags |= (TXD_FLAG_CPU_PRE_DMA |
 			       TXD_FLAG_CPU_POST_DMA);
--
2.7.2