Message-Id: <1477372421-11656-1-git-send-email-jmaxwell37@gmail.com>
Date: Tue, 25 Oct 2016 16:13:41 +1100
From: Jon Maxwell <jmaxwell37@...il.com>
To: tlfalcon@...ux.vnet.ibm.com
Cc: benh@...nel.crashing.org, paulus@...ba.org, mpe@...erman.id.au,
davem@...emloft.net, tom@...bertland.com, jarod@...hat.com,
hofrat@...dl.org, netdev@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
mleitner@...hat.com, jmaxwell@...hat.com,
Jon Maxwell <jmaxwell37@...il.com>
Subject: [PATCH net-next] ibmveth: calculate correct gso_size and set gso_type

We recently encountered a bug where several customers using ibmveth on the
same LPAR hit an issue in which a TCP session hung when large receive was
enabled. Closer analysis revealed that the session was stuck because one
side was repeatedly advertising a zero window.

We narrowed this down to the fact that the ibmveth driver did not set
gso_size, which is translated by TCP into the MSS further up the stack.
The MSS is used to calculate the TCP window size, and because it was
abnormally large, a zero window was calculated even though the socket's
receive buffer was completely empty.

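As background, the receive side's MSS estimate roughly reduces to the
logic sketched below when gso_size is 0: the whole aggregated skb length
is used instead, so a large-receive frame of tens of kilobytes is treated
as a single giant segment. This is only an illustrative sketch (the helper
name rcv_mss_estimate is made up for the example), not a quote of the TCP
code:

    /*
     * Illustrative only: with gso_size unset (0), the estimate degrades
     * to the full aggregated skb length rather than a per-segment size.
     */
    static unsigned int rcv_mss_estimate(const struct sk_buff *skb)
    {
            return skb_shinfo(skb)->gso_size ? : skb->len;
    }
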
We were able to reproduce the problem and worked with IBM to fix it.
Thanks Tom and Marcelo for all your help and review on this.

The patch fixes the issue in both our internal reproduction tests and our
customers' tests.

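For illustration only (assuming a standard 1500-byte MTU; the numbers are
an example, not part of the change), the IPv4 case in the patch works out
to:

    hdr_len  = ETH_HLEN + sizeof(struct tcphdr) + sizeof(struct iphdr)
             = 14 + 20 + 20 = 54
    gso_size = netdev->mtu - hdr_len
             = 1500 - 54 = 1446

so the stack now sees a sane per-segment size instead of the aggregated
frame length.
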
Signed-off-by: Jon Maxwell <jmaxwell37@...il.com>
---
drivers/net/ethernet/ibm/ibmveth.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
index 29c05d0..3028c33 100644
--- a/drivers/net/ethernet/ibm/ibmveth.c
+++ b/drivers/net/ethernet/ibm/ibmveth.c
@@ -1182,6 +1182,8 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
 	int frames_processed = 0;
 	unsigned long lpar_rc;
 	struct iphdr *iph;
+	bool large_packet = 0;
+	u16 hdr_len = ETH_HLEN + sizeof(struct tcphdr);
 
 restart_poll:
 	while (frames_processed < budget) {
@@ -1236,10 +1238,27 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
 						iph->check = 0;
 						iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
 						adapter->rx_large_packets++;
+						large_packet = 1;
 					}
 				}
 			}
 
+			if (skb->len > netdev->mtu) {
+				iph = (struct iphdr *)skb->data;
+				if (be16_to_cpu(skb->protocol) == ETH_P_IP && iph->protocol == IPPROTO_TCP) {
+					hdr_len += sizeof(struct iphdr);
+					skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+					skb_shinfo(skb)->gso_size = netdev->mtu - hdr_len;
+				} else if (be16_to_cpu(skb->protocol) == ETH_P_IPV6 &&
+					   iph->protocol == IPPROTO_TCP) {
+					hdr_len += sizeof(struct ipv6hdr);
+					skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+					skb_shinfo(skb)->gso_size = netdev->mtu - hdr_len;
+				}
+				if (!large_packet)
+					adapter->rx_large_packets++;
+			}
+
 			napi_gro_receive(napi, skb);	/* send it up */
 
 			netdev->stats.rx_packets++;
--
1.8.3.1