Message-Id: <20180319171830.284544788@linuxfoundation.org>
Date: Mon, 19 Mar 2018 19:05:55 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Davide Caratti <dcaratti@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 3.18 17/68] sched: act_csum: don't mangle TCP and UDP GSO packets
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Davide Caratti <dcaratti@...hat.com>
[ Upstream commit add641e7dee31b36aee83412c29e39dd1f5e0c9c ]
After act_csum computes the checksum on skbs carrying GSO TCP/UDP packets,
subsequent segmentation fails because skb_needs_check(skb, true) returns
true. As a result, skb_warn_bad_offload() is invoked and the following
warning is displayed:
WARNING: CPU: 3 PID: 28 at net/core/dev.c:2553 skb_warn_bad_offload+0xf0/0xfd
<...>
[<ffffffff8171f486>] skb_warn_bad_offload+0xf0/0xfd
[<ffffffff8161304c>] __skb_gso_segment+0xec/0x110
[<ffffffff8161340d>] validate_xmit_skb+0x12d/0x2b0
[<ffffffff816135d2>] validate_xmit_skb_list+0x42/0x70
[<ffffffff8163c560>] sch_direct_xmit+0xd0/0x1b0
[<ffffffff8163c760>] __qdisc_run+0x120/0x270
[<ffffffff81613b3d>] __dev_queue_xmit+0x23d/0x690
[<ffffffff81613fa0>] dev_queue_xmit+0x10/0x20
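For context, the trace above ends in __skb_gso_segment(), whose
transmit-path sanity test rejects a GSO skb whose checksum already looks
resolved: act_csum writes out a final checksum, so the skb no longer
carries CHECKSUM_PARTIAL. The user-space sketch below paraphrases that
test; mock_skb, the enum values, and needs_check_tx() are illustrative
stand-ins, not the 3.18 sources.

#include <stdbool.h>
#include <stdio.h>

enum { CHECKSUM_NONE, CHECKSUM_UNNECESSARY, CHECKSUM_PARTIAL };

struct mock_skb {
	int ip_summed;  /* checksum state, as in struct sk_buff */
	bool gso;       /* stands in for skb_is_gso() */
};

/* Paraphrase of skb_needs_check(skb, true) on the transmit path. */
static bool needs_check_tx(const struct mock_skb *skb)
{
	return skb->ip_summed != CHECKSUM_PARTIAL &&
	       skb->ip_summed != CHECKSUM_UNNECESSARY;
}

int main(void)
{
	/* A GSO skb as act_csum used to leave it: checksum written out. */
	struct mock_skb after_csum = { .ip_summed = CHECKSUM_NONE, .gso = true };

	if (after_csum.gso && needs_check_tx(&after_csum))
		printf("segmentation would hit skb_warn_bad_offload()\n");
	return 0;
}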
Since GSO is able to compute the checksum on the individual segments of
such skbs, we can simply skip mangling the packet.
Signed-off-by: Davide Caratti <dcaratti@...hat.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/sched/act_csum.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
--- a/net/sched/act_csum.c
+++ b/net/sched/act_csum.c
@@ -176,6 +176,9 @@ static int tcf_csum_ipv4_tcp(struct sk_b
 	struct tcphdr *tcph;
 	const struct iphdr *iph;
 
+	if (skb_is_gso(skb) && skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
+		return 1;
+
 	tcph = tcf_csum_skb_nextlayer(skb, ihl, ipl, sizeof(*tcph));
 	if (tcph == NULL)
 		return 0;
@@ -197,6 +200,9 @@ static int tcf_csum_ipv6_tcp(struct sk_b
 	struct tcphdr *tcph;
 	const struct ipv6hdr *ip6h;
 
+	if (skb_is_gso(skb) && skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+		return 1;
+
 	tcph = tcf_csum_skb_nextlayer(skb, ihl, ipl, sizeof(*tcph));
 	if (tcph == NULL)
 		return 0;
@@ -220,6 +226,9 @@ static int tcf_csum_ipv4_udp(struct sk_b
 	const struct iphdr *iph;
 	u16 ul;
 
+	if (skb_is_gso(skb) && skb_shinfo(skb)->gso_type & SKB_GSO_UDP)
+		return 1;
+
 	/*
 	 * Support both UDP and UDPLITE checksum algorithms, Don't use
 	 * udph->len to get the real length without any protocol check,
@@ -273,6 +282,9 @@ static int tcf_csum_ipv6_udp(struct sk_b
 	const struct ipv6hdr *ip6h;
 	u16 ul;
 
+	if (skb_is_gso(skb) && skb_shinfo(skb)->gso_type & SKB_GSO_UDP)
+		return 1;
+
 	/*
 	 * Support both UDP and UDPLITE checksum algorithms, Don't use
 	 * udph->len to get the real length without any protocol check,
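Each hunk above adds the same guard: recognize a GSO skb of the matching
type and return 1, the helpers' success value, so tcf_csum() passes the
packet through untouched (a return of 0 makes the caller drop it); GSO
then computes the checksum on each segment later. Below is a minimal
self-contained sketch of that pattern; mock_skb, the GSO_* flags, and
csum_ipv4_tcp() are illustrative stand-ins for struct sk_buff, the
kernel's SKB_GSO_* bits, and tcf_csum_ipv4_tcp().

#include <stdbool.h>
#include <stdio.h>

#define GSO_TCPV4 (1 << 0)
#define GSO_UDP   (1 << 1)

struct mock_skb {
	unsigned int gso_type; /* stands in for skb_shinfo(skb)->gso_type */
};

static bool is_gso(const struct mock_skb *skb)
{
	return skb->gso_type != 0; /* stands in for skb_is_gso() */
}

/*
 * Mirrors the added check in tcf_csum_ipv4_tcp(): returning 1 means
 * "success, leave the packet alone"; 0 would make the caller drop it.
 */
static int csum_ipv4_tcp(struct mock_skb *skb)
{
	if (is_gso(skb) && (skb->gso_type & GSO_TCPV4))
		return 1; /* GSO will checksum each segment later */

	/* ... non-GSO path: pull the TCP header and recompute ... */
	return 1;
}

int main(void)
{
	struct mock_skb gso_tcp = { .gso_type = GSO_TCPV4 };

	printf("gso tcp skb: %s\n",
	       csum_ipv4_tcp(&gso_tcp) ? "passed through" : "dropped");
	return 0;
}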