Message-ID: <5b1f0292.IdMQh83ac/EN53Sl%fllinden@amazon.com>
Date: Mon, 11 Jun 2018 23:15:30 +0000
From: Frank van der Linden <fllinden@...zon.com>
To: <edumazet@...gle.com>, <netdev@...r.kernel.org>
CC: <fllinden@...zon.com>
Subject: [PATCH] tcp: verify the checksum of the first data segment in a
new connection
commit 079096f103fa ("tcp/dccp: install syn_recv requests into ehash
table") introduced an optimization for the handling of child sockets
created for a new TCP connection.

But this optimization passes any data associated with the last ACK of the
connection handshake up the stack without verifying its checksum, because it
calls tcp_child_process(), which in turn calls tcp_rcv_state_process()
directly. These lower-level processing functions do not do any checksum
verification.

Insert a tcp_checksum_complete() call in the TCP_NEW_SYN_RECV path to
fix this.
Signed-off-by: Frank van der Linden <fllinden@...zon.com>
---
net/ipv4/tcp_ipv4.c | 8 +++++++-
net/ipv6/tcp_ipv6.c | 8 +++++++-
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index f70586b50838..1ec4c0d4aba5 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1703,7 +1703,13 @@ int tcp_v4_rcv(struct sk_buff *skb)
th = (const struct tcphdr *)skb->data;
iph = ip_hdr(skb);
tcp_v4_fill_cb(skb, iph, th);
- nsk = tcp_check_req(sk, skb, req, false, &req_stolen);
+
+ if (tcp_checksum_complete(skb)) {
+ __TCP_INC_STATS(net, TCP_MIB_CSUMERRORS);
+ } else {
+ nsk = tcp_check_req(sk, skb, req, false,
+ &req_stolen);
+ }
}
if (!nsk) {
reqsk_put(req);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 6d664d83cd16..a12b694d3d1e 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1486,7 +1486,13 @@ static int tcp_v6_rcv(struct sk_buff *skb)
th = (const struct tcphdr *)skb->data;
hdr = ipv6_hdr(skb);
tcp_v6_fill_cb(skb, hdr, th);
- nsk = tcp_check_req(sk, skb, req, false, &req_stolen);
+
+ if (tcp_checksum_complete(skb)) {
+ __TCP_INC_STATS(net, TCP_MIB_CSUMERRORS);
+ } else {
+ nsk = tcp_check_req(sk, skb, req, false,
+ &req_stolen);
+ }
}
if (!nsk) {
reqsk_put(req);
--
2.14.4