Message-ID: <1473063823-25430-1-git-send-email-sona.sarmadi@enea.com>
Date: Mon, 5 Sep 2016 10:23:43 +0200
From: Sona Sarmadi <sona.sarmadi@...a.com>
To: <davem@...emloft.net>, <kuznet@....inr.ac.ru>, <jmorris@...ei.org>,
<kaber@...sh.net>
CC: <linux-kernel@...r.kernel.org>
Subject: [PATCH] tcp: make challenge acks less predictable
From: Eric Dumazet <edumazet@...gle.com>
[ Upstream commit 75ff39ccc1bd5d3c455b6822ab09e533c551f758 ]

Yue Cao claims that current host rate limiting of challenge ACKs
(RFC 5961) could leak enough information to allow a patient attacker
to hijack TCP sessions. He will soon provide details in an academic
paper.

This patch increases the default limit from 100 to 1000, and adds
some randomization so that the attacker can no longer hijack
sessions without spending a considerable number of probes.

Based on initial analysis and patch from Linus.

Note that we also have per-socket rate limiting, so it is tempting
to remove the host limit in the future.

v2: randomize the count of challenge acks per second, not the period.

Fixes CVE-2016-5696.
[backport of 3.14 commit 860c53258e634c54f70252c352bae7bac30724a9]
Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2")
Reported-by: Yue Cao <ycao009@....edu>
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Yuchung Cheng <ycheng@...gle.com>
Cc: Neal Cardwell <ncardwell@...gle.com>
Acked-by: Neal Cardwell <ncardwell@...gle.com>
Acked-by: Yuchung Cheng <ycheng@...gle.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Sona Sarmadi <sona.sarmadi@...a.com>
---
diff -Nurp a/include/linux/random.h b/include/linux/random.h
--- a/include/linux/random.h 2016-08-31 14:01:17.311438431 +0200
+++ b/include/linux/random.h 2016-09-02 11:45:26.275741639 +0200
@@ -33,6 +33,23 @@ void prandom_seed(u32 seed);
 u32 prandom_u32_state(struct rnd_state *);
 void prandom_bytes_state(struct rnd_state *state, void *buf, int nbytes);
 
+/**
+ * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
+ * @ep_ro: right open interval endpoint
+ *
+ * Returns a pseudo-random number that is in interval [0, ep_ro). Note
+ * that the result depends on PRNG being well distributed in [0, ~0U]
+ * u32 space. Here we use maximally equidistributed combined Tausworthe
+ * generator, that is, prandom_u32(). This is useful when requesting a
+ * random index of an array containing ep_ro elements, for example.
+ *
+ * Returns: pseudo-random number in interval [0, ep_ro)
+ */
+static inline u32 prandom_u32_max(u32 ep_ro)
+{
+	return (u32)(((u64) prandom_u32() * ep_ro) >> 32);
+}
+
 /*
  * Handle minimum values for seeds
  */
diff -Nurp a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
--- a/net/ipv4/tcp_input.c 2016-08-31 14:01:18.571384580 +0200
+++ b/net/ipv4/tcp_input.c 2016-09-02 11:45:26.295740789 +0200
@@ -87,7 +87,7 @@ int sysctl_tcp_adv_win_scale __read_most
 EXPORT_SYMBOL(sysctl_tcp_adv_win_scale);
 
 /* rfc5961 challenge ack rate limiting */
-int sysctl_tcp_challenge_ack_limit = 100;
+int sysctl_tcp_challenge_ack_limit = 1000;
 
 int sysctl_tcp_stdurg __read_mostly;
 int sysctl_tcp_rfc1337 __read_mostly;
@@ -3243,12 +3243,18 @@ static void tcp_send_challenge_ack(struc
 	static u32 challenge_timestamp;
 	static unsigned int challenge_count;
 	u32 now = jiffies / HZ;
+	u32 count;
 
 	if (now != challenge_timestamp) {
+		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
+
 		challenge_timestamp = now;
-		challenge_count = 0;
+		challenge_count = half +
+				  prandom_u32_max(sysctl_tcp_challenge_ack_limit);
 	}
-	if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
+	count = challenge_count;
+	if (count > 0) {
+		challenge_count = count - 1;
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}