Message-ID: <CAK6E8=cq1z4jAqjpFdNtbAH_MrcFds8m2PNEb0G5vaBN_TaZfw@mail.gmail.com>
Date: Mon, 11 Jul 2016 09:40:20 -0700
From: Yuchung Cheng <ycheng@...gle.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Yue Cao <ycao009@....edu>, David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Zhiyun Qian <zhiyunq@...ucr.edu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Neal Cardwell <ncardwell@...gle.com>
Subject: Re: [PATCH v2 net] tcp: make challenge acks less predictable
On Sun, Jul 10, 2016 at 1:04 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> From: Eric Dumazet <edumazet@...gle.com>
>
> Yue Cao claims that the current host rate limiting of challenge ACKs
> (RFC 5961) could leak enough information to allow a patient attacker
> to hijack TCP sessions. He will soon provide details in an academic
> paper.
>
> This patch increases the default limit from 100 to 1000, and adds
> some randomization so that the attacker can no longer hijack
> sessions without spending a considerable number of probes.
>
> Based on initial analysis and patch from Linus.
>
> Note that we also have per-socket rate limiting, so it is tempting
> to remove the host limit in the future.
>
> v2: randomize the count of challenge acks per second, not the period.
>
> Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2")
> Reported-by: Yue Cao <ycao009@....edu>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Yuchung Cheng <ycheng@...gle.com>
> Cc: Neal Cardwell <ncardwell@...gle.com>
> ---
Acked-by: Yuchung Cheng <ycheng@...gle.com>
Nice fix. I like v2 a lot.
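
For anyone following along, a quick user-space sketch of the budget the
new code draws once per second (purely illustrative, not kernel code;
prandom_u32_max_sketch() below is only a stand-in for the kernel's
prandom_u32_max(), which returns a value in [0, ep_ro)):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for the kernel's prandom_u32_max(): uniform in [0, ep_ro). */
static unsigned int prandom_u32_max_sketch(unsigned int ep_ro)
{
	return (unsigned int)(rand() % ep_ro);
}

int main(void)
{
	const unsigned int limit = 1000;	/* sysctl_tcp_challenge_ack_limit */
	unsigned int half = (limit + 1) >> 1;
	unsigned int count;

	srand((unsigned int)time(NULL));
	count = half + prandom_u32_max_sketch(limit);

	/* For limit == 1000 the budget is uniform in [500, 1499]. */
	printf("per-second challenge ACK budget: %u\n", count);
	return 0;
}

So the global per-second ceiling is no longer a fixed, observable value;
it moves around in [half, half + limit - 1] every second, which makes the
counting side channel far more expensive to exploit.
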
> net/ipv4/tcp_input.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index d6c8f4cd0800..91868bb17818 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -87,7 +87,7 @@ int sysctl_tcp_adv_win_scale __read_mostly = 1;
>  EXPORT_SYMBOL(sysctl_tcp_adv_win_scale);
>
>  /* rfc5961 challenge ack rate limiting */
> -int sysctl_tcp_challenge_ack_limit = 100;
> +int sysctl_tcp_challenge_ack_limit = 1000;
>
>  int sysctl_tcp_stdurg __read_mostly;
>  int sysctl_tcp_rfc1337 __read_mostly;
> @@ -3458,7 +3458,7 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
>  	static u32 challenge_timestamp;
>  	static unsigned int challenge_count;
>  	struct tcp_sock *tp = tcp_sk(sk);
> -	u32 now;
> +	u32 count, now;
>
>  	/* First check our per-socket dupack rate limit. */
>  	if (tcp_oow_rate_limited(sock_net(sk), skb,
> @@ -3466,13 +3466,18 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
>  				 &tp->last_oow_ack_time))
>  		return;
>
> -	/* Then check the check host-wide RFC 5961 rate limit. */
> +	/* Then check host-wide RFC 5961 rate limit. */
>  	now = jiffies / HZ;
>  	if (now != challenge_timestamp) {
> +		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
> +
>  		challenge_timestamp = now;
> -		challenge_count = 0;
> +		WRITE_ONCE(challenge_count, half +
> +			   prandom_u32_max(sysctl_tcp_challenge_ack_limit));
>  	}
> -	if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
> +	count = READ_ONCE(challenge_count);
> +	if (count > 0) {
> +		WRITE_ONCE(challenge_count, count - 1);
>  		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
>  		tcp_send_ack(sk);
>  	}
>
>