Date:   Mon, 9 Jul 2018 04:54:54 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     joakim.misund@...il.com, Florian Westphal <fw@...len.de>
Cc:     Eric Dumazet <edumazet@...gle.com>,
        "David S. Miller" <davem@...emloft.net>,
        Alexey Kuznetsov <kuznet@....inr.ac.ru>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] tcp: Added check of destination specific CC before
 sending syn/ack



On 07/09/2018 04:25 AM, joakim.misund@...il.com wrote:
> From: Joakim Misund <joakim.misund@...il.com>
> 
> Issue:
> Currently the TCP stack does not check for a destination-specific CC before responding to a SYN with a SYN/ACK;
> the system-wide default CC is used. If the default CC does not need ECN but the destination-specific one does,
> the SYN/ACK will not carry ECT(0), which makes it eligible to be dropped instead of being ECN-marked at routers.
> In an ECN-based network, ECN marks are frequent, and packets not carrying ECT(0) are likely to be dropped.
> This leads to slow connection establishment, and in the worst case establishment can fail.
> 
> Signed-off-by: Joakim Misund <joakim.misund@...il.com>
> ---
>  include/net/tcp.h     | 1 +
>  net/ipv4/tcp_input.c  | 2 ++
>  net/ipv4/tcp_output.c | 2 +-
>  3 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index af3ec72d5d41..347c59ac0a72 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -545,6 +545,7 @@ void tcp_send_loss_probe(struct sock *sk);
>  bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto);
>  void tcp_skb_collapse_tstamp(struct sk_buff *skb,
>  			     const struct sk_buff *next_skb);
> +void tcp_ca_dst_init(struct sock *sk, const struct dst_entry *dst);
>  
>  /* tcp_input.c */
>  void tcp_rearm_rto(struct sock *sk);
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 8e5522c6833a..973e3b5b0516 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -6401,6 +6401,8 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
>  		isn = af_ops->init_seq(skb);
>  	}
>  
> +	tcp_ca_dst_init(sk, dst);


At this point sk is not locked (and will not be).

Multiple CPUs can handle SYN packets in parallel, so this would be racy.

You need to solve this problem in another way.

Thanks.
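
For reference, tcp_ca_dst_init() is currently a static helper in net/ipv4/tcp_output.c, called from tcp_connect_init() on a socket its caller owns. Below is a rough paraphrase (from memory, not the exact source; helper and field names should be checked against the tree) of what it does, with comments marking why calling it from tcp_conn_request() on the shared, unlocked listener would race:

static void tcp_ca_dst_init(struct sock *sk, const struct dst_entry *dst)
{
	struct inet_connection_sock *icsk = inet_csk(sk);
	const struct tcp_congestion_ops *ca;
	u32 ca_key = dst_metric(dst, RTAX_CC_ALGO);

	/* No per-route congestion control configured for this dst. */
	if (ca_key == TCP_CA_UNSPEC)
		return;

	rcu_read_lock();
	ca = tcp_ca_find_key(ca_key);
	if (ca && try_module_get(ca->owner)) {
		/* Drops the reference on the old CC module and rewrites
		 * icsk_ca_ops.  In tcp_conn_request() sk is the listener,
		 * processed without the socket lock, and several CPUs can
		 * run this for the same listener at once: the module_put()
		 * and the two writes below would then race.
		 */
		module_put(icsk->icsk_ca_ops->owner);
		icsk->icsk_ca_setsockopt = 1;
		icsk->icsk_ca_ops = ca;
	}
	rcu_read_unlock();
}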
