Date:   Thu, 25 Aug 2022 10:31:15 +0200
From:   Jakub Sitnicki <jakub@...udflare.com>
To:     Cong Wang <xiyou.wangcong@...il.com>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org,
        Cong Wang <cong.wang@...edance.com>,
        Eric Dumazet <edumazet@...gle.com>,
        John Fastabend <john.fastabend@...il.com>
Subject: Re: [Patch net v3 2/4] tcp: fix tcp_cleanup_rbuf() for tcp_read_skb()

On Wed, Aug 17, 2022 at 12:54 PM -07, Cong Wang wrote:
> From: Cong Wang <cong.wang@...edance.com>
>
> tcp_cleanup_rbuf() retrieves the skb from sk_receive_queue and
> assumes the skb has not yet been dequeued. This is no longer true
> for the tcp_read_skb() case, where we dequeue the skb first.
>
> Fix this by introducing a helper, __tcp_cleanup_rbuf(), which does
> not require any skb, and by calling it in tcp_read_skb().
>
> Fixes: 04919bed948d ("tcp: Introduce tcp_read_skb()")
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Jakub Sitnicki <jakub@...udflare.com>
> Signed-off-by: Cong Wang <cong.wang@...edance.com>
> ---
>  net/ipv4/tcp.c | 24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 05da5cac080b..181a0d350123 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1567,17 +1567,11 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len)
>   * calculation of whether or not we must ACK for the sake of
>   * a window update.
>   */
> -void tcp_cleanup_rbuf(struct sock *sk, int copied)
> +static void __tcp_cleanup_rbuf(struct sock *sk, int copied)
>  {
>  	struct tcp_sock *tp = tcp_sk(sk);
>  	bool time_to_ack = false;
>  
> -	struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
> -
> -	WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
> -	     "cleanup rbuf bug: copied %X seq %X rcvnxt %X\n",
> -	     tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt);
> -
>  	if (inet_csk_ack_scheduled(sk)) {
>  		const struct inet_connection_sock *icsk = inet_csk(sk);
>  
> @@ -1623,6 +1617,17 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied)
>  		tcp_send_ack(sk);
>  }
>  
> +void tcp_cleanup_rbuf(struct sock *sk, int copied)
> +{
> +	struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);
> +	struct tcp_sock *tp = tcp_sk(sk);
> +
> +	WARN(skb && !before(tp->copied_seq, TCP_SKB_CB(skb)->end_seq),
> +	     "cleanup rbuf bug: copied %X seq %X rcvnxt %X\n",
> +	     tp->copied_seq, TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt);
> +	__tcp_cleanup_rbuf(sk, copied);
> +}
> +
>  static void tcp_eat_recv_skb(struct sock *sk, struct sk_buff *skb)
>  {
>  	__skb_unlink(skb, &sk->sk_receive_queue);
> @@ -1771,20 +1776,19 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
>  		copied += used;
>  
>  		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> -			consume_skb(skb);
>  			++seq;
>  			break;
>  		}
> -		consume_skb(skb);
>  		break;
>  	}
> +	consume_skb(skb);
>  	WRITE_ONCE(tp->copied_seq, seq);
>  
>  	tcp_rcv_space_adjust(sk);
>  
>  	/* Clean up data we have read: This will do ACK frames. */
>  	if (copied > 0)
> -		tcp_cleanup_rbuf(sk, copied);
> +		__tcp_cleanup_rbuf(sk, copied);
>  
>  	return copied;
>  }

This seems to be fixing two different problems, but the commit
description mentions just one.

consume_skb() was pulled out of the `while' body. Thanks to that, we
no longer leave a dangling skb reference when recv_actor
(sk_psock_verdict_recv in this case) returns 0.
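
To make that second fix concrete, here is a simplified sketch of how
the skb reference flows through tcp_read_skb() after this patch. The
loop head and the `used <= 0' early break are not visible in the hunk
above, so they are paraphrased from my reading of the tree rather than
quoted:

	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
		int used;

		/* skb is already off sk_receive_queue at this point */
		used = recv_actor(sk, skb);
		if (used <= 0) {
			if (!copied)
				copied = used;
			break;	/* before: left without consume_skb() */
		}
		seq += used;
		copied += used;

		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
			++seq;
		break;
	}
	consume_skb(skb);	/* now: covers every exit path */

With the old placement, the `used <= 0' break (exactly what a verdict
of 0 from sk_psock_verdict_recv produces) skipped both consume_skb()
calls, so the already-unlinked skb was never freed. And if I read
skb_unref() right, consume_skb(NULL) is a no-op, so the empty-queue
case is fine too.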
