Open Source and information security mailing list archives
Message-ID: <87a5tesd8n.fsf@cloudflare.com>
Date: Fri, 22 Sep 2023 12:23:53 +0200
From: Jakub Sitnicki <jakub@...udflare.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: daniel@...earbox.net, ast@...nel.org, andrii@...nel.org,
 bpf@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH bpf 2/3] bpf: sockmap, do not inc copied_seq when PEEK
 flag set

On Wed, Sep 20, 2023 at 04:27 PM -07, John Fastabend wrote:
> When data is peeked off the receive queue we shouldn't consider it
> copied from the tcp_sock side. If we increment copied_seq here it will
> confuse tcp_data_ready(), because copied_seq can be arbitrarily
> increased. From the application side this results in poll() operations
> not waking up when expected.
>
> Notice that the TCP stack without BPF recvmsg programs also does not
> increment copied_seq on a peek.
>
> We broke this when we moved the copied_seq update into recvmsg so that
> it only happens when an actual copy takes place. But it wasn't working
> correctly before that either, because tcp_data_ready() tried to use the
> copied_seq value to see whether data had been read by the user yet. See
> the Fixes tags.
>
> Fixes: e5c6de5fa0258 ("bpf, sockmap: Incorrectly handling copied_seq")
> Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
> Signed-off-by: John Fastabend <john.fastabend@...il.com>
> ---
>  net/ipv4/tcp_bpf.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
> index 81f0dff69e0b..327268203001 100644
> --- a/net/ipv4/tcp_bpf.c
> +++ b/net/ipv4/tcp_bpf.c
> @@ -222,6 +222,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
>  				  int *addr_len)
>  {
>  	struct tcp_sock *tcp = tcp_sk(sk);
> +	int peek = flags & MSG_PEEK;
>  	u32 seq = tcp->copied_seq;
>  	struct sk_psock *psock;
>  	int copied = 0;
> @@ -311,7 +312,8 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
>  		copied = -EAGAIN;
>  	}
>  out:
> -	WRITE_ONCE(tcp->copied_seq, seq);
> +	if (!peek)
> +		WRITE_ONCE(tcp->copied_seq, seq);
>  	tcp_rcv_space_adjust(sk);
>  	if (copied > 0)
>  		__tcp_cleanup_rbuf(sk, copied);

I was surprised to see that we recalculate TCP buffer space and ACK
frames when peeking at the receive queue. But tcp_recvmsg seems to do
the same.

Reviewed-by: Jakub Sitnicki <jakub@...udflare.com>
