Message-ID: <6255da425c4ad_57e1208f9@john.notmuch>
Date: Tue, 12 Apr 2022 13:00:02 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Cong Wang <xiyou.wangcong@...il.com>, netdev@...r.kernel.org
Cc: Cong Wang <cong.wang@...edance.com>, Eric Dumazet <edumazet@...gle.com>,
        John Fastabend <john.fastabend@...il.com>, Daniel Borkmann <daniel@...earbox.net>,
        Jakub Sitnicki <jakub@...udflare.com>
Subject: RE: [Patch bpf-next v1 1/4] tcp: introduce tcp_read_skb()

Cong Wang wrote:
> From: Cong Wang <cong.wang@...edance.com>
>
> This patch introduces tcp_read_skb() based on tcp_read_sock(),
> a preparation for the next patch which actually introduces
> a new sock ops.
>
> TCP is special here, because it has tcp_read_sock(), which is
> mainly used by splice(). tcp_read_sock() supports partial read
> and arbitrary offset, neither of which is needed for sockmap.
>
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: John Fastabend <john.fastabend@...il.com>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> Cc: Jakub Sitnicki <jakub@...udflare.com>
> Signed-off-by: Cong Wang <cong.wang@...edance.com>
> ---

Thanks for doing this, Cong. Comment/question inline.

[...]

> +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> +                 sk_read_actor_t recv_actor)
> +{
> +        struct sk_buff *skb;
> +        struct tcp_sock *tp = tcp_sk(sk);
> +        u32 seq = tp->copied_seq;
> +        u32 offset;
> +        int copied = 0;
> +
> +        if (sk->sk_state == TCP_LISTEN)
> +                return -ENOTCONN;
> +        while ((skb = tcp_recv_skb(sk, seq, &offset, true)) != NULL) {

I'm trying to see why we might have an offset here if we always consume
the entire skb. There is a comment in tcp_recv_skb() around GRO packets,
but it's not yet clear to me how that applies here, if it does at all.
Will read a bit more, I guess.

If the offset can be > 0, then we also need to fix the recv_actor to
account for the extra offset in the skb. As is, the bpf prog might see
duplicate data. This is a problem on the stream parser now.

Another fallout is that if the offset is always zero, we could just do a
skb_dequeue() here and skip the tcp_recv_skb() bool flag addition and
update (rough sketch at the end of this mail).

I'll continue reading after a few other things I need to get sorted this
afternoon, but maybe you have the answer on hand.

> +                if (offset < skb->len) {
> +                        int used;
> +                        size_t len;
> +
> +                        len = skb->len - offset;
> +                        used = recv_actor(desc, skb, offset, len);
> +                        if (used <= 0) {
> +                                if (!copied)
> +                                        copied = used;
> +                                break;
> +                        }
> +                        if (WARN_ON_ONCE(used > len))
> +                                used = len;
> +                        seq += used;
> +                        copied += used;
> +                        offset += used;
> +
> +                        if (offset != skb->len)
> +                                continue;
> +                }
> +                if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> +                        kfree_skb(skb);
> +                        ++seq;
> +                        break;
> +                }
> +                kfree_skb(skb);
> +                if (!desc->count)
> +                        break;
> +                WRITE_ONCE(tp->copied_seq, seq);
> +        }
> +        WRITE_ONCE(tp->copied_seq, seq);
> +
> +        tcp_rcv_space_adjust(sk);
> +
> +        /* Clean up data we have read: This will do ACK frames. */
> +        if (copied > 0)
> +                tcp_cleanup_rbuf(sk, copied);
> +
> +        return copied;
> +}
> +EXPORT_SYMBOL(tcp_read_skb);
> +
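To be concrete about the skb_dequeue() point, below is roughly the kind
of simplification I have in mind. This is completely untested, just a
sketch: it assumes the sockmap case really does always see offset == 0
(the actor consumes whole skbs), assumes the caller holds the socket
lock as it does for tcp_read_sock(), and glosses over skb ownership once
recv_actor has run.

int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
                 sk_read_actor_t recv_actor)
{
        struct tcp_sock *tp = tcp_sk(sk);
        u32 seq = tp->copied_seq;
        struct sk_buff *skb;
        int copied = 0;

        if (sk->sk_state == TCP_LISTEN)
                return -ENOTCONN;

        /* Pull skbs straight off the receive queue: no offset or
         * partial-read bookkeeping and no bool flag in tcp_recv_skb().
         */
        while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
                int used;

                used = recv_actor(desc, skb, 0, skb->len);
                if (used <= 0) {
                        /* Nothing consumed: put the skb back and stop. */
                        __skb_queue_head(&sk->sk_receive_queue, skb);
                        if (!copied)
                                copied = used;
                        break;
                }
                seq += used;
                copied += used;

                if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
                        kfree_skb(skb);
                        ++seq;
                        break;
                }
                kfree_skb(skb);
                if (!desc->count)
                        break;
        }
        WRITE_ONCE(tp->copied_seq, seq);

        tcp_rcv_space_adjust(sk);

        /* Same as tcp_read_sock(): ACK the data we consumed. */
        if (copied > 0)
                tcp_cleanup_rbuf(sk, copied);

        return copied;
}

The point is just that once partial reads are out of the picture, all of
the offset accounting and the extra argument to tcp_recv_skb() go away.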