Date:   Mon, 25 Apr 2022 23:27:21 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     Cong Wang <xiyou.wangcong@...il.com>,
        John Fastabend <john.fastabend@...il.com>
Cc:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Cong Wang <cong.wang@...edance.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jakub Sitnicki <jakub@...udflare.com>
Subject: Re: [Patch bpf-next v1 1/4] tcp: introduce tcp_read_skb()

Cong Wang wrote:
> On Tue, Apr 12, 2022 at 1:00 PM John Fastabend <john.fastabend@...il.com> wrote:
> >
> > Cong Wang wrote:
> > > From: Cong Wang <cong.wang@...edance.com>
> > >
> > > This patch introduces tcp_read_skb() based on tcp_read_sock(),
> > > a preparation for the next patch which actually introduces
> > > a new sock ops.
> > >
> > > TCP is special here, because it has tcp_read_sock() which is
> > > mainly used by splice(). tcp_read_sock() supports partial read
> > > and arbitrary offset, neither of them is needed for sockmap.
> > >
> > > Cc: Eric Dumazet <edumazet@...gle.com>
> > > Cc: John Fastabend <john.fastabend@...il.com>
> > > Cc: Daniel Borkmann <daniel@...earbox.net>
> > > Cc: Jakub Sitnicki <jakub@...udflare.com>
> > > Signed-off-by: Cong Wang <cong.wang@...edance.com>
> > > ---
> >
> > Thanks for doing this Cong, comment/question inline.
> >
> > [...]
> >
> > > +int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
> > > +              sk_read_actor_t recv_actor)
> > > +{
> > > +     struct sk_buff *skb;
> > > +     struct tcp_sock *tp = tcp_sk(sk);
> > > +     u32 seq = tp->copied_seq;
> > > +     u32 offset;
> > > +     int copied = 0;
> > > +
> > > +     if (sk->sk_state == TCP_LISTEN)
> > > +             return -ENOTCONN;
> > > +     while ((skb = tcp_recv_skb(sk, seq, &offset, true)) != NULL) {
> >
> > I'm trying to see why we might have an offset here if we always
> > consume the entire skb. There is a comment in tcp_recv_skb around
> > GRO packets, but it's not yet clear to me how, or whether, that
> > applies here. Will read a bit more I guess.
> >
> > If the offset can be >0, then we also need to fix the recv_actor to
> > account for the extra offset in the skb. As is, the bpf prog might
> > see duplicate data. This is a problem in the stream parser now.
> >
> > Another fallout: if the offset is always zero, then we could just do
> > a skb_dequeue here and skip the tcp_recv_skb bool flag addition
> > and update.
> 
> I think it is mainly for splice(), and of course strparser, but neither
> of them is touched by my patchset.
> 
> >
> > I'll continue reading after a few other things I need to get
> > sorted this afternoon, but maybe you have the answer on hand.
> >
> 
> Please let me know if you have any other comments.

No more comments for now. If it's not used, then we can drop the offset
logic in this patch, and the code gets much simpler.
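For reference, dropping the offset handling might look roughly like the
sketch below. This is untested and is not the submitted patch; it assumes
sockmap always consumes whole skbs at offset 0, and dequeuing directly
from sk_receive_queue (instead of threading a bool through tcp_recv_skb())
is my assumption about the simplification, made purely for illustration:

```c
/* Untested sketch: tcp_read_skb() without the offset/partial-read logic,
 * assuming every skb is consumed in full at offset 0. Dequeuing straight
 * off sk_receiveue_queue replaces the tcp_recv_skb(..., bool) variant. */
int tcp_read_skb(struct sock *sk, read_descriptor_t *desc,
		 sk_read_actor_t recv_actor)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct sk_buff *skb;
	int copied = 0;

	if (sk->sk_state == TCP_LISTEN)
		return -ENOTCONN;

	while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) {
		/* Hand the whole skb to the actor; no offset to account for. */
		int used = recv_actor(desc, skb, 0, skb->len);

		if (used <= 0) {
			if (!copied)
				copied = used;
			break;
		}
		copied += used;
		WRITE_ONCE(tp->copied_seq, tp->copied_seq + used);
		if (!desc->count)
			break;
	}
	return copied;
}
```

This keeps the TCP_LISTEN check and copied_seq accounting from the quoted
patch, but skips the while-loop bookkeeping that only exists to serve
partial reads and nonzero offsets.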

> 
> Thanks.

