Message-ID: <CALOAHbD4cdjStf=5EkRkMqi_8OTL8y3vQdSntdhwmAosAcvzdg@mail.gmail.com>
Date:   Wed, 13 Feb 2019 10:10:50 +0800
From:   Yafang Shao <laoar.shao@...il.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     Daniel Borkmann <daniel@...earbox.net>, ast@...nel.org,
        Yonghong Song <yhs@...com>, brakmo@...com,
        Eric Dumazet <edumazet@...gle.com>,
        David Miller <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, shaoyafang@...iglobal.com
Subject: Re: [bpf-next 2/2] bpf: add BPF_SOCK_OPS_STATS_CB for tcp_stats()

On Tue, Feb 12, 2019 at 11:11 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
>
> On 02/12/2019 03:31 AM, Yafang Shao wrote:
> > Introduce this new op BPF_SOCK_OPS_STATS_CB for tcp_stats() such that it
> > can be traced via BPF on a per-socket basis.
> > There's one argument in BPF_SOCK_OPS_STATS_CB, which is Linux MIB index
> > LINUX_MIB_* to indicate the TCP event.
> > All these Linux MIBs are defined in include/uapi/linux/snmp.h.
> >
> > Signed-off-by: Yafang Shao <laoar.shao@...il.com>
> > ---
> >  include/uapi/linux/bpf.h | 5 +++++
> >  net/ipv4/tcp_input.c     | 1 +
> >  2 files changed, 6 insertions(+)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 1777fa0..0314ddd 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -2894,6 +2894,11 @@ enum {
> >       BPF_SOCK_OPS_TCP_LISTEN_CB,     /* Called on listen(2), right after
> >                                        * socket transition to LISTEN state.
> >                                        */
> > +     BPF_SOCK_OPS_STATS_CB,          /*
> > +                                      * Called on tcp_stats().
> > +                                      * Arg1: Linux MIB index
> > +                                      *       LINUX_MIB_*
> > +                                      */
> >  };
> >
> >  /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> > diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> > index 88deb1f..4acf458 100644
> > --- a/net/ipv4/tcp_input.c
> > +++ b/net/ipv4/tcp_input.c
> > @@ -3557,6 +3557,7 @@ static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered, int flag)
> >  static void tcp_stats(struct sock *sk, int mib_idx)
> >  {
> >       NET_INC_STATS(sock_net(sk), mib_idx);
> > +     tcp_call_bpf(sk, BPF_SOCK_OPS_STATS_CB, 1, &mib_idx);
> >  }
> >
> >  /* This routine deals with incoming acks, but not outgoing ones. */
> >
>
> If the plan is to add to all NET_INC_STATS() calls in TCP an additional tcp_call_bpf()
> I will say no.
>

I have no plan to place it in the fast path.
What I am concerned with is abnormal TCP events, which should not be
on the fast path.
However, if we do want to place it in the fast path, the code should be as below:

if (BPF_SOCK_OPS_TEST_FLAG(tcp_sk(sk), BPF_SOCK_OPS_STATS_CB_FLAG))
    tcp_call_bpf(sk, BPF_SOCK_OPS_STATS_CB, 1, &mib_idx);

> So far, tcp_call_bpf() has not been used in fast path.
>

That's why I did it this way.

Thanks
Yafang
