lists.openwall.net - Open Source and information security mailing list archives
 
Date:   Tue, 12 Feb 2019 07:11:20 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Yafang Shao <laoar.shao@...il.com>, daniel@...earbox.net,
        ast@...nel.org
Cc:     yhs@...com, brakmo@...com, edumazet@...gle.com,
        davem@...emloft.net, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org, shaoyafang@...iglobal.com
Subject: Re: [bpf-next 2/2] bpf: add BPF_SOCK_OPS_STATS_CB for tcp_stats()



On 02/12/2019 03:31 AM, Yafang Shao wrote:
> Introduce this new op BPF_SOCK_OPS_STATS_CB for tcp_stats() so that it
> can be traced via BPF on a per-socket basis.
> There's one argument in BPF_SOCK_OPS_STATS_CB, which is Linux MIB index
> LINUX_MIB_* to indicate the TCP event.
> All these Linux MIBs are defined in include/uapi/linux/snmp.h.
> 
> Signed-off-by: Yafang Shao <laoar.shao@...il.com>
> ---
>  include/uapi/linux/bpf.h | 5 +++++
>  net/ipv4/tcp_input.c     | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 1777fa0..0314ddd 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -2894,6 +2894,11 @@ enum {
>  	BPF_SOCK_OPS_TCP_LISTEN_CB,	/* Called on listen(2), right after
>  					 * socket transition to LISTEN state.
>  					 */
> +	BPF_SOCK_OPS_STATS_CB,		/*
> +					 * Called on tcp_stats().
> +					 * Arg1: Linux MIB index
> +					 * 	 LINUX_MIB_*
> +					 */
>  };
>  
>  /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 88deb1f..4acf458 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -3557,6 +3557,7 @@ static u32 tcp_newly_delivered(struct sock *sk, u32 prior_delivered, int flag)
>  static void tcp_stats(struct sock *sk, int mib_idx)
>  {
>  	NET_INC_STATS(sock_net(sk), mib_idx);
> +	tcp_call_bpf(sk, BPF_SOCK_OPS_STATS_CB, 1, &mib_idx);
>  }
>  
>  /* This routine deals with incoming acks, but not outgoing ones. */
> 

If the plan is to add an additional tcp_call_bpf() to every NET_INC_STATS()
call in TCP, I will say no.

So far, tcp_call_bpf() has not been used in the fast path.
