Message-ID: <67b0ae562fc79_36e344294ab@willemb.c.googlers.com.notmuch>
Date: Sat, 15 Feb 2025 10:10:14 -0500
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Jason Xing <kerneljasonxing@...il.com>,
davem@...emloft.net,
edumazet@...gle.com,
kuba@...nel.org,
pabeni@...hat.com,
dsahern@...nel.org,
willemdebruijn.kernel@...il.com,
willemb@...gle.com,
ast@...nel.org,
daniel@...earbox.net,
andrii@...nel.org,
martin.lau@...ux.dev,
eddyz87@...il.com,
song@...nel.org,
yonghong.song@...ux.dev,
john.fastabend@...il.com,
kpsingh@...nel.org,
sdf@...ichev.me,
haoluo@...gle.com,
jolsa@...nel.org,
horms@...nel.org
Cc: bpf@...r.kernel.org,
netdev@...r.kernel.org,
Jason Xing <kerneljasonxing@...il.com>
Subject: Re: [PATCH bpf-next v11 11/12] bpf: support selective sampling for
bpf timestamping
Jason Xing wrote:
> Add the bpf_sock_ops_enable_tx_tstamp kfunc to allow BPF programs to
> selectively enable TX timestamping on a skb during tcp_sendmsg().
>
> For example, a BPF program can track only the first X packets of a
> matched flow and then stop, instead of tracing every sendmsg of that
> flow for its whole lifetime. This helps users who cannot afford to
> calculate latencies from every sendmsg call, whether for performance
> or storage space reasons.
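
For context, use from a sockops program would look roughly like the
sketch below (not from this series: the SAMPLE_MAX constant, the
global counter, and the program/section names are illustrative, and a
per-flow map would be more realistic than a global counter):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* bpf_cast_to_kern_ctx() is an existing kfunc;
     * bpf_sock_ops_enable_tx_tstamp() is the one added by this patch.
     */
    void *bpf_cast_to_kern_ctx(void *obj) __ksym;
    int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
                                      u64 flags) __ksym;

    #define SAMPLE_MAX 16

    static __u64 sampled; /* illustrative; use a per-flow map instead */

    SEC("sockops")
    int sample_tx_tstamp(struct bpf_sock_ops *skops)
    {
            /* Timestamp only the first SAMPLE_MAX sendmsg skbs. */
            if (skops->op == BPF_SOCK_OPS_TS_SND_CB &&
                sampled < SAMPLE_MAX) {
                    sampled++;
                    bpf_sock_ops_enable_tx_tstamp(
                            bpf_cast_to_kern_ctx(skops), 0);
            }
            return 1;
    }

    char _license[] SEC("license") = "GPL";
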
>
> Signed-off-by: Jason Xing <kerneljasonxing@...il.com>
> ---
> kernel/bpf/btf.c | 1 +
> net/core/filter.c | 33 ++++++++++++++++++++++++++++++++-
> 2 files changed, 33 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 9433b6467bbe..740210f883dc 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -8522,6 +8522,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
> case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> case BPF_PROG_TYPE_CGROUP_SYSCTL:
> + case BPF_PROG_TYPE_SOCK_OPS:
> return BTF_KFUNC_HOOK_CGROUP;
> case BPF_PROG_TYPE_SCHED_ACT:
> return BTF_KFUNC_HOOK_SCHED_ACT;
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 7f56d0bbeb00..3b4c1e7b1470 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -12102,6 +12102,27 @@ __bpf_kfunc int bpf_sk_assign_tcp_reqsk(struct __sk_buff *s, struct sock *sk,
> #endif
> }
>
> +__bpf_kfunc int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
> + u64 flags)
> +{
> + struct sk_buff *skb;
> + struct sock *sk;
> +
> + if (skops->op != BPF_SOCK_OPS_TS_SND_CB)
> + return -EOPNOTSUPP;
> +
> + if (flags)
> + return -EINVAL;
> +
> + skb = skops->skb;
> + sk = skops->sk;
nit: sk is assigned but never used
> + skb_shinfo(skb)->tx_flags |= SKBTX_BPF;
> + TCP_SKB_CB(skb)->txstamp_ack |= TSTAMP_ACK_BPF;
> + skb_shinfo(skb)->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
Can this overwrite the seqno previously calculated by tcp_tx_timestamp()?
I suppose that is safe as long as both calculate the same value, but it
would be good to make that explicit.
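
For reference, the corresponding computation in tcp_tx_timestamp()
(net/ipv4/tcp.c) is, going from memory:

        if (tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
                shinfo->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;

which matches the expression in the kfunc, so a comment noting that
the two must stay in sync should be enough.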
> +
> + return 0;
> +}
> +