Message-ID: <20250408010143.11193-1-kuniyu@amazon.com>
Date: Mon, 7 Apr 2025 18:00:35 -0700
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: <leitao@...ian.org>
CC: <davem@...emloft.net>, <dsahern@...nel.org>, <edumazet@...gle.com>,
<horms@...nel.org>, <kernel-team@...a.com>, <kuba@...nel.org>,
<kuniyu@...zon.com>, <linux-kernel@...r.kernel.org>,
<linux-trace-kernel@...r.kernel.org>, <mathieu.desnoyers@...icios.com>,
<mhiramat@...nel.org>, <ncardwell@...gle.com>, <netdev@...r.kernel.org>,
<pabeni@...hat.com>, <rostedt@...dmis.org>, <song@...nel.org>,
<yonghong.song@...ux.dev>
Subject: Re: [PATCH net-next v2 2/2] trace: tcp: Add tracepoint for tcp_sendmsg_locked()
From: Breno Leitao <leitao@...ian.org>
Date: Mon, 07 Apr 2025 06:40:44 -0700
> Add a tracepoint to monitor TCP send operations, enabling detailed
> visibility into TCP message transmission.
>
> Create a new tracepoint within the tcp_sendmsg_locked function,
> capturing traditional fields along with size_goal, which indicates the
> optimal data size for a single TCP segment. Additionally, a reference to
> the struct sock sk is passed, allowing direct access for BPF programs.
> The implementation is largely based on David's patch and suggestions.
>
> The implementation is largely based on David's patch[1] and suggestions.
nit: duplicate sentences.
>
> Link: https://lore.kernel.org/all/70168c8f-bf52-4279-b4c4-be64527aa1ac@kernel.org/ [1]
> Signed-off-by: Breno Leitao <leitao@...ian.org>
> ---
> include/trace/events/tcp.h | 24 ++++++++++++++++++++++++
> net/ipv4/tcp.c | 2 ++
> 2 files changed, 26 insertions(+)
>
> diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
> index 1a40c41ff8c30..cab25504c4f9d 100644
> --- a/include/trace/events/tcp.h
> +++ b/include/trace/events/tcp.h
> @@ -259,6 +259,30 @@ TRACE_EVENT(tcp_retransmit_synack,
> __entry->saddr_v6, __entry->daddr_v6)
> );
>
> +TRACE_EVENT(tcp_sendmsg_locked,
> + TP_PROTO(const struct sock *sk, const struct msghdr *msg,
> + const struct sk_buff *skb, int size_goal),
> +
> + TP_ARGS(sk, msg, skb, size_goal),
> +
> + TP_STRUCT__entry(
> + __field(const void *, skb_addr)
> + __field(int, skb_len)
> + __field(int, msg_left)
> + __field(int, size_goal)
> + ),
> +
> + TP_fast_assign(
> + __entry->skb_addr = skb;
> + __entry->skb_len = skb ? skb->len : 0;
> + __entry->msg_left = msg_data_left(msg);
> + __entry->size_goal = size_goal;
> + ),
> +
> + TP_printk("skb_addr %p skb_len %d msg_left %d size_goal %d",
> + __entry->skb_addr, __entry->skb_len, __entry->msg_left,
> + __entry->size_goal));
> +
> DECLARE_TRACE(tcp_cwnd_reduction_tp,
> TP_PROTO(const struct sock *sk, int newly_acked_sacked,
> int newly_lost, int flag),
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index ea8de00f669d0..270ce2c8c2d54 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1160,6 +1160,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
> if (skb)
> copy = size_goal - skb->len;
>
> + trace_tcp_sendmsg_locked(sk, msg, skb, size_goal);
skb could be NULL, so I think raw_tp_null_args[] needs to be updated.
Maybe try attaching a bpf prog that dereferences skb unconditionally
and see if the bpf verifier rejects it.
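Something along these lines could be used as the check (untested
sketch; the prog/section names are made up, and it assumes a
vmlinux.h generated from a kernel that already has this tracepoint):

    // SPDX-License-Identifier: GPL-2.0
    /* Sketch: attach to the new tracepoint via tp_btf and read
     * skb->len without a NULL check.  If skb is not marked as a
     * possibly-NULL argument in raw_tp_null_args[], the verifier
     * will accept this even though skb can be NULL at runtime.
     */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("tp_btf/tcp_sendmsg_locked")
    int BPF_PROG(check_sendmsg_skb, const struct sock *sk,
                 const struct msghdr *msg, const struct sk_buff *skb,
                 int size_goal)
    {
            /* Unconditional dereference of the possibly-NULL skb. */
            bpf_printk("skb len %u", skb->len);
            return 0;
    }

If the verifier accepts the unconditional dereference as is, that
would confirm the raw_tp_null_args[] entry is needed.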
See this commit for a similar issue:

commit 5da7e15fb5a12e78de974d8908f348e279922ce9
Author: Kuniyuki Iwashima <kuniyu@...zon.com>
Date:   Fri Jan 31 19:01:42 2025 -0800

    net: Add rx_skb of kfree_skb to raw_tp_null_args[].
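A minimal sketch of the kind of entry (assuming the same
nibble-per-argument mask encoding as the existing entries; please
double-check against kernel/bpf/btf.c):

    /* kernel/bpf/btf.c, raw_tp_null_args[]: skb is the 3rd argument
     * of tcp_sendmsg_locked (sk, msg, skb, size_goal), i.e. nibble 2,
     * same convention as { "kfree_skb", 0x1000 } for kfree_skb's
     * 4th argument.
     */
    { "tcp_sendmsg_locked", 0x100 },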
> +
> if (copy <= 0 || !tcp_skb_can_collapse_to(skb)) {
> bool first_skb;
>
>
> --
> 2.47.1