Message-ID: <CANn89i+Wh2krOy4YFWvBsEx-s_JgQ0HixHAVJwGw18dVPeyiqw@mail.gmail.com>
Date: Tue, 10 Jan 2023 12:49:30 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Yunhui Cui <cuiyunhui@...edance.com>
Cc: rostedt@...dmis.org, mhiramat@...nel.org, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com, kuniyu@...zon.com,
xiyou.wangcong@...il.com, duanxiongchun@...edance.com,
linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
netdev@...r.kernel.org, dust.li@...ux.alibaba.com
Subject: Re: [PATCH v5] sock: add tracepoint for send recv length
On Tue, Jan 10, 2023 at 10:15 AM Yunhui Cui <cuiyunhui@...edance.com> wrote:
>
> Add 2 tracepoints to monitor the tcp/udp traffic per process and
> per cgroup.
>
> For monitoring the tcp/udp traffic of each process, there are two
> existing solutions: the first is https://www.atoptool.nl/netatop.php,
> and the second is kprobe/kretprobe.
>
> The netatop solution works by registering hook functions at the hook
> points provided by the netfilter framework.
>
> These hook functions may run in soft interrupt context and cannot
> directly obtain the pid, so extra data structures (for example,
> struct taskinfobucket, struct taskinfo ...) are added to bind packets
> to processes.
>
> Every time the process sends or receives packets, it needs multiple
> hashmap lookups, resulting in low performance, and the tcp/udp
> traffic statistics can be inaccurate (for example, when multiple
> threads share sockets).
>
> We can also obtain the information with kretprobe, but kprobe gets
> its result by trapping into an exception, which costs more
> performance than a tracepoint.
>
> We compared the performance of the tracepoint approach against the
> above two methods; the results are as follows:
>
> ab -n 1000000 -c 1000 -r http://127.0.0.1/index.html
> without trace:
> Time per request: 39.660 [ms] (mean)
> Time per request: 0.040 [ms] (mean, across all concurrent requests)
>
> netatop:
> Time per request: 50.717 [ms] (mean)
> Time per request: 0.051 [ms] (mean, across all concurrent requests)
>
> kretprobe:
> Time per request: 43.168 [ms] (mean)
> Time per request: 0.043 [ms] (mean, across all concurrent requests)
>
> tracepoint:
> Time per request: 41.004 [ms] (mean)
> Time per request: 0.041 [ms] (mean, across all concurrent requests)
>
> As the numbers show, the tracepoint approach has the lowest overhead.
>
> Signed-off-by: Yunhui Cui <cuiyunhui@...edance.com>
> Signed-off-by: Xiongchun Duan <duanxiongchun@...edance.com>
> ---
> include/trace/events/sock.h | 44 +++++++++++++++++++++++++++++++++++++
> net/socket.c | 36 ++++++++++++++++++++++++++----
> 2 files changed, 76 insertions(+), 4 deletions(-)
>
...
> +static noinline void call_trace_sock_recv_length(struct sock *sk, int ret, int flags)
> +{
> +        trace_sock_recv_length(sk, !(flags & MSG_PEEK) ? ret :
> +                               (ret < 0 ? ret : 0), flags);
Maybe we should only 'fast assign' the two fields (ret and flags),
and let this logic happen later, at 'print' time?
This would reduce storage by one integer and make the fast path
really fast. It could also remove the need for the peculiar construct
with these noinline helpers.
> +}
> +
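For illustration, an untested sketch of that idea (the sk field and
the print format here are assumptions for the example, not the patch
as posted):

TRACE_EVENT(sock_recv_length,
        TP_PROTO(struct sock *sk, int ret, int flags),
        TP_ARGS(sk, ret, flags),
        TP_STRUCT__entry(
                __field(void *, sk)
                __field(int, ret)
                __field(int, flags)
        ),
        TP_fast_assign(
                /* Store only the raw values; no conditional logic
                 * on the fast path.
                 */
                __entry->sk = sk;
                __entry->ret = ret;
                __entry->flags = flags;
        ),
        /* The MSG_PEEK adjustment runs only when the event is
         * actually printed.
         */
        TP_printk("sk=%p length=%d flags=0x%x",
                  __entry->sk,
                  !(__entry->flags & MSG_PEEK) ? __entry->ret :
                  (__entry->ret < 0 ? __entry->ret : 0),
                  __entry->flags)
);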
> static inline int sock_recvmsg_nosec(struct socket *sock, struct msghdr *msg,
>                                      int flags)
> {
> -        return INDIRECT_CALL_INET(sock->ops->recvmsg, inet6_recvmsg,
> -                                  inet_recvmsg, sock, msg, msg_data_left(msg),
> -                                  flags);
> +        int ret = INDIRECT_CALL_INET(sock->ops->recvmsg, inet6_recvmsg,
> +                                     inet_recvmsg, sock, msg,
> +                                     msg_data_left(msg), flags);
> +
> +        if (trace_sock_recv_length_enabled())
> +                call_trace_sock_recv_length(sock->sk, !(flags & MSG_PEEK) ?
> +                                            ret : (ret < 0 ? ret : 0), flags);
> +        return ret;
> }
Maybe you meant:

        if (trace_sock_recv_length_enabled())
                call_trace_sock_recv_length(sock->sk, ret, flags);

? As written, the MSG_PEEK adjustment is applied twice: once at the
call site and once more inside call_trace_sock_recv_length().

Please make sure to test your patches.
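For reference, the whole function with that fix would read (untested):

static inline int sock_recvmsg_nosec(struct socket *sock, struct msghdr *msg,
                                     int flags)
{
        int ret = INDIRECT_CALL_INET(sock->ops->recvmsg, inet6_recvmsg,
                                     inet_recvmsg, sock, msg,
                                     msg_data_left(msg), flags);

        /* call_trace_sock_recv_length() already applies the MSG_PEEK
         * adjustment, so pass ret through unmodified here.
         */
        if (trace_sock_recv_length_enabled())
                call_trace_sock_recv_length(sock->sk, ret, flags);
        return ret;
}

You can then exercise the event with the ab run above, e.g. after
enabling /sys/kernel/tracing/events/sock/sock_recv_length/enable.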