Message-ID: <CAH0uvoi_Soj=b1YdsqN=RhHMf340r1YZm72JgkyAyUi-Rox7_g@mail.gmail.com>
Date: Wed, 19 Mar 2025 16:39:17 -0700
From: Howard Chu <howardchu95@...il.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>, Ian Rogers <irogers@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
linux-perf-users@...r.kernel.org, bpf@...r.kernel.org,
Song Liu <song@...nel.org>
Subject: Re: [PATCH v2] perf trace: Implement syscall summary in BPF
Hi Namhyung,
I haven't finished the code review yet, but here are some things I
found interesting to share:
## 1
You used sudo ./perf trace -as --bpf-summary --summary-mode=total -- sleep 1
as an example.

If I use perf trace --bpf-summary without the '-a', that is, to trace
only the 'sleep 1' workload:

sudo ./perf trace -s --bpf-summary --summary-mode=total -- sleep 1

it won't record that one process. So there should be a sentence in the
documentation saying that --bpf-summary only does system-wide summaries.
## 2
There is a bug in the min calculation that makes min always 0.
You can see it in the sample output you provided above:
   syscall            calls  errors    total       min       avg       max  stddev
                                      (msec)    (msec)    (msec)    (msec)     (%)
   --------------- -------- ------ -------- --------- --------- --------- -------
   futex                372     18 4373.773     0.000    11.757   997.715 660.42%
   poll                 241      0 2757.963     0.000    11.444   997.758 580.34%
   epoll_wait           161      0 2460.854     0.000    15.285   325.189 260.73%
   ppoll                 19      0 1298.652     0.000    68.350   667.172 281.46%
   clock_nanosleep        1      0 1000.093     0.000  1000.093  1000.093   0.00%
   epoll_pwait           16      0  192.787     0.000    12.049   173.994 348.73%
   nanosleep              6      0   50.926     0.000     8.488    10.210  43.96%
clock_nanosleep has only 1 call, so its min can never be 0; it has to
equal the max and the mean.
This can be resolved by adding this line (same as what you did in the BPF code):
diff --git a/tools/perf/util/bpf-trace-summary.c b/tools/perf/util/bpf-trace-summary.c
index 5ae9feca244d..eb98db7d6e33 100644
--- a/tools/perf/util/bpf-trace-summary.c
+++ b/tools/perf/util/bpf-trace-summary.c
@@ -243,7 +243,7 @@ static int update_total_stats(struct hashmap *hash, struct syscall_key *map_key,
if (stat->max_time < map_data->max_time)
stat->max_time = map_data->max_time;
- if (stat->min_time > map_data->min_time)
+ if (stat->min_time > map_data->min_time || !stat->min_time)
stat->min_time = map_data->min_time;
return 0;
(sorry for the poor formatting from the gmail browser app)
## 3
Apologies for misunderstanding how the calculation of the 'standard
deviation of mean' works. You can decide what to do with it. :) Thanks
for the explanation in the thread of the previous version.
Thanks,
Howard
On Wed, Mar 19, 2025 at 3:19 PM Howard Chu <howardchu95@...il.com> wrote:
>
> Hi Namhyung,
>
> On Wed, Mar 19, 2025 at 3:07 PM Namhyung Kim <namhyung@...nel.org> wrote:
> >
> > Hello Howard,
> >
> > On Wed, Mar 19, 2025 at 12:00:10PM -0700, Howard Chu wrote:
> > > Hello Namhyung,
> > >
> > > Can you please rebase it? I cannot apply it, getting:
> > >
> > > perf $ git apply --reject --whitespace=fix
> > > ./v2_20250317_namhyung_perf_trace_implement_syscall_summary_in_bpf.mbx
> > > Checking patch tools/perf/Documentation/perf-trace.txt...
> > > Checking patch tools/perf/Makefile.perf...
> > > Hunk #1 succeeded at 1198 (offset -8 lines).
> > > Checking patch tools/perf/builtin-trace.c...
> > > error: while searching for:
> > > bool hexret;
> > > };
> > >
> > > enum summary_mode {
> > > SUMMARY__NONE = 0,
> > > SUMMARY__BY_TOTAL,
> > > SUMMARY__BY_THREAD,
> > > };
> > >
> > > struct trace {
> > > struct perf_tool tool;
> > > struct {
> > >
> > > error: patch failed: tools/perf/builtin-trace.c:140
> >
> > Oops, I think I forgot to say it's on top of Ian's change.
> > Please try this first. Sorry for the confusion.
> >
> > https://lore.kernel.org/r/20250319050741.269828-1-irogers@google.com
>
> Yep, with Ian's patches it successfully applied. :)
>
> Thanks,
> Howard
> >
> > Thanks,
> > Namhyung
> >