Message-ID: <CAP-5=fV9J9PVBw3qCvNZkeNMCzJGPy4WgMDw-6ppCMgSOH1XpQ@mail.gmail.com>
Date: Wed, 7 Jan 2026 11:03:22 -0800
From: Ian Rogers <irogers@...gle.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>, James Clark <james.clark@...aro.org>,
Thomas Falcon <thomas.falcon@...el.com>, Thomas Richter <tmricht@...ux.ibm.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/2] Add procfs based memory and network tool events

On Wed, Jan 7, 2026 at 12:08 AM Namhyung Kim <namhyung@...nel.org> wrote:
>
> Hi Ian,
>
> On Sat, Jan 03, 2026 at 05:17:36PM -0800, Ian Rogers wrote:
> > Add events for memory use and network activity based on data readily
> > available in /prod/pid/statm, /proc/pid/smaps_rollup and
> > /proc/pid/net/dev. For example the network usage of chrome processes
> > on a system may be gathered with:
> > ```
> > $ perf stat -e net_rx_bytes,net_rx_compressed,net_rx_drop,net_rx_errors,net_rx_fifo,net_rx_frame,net_rx_multicast,net_rx_packets,net_tx_bytes,net_tx_carrier,net_tx_colls,net_tx_compressed,net_tx_drop,net_tx_errors,net_tx_fifo,net_tx_packets -p $(pidof -d, chrome) -I 1000
> > 1.001023475 0 net_rx_bytes
> > 1.001023475 0 net_rx_compressed
> > 1.001023475 42,647,328 net_rx_drop
> > 1.001023475 463,069,152 net_rx_errors
> > 1.001023475 0 net_rx_fifo
> > 1.001023475 0 net_rx_frame
> > 1.001023475 0 net_rx_multicast
> > 1.001023475 423,195,831,744 net_rx_packets
> > 1.001023475 0 net_tx_bytes
> > 1.001023475 0 net_tx_carrier
> > 1.001023475 0 net_tx_colls
> > 1.001023475 0 net_tx_compressed
> > 1.001023475 0 net_tx_drop
> > 1.001023475 0 net_tx_errors
> > 1.001023475 0 net_tx_fifo
> > 1.001023475 0 net_tx_packets
> > ```
>
> Interesting.
Thanks.
> >
> > As the events are in the tool_pmu they can be used in metrics. Via
> > the json descriptions they are exposed in `perf list`, and the
> > events can be seen in the python ilist application.
> >
> > Note, if a process terminates then the count reading returns an error
> > and this can expose what appear to be latent bugs in the aggregation
> > and display code.
>
> How do you handle system-wide mode and sampling (perf record)?
So tool events don't support `perf record` and fail at opening due to
the invalid PMU type and config. This is the same as running `perf
record -e duration_time` with perf today, which looks like:
```
$ perf record -e duration_time -a sleep 1
Error:
Failure to open event 'duration_time' on PMU 'tool' which will be removed.
No fallback found for 'duration_time' for error 0
Error:
Failure to open any events for recording.
```
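To make the failure mode concrete, here is a minimal standalone sketch
(illustrative only, not code from the patch; the bogus type value is
made up): a perf_event_attr whose type doesn't correspond to any kernel
PMU is rejected by perf_event_open(), so record has nothing to open and
no fallback:
```
/* Sketch: an attr.type unknown to the kernel fails to open. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = 0xffffffff; /* Hypothetical type no kernel PMU has. */

	if (syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0) < 0)
		printf("open failed: %s\n", strerror(errno)); /* e.g. ENOENT */
	return 0;
}
```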
For system-wide mode the behavior is hopefully intuitive in that the
memory and network counts are for the whole system rather than for the
given processes. For the memory events the proc directory is scanned
and all processes' counts are aggregated. For network data
/proc/net/dev is read rather than /proc/pid/net/dev. There is more
detail on this in the individual commit messages.
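For reference, the system-wide memory aggregation amounts to something
like this standalone sketch (illustrative only, not the patch's code):
walk /proc/<pid>/statm for every pid, where fields are in pages and
field 2 is resident:
```
#include <dirent.h>
#include <stdio.h>
#include <ctype.h>
#include <unistd.h>

int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	unsigned long long total_pages = 0, size, resident;
	char path[64];
	FILE *fp;

	if (!proc)
		return 1;
	while ((de = readdir(proc)) != NULL) {
		if (!isdigit(de->d_name[0]))
			continue; /* Not a pid directory. */
		snprintf(path, sizeof(path), "/proc/%s/statm", de->d_name);
		fp = fopen(path, "r");
		if (!fp)
			continue; /* Process may have already exited. */
		if (fscanf(fp, "%llu %llu", &size, &resident) == 2)
			total_pages += resident;
		fclose(fp);
	}
	closedir(proc);
	printf("rss bytes: %llu\n", total_pages * sysconf(_SC_PAGESIZE));
	return 0;
}
```
Similarly for the network side, the first receive column in
/proc/net/dev is bytes, so a system-wide net_rx_bytes read looks
roughly like the following sketch (again illustrative only; per-process
readings would open /proc/<pid>/net/dev instead):
```
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512];
	unsigned long long total = 0, bytes;
	FILE *fp = fopen("/proc/net/dev", "r");

	if (!fp)
		return 1;
	/* Skip the two header lines. */
	fgets(line, sizeof(line), fp);
	fgets(line, sizeof(line), fp);
	while (fgets(line, sizeof(line), fp)) {
		char *stats = strchr(line, ':');

		/* First field after "iface:" is receive bytes. */
		if (stats && sscanf(stats + 1, "%llu", &bytes) == 1)
			total += bytes;
	}
	fclose(fp);
	printf("net_rx_bytes: %llu\n", total);
	return 0;
}
```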
Thanks,
Ian
> Thanks,
> Namhyung
>
> >
> > Ian Rogers (2):
> > perf tool_pmu: Add memory events
> > perf tool_pmu: Add network events
> >
> > tools/perf/builtin-stat.c | 10 +-
> > .../pmu-events/arch/common/common/tool.json | 266 ++++++++-
> > tools/perf/pmu-events/empty-pmu-events.c | 312 +++++++----
> > tools/perf/util/tool_pmu.c | 514 +++++++++++++++++-
> > tools/perf/util/tool_pmu.h | 44 ++
> > 5 files changed, 1026 insertions(+), 120 deletions(-)
> >
> > --
> > 2.52.0.351.gbe84eed79e-goog
> >