Message-Id: <20210625071826.608504-1-namhyung@kernel.org>
Date: Fri, 25 Jun 2021 00:18:22 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...ux.intel.com>,
Ian Rogers <irogers@...gle.com>,
Stephane Eranian <eranian@...gle.com>,
Song Liu <songliubraving@...com>
Subject: [PATCHSET v4 0/4] perf stat: Enable BPF counters with --for-each-cgroup
Hello,
This adds BPF support to --for-each-cgroup so that many cgroup
events can be handled on big machines. Use the --bpf-counters option
to enable the new behavior.
* changes in v4
- convert cgrp_readings to a per-cpu array map
- remove now-unused cpu_idx map
- move common functions to a header file
- reuse bpftool bootstrap binary
- fix build error in the cgroup code
* changes in v3
- support cgroup hierarchy with ancestor ids
- add and trigger raw_tp BPF program
- add a build rule for vmlinux.h
* changes in v2
- remove incorrect use of BPF_F_PRESERVE_ELEMS
- add missing map elements after lookup
- handle cgroup v1
The basic idea is to use a single set of per-cpu events to count the
events of interest and aggregate them per cgroup. I used the bperf
mechanism to run a BPF program on cgroup switches and save the
results in the matching map element for the given cgroup.
Without this, we need separate events for each cgroup, which creates
unnecessary multiplexing (and PMU programming) overhead whenever
tasks in different cgroups are switched. I saw this make a big
difference on 256-cpu machines with hundreds of cgroups.
Actually this is what I wanted to do in the kernel [1], but we can
do the job with BPF instead!
Thanks,
Namhyung
[1] https://lore.kernel.org/lkml/20210413155337.644993-1-namhyung@kernel.org/
Namhyung Kim (4):
perf tools: Add read_cgroup_id() function
perf tools: Add cgroup_is_v2() helper
perf tools: Move common bpf functions to bpf_counter.h
perf stat: Enable BPF counter with --for-each-cgroup
tools/perf/Makefile.perf | 17 +-
tools/perf/util/Build | 1 +
tools/perf/util/bpf_counter.c | 57 +---
tools/perf/util/bpf_counter.h | 52 ++++
tools/perf/util/bpf_counter_cgroup.c | 299 ++++++++++++++++++++
tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
tools/perf/util/cgroup.c | 46 +++
tools/perf/util/cgroup.h | 12 +
8 files changed, 622 insertions(+), 53 deletions(-)
create mode 100644 tools/perf/util/bpf_counter_cgroup.c
create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
--
2.32.0.93.g670b81a890-goog