Message-ID: <Yyy2HAxVRy6TuTHQ@kernel.org>
Date: Thu, 22 Sep 2022 20:23:08 +0100
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Jiri Olsa <jolsa@...nel.org>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
linux-perf-users@...r.kernel.org, Song Liu <songliubraving@...com>,
bpf@...r.kernel.org
Subject: Re: [PATCH v2] perf tools: Get a perf cgroup more portably in BPF
On Wed, Sep 21, 2022 at 09:40:23PM -0700, Namhyung Kim wrote:
> The value of perf_event_cgrp_id can differ depending on the kernel
> configuration.  To be portable as CO-RE, get the cgroup subsys id at
> load time using the bpf_core_enum_value() helper instead of
> hardcoding the enum value.
>
> Suggested-by: Ian Rogers <irogers@...gle.com>
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
Applying. Ian, can I have your Reviewed-by?
- Arnaldo
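
For context on the bpf_core_enum_value() usage described in the commit
message above, below is a minimal standalone sketch of the CO-RE
enum-relocation pattern. The section name, attach point, program name and
bpf_printk() output are illustrative only and not taken from the patch; it
assumes vmlinux.h plus libbpf's bpf_helpers.h, bpf_tracing.h and
bpf_core_read.h, which the perf BPF skeletons already use.

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

/* Cached subsys id; resolved lazily from the running kernel's BTF. */
int perf_subsys_id = -1;

SEC("tp_btf/sched_switch")	/* hypothetical attach point for this sketch */
int BPF_PROG(show_perf_cgroup)
{
	struct task_struct *p = (void *)bpf_get_current_task();
	struct cgroup *cgrp;

	if (perf_subsys_id == -1) {
		/*
		 * bpf_core_enum_value() records a CO-RE relocation, so the
		 * enumerator value comes from the target kernel's BTF at
		 * load time rather than from the compile-time headers.
		 */
		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id,
						     perf_event_cgrp_id);
	}

	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_subsys_id], cgroup);
	bpf_printk("perf cgroup id: %llu", BPF_CORE_READ(cgrp, kn, id));
	return 0;
}

char LICENSE[] SEC("license") = "Dual BSD/GPL";

At load time libbpf patches the recorded relocation against the running
kernel's BTF, so the same object works even on a kernel where
perf_event_cgrp_id has a different numeric value.
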
> ---
> v2 changes)
> * fix off_cpu.bpf.c too
> * get perf_subsys_id only once
>
>  tools/perf/util/bpf_skel/bperf_cgroup.bpf.c |  6 +++++-
>  tools/perf/util/bpf_skel/off_cpu.bpf.c      | 12 ++++++++----
>  2 files changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> index 292c430768b5..9223e4b87fe9 100644
> --- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> +++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> @@ -48,6 +48,7 @@ const volatile __u32 num_cpus = 1;
> 
>  int enabled = 0;
>  int use_cgroup_v2 = 0;
> +int perf_subsys_id = -1;
> 
>  static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
>  {
> @@ -58,7 +59,10 @@ static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
>  	int level;
>  	int cnt;
> 
> -	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_event_cgrp_id], cgroup);
> +	if (perf_subsys_id == -1)
> +		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id, perf_event_cgrp_id);
> +
> +	cgrp = BPF_CORE_READ(p, cgroups, subsys[perf_subsys_id], cgroup);
>  	level = BPF_CORE_READ(cgrp, level);
> 
>  	for (cnt = 0; i < MAX_LEVELS; i++) {
> diff --git a/tools/perf/util/bpf_skel/off_cpu.bpf.c b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> index c4ba2bcf179f..e917ef7b8875 100644
> --- a/tools/perf/util/bpf_skel/off_cpu.bpf.c
> +++ b/tools/perf/util/bpf_skel/off_cpu.bpf.c
> @@ -94,6 +94,8 @@ const volatile bool has_prev_state = false;
>  const volatile bool needs_cgroup = false;
>  const volatile bool uses_cgroup_v1 = false;
> 
> +int perf_subsys_id = -1;
> +
>  /*
>   * Old kernel used to call it task_struct->state and now it's '__state'.
>   * Use BPF CO-RE "ignored suffix rule" to deal with it like below:
> @@ -119,11 +121,13 @@ static inline __u64 get_cgroup_id(struct task_struct *t)
>  {
>  	struct cgroup *cgrp;
> 
> -	if (uses_cgroup_v1)
> -		cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_event_cgrp_id], cgroup);
> -	else
> -		cgrp = BPF_CORE_READ(t, cgroups, dfl_cgrp);
> +	if (!uses_cgroup_v1)
> +		return BPF_CORE_READ(t, cgroups, dfl_cgrp, kn, id);
> +
> +	if (perf_subsys_id == -1)
> +		perf_subsys_id = bpf_core_enum_value(enum cgroup_subsys_id, perf_event_cgrp_id);
> 
> +	cgrp = BPF_CORE_READ(t, cgroups, subsys[perf_subsys_id], cgroup);
>  	return BPF_CORE_READ(cgrp, kn, id);
>  }
>
> --
> 2.37.3.968.ga6b4b080e4-goog
--
- Arnaldo