Message-ID: <YzhHo0dJEijtGNzZ@krava>
Date: Sat, 1 Oct 2022 15:58:59 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>, cgroups@...r.kernel.org,
Arnaldo Carvalho de Melo <acme@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-perf-users@...r.kernel.org, Song Liu <songliubraving@...com>,
bpf@...r.kernel.org
Subject: Re: [PATCH] perf stat: Support old kernels for bperf cgroup counting
On Wed, Sep 21, 2022 at 09:14:35PM -0700, Namhyung Kim wrote:
> The recent change in the cgroup struct will break backward compatibility in
> the BPF program.  It should support both old and new kernels using the BPF
> CO-RE technique.
>
> Like the task_struct->__state handling in the offcpu analysis, we can
> check the field name in the cgroup struct.
>
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
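For reference, the task_struct->__state handling this mirrors lives in
tools/perf/util/bpf_skel/off_cpu.bpf.c; a rough sketch of that pattern is
below (written from memory, so the exact names/comments in the tree may
differ slightly; it relies on vmlinux.h and bpf/bpf_core_read.h, which the
skeleton already includes):

  /* old kernel task_struct definition */
  struct task_struct___old {
          long state;
  } __attribute__((preserve_access_index));

  static inline int get_task_state(struct task_struct *t)
  {
          /* recast pointer to capture old type for compiler */
          struct task_struct___old *t_old = (void *)t;

          if (bpf_core_field_exists(t->__state))
                  return BPF_CORE_READ(t, __state);
          else
                  return BPF_CORE_READ(t_old, state);
  }

The same bpf_core_field_exists() check is what drives the new/old cgroup
split in this patch.
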
lgtm
Acked-by: Jiri Olsa <jolsa@...nel.org>
jirka
> ---
> Arnaldo, I think this should go through the cgroup tree since it depends
> on the earlier change there. I don't think it'd conflict with other
> perf changes but please let me know if you see any trouble, thanks!
>
> tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 29 ++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> index 488bd398f01d..4fe61043de04 100644
> --- a/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> +++ b/tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
> @@ -43,12 +43,39 @@ struct {
> __uint(value_size, sizeof(struct bpf_perf_event_value));
> } cgrp_readings SEC(".maps");
>
> +/* new kernel cgroup definition */
> +struct cgroup___new {
> + int level;
> + struct cgroup *ancestors[];
> +} __attribute__((preserve_access_index));
> +
> +/* old kernel cgroup definition */
> +struct cgroup___old {
> + int level;
> + u64 ancestor_ids[];
> +} __attribute__((preserve_access_index));
> +
> const volatile __u32 num_events = 1;
> const volatile __u32 num_cpus = 1;
>
> int enabled = 0;
> int use_cgroup_v2 = 0;
>
> +static inline __u64 get_cgroup_v1_ancestor_id(struct cgroup *cgrp, int level)
> +{
> + /* recast pointer to capture new type for compiler */
> + struct cgroup___new *cgrp_new = (void *)cgrp;
> +
> + if (bpf_core_field_exists(cgrp_new->ancestors)) {
> + return BPF_CORE_READ(cgrp_new, ancestors[level], kn, id);
> + } else {
> + /* recast pointer to capture old type for compiler */
> + struct cgroup___old *cgrp_old = (void *)cgrp;
> +
> + return BPF_CORE_READ(cgrp_old, ancestor_ids[level]);
> + }
> +}
> +
> static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
> {
> struct task_struct *p = (void *)bpf_get_current_task();
> @@ -70,7 +97,7 @@ static inline int get_cgroup_v1_idx(__u32 *cgrps, int size)
> break;
>
> // convert cgroup-id to a map index
> - cgrp_id = BPF_CORE_READ(cgrp, ancestors[i], kn, id);
> + cgrp_id = get_cgroup_v1_ancestor_id(cgrp, i);
> elem = bpf_map_lookup_elem(&cgrp_idx, &cgrp_id);
> if (!elem)
> continue;
> --
> 2.37.3.968.ga6b4b080e4-goog
>